Is AI Emotion Detection Ready for Prime Time?

Artificial Intelligence (AI) solutions that aim to recognize human emotions can provide useful insights for hiring, marketing, and other purposes. But their use also raises serious questions about accuracy, bias, and privacy. Read on to learn about three common barriers that must be overcome before AI emotion detection becomes mainstream.

By using machine learning to mimic human intelligence, AI can handle everything from simple, repetitive tasks to those requiring more “human” cognition. Now AI solutions are emerging that go as far as interpreting human emotion. Where AI and human emotion intersect, does the technology help, or does it deliver more trouble than value?

While we are starting to see AI-based emotion detection in various technologies, several barriers to adoption exist, and serious questions arise as to whether the technology is ready for wide use. AI that aims to interpret or replace human interactions can be flawed because of assumptions made when the model was trained. Another concern is the broader question of why anyone would want this technology used on them: is the relationship equal between the organization deploying the technology and the individual it is used on? Concerns like these must be addressed before this type of AI can take off.

Let’s explore three common barriers to emotion detection using AI:

Barrier #1: Is AI emotion detection ethical for all involved?

Newly launched AI-based solutions that track human sentiment for sales, human resources, instruction, and telehealth can provide useful insights by gauging people’s reactions during virtual conversations.

During these on-screen conversations, the AI tracks the sentiment of the person or people taking the information in, including their reactions and feedback. The person being tracked could be a prospective customer, employee, student, or patient; in each case, the person leading the virtual interaction benefits from better understanding how the individual receiving the information is feeling and what they might be thinking.
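For illustration only, here is a minimal sketch of what per-frame emotion scoring might look like under the hood. It assumes the open-source `fer` package and OpenCV, which are not necessarily what any given vendor uses; the frame source and aggregation logic are placeholders.

```python
# Minimal sketch: scoring emotions on one frame of a video call.
# Assumes the open-source `fer` package (pip install fer) and OpenCV;
# "call_frame.jpg" is an illustrative placeholder for a captured frame.
import cv2
from fer import FER

detector = FER(mtcnn=True)  # MTCNN face detection for better face localization

frame = cv2.imread("call_frame.jpg")      # in practice, a frame grabbed from the call
faces = detector.detect_emotions(frame)   # one entry per detected face

for face in faces:
    emotions = face["emotions"]           # e.g. {'happy': 0.62, 'fear': 0.03, ...}
    top = max(emotions, key=emotions.get)
    print(f"Face at {face['box']}: {top} ({emotions[top]:.2f})")
    # A real product would aggregate these scores over time rather than
    # react to any single frame.
```

The key design point is that any single frame is noisy; commercial tools typically smooth scores across the whole conversation before reporting a “reaction.”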

This kind of AI could be viewed as ethical in human resources, telehealth, or educational use cases, because tracking reactions such as fear, concern, or boredom could benefit both the person delivering the information and those receiving it. In these situations, the software could help deliver a better outcome for the person being assessed. However, there are few other use cases where it is advantageous for everyone involved when one party gains a “competitive advantage” over the other in a virtual conversation by using AI technology.

Barrier #2: Can discomfort and feelings of intrusion with AI emotion detection be overcome?

This brings us to the next barrier – why should anyone agree to have this software turned on during a virtual conversation? If someone knows the balance of control in a virtual conversation is tilted against them, the AI software comes across as incredibly intrusive. If people must agree to be judged by the AI software in some form or another, many may decline simply because of its invasive nature.

People are becoming more comfortable with technology and what it can do for them; however, they still want to feel in control of their own decisions and emotions.

Barrier #3: How do we know if the results of emotion detection using AI are accurate?

We put a lot of trust in the accuracy of technology today, yet we rarely stop to consider how that technology develops its abilities. The results of emotion-detecting AI depend heavily on the quality of the data used to train it. For example, the technology must account not only for how human emotion varies from person to person but also for the vast differences in body language and non-verbal communication from one culture to another. Users will also want to consider the value and impact of the recommendations that come out of the analysis and whether they drive the behaviors that were intended.
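One concrete way a buyer could pressure-test accuracy claims is to compare a model’s performance across the groups it will actually be used on. The sketch below does this with synthetic data and a simple scikit-learn classifier; the features, labels, and group names are placeholders for whatever a real evaluation would use.

```python
# Minimal sketch: checking whether an emotion classifier's accuracy holds up
# across groups (e.g., cultures or demographics). All data here is synthetic,
# not a real benchmark.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                         # stand-in facial/voice features
y = rng.integers(0, 3, size=1000)                       # stand-in emotion labels (3 classes)
group = rng.choice(["group_a", "group_b"], size=1000)   # stand-in cultural group

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Report accuracy overall and per group; large gaps suggest the training
# data does not represent every group equally well.
print("overall:", accuracy_score(y_te, model.predict(X_te)))
for g in np.unique(g_te):
    mask = g_te == g
    print(g, accuracy_score(y_te[mask], model.predict(X_te[mask])))
```

A per-group breakdown like this will not prove a tool is fair, but a large gap between groups is a clear signal that the training data, not the people being assessed, is the problem.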

Accurate data from this kind of AI software could help businesses better meet the needs of customers and employees, and help health and education institutions deliver better services. AI can pick up on small nuances that might otherwise be missed entirely, making it useful in hiring and other decision-making.

But inaccurate data could alter what would otherwise have been a genuine conversation. Until accuracy improves, users should focus on whether the analytics interpret messages correctly and whether overall patterns emerge that can inform future interactions. While potentially promising, AI emotion detection may still have some learning to do before it’s ready for prime time.

Contact us for questions or to discuss this topic further.

Learn more about recent advances in technology in our webinar, Building Successful Digital Product Engineering Businesses. Everest Group experts will discuss the massive digital wave in the engineering world as smart, connected, autonomous, and intelligent physical and hardware products take center stage.

