Zoom-ing In On Your Emotions. Is It Ethical?

To what extent facial expressions reliably communicate human emotion is hotly debated.

Emotions are big business. But are human emotions measurable and quantifiable? Last week, cloud video conferencing giant Zoom introduced new features to analyse customer sentiment during sales or business meetings, and it is considering adding a different form of AI to that service in the future.

This comes right after Intel and Classroom Technologies designed a set of AI tools that run on top of Zoom to identify students’ emotions in virtual classrooms, giving teachers a better sense of when students are struggling. Classroom Technologies is backed by investors such as NFL quarterback Tom Brady and AOL co-founder Steve Case. Uniphore, too, is working on biometrics-based AI products meant to help understand the “emotional state” of a business deal.

Tech giants such as IBM and Microsoft have software that can analyse facial expressions and match them to certain emotions to tell how customers respond to a new product or how a job candidate feels during an interview.

The field of emotion detection technologies is blossoming. Emotion AI detects and analyses human emotional signals, mapping vocal cues such as tonality, vocal emphasis, and speech rhythm. While sentiment analysis is all about words and text, emotion AI is about the face, facial expressions, and the voice.
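To make that signal-mapping concrete, here is a minimal sketch in Python of the kind of pipeline described above, assuming the open-source librosa library for audio feature extraction. The feature proxies, thresholds, and single-word labels are hypothetical illustrations, not any vendor’s actual method; the final labelling step is precisely the reductive move critics object to.

```python
# Illustrative sketch only: maps coarse vocal signals (tonality, emphasis,
# rhythm) to a single-word label. Thresholds and labels are hypothetical.
import librosa
import numpy as np

def vocal_features(path: str) -> dict:
    """Extract rough proxies for tonality, vocal emphasis, and speech rhythm."""
    y, sr = librosa.load(path, sr=16000)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)    # pitch contour (tonality)
    rms = librosa.feature.rms(y=y)[0]                # frame loudness (emphasis)
    onsets = librosa.onset.onset_detect(y=y, sr=sr)  # rough onset count (rhythm)
    duration = len(y) / sr
    return {
        "mean_pitch_hz": float(np.nanmean(f0)),
        "pitch_variability": float(np.nanstd(f0)),
        "mean_loudness": float(rms.mean()),
        "onsets_per_second": len(onsets) / duration,
    }

def naive_emotion_label(feats: dict) -> str:
    """Collapse the features into one word -- the reductive step critics dispute."""
    if feats["pitch_variability"] > 40 and feats["onsets_per_second"] > 3:
        return "excited"
    if feats["mean_loudness"] < 0.02 and feats["onsets_per_second"] < 1.5:
        return "subdued"
    return "neutral"

# Usage (hypothetical audio file):
# print(naive_emotion_label(vocal_features("sales_call.wav")))
```

Even this toy version shows the core objection: whatever the features capture, the output is a single word standing in for a complex internal state.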

The interest in emotion AI comes as virtual sales meetings have made it harder for salespeople to read the room, and more and more companies are aiming to use such tools in education and the workplace.

Why it matters: Companies are embracing emotion-AI tools that can help teachers and salespeople do their jobs better, but critics say the technology is dangerous, and they see it moving into broader use now. In an open letter, the advocacy group Fight for the Future called Zoom’s new emotion-AI feature “manipulative”, describing the tool as a way for “businesses to hone their sales pitch by tracking the headspace of the person on the other side of the screen”. The group calls it a “major breach of user trust.”

“The trend of embedding pseudoscience into ‘AI systems’ is such a big one,” Timnit Gebru, the pioneering AI ethicist, tweeted, reacting to claims by Uniphore that its technology could examine an array of photos and accurately classify the emotions represented.

Some critics say it is counterproductive to simplify human emotions and label them with a single-word description such as “happy” or “angry”, since humans experience a myriad of emotions.

What tech companies say: The technology can be a valuable tool if applied only to specific cases. With sufficient safeguards, they say, AI and machine learning can help machines respond better to humans, and emotion-sensing technologies can help transform human well-being by empowering individuals to better understand, monitor, and regulate their feelings.

The problem: AI can’t reliably judge how someone feels from what their face is doing. Human expressions, voice patterns, and body language are not universal; they may mean different things in different individuals. The emotion-AI technology market is projected to grow from $23.6 billion in 2022 to $43.3 billion by 2027, but measuring emotions is highly complex, and a far-reaching review of emotion research found that the science underlying these technologies is deeply flawed and can fuel racial profiling.

What we think: The companies are presumably not trying to mislead, but they need to change their approach to emotion detection to get results that neither infringe on anyone’s privacy nor raise ethical issues. They should train their models on richer data that accounts for body language, vocal cues, and situational context. In short, they need to embrace a more multifaceted approach.

With such simultaneous promise and potential for pitfalls, questions remain about accuracy and usability: whether the numbers correspond to real-world emotions. The industry is evolving, but the worst outcome for emotion-AI technology would be an inaccurate or ineffective implementation.

If you liked reading this, you might like our other stories:

Why Emotion is Big Business
Is Empathy The New Marketing Secret Sauce?
