New Delhi: Can artificial intelligence (AI) know if you are happy, sad, angry or frustrated? According to technology companies that offer AI-based emotion recognition software, the answer to this question is yes.
But this claim does not stack up against the growing body of scientific evidence.
What’s more, emotion recognition technology poses a number of legal and social risks, especially when implemented in the workplace.
For these reasons, the European Union’s AI Act, which came into force in August, bans AI systems used to infer a person’s emotions in the workplace, except for “medical” or “safety” reasons.
In Australia, however, there is still no specific regulation for these systems. As I argued in my submission to the Australian Government in its most recent round of consultations on high-risk AI systems, this must urgently change.
A new and growing wave
The global market for AI-based emotion recognition systems is growing. It was valued at $34 billion in 2022 and is expected to reach $62 billion in 2027.
These technologies work by making predictions about a person’s emotional state based on biometric data, such as their heart rate, skin moisture, tone of voice, gestures or facial expressions.
Next year, Australian tech startup inTruth Technologies plans to launch a wrist-worn device that it claims can track a user’s emotions in real time through their heart rate and other physiological metrics.
inTruth Technologies founder Nicole Gibson has said employers can use this technology to monitor a team’s “performance and energy” or their mental health to predict problems such as post-traumatic stress disorder.
She has also said that inTruth can be an “AI emotion coach that knows everything about you, including what you feel and why you feel it.”
Emotion recognition technologies in Australian workplaces
There is little data on the implementation of emotion recognition technologies in Australian workplaces.
However, we know that some Australian companies used a video interview system offered by a US-based company called HireVue, which incorporated face-based emotion analysis.
This system used facial movements and expressions to evaluate the suitability of job applicants. For example, applicants were evaluated based on whether they expressed enthusiasm or how they responded to an angry customer.
HireVue removed emotion analysis from its systems in 2021 following a formal complaint in the United States.
Emotion recognition may be on the rise again as Australian employers adopt AI-powered workplace surveillance technologies.
Lack of scientific validity
Companies like inTruth claim that emotion recognition systems are objective and based on scientific methods.
However, academics have expressed concern that these systems mark a return to the discredited fields of phrenology and physiognomy: the use of a person’s physical or behavioural characteristics to determine their abilities and character.
Emotion recognition technologies rely heavily on theories that assert that internal emotions are measurable and universally expressed.
However, recent evidence shows that the way people communicate their emotions varies widely across cultures, contexts, and individuals.
In 2019, for example, a group of experts concluded that “there are no objective measures, either individually or as a standard, that identify emotional categories in a reliable, unique and replicable way.” For example, a person’s skin moisture may go up, down, or stay the same when they are angry.
In a statement to The Conversation, inTruth Technologies founder Nicole Gibson said that “it’s true that emotion recognition technologies have faced significant challenges in the past,” but that “the landscape has changed significantly in recent years.”
Violation of fundamental rights
Emotion recognition technologies also endanger fundamental rights without adequate justification.
They have been found to discriminate on the basis of race, gender and disability.
In one case, an emotion recognition system interpreted black faces as angrier than white faces, even when both were smiling to the same degree. These technologies may also be less accurate for people from demographic groups not represented in the training data.
Gibson acknowledged concerns about bias in emotion recognition technologies. But she added that “the bias is not inherent in the technology itself but in the data sets used to train these systems.” She said inTruth is “committed to addressing these biases” by using “diverse and inclusive data sets.”
As a surveillance tool, workplace emotion recognition systems pose serious threats to privacy rights. Such rights may be violated if confidential information is collected without the employee’s knowledge.
Privacy rights may also be breached if the collection of such data is not “reasonably necessary” or carried out by “fair means.”
What workers think
A survey released earlier this year found that only 12.9% of Australian adults support face-based emotion recognition technologies in the workplace. The researchers concluded that respondents considered facial analysis to be invasive. Respondents also considered the technology to be unethical and highly prone to errors and biases.
In a US study also published this year, workers expressed concern that emotion recognition systems would harm their well-being and impact their work performance.
They feared that inaccuracies could create false impressions about them. In turn, these false impressions could prevent promotions and salary increases or even lead to dismissal.
As one participant stated: “I just don’t see how this could actually be anything other than destructive to minorities in the workplace.”