
Microsoft Ditches Emotion-Reading AI, Citing Public Responsibility

Microsoft has published its Responsible AI Standard, a framework setting out how the tech giant will build AI systems.

“The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society’s trust,” the company explains.

“It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.”

As part of this, the company has ditched AI capabilities that “infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.” This software has proven problematic for the company, not least because of the inherent stereotyping involved.

“Taking emotional states as an example, we have decided we will not provide open-ended API access to technology that can scan people’s faces and purport to infer their emotional states based on their facial expressions or movements,” the company explains.

“Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalise across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.

“We also decided that we need to carefully analyse all AI systems that purport to infer people’s emotional states, whether the systems use facial analysis or any other AI technology.”
