Alexa Accused Of Sexism But Bigger AI Problems Afoot
Amazon’s Alexa has been accused of being powered by sexist algorithms after the voice assistant could not answer who won a Women’s World Cup match.
According to academic Joanne Rodda, when she asked Alexa “for the result of the England-Australia football match today”, the assistant responded that there was no match. She says the answer showed that “sexism in football was embedded in Alexa”.
Dr. Rodda said she could only get a response when she specified women’s football. She raised the issue with the BBC, which said it was able to replicate the error.
An Amazon spokesperson responded to the claims by saying: “This was an error that has been fixed.”
In response to Amazon’s statement, Dr. Rodda said the primary problem lies not with the single unanswered question but with the algorithm at Alexa’s core.
She told the BBC that it’s “pretty sad that after almost a decade of Alexa, it’s only today that the AI algorithm has been ‘fixed’ so that it now recognises women’s World Cup football as ‘football’”.
According to Amazon, every time a user asks Alexa a question, information is aggregated from multiple sources, including Amazon, websites, and licensed content providers, and automated systems use AI to generate appropriate responses.
Amazon asserted that these systems will continue to evolve and that dedicated teams are committed to preventing similar situations in the future.
After Amazon claimed to have fixed the problem, Dr. Rodda said she posed a similar question to Alexa about the Women’s Super League, and the same problem arose.
“It replied with information about the men’s team and wasn’t able to give an answer when I asked specifically about women’s fixtures,” she said.
Cases such as these underscore the risks of relying on AI and the bias that can lurk beneath the surface of systems built by the growing AI sector.
In the documentary Coded Bias, M.I.T. Media Lab computer scientist Joy Buolamwini found that the creators of algorithms typically could not fully explain where their data came from, or how bias could be kept out of their systems. She also found that sexism and racism were rampant within the algorithms she examined.
Beyond Buolamwini, many other critics have sounded the alarm about AI having the power to approve credit or filter pools of job applicants; the EU’s competition chief, Margrethe Vestager, for example, has warned that AI can embed and amplify existing prejudices.
As AI becomes increasingly embedded in the human experience and the workplace, responsibility lies at the feet of developers and any company touting AI. They must be held accountable for weeding out bias as thoroughly as possible, because in some cases these tools have the power to decide our fate across a range of important outcomes, such as new jobs, credit, healthcare, and car insurance premiums.