Amazon’s AI Chatbot Q Has Accuracy & Privacy Problems
According to leaked documents, Amazon’s AI chatbot Q is “experiencing severe hallucinations and leaking confidential data”, with some staff suggesting it could “potentially induce cardiac incidents in Legal.”
Amazon only recently announced the business-focused Q, but judging by the reports, the new AI chatbot may need a little more time in incubation before it launches commercially.
The leaked information is said to include the locations of AWS data centres, internal discount programs, and even unreleased features.
By “experiencing severe hallucinations”, the reports mean that Q is presenting erroneous information as if it were accurate.
Amazon has said that the information leaked is incorrect.
“Some employees are sharing feedback through internal channels and ticketing systems, which is standard practice at Amazon. No security issue was identified as a result of that feedback. We appreciate all of the feedback we’ve already received and will continue to tune Q as it transitions from being a product in preview to being generally available,” an Amazon spokesperson said.
The new AI bot is meant to be used by businesses to aid workers with an array of job functions, such as generating content, summarising pages of content and group calls, and supporting other communication to augment overall productivity.
Amazon said during the Q announcement that several businesses were already using the chatbot, including Amazon itself as well as Accenture, Mission Cloud, Orbit Irrigation, BMW Group, Gilead, and Wunderkind.
Q differentiates itself from other bots by drawing on the “company’s information repositories, code bases, and enterprise systems,” which can be problematic if that data is false or is disclosed to unauthorised users.
The new Amazon chatbot is the tech giant’s answer to OpenAI’s ChatGPT and other bots that have already debuted.