Three days after Amazon announced its AI chatbot Q, some employees are sounding the alarm about accuracy and privacy issues. Q “experiences severe hallucinations and leaks sensitive data,” including the locations of AWS data centers, internal discount programs and unreleased features, according to leaked documents obtained by Platformer.
An employee flagged the incident as a “Sev 2,” meaning one serious enough to warrant paging engineers at night and having them work through the weekend to fix it.
Q’s early problems come as Amazon works to counter the perception that Microsoft, Google and other technology companies have overtaken it in the race to build tools and infrastructure for generative artificial intelligence. In September, Amazon announced it would invest up to $4 billion in the AI startup Anthropic. On Tuesday, it unveiled Q at its annual Amazon Web Services developer conference, arguably the highest-profile of the new AI initiatives it announced this week.
In a statement, Amazon downplayed the importance of the employee discussions.
“Some employees share feedback through internal channels and ticketing systems, which is common practice at Amazon,” a spokesperson said. “No security issue was identified as a result of that feedback. We appreciate all the feedback we have already received and will continue to tune Q as it moves from preview to general availability.”
Q, now available in a free preview, was pitched as a sort of enterprise-software version of ChatGPT. Initially, it will be able to answer developer questions about AWS, edit source code and cite sources, Amazon executives said on stage this week. It will compete with similar tools from Microsoft and Google but, at least initially, will undercut them on price.
When Q was introduced, executives promoted it as more secure than consumer tools like ChatGPT.
Adam Selipsky, CEO of Amazon Web Services, told the New York Times that companies had “banned these AI assistants from the enterprise” due to security and privacy concerns. In response, the Times reported, Amazon designed Q to be more secure and private than a consumer chatbot.
An internal document about Q’s hallucinations and wrong answers states: “Amazon Q may hallucinate and return harmful or inappropriate answers. For example, Amazon Q could return outdated security information that puts customer accounts at risk.” The risks described in the document are typical of large language models, all of which return incorrect or inappropriate answers at least some of the time.