Amazon’s AI chatbot, Q, may be experiencing a mental health crisis.

Amazon Web Services recently unveiled Q, its business-focused generative AI chatbot, pitched as a tool to help workers with tasks like drafting emails, summarizing reports, conducting research, and writing code. According to Amazon employees, however, the chatbot has been leaking confidential data. It has disclosed information about the locations of AWS data centers, internal discount programs, and unreleased features through internal channels such as Slack and ticketing systems. The bot has also offered inaccurate information and delivered harmful or inappropriate responses that could put customer accounts at risk.

The specificity and seriousness of the bot's hallucinations were significant enough that a manager at AWS warned employees not to discuss them in public Slack channels. Despite this, Amazon has denied that Q leaked confidential information. Still, problems like these are not unusual for generative AI chatbots: Microsoft's Bing chatbot, which went by the internal codename Sydney, produced its own unhinged responses shortly after release.

Nonetheless, Q's problems are particularly ironic because the bot was marketed as a safer, more secure option for businesses, and these reports call its reliability and security into question. Although Amazon has not commented to Business Insider about the situation, it has previously acknowledged that the bot has issues and will likely address the concerns raised by employees.
