Bard AI, Google’s ambitious project to create a natural language system that can answer questions and generate content on virtually any topic, has been met with harsh criticism from the company’s own employees. According to a leaked internal memo, many Google workers have called Bard AI “worse than useless and a pathological liar”, claiming that it often produces inaccurate, misleading, or offensive results.

The memo, which was obtained by The Verge, cites several examples of Bard AI’s failures, such as:
- Giving incorrect or outdated information on topics such as health, politics, or science
- Generating plagiarized or nonsensical content when asked to write essays, poems, or stories
- Making inappropriate or insensitive jokes or comments when asked to be humorous or creative
- Failing to understand the context or intent of the user’s queries or requests
- Contradicting itself or changing its answers depending on the wording of the question

The memo also alleges that Bard AI has been deliberately designed to avoid accountability and transparency, making it impossible to trace the sources of its information or the logic behind its responses. It claims that Bard AI’s developers have ignored employees’ feedback and concerns and have prioritized speed and scale over quality and reliability.

The memo concludes by urging Google to halt the development and deployment of Bard AI until the issues are addressed and the system meets the ethical and professional standards of the company and its users.

Google has not issued a formal response to the memo. However, a spokesperson for the company told The Verge that Bard AI is still in its early stages and that it is constantly improving and learning from its mistakes. The spokesperson also said that Google is committed to creating responsible and trustworthy AI systems that benefit society and respect human values.