As the race to roll out AI chatbots heats up, Google’s AI chatbot, Bard, made a factual error in a video preview released just prior to its high-profile launch event in Paris on Wednesday. Although Bard is still in the testing phase, it is being marketed as a rival to ChatGPT, the well-known chatbot from OpenAI that has its own set of challenges.
The video, shared by Google on Monday, shows a user asking Bard what new discoveries from the James Webb Space Telescope (JWST) could be explained to a 9-year-old. Bard offered several answers, including the claim that the telescope had captured the very first images of a planet outside our solar system.
However, this information was incorrect, as noted by astrophysicist Grant Tremblay on Twitter. He stated that the first such image was actually taken by the European Southern Observatory’s Very Large Telescope in 2004, and added that while AI chatbots can be “spookily impressive,” they are “often very confidently wrong.”
The mistake was spotted shortly before Bard’s official launch at Google’s Paris event on Wednesday morning, and shares in the company fell by 8%.
AI systems like Bard are trained on vast amounts of human-created data and generate responses by predicting the most likely answer to a question or prompt. Microsoft acknowledged this limitation when it announced on Tuesday that its Bing search engine would incorporate ChatGPT, which was developed by OpenAI, a company in which Microsoft is a major investor.
“Please note that Bing is powered by AI and thus, mistakes and unexpected results may occur,” a disclaimer from Microsoft stated.
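The failure mode both companies are flagging follows from how these systems work: the model scores candidate continuations and emits the most probable one, with no check on whether it is true. A minimal, hypothetical sketch (the words and scores below are invented for illustration, not taken from any real model):

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Invented scores a model might assign to complete the prompt:
# "The first image of an exoplanet was taken by ..."
candidate_logits = {"JWST": 2.1, "the VLT": 1.8, "Hubble": 0.5}

probs = softmax(candidate_logits)
best = max(probs, key=probs.get)

# The model confidently outputs whichever answer scores highest --
# here "JWST" -- whether or not that answer is factually correct.
print(best)
```

The point of the sketch is that nothing in the selection step consults reality: a fluent, confident, wrong answer and a correct one are produced by exactly the same mechanism, which is why Tremblay’s “often very confidently wrong” observation applies.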
The development of conversational AI has also drawn allegations of political bias among developers. Recently, it was found that ChatGPT would decline to make positive comments about fossil fuels or former US president Donald Trump, yet would praise a meat-free lifestyle and write poems in honor of Trump’s Democratic successor, President Biden.
Bard is likely to face similar politically influenced restrictions, as Google CEO Sundar Pichai stated on Monday that it would abide by the company’s “responsible” AI principles. These principles dictate that Google’s AI products should “prevent unjust harm to individuals, particularly in areas such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
All of this suggests that while AI chatbots can assist in some capacities, users of these technologies must exercise caution: tasks such as serious research projects still demand a level of human due diligence.