
Google Search boss’ ChatGPT warning would’ve worked best before Bard’s big fumble


The popularity of OpenAI’s ChatGPT shocked Google and forced it to show off its own artificial intelligence (AI) product ahead of schedule. ChatGPT will not “kill” Google’s search dominance, but Google still rushed the debut of its ChatGPT rival, Bard, in response to the current craze. The chatbot promptly fumbled its first demo, triggering a massive sell-off that trimmed $100 billion from the company’s market capitalization.

We already explained why such a massive error is actually a significant step forward for the nascent technology of generative AI. The bots aren’t ready to deliver dependable answers, and they can be manipulated. You might overlook the mistakes ChatGPT is prone to making, but when Google’s Bard slips up, the world pays attention.

It turns out that Google already had a warning about the convincing yet fictitious results that ChatGPT and Bard can offer. They’re AI “hallucinations,” according to Google Search chief Prabhakar Raghavan. Of course, his warning might have carried more weight had it come before Bard produced hallucinations of its own.

Google’s first rumored reaction to ChatGPT happened during an all-hands meeting with employees. The company’s top execs said that Google couldn’t deploy ChatGPT-like products in Search as the chatbots can’t match the accuracy of traditional search results. Any errors would impact Google’s reputation.

OpenAI never claimed perfect accuracy for ChatGPT. But the AI firm doesn’t have a tight grip on the online search market like Google does.

Just as Google’s first reaction leaked, the world got wind of Google’s “code red” crisis. CEO Sundar Pichai reportedly prioritized work on ChatGPT-like services. Word of Bard got out before Google officially unveiled the product last week. And then Bard made an innocent mistake that cost Google $100 billion in market cap.

In this photo illustration, the ChatGPT (OpenAI) logo is displayed on a smartphone screen. Image source: Rafael Henrique/SOPA Images/LightRocket via Getty Images

The market’s impulsive reaction shows that the general public hasn’t yet grasped a key point: generative AI is great, but it’s a work in progress. Bard’s mistake is neither better nor worse than the results we see all the time from ChatGPT. Instead, it should be an eye-opener about the real dangers of trusting chatbots that are still unfinished. Until the likes of Google, OpenAI, and Microsoft can guarantee that their chatbots deliver reliable answers, you’ll have to question every response you get.

That’s why Raghavan’s comments are necessary, even if they arrive somewhat late relative to the Bard blunder. He told the German newspaper Welt am Sonntag that AI can deliver answers very convincingly even when they are flat-out wrong. This is what he refers to as AI hallucination.

“This kind of artificial intelligence that we are currently talking about can sometimes lead to something we call hallucination,” the Google Search exec said. “This is then expressed in such a way that a machine provides a convincing but completely fictitious answer.”

Put differently, Google’s Bard hallucinated at the worst possible time last week during its first demo.

Yet Google still felt the “urgency” to reveal Bard to the public. “But we also feel the great responsibility,” Raghavan added. “We definitely don’t want to mislead the public. We are considering how to integrate these possibilities into our search functions, especially for the questions to which there is not only a single answer.”

How long will it take for Google Bard and ChatGPT to get rid of hallucinations? We might be in for a long wait for that answer.
