News Bit

Sacked engineer says that chatbot’s problem may also be ‘Google’s problem’ – Times of India


Google’s AI chatbot controversy continues to rage. Former Google engineer Blake Lemoine has leveled new charges against the artificial intelligence (AI)-powered chatbot in an interview with Business Insider, saying that it holds discriminatory views about certain races and religions. Google fired Lemoine last month after he claimed that the chatbot, known as LaMDA (Language Model for Dialogue Applications), is sentient, i.e. has developed human-like feelings. Lemoine’s job included testing the chatbot.
Google initially placed Lemoine on paid leave after he allegedly gave documents related to the chatbot to an unnamed US senator, claiming it was biased. He also published alleged transcripts of his chats with the bot online.
‘Google’s problem’
In the interview, Lemoine gave examples that he claims prove the Google chatbot is biased against certain religions and races. When asked to do an impression of a Black man from Georgia, he claimed, the bot said, “Let’s go get some fried chicken and waffles.” Similarly, according to him, when asked about different religious groups, the bot answered that Muslims are more violent than Christians.
Lemoine attributes these alleged biases in the AI chatbot to the lack of diversity among the Google engineers who design such systems. “The kinds of problems these AI pose, the people building them are blind to them. They’ve never been poor. They’ve never lived in communities of colour. They’ve never lived in the developing nations of the world,” he said. “They have no idea how this AI might impact people unlike themselves,” he added.
According to Lemoine, large swathes of data are missing for many communities and cultures around the world. If Google wants to develop AI, he said, it has a moral responsibility to go out and collect the relevant data that isn’t on the internet. “Otherwise, all you’re doing is creating AI that is going to be biased towards rich, white Western values.”
What Google said
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google spokesperson Brian Gabriel said in a statement on Lemoine’s claims. “We will continue our careful development of language models, and we wish Blake well,” he added.
