
Google, Meta, OpenAI among 7 companies committing to responsible AI development – Times of India


The leading companies developing artificial intelligence (AI) tools and products have voluntarily agreed to a series of commitments to protect users from risks posed by the technology. These companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

According to a note by the White House, the Biden-Harris Administration has “secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.”
It also said that the companies have chosen to undertake the commitments immediately, and that the commitments underscore three principles fundamental to developing responsible AI: safety, security, and trust.

What are the commitments?
The commitments by the tech giants are broadly divided under these three principles. First, the companies have committed to internal and external security testing of their AI systems before release, carried out in part by independent experts. Second, they will share information on managing AI risks across the industry and with governments, civil society and academia.
These seven tech companies will also invest in cybersecurity and facilitate third-party discovery and reporting of vulnerabilities in their AI systems. They have also committed to developing and deploying advanced AI systems to help address societal challenges, and to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
Tech CEOs take on AI development
Google and OpenAI, among others, have already been promoting responsible development of the technology. While Google CEO Sundar Pichai has spoken about it extensively in public forums and interviews, OpenAI chief executive Sam Altman recently concluded a global tour in which he visited multiple countries, including India, to talk about the need for responsible AI.
In June this year, Apple CEO Tim Cook also opened up about the potential and dangers that AI poses to humanity. He said that large language models (LLMs) show “great promise” but also the potential for “things like bias, things like misinformation [and] maybe worse in some cases.”
Emphasising the need for regulation and guardrails, Cook said, “If you look down the road, then it’s so powerful that companies have to employ their own ethical decisions.”
