
Google and Microsoft partner with OpenAI to form AI safety watchdog group


Some of the biggest names in technology are coming together to help keep AI safe. Tech giants Google and Microsoft are partnering with OpenAI and Anthropic to ensure that AI technology progresses in a “safe and responsible” fashion. 

This partnership comes after a White House meeting last week with the same companies, plus Amazon and Meta, to discuss the secure and transparent development of future AI.

Also: Generative AI will soon go mainstream, say 9 out of 10 IT leaders

The watchdog group, called the Frontier Model Forum, noted that while some governments around the world are taking measures to keep AI safe, those measures aren’t enough on their own, leaving it to technology companies themselves to fill the gap.

In a blog post on Google’s site, the forum listed four main objectives:

  • Improving AI safety research to minimize future risks
  • Identifying best practices for responsible development and deployment of future models
  • Working with policymakers, academics, and others to share knowledge about AI safety risks
  • Supporting the development of AI uses in solving humanity’s greatest problems like climate change and cancer detection

An OpenAI representative added that while the forum would work with policymakers, it would not get involved in government lobbying.

Membership in the forum is open to any company that meets three criteria: it develops and deploys frontier models, demonstrates a strong commitment to frontier model safety, and is willing to participate in joint initiatives with other members. What’s a “frontier model”? The forum defines it as “any large-scale machine-learning models that go beyond current capabilities and have a vast range of abilities.”

Also: AI has the potential to automate 40% of the average work day

While AI has world-changing potential, the forum says, “appropriate guardrails are required to mitigate risks.” Anna Makanju, vice president of global affairs at OpenAI, issued a statement: “It’s vital that AI companies — especially those working on the most powerful models — align on common ground. This is urgent work. And this forum is well-positioned to act quickly to advance the state of AI safety.”  

Microsoft President Brad Smith echoed those sentiments, saying that it’s ultimately up to the companies creating AI technology to keep it safe and under human control.
