AI leaders sign a statement openly acknowledging the dangers of AI
In March, an open letter spearheaded by tech industry experts called for a halt to the development of advanced AI models, out of fear that the technology could pose a “profound risk to society and humanity.”
This week, a statement cosigned by OpenAI CEO Sam Altman, Geoffrey Hinton, the “godfather” of AI, and others warns of the risk that AI could drive humanity to extinction. The statement’s preface encourages industry leaders to discuss AI’s most severe threats openly.
According to the statement, AI’s risk to humanity is so severe that it is comparable to global pandemics and nuclear war. Other cosigners include researchers from Google DeepMind; Kevin Scott, Microsoft’s chief technology officer; and internet security pioneer Bruce Schneier.
Today’s large language models (LLMs), popularized by Altman’s ChatGPT, have not yet achieved artificial general intelligence (AGI), the point at which an AI matches or surpasses human intelligence. However, industry leaders worry that LLMs could progress that far.
OpenAI, Google DeepMind, and Anthropic all hope to achieve AGI one day, but each company recognizes that reaching it could bring significant consequences.
Testifying before Congress earlier this month, Altman said his greatest fear is that AI “cause[s] significant harm to the world,” and that this harm could occur in multiple ways.
Just a few weeks earlier, Hinton had abruptly resigned from his position at Google, where he worked on neural networks, telling CNN that he is “just a scientist who suddenly realized that these things are getting smarter than us.”