
We will see a completely new type of computer, says AI pioneer Geoff Hinton

Geoffrey Hinton with earbuds in front of a background of bookshelves

Conventional digital computers, by prioritizing reliability, have missed out, said Turing Award winner Geoffrey Hinton, on “all sorts of variable, stochastic, flakey, analog, unreliable properties of the hardware, which might be very useful to us.”

NeurIPS 2022

Machine-learning forms of artificial intelligence are going to produce a revolution in computer systems, a new kind of hardware-software union that can put AI in your toaster, according to AI pioneer Geoffrey Hinton.

Hinton, offering the closing keynote Thursday at this year’s Neural Information Processing Systems conference, NeurIPS, in New Orleans, said that the machine learning research community “has been slow to realize the implications of deep learning for how computers are built.”

He continued, “What I think is that we’re going to see a completely different type of computer, not for a few years, but there’s every reason for investigating this completely different type of computer.”

All digital computers to date have been built to be “immortal,” where the hardware is engineered to be reliable so that the same software runs anywhere. “We can run the same programs on different physical hardware … the knowledge is immortal.”

Also: AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers

Slide: A new type of computer

Geoffrey Hinton

That requirement means digital computers have missed out, Hinton said, on “all sorts of variable, stochastic, flakey, analog, unreliable properties of the hardware, which might be very useful to us.” Those things would be too unreliable to let “two different bits of hardware behave exactly the same way at the level of the instructions.”

Future computer systems, said Hinton, will take a different approach: they will be “neuromorphic,” and they will be “mortal,” meaning that every computer will be a close bond between the software that represents neural nets and hardware that is messy, in the sense of having analog rather than digital elements, which can incorporate uncertainty and develop over time.

“Now, the alternative to that, which computer scientists really don’t like because it’s attacking one of their foundational principles, is to say we’re going to give up on the separation of hardware and software,” explained Hinton. 

Also: LeCun, Hinton, Bengio: AI conspirators awarded prestigious Turing prize

“We’re going to do what I call mortal computation, where the knowledge that the system has learned and the hardware, are inseparable.”

These mortal computers could be “grown,” he said, eliminating the need for expensive chip fabrication plants.

“If we do that, we can use very low-power analog computation; you can have trillion-way parallelism using things like memristors for the weights,” he said, referring to a decades-old kind of experimental, non-linear circuit element whose resistance can be used to store a weight.

“And also you could grow hardware without knowing the precise quality of the exact behavior of different bits of the hardware.”
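
To make that concrete, here is a toy sketch of the idea, not code from the talk: each physical device realizes the nominal weights with its own fixed fabrication-time variation plus a little read noise, so the same learned parameters behave slightly differently on every chip, and whatever a learning procedure tunes to one chip's quirks dies with that chip. The variation and noise figures below are arbitrary, illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights the software "intends" -- the portable part in today's digital world.
nominal_W = rng.normal(0.0, 1.0, size=(4, 4))

def fabricate_device(nominal, variation=0.05):
    # Each physical device bakes in its own random deviation from the
    # nominal weights at "fabrication" time and keeps it forever.
    return nominal * (1.0 + variation * rng.normal(size=nominal.shape))

def analog_matvec(device_W, x, read_noise=0.01):
    # An analog multiply-accumulate also adds a little noise on every read.
    return device_W @ x + read_noise * rng.normal(size=device_W.shape[0])

x = rng.normal(size=4)
chip_a = fabricate_device(nominal_W)
chip_b = fabricate_device(nominal_W)

# The two chips disagree slightly on the same input: knowledge tuned to
# chip_a's quirks is not directly portable to chip_b -- it is "mortal".
print(analog_matvec(chip_a, x))
print(analog_matvec(chip_b, x))
```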

Also: Deep-learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws

The new mortal computers won’t replace traditional digital computers, Hinton told the NeurIPS crowd. “It won’t be the computer that is in charge of your bank account and knows exactly how much money you’ve got,” said Hinton.

“It’ll be used for something else: It’ll be used for putting something like GPT-3 in your toaster for one dollar, so, running on a few watts, you can have a conversation with your toaster.”

Hinton above a slide about Mortal computation

NeurIPS 2022

Hinton was asked to give the talk at the conference in recognition of his paper from a decade ago, “ImageNet Classification with Deep Convolutional Neural Networks,” written with his grad students Alex Krizhevsky and Ilya Sutskever. The paper was awarded the conference’s “Test of Time” award for a “huge impact” on the field. The work, published in 2012, was the first time a convolutional neural network dramatically outperformed competing approaches in the ImageNet image recognition competition, and it was the event that set in motion the current era of AI.

Hinton, who is a recipient of the ACM Turing Award for his achievements in computer science, the field’s equivalent of the Nobel Prize, formed the Deep Learning Conspiracy with his fellow Turing recipients, Meta’s Yann LeCun and Yoshua Bengio of Montreal’s MILA institute for AI, the group that resuscitated the moribund field of machine learning.

Also: AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun

In that sense, Hinton is AI royalty.

In his invited talk, Hinton spent most of his time talking about a new approach to neural networks, called a forward-forward network, which does away with the technique of backpropagation used in almost all neural networks. He offered that by removing back-prop, forward-forward nets might more plausibly approximate what happens in the brain in real life. 

A draft paper of the forward-forward work is posted on Hinton’s homepage (PDF) at the University of Toronto, where he is an emeritus professor. 

The forward-forward approach might be well-suited to the mortal computation hardware, said Hinton.

“Now, if anything like that is to come to pass, we’ve gotta have a learning procedure that will run in a particular piece of hardware, and we learn to make use of the specific properties of that particular piece of hardware without knowing what all those properties are,” explained Hinton. “But I think the forward-forward algorithm is a promising candidate for what that little procedure might be.”
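
In rough outline, the forward-forward idea as Hinton has described it is to replace backpropagation’s forward-then-backward pass with two forward passes: one on real, “positive” data and one on fabricated, “negative” data, with each layer trained purely locally so that a “goodness” score (such as the sum of its squared activations) is high on positive inputs and low on negative ones. The sketch below is a minimal toy illustration of that scheme in NumPy; the layer sizes, learning rate, threshold, and the way negative data is fabricated are illustrative assumptions, not details from Hinton’s draft paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class FFLayer:
    """One layer trained with a local, forward-only objective (no backprop
    through the network). 'Goodness' is the sum of squared activations."""

    def __init__(self, n_in, n_out, lr=0.03, threshold=2.0):
        self.W = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.lr = lr
        self.threshold = threshold

    def _normalize(self, x):
        # Pass only the direction of the previous layer's activity forward,
        # so a layer cannot simply reuse the previous layer's goodness.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normalize(x) @ self.W)  # ReLU

    def train_step(self, x_pos, x_neg):
        # Local gradient step: push goodness above the threshold on positive
        # data (sign = +1) and below it on negative data (sign = -1).
        for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
            xn = self._normalize(x)
            h = np.maximum(0.0, xn @ self.W)
            goodness = (h ** 2).sum(axis=1)
            p = 1.0 / (1.0 + np.exp(-sign * (goodness - self.threshold)))
            grad_h = (sign * (1.0 - p))[:, None] * 2.0 * h
            self.W += self.lr * xn.T @ grad_h / len(x)

# Toy data: "positive" examples are structured, "negative" examples are the
# same rows with features shuffled and flipped, so layers must learn structure.
x_pos = rng.normal(0.0, 1.0, size=(64, 16)) + 2.0
x_neg = rng.permuted(x_pos, axis=1) * -1.0

layers = [FFLayer(16, 32), FFLayer(32, 32)]
for _ in range(200):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        layer.train_step(h_pos, h_neg)          # each layer learns locally
        h_pos, h_neg = layer.forward(h_pos), layer.forward(h_neg)

# After training, goodness at the top layer separates positive from negative.
def total_goodness(x):
    for layer in layers:
        x = layer.forward(x)
    return (x ** 2).sum(axis=1).mean()

print(total_goodness(x_pos), total_goodness(x_neg))
```

Because every update uses only a layer’s own inputs and outputs, nothing needs to be known about the layers above or below it, which is what makes a forward-only procedure attractive for hardware whose exact properties are not known in advance.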

Also: The new Turing test: Are you human?

One obstacle to building the new analog mortal computers, he said, is that people are attached to the reliability of running a piece of software on millions of devices. 

“You’d replace that by each of those cell phones would have to start off as a baby cell phone, and it would have to learn how to be a cell phone,” he suggested. “And that’s very painful.”

Even the engineers most adept at the technology involved will be slow to relinquish the paradigm of perfect, identical immortal computers for fear of uncertainty.

“Among the people who are interested in analog computation, there are still very few who are willing to give up on immortality,” he said. That, he said, is because of their attachment to consistency and predictability. “If you want your analog hardware to do the same thing each time … you’ve got a real problem with all these stray electric things and stuff.”
