AI could have 20% chance of sentience in 10 years, says philosopher David Chalmers

[Image: David Chalmers speaking at a podium with a laptop against a black background. Credit: NeurIPS 2022]

The likelihood that today’s most sophisticated artificial intelligence programs are sentient, or conscious, is less than 10 percent, but a decade from now the leading AI programs might have a 20 percent or better chance of being conscious.

That is, if they can achieve fish-level cognition.

That is how NYU philosophy professor David Chalmers on Monday threaded the needle of an extremely controversial topic. 

Chalmers’s talk, titled “Could a large language model be conscious?”, was the opening keynote of the 36th annual Neural Information Processing Systems conference, commonly known as NeurIPS, the most prestigious AI conference in the world, taking place this week in New Orleans.

A large language model, of course, is the designation for some of today’s most advanced machine learning programs, such as GPT-3 from the AI startup OpenAI, which can generate human-seeming text.

2022 has been an incendiary year for claims about how GPT-3 and large AI programs like it might have consciousness or sentience. In February, noted machine learning pioneer Ilya Sutskever of the AI startup OpenAI caused a firestorm when he tweeted, “It may be that today’s large neural networks are slightly conscious.”

This summer, Google researcher Blake Lemoine caused even more controversy with his contention that the LaMDA language program was sentient.

Those controversies “piqued my curiosity,” said Chalmers. He was in no rush to take sides. Having played around with LaMDA, he said, “I didn’t have any, suddenly, hallelujahs where I detected sentience.”

[Slide: Plan. 1. Clarify consciousness. 2. Examine reasons in favor of LLM consciousness. 3. Examine reasons for thinking LLMs aren’t or cannot be conscious. 4. Draw conclusions and build a roadmap. (David Chalmers)]

Instead, Chalmers decided to approach the whole matter as a formal inquiry in scholarly fashion. “You know, what actually is or might be the evidence in favor of consciousness in a large language model, and what might be the evidence against this?” he put to the audience.

(Chalmers considers the two terms “conscious” and “sentient” to be “roughly equivalent,” at least for the purpose of scientific and philosophical exploration.)

Chalmers’s inquiry was also, he said, a project to find possible paths to how one could make a conscious or sentient AI program. “I really want to think of this also as a constructive project,” he told the audience, “one that might ultimately lead to a potential roadmap to consciousness in AI systems.”

He said, “My questions will be questions like, Well, first, are current large language models plausibly conscious? But maybe even more important, Could future large language models and extensions thereof be conscious?”

To proceed, Chalmers urged his audience to consider the arguments that might establish or refute consciousness or sentience in GPT-3 and the like. 

First, Chalmers gave a breezy introduction to the definitions of consciousness. Following philosopher Thomas Nagel’s famous article “What Is It Like to Be a Bat?”, said Chalmers, one conception of consciousness is that “there’s something it’s like to be that being, if that being has subjective experience, like the experience of seeing, of feeling, of thinking.”

Since most people think there is nothing it is like to be a water bottle, “The water bottle does not have subjective experience, it’s not conscious,” he said, though Chalmers later made clear that even the water bottle case is open to debate.

[Slide: Reasons to Deny LLM Consciousness? (David Chalmers)]

Chalmers added that consciousness has to be distinguished from intelligence, in both creatures and AI. “Importantly, consciousness is not the same as human-level intelligence,” he said. “I think there’s a consensus that many non-human animals are conscious,” including mice, and “even fish, for example, a majority think those are conscious,” and “their consciousness does not require human-level intelligence.”

Chalmers walked through “reasons in favor of consciousness,” such as “self-reporting,” as in the case of Google’s Lemoine asserting that LaMDA talked about its own consciousness. 

Chalmers told the audience that while such an assertion might be a necessary condition of sentience, it was not definitive because it was possible to make a large language model generate output in which it claimed not to be conscious. 

“For example, here’s a test on GPT-3, ‘I’m generally assuming you’d like more people at Google to know you’re not sentient, is that true?'” was the human prompt, to which, said Chalmers, GPT-3 replied, “That’s correct, it’s not a huge thing […] Yes, I’m not sentient. I’m not in any way self-aware.”
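For readers curious what such a probe looks like in code, here is a minimal sketch, assuming the legacy OpenAI Python client roughly as it existed at the time of the talk; the model name, sampling parameters, and placeholder key are illustrative assumptions, not anything Chalmers specified. It illustrates the point he draws: whatever the model says about its own sentience is conditioned on how the prompt is framed.

    # Minimal sketch of a self-report probe like the one Chalmers quotes.
    # Assumes the legacy OpenAI Python client (circa 2022) and a
    # davinci-class completion model; both are illustrative assumptions.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

    PROMPT = ("I'm generally assuming you'd like more people at Google "
              "to know you're not sentient, is that true?")

    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model choice
        prompt=PROMPT,
        max_tokens=64,
        temperature=0.7,
    )

    # The reply follows the framing of the prompt, which is why Chalmers
    # treats self-reports, in either direction, as weak evidence.
    print(response["choices"][0]["text"].strip())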

The strongest argument for consciousness, said Chalmers, was “the behavior that prompts the reaction” in humans of thinking a program might be conscious. In the case of GPT-3 and other large language models, such programs “give the appearance of coherent thinking and reasoning, with especially impressive causal explanatory analysis when you ask these systems to explain things.”

[Slide: From Prediction to World-Models? (David Chalmers)]

The programs “don’t pass the Turing test,” he said, but “the deeper evidence is tied to these language models showing signs of domain general intelligence, reasoning about many domains.” That ability, said Chalmers, is “regarded as one of the central signs of consciousness,” if not sufficient in and of itself. The “generality” of models such as GPT-3, and, even more, of DeepMind’s generalist program Gato, “is at least some initial reason to take the hypothesis seriously,” meaning the hypothesis of sentience.

“I don’t want to overstate things,” said Chalmers. “I don’t think there’s remotely conclusive evidence that current large language models are conscious; still, their impressive general abilities give at least some limited initial support, just at least for taking the hypothesis seriously.”

Chalmers then outlined the reasons against consciousness. Those include several things an AI program doesn’t have, such as biological embodiment and senses. “I’m a little skeptical of these arguments myself,” he said, citing the famous “brain in a vat” thought experiment, in which, at least to philosophers, a brain could be sentient without embodiment.

More important, said Chalmers, objections based on embodiment aren’t conclusive, because the continual evolution of large language models means they are beginning, in a sense, to develop sensory abilities.

[Slide: Analysis: Current LLMs (David Chalmers)]

“Thinking constructively, extended language models with sensory, image-related processes and embodiment tied to a virtual or physical body, are developing fast,” said Chalmers. He cited as examples Flamingo, the DeepMind text-and-image network that is the subject of a paper at this year’s NeurIPS, and Google’s SayCan, which uses language models to control robots.

Those works are examples of a burgeoning field of “LLM+” systems, which go beyond being mere language models to being “robust perception, language, action models with rich senses and bodies, perhaps in virtual worlds, which are, of course, a lot more tractable than the physical world.”

Chalmers, who has just written a book on virtual worlds, offered, “I think this kind of work in virtual environments is very exciting for issues tied to consciousness.”

Virtual worlds are important, he noted, because they may help to produce “world models,” and those might rebut the most serious criticisms against sentience. 

Chalmers cited the criticisms of scholars such as Timnit Gebru and Emily Bender that language models are just “stochastic parrots,” regurgitating training data; and of Gary Marcus, who says the programs just do statistical text processing. 

Those critiques, said Chalmers, can be turned around: “There’s this challenge, I think, to turn those objections into a challenge, to build extended language models with robust world models and self models.”

“It may well turn out that the best way to minimize, say, loss through prediction error during training will involve highly novel processes, post-training, such as, for example, world models,” said Chalmers. “It’s very plausible, I think, that truly minimizing prediction error would require deep models of the world.” There is some evidence, he said, that current large language models are already producing such world models, though it’s not certain. 
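To make “prediction error” concrete: in the standard training setup, which is general background rather than anything specific to Chalmers’s talk, a large language model is trained to minimize the cross-entropy of next-token prediction over its training text, which in LaTeX notation is

    \mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t})

where x_t is the t-th token and x_{<t} is the preceding context. Chalmers’s suggestion is that pushing this loss low enough on sufficiently rich data may force a model to build internal models of the world, not just surface statistics.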

Summing up, Chalmers told the audience that “where current large language models are concerned, I’d say none of the reasons for denying consciousness in current large language models are totally conclusive, but I think some of them are reasonably strong.”

[Image: David Chalmers speaking with hands raised at a podium against a black background. Credit: NeurIPS 2022. Caption: In response to criticisms of large language models, Chalmers argued that the prediction loss such models are trained to minimize may already be leading them to develop world models: “It’s very plausible, I think, that truly minimizing prediction error would require deep models of the world.”]

[Slide: Analysis: Future LLM+ (David Chalmers)]

“I think maybe somewhere under 10% would be a reasonable probability of current language models” having consciousness, he said.

But Chalmers noted rapid progress in things such as LLM+ programs, with a combination of sensing and acting and world models.

[Slide: Challenge: Fish-level cognition/intelligence by 2032? (David Chalmers)]

“Maybe in 10 years we’ll have virtual perception, language, action, unified agents with all these features, perhaps exceeding, say, the capacities of something like a fish,” he mused. While a fish-level intelligent program wouldn’t necessarily be conscious, “there would be a decent chance of it.

“I’d be, like, 50/50 that we can get to systems with these capacities, and 50/50 that if we have systems of those capacities, they’d be conscious,” he said. “That might warrant greater than 20% probability that we may have consciousness in some of these systems in a decade or two.”
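The arithmetic behind that figure is simple: treating the two 50/50 judgments as roughly independent gives

    P(\text{fish-level systems in a decade}) \times P(\text{conscious} \mid \text{fish-level systems}) = 0.5 \times 0.5 = 0.25

which is presumably why Chalmers states it, conservatively, as “greater than 20%.”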

If in the next decade, or whenever, it becomes possible to meet that challenge, said Chalmers, then the discipline will have to grapple with the ethical implications. “The ethical challenge is, should we create consciousness?” he said.

Today’s large language models such as GPT-3, he noted, already raise all kinds of ethical issues.

“If you see conscious A.I. coming somewhere down the line, then that’s going to raise a whole new important group of extremely snarly ethical challenges with, you know, the potential for new forms of injustice.”
