
Meet the Humans Trying to Keep Us Safe From AI


A year ago, the idea of holding a meaningful conversation with a computer was the stuff of science fiction. But since OpenAI’s ChatGPT launched last November, life has started to feel more like a techno-thriller with a fast-moving plot. Chatbots and other generative AI tools are beginning to profoundly change how people live and work. But whether this plot turns out to be uplifting or dystopian will depend on who helps write it.

Thankfully, just as artificial intelligence is evolving, so is the cast of people who are building and studying it. This is a more diverse crowd of leaders, researchers, entrepreneurs, and activists than those who laid the foundations of ChatGPT. Although the AI community remains overwhelmingly male, in recent years some researchers and companies have pushed to make it more welcoming to women and other underrepresented groups. And the field now includes many people concerned with more than just making algorithms or making money, thanks to a movement—led largely by women—that considers the ethical and societal implications of the technology. Here are some of the humans shaping this accelerating storyline. —Will Knight

About the Art

“I wanted to use generative AI to capture the potential and unease felt as we explore our relationship with this new technology,” says artist Sam Cannon, who worked alongside four photographers to enhance portraits with AI-crafted backgrounds. “It felt like a conversation—me feeding images and ideas to the AI, and the AI offering its own in return.”


Rumman Chowdhury

Photograph: Cheril Sanchez; AI art by Sam Cannon

Rumman Chowdhury led Twitter’s ethical AI research until Elon Musk acquired the company and laid off her team. She is the cofounder of Humane Intelligence, a nonprofit that uses crowdsourcing to reveal vulnerabilities in AI systems, designing contests that challenge hackers to induce bad behavior in algorithms. Its first event, scheduled for this summer with support from the White House, will test generative AI systems from companies including Google and OpenAI. Chowdhury says large-scale, public testing is needed because of AI systems’ wide-ranging repercussions: “If the implications of this will affect society writ large, then aren’t the best experts the people in society writ large?” —Khari Johnson


Sarah Bird

Photograph: Annie Marie Musselman; AI art by Sam Cannon

Sarah Bird’s job at Microsoft is to keep the generative AI that the company is adding to its office apps and other products from going off the rails. As she has watched text generators like the one behind the Bing chatbot become more capable and useful, she has also seen them get better at spewing biased content and harmful code. Her team works to contain that dark side of the technology. AI could change many lives for the better, Bird says, but “none of that is possible if people are worried about the technology producing stereotyped outputs.” —K.J.


Yejin Choi

Photograph: Annie Marie Musselman; AI art by Sam Cannon

Yejin Choi, a professor in the School of Computer Science & Engineering at the University of Washington, is developing an open source model called Delphi, designed to have a sense of right and wrong. She’s interested in how humans perceive Delphi’s moral pronouncements. Choi wants systems as capable as those from OpenAI and Google that don’t require huge resources. “The current focus on the scale is very unhealthy for a variety of reasons,” she says. “It’s a total concentration of power, just too expensive, and unlikely to be the only way.” —W.K.


Margaret Mitchell

Photograph: Annie Marie Musselman; AI art by Sam Cannon

Margaret Mitchell founded Google’s Ethical AI research team in 2017. She was fired four years later after a dispute with executives over a paper she coauthored. It warned that large language models—the tech behind ChatGPT—can reinforce stereotypes and cause other ills. Mitchell is now ethics chief at Hugging Face, a startup developing open source AI software for programmers. She works to ensure that the company’s releases don’t spring any nasty surprises and encourages the field to put people before algorithms. Generative models can be helpful, she says, but they may also be undermining people’s sense of truth: “We risk losing touch with the facts of history.” —K.J.


Inioluwa Deborah Raji

Photograph: Aysia Stieb; AI art by Sam Cannon

When Inioluwa Deborah Raji started out in AI, she worked on a project that found bias in facial analysis algorithms: They were least accurate on women with dark skin. The findings led Amazon, IBM, and Microsoft to stop selling face-recognition technology. Now Raji is working with the Mozilla Foundation on open source tools that help people vet AI systems for flaws like bias and inaccuracy—including large language models. Raji says the tools can help communities harmed by AI challenge the claims of powerful tech companies. “People are actively denying the fact that harms happen,” she says, “so collecting evidence is integral to any kind of progress in this field.” —K.J.


Daniela Amodei

Photograph: Aysia Stieb; AI art by Sam Cannon

Daniela Amodei previously worked on AI policy at OpenAI, helping to lay the groundwork for ChatGPT. But in 2021, she and several others left the company to start Anthropic, a public-benefit corporation charting its own approach to AI safety. The startup’s chatbot, Claude, has a “constitution” guiding its behavior, based on principles drawn from sources including the UN’s Universal Declaration of Human Rights. Amodei, Anthropic’s president and cofounder, says ideas like that will reduce misbehavior today and perhaps help constrain more powerful AI systems of the future: “Thinking long-term about the potential impacts of this technology could be very important.” —W.K.


Lila Ibrahim

Photograph: Ayesha Kazim; AI art by Sam Cannon

Lila Ibrahim is chief operating officer at Google DeepMind, a research unit central to Google’s generative AI projects. She considers running one of the world’s most powerful AI labs less a job than a moral calling. Ibrahim joined DeepMind five years ago, after almost two decades at Intel, in hopes of helping AI evolve in a way that benefits society. One of her roles is to chair an internal review council that discusses how to widen the benefits of DeepMind’s projects and steer away from bad outcomes. “I thought if I could bring some of my experience and expertise to help birth this technology into the world in a more responsible way, then it was worth being here,” she says. —Morgan Meaker


This article appears in the Jul/Aug 2023 issue.

