Quick News Bit

The Surgeon General’s Social Media Warning and A.I.’s Existential Risks

0

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.

kevin roose

Casey, last week on the show, we talked about the phenomenon of people listening to podcasts at very high speed. Because we’re talking about this New York Times audio app that just came out that allows you to go up to 3x.

casey newton

Right.

kevin roose

And that seemed insane to both of us. And I sort of jokingly said, if you listen to podcasts at three times speed, reach out to me. And I was expecting maybe like one person, maybe two people. I think it’s fair to say we got an avalanche of speed maxers.

casey newton

We have been bombarded. And it’s so confusing. The highest speed I’m comfortable with people listening to “Hard Fork” is 0.8x, and here’s why. There’s so much information in this show, OK. That if you’re not taking the time to let it absorb into your body, you’re not getting the full effect. So be kind to yourself, treat yourself. If the shows up as one hour, spend an hour and 10 minutes listening to it, OK. You’ll thank yourself.

kevin roose

You heard it here first. “Hard Fork,” the first podcast designed to be listened to very slowly.

casey newton

Very slowly.

kevin roose

Yeah.

casey newton

Yeah.

kevin roose

Should we put in a secret message for our 3x? Like a little slowed down like I’m Kevin Roose.

[MUSIC PLAYING]

I’m Kevin Roose. I’m a tech columnist at The New York Times.

casey newton

I’m Casey Newton from Platformer, and you’re listening to “Hard Fork. This week on the show. The surgeon general warns that social media may not be safe for kids. Plus, AI safety researcher Ajeya Cotra on the existential risks posed by AI and what we ought to do about them. And then, finally, it’s time to pass the hat. We’re once again playing at Hat GPT.

[MUSIC PLAYING]

kevin roose

So Casey, this week there was some big news about social media. In particular, the US Surgeon General Dr. Vivek Murthy issued an advisory about the risks of social media to young people. And it basically was kind of a call for action and a summary of what we know about the effects of social media use on young people. And I want to start this by asking, what do you know about the US Surgeon General?

casey newton

Well, he hates smoking and has my whole life. And most of what I’ve ever heard from the US Surgeon General has been whether I should smoke, and the answer is no.

kevin roose

Yeah. I mean, this is like one of two things that I know about the surgeon general. Is that he puts the Warning labels on cigarette packages. The other thing is that our current surgeon general looks exactly like Ezra Klein.

casey newton

And notice you’ve never seen both of them in the same place.

kevin roose

It’s true.

casey newton

Yeah.

kevin roose

But the US Surgeon General apparently part of his mandate is evaluating risks to public health.

casey newton

Yeah.

kevin roose

And this week, he put a big stake in the ground and declaring that social media has potentially big risks for public health. So here’s the big summary quote from this report. It says more research is needed to fully understand the impact of social media. However, the current body of evidence indicates that while social media may have benefits for some children and adolescents, there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents.

So let’s talk about this report because I think it brings up some really interesting and important issues. What did you make of it?

casey newton

Well, I thought it was really good. Like this is actually the kind of stuff I want our government to be doing. Is investigating stuff like this that the vast majority of teenagers are using. And I think a lot of us have had questions over the years about what are the effects that it’s having. Particularly for a subset of kids, this stuff can be quite dangerous.

That list would include adolescent girls, kids who have existing mental health issues. So if you’re a parent, you should be paying close attention. And if you’re a regulator, you should think about passing some regulation. So that was kind of, I think, the core takeaway, but there are a lot of details in there that are super interesting.

kevin roose

So, yeah. Let’s talk about the details. What stuck out to you most?

casey newton

So one thing that comes across is that a way that you can guess that someone is having a bad experience on social media is that they’re using it constantly. There seems to be a really strong connection between the number of hours a day that you’re using these networks and the state of your mental health.

They talk about some kids in here that are on these social networks more than three hours a day. And people who are using social networks that much are at a much higher risk of depression, of anxiety, and of not sleeping well enough. And so just from a practical perspective, if you are a parent and you notice your kid is using TikTok seven hours a day, that actually is a moment to pull your kid aside and say, hey, what’s going on?

kevin roose

Yeah, and I also found it really interesting that the report talked about various studies showing that certain groups have better or worse times in general on social media.

casey newton

Yes.

kevin roose

So one surprising thing to me, actually, was that some of the adolescents who seem to be getting a lot out of social media in a positive direction are actually adolescents from marginalized groups. So there are some studies that show that LGBT youth actually have their mental health and well-being supported by social media use. And then also, this body of research that found that 7 out of 10 adolescent girls of color reported encountering positive or identity-affirming content related to race across social media platforms.

So it is not the case that every adolescent across the board has worse mental health and worse health outcomes as a result of using social media. And in particular, it seems like some of the best uses of social media for adolescents are people who might be sort of marginalized or bullied in their offline lives finding spaces online to connect with similar types of people across similar interests and really find connection and support that way.

casey newton

Yeah, I mean, think about it. If you’re a straight white boy, let’s say, and you grow up, and you’re watching a Netflix and HBO, you’re seeing a lot of people who look like you, your experience is represented. That’s providing some sort of a support and entertainment and enjoyment for you. But if you’re a little gay kid or like a little girl of color, you’re seeing a lot less of that, but you turn to social media, and it’s a lot easier to find.

And that is a gift. And that is something really cool. And that’s why when states want to ban the stuff outright, I get really anxious because I think about those kids. And I think about myself as a teenager and how much I benefited from seeing other queer people on the internet. So, yeah, there is definitely a big bucket of kids who get benefits from this stuff. There’s a reason 95 percent of kids are using this.

kevin roose

Right. So there are a few different parts to this Surgeon General’s report. One of them is kind of like a literature review, like what does the research tell us about the links between social media and adolescents health? And another part at the end is sort of this list of recommendations, including calling for more research and actually calling for specific actions that the surgeon general wants tech platforms to take. Including age-appropriate safety standards, enforcing age restrictions, more transparency from the tech companies.

And it also gives some advice to parents about how to create boundaries with your kids around their social media use. How to model responsible social media behavior. And then how to work with other parents to create shared norms about social media use. So that’s the report.

And I’m curious. Like you mentioned in your column, that a lot of people at the platforms are skeptical of this report and of the data that it refers to. So what do people at the platforms believe about this report, and why are they maybe skeptical of some of what’s in it?

casey newton

So yeah, I mean, I’ve heard from folks both before and after I wrote that they just really reject the report. And there are a handful of reasons. One that they are clinging to is that the American Psychological Association put a report out this month. And among the things it says is, quote using social media is not inherently beneficial or harmful to young people. Adolescents lives online both reflect and impact their offline lives.

So to them, that’s kind of the synthesis that they believe in. But there’s more stuff too. A lot of the studies, including in the Surgeon General’s report, show a lot more correlation than causation. Causation has been harder to show. To the extent it has been shown, it tends to be relatively small amounts, relatively small studies.

They’re telling me that the surgeon general is a political job. We know that Joe Biden hates social networks. He wants to get rid of Section 230. He’s, sort of, not a friend of these companies, to begin with. And ultimately, they just kind of think this is a moral panic. That people are just nervous about the media of the moment, just like they were worried about TV and comic books before social media.

kevin roose

Right. I mean, I remember as a teen, the big thing in that period was video games.

casey newton

Yeah.

kevin roose

And violent video games.

casey newton

Totally.

kevin roose

And you know, Tipper Gore’s crusade. And I remember when Grand Theft Auto came out, the first one, and it was like mayhem. Parents were like, this is going to — our teenagers are going to be shooting down police helicopter, right. And it did, at the time as a teen, just seemed like, oh my god, you guys have no idea what is actually going on. And this is not some violent fantasies that were developing. This is a video game.

And it just felt, as a teen, like the adults in the room just didn’t actually get it and didn’t get what our lives were like. And so I can see some version of that being true here. That we are in a moment of like backlash to social media. And maybe we are overreaching in attempting to link all of the ills of modern life to the use of social media, especially for adolescents.

At the same time, one thing that makes me think that this is not a classic parental freak-out moral panic is that there clearly have been profound mental health challenges for adolescents in the last 15 years. I’m sure you’ve seen the charts of suicidal ideation and depression among adolescents just it zooms upward. Self-reports of depression and anxiety are way, way up among adolescents. It does seem really clear that something big is happening to affect the mental health of teens in America.

Like this is real research, and these are real studies, and I think we have to take them seriously. And so, I’m glad that the surgeon general is looking into this, even if the causal links between social media use and adolescent mental health are not super clear yet.

casey newton

Yeah, you know, I agree with you. I am also one who resists simplistic narratives. And I still don’t really believe that the teenage mental health crisis is as simple as people started downloading Instagram. I think there is just kind of more going on than that.

But at the same time, I think that the folks I talked to at social networks are ignoring something really profound. Which is I would guess that you personally probably could name dozens of people who have uninstalled one or more social apps from their phone because it made them feel bad at some point about the way they were using it. And I think you’re actually one of those people yourself. I have also uninstalled social apps from my phone because of the way they make me feel so have my friends and family.

And this is a subject that comes up all the time.

kevin roose

Constantly.

casey newton

And not because I’m a tech reporter and I’m bringing it up. People are constantly bringing up to me that they don’t like their relationship with these phones. And, so, to me, that’s where the argument that this is all a moral panic breaks down. Because guess what, in the 90s me and my 14-year-old buddies weren’t going around talking about how much we hated how much we were playing Mortal Kombat, OK.

We loved it.

kevin roose

Right.

casey newton

We couldn’t get enough.

kevin roose

I’m addicted to GoldenEye. I’m throwing my cartridge out.

casey newton

But the 14-year-olds today are absolutely saying, get Instagram off of my phone. I don’t like what it’s doing to me. And the folks I’m talking to at social networks just refuse to confront that.

kevin roose

Yeah.

casey newton

Here’s where I think gets tricky. For all that we have just said, I do not think that having an Instagram account and using it daily represents a material threat to the median 16-year-old, OK. I just don’t. I think most of them can use it. I think they’ll be fine. I think they’ll be times that they hate it, I think there’ll be times they really enjoy it. And I also think that there is some double-digit percentage chance, let’s call it, I don’t know, 10 to 15 percent chance that creating that Instagram account is going to lead to some significant amount of harm for you, right. Or that, in conjunction with other things going on in your life, this is going to be a piece of a problem in your life.

And this is the challenge that I think that we have. The states that are coming in, which we can talk about that are trying to pass laws to regulate the way that teenagers use social media are bringing in this absolutely ham-fisted one size fits all approach, just sort of saying, like in the case of Utah, you need your parent’s consent to use a social network when you are under 18, right. So if you are not an adult, you have to get permission to use Instagram.

Montana just passed a law to fine TikTok if it operates in the state. I think that is a little bit nuts. Because, again, I think the median 16-year-old using TikTok is going to be just fine. And yet, if you think that there is a material risk of harm to teenagers in the way that the surgeon general is talking about, then I think you have to do something.

kevin roose

So what is the solution here? If it’s not, these bans passed by the government and enforced at the state level. Like what do you think should be done to address adolescents and social media?

casey newton

Well, one, I do want the government to keep exploring solutions here. I think there’s probably more that can be done around age verification. This gets really tricky. There are some aspects in which this can be really bad. Can require the government to collect a lot more information about basically every person, right.

I don’t want to end up in a situation where like you have to submit your Social Security number to like Apple to download an app. At the same time, I think there’s probably stuff that can be done at the level of the operating system to figure out if somebody is 9 years old. Like I just think that we can probably figure that out in a way that doesn’t destroy everyone’s privacy, and that just might be a good place to start. The other place that I’ve been thinking about is what can parents do. You know I want your perspective here. You’re a parent, I’m not. I’ll tell you, though, that after I sort of said, like, listen, parents, you may want to set some harder boundaries around this stuff. You want to check in with your kids more about this stuff, and I heard back from parents telling me essentially you don’t actually know how hard this is, right.

Particularly once you had a teenager, they’re mobile. They’re in school. They’re hanging out with their friends. You cannot watch them every hour of the day. They’re often going to find ways to access these apps. They’re going to break the rules that you’ve set, and the horses just kind of get out of the barn.

So I would think about this as a risk as a parent in the same way I would think about letting my kid drive a car. Some people are going to throw their hands up driving in cars way more dangerous, I think, statistically than using a social network. But, like, your kids face all sorts of risks, right. And that’s like the terror of being a parent. Is that basically, almost anything can hurt them, but I don’t know that we have really put social networks in that category up until now.

We’ve had some doubts. We’ve wondered if it’s really great for us. What I feel like this Surgeon General’s report really brings us to is a place where we can say fairly definitively, at least for some subset of children, that yes, this stuff does pose real risks, and it’s worth talking about in your house. Then I think, by the way, a lot of parents have figured this out already. But if, for whatever reason, you’re not one of those parents, I think now is the time to start paying closer attention.

kevin roose

Totally. Yeah. I’m not in favor of these blanket bans. That seems like a really blunt instrument and something that is likely to backfire. But I do think that some combination of like regulation around enforcing age minimums. Maybe some regulation about notifying underage users like, how much time they’ve spend in the app. Or like nudging them to maybe go outside or something like that. Like maybe that makes sense.

But I think that the biggest piece of the puzzle here is really about parents and their relationship to their teenagers. And I know a lot of parents who are planning to or have already had the social media talk with their kids. The way that your parents might sit you down and talk about sex or talk about driving or talk about drug use. Like this seems like another one of those kind of sit-down talk opportunities.

We’re giving you your first smartphone. You’ve reached an age where we’re comfortable letting you have one. Your friends are probably on it already, and we trust you to view this in a way that is appropriate and safe. But like here are some things to think about.

casey newton

Don’t listen to podcasts at 3x speed.

It’s not good for you.

kevin roose

Or we will be reporting you to the government. Like just having that talk feels very important. And also, like, I do think that as much as I hated this as a kid like, some restrictions make sense on the parental level. Like my parents limited me to an hour of TV every day. Did you have a TV limit in your house?

casey newton

Not a hard and fast limit, but we were limited for the number of hours we could play video games. Particularly like, before high school, we were forbidden from watching music videos on MTV at all. So, yeah, I mean, there were definitely limits around that stuff. And I found it annoying, but also I didn’t care that much.

kevin roose

Right. I mean, I actually remembered this as I was reading the Surgeon General’s report that I came up with a system to defeat my parents one hour TV limit, which is that I would record episodes of a half-hour show. “Saved by the Bell” was my favorite show.

casey newton

Oh, the best.

kevin roose

And I found that if I recorded three half-hour episodes of saved by the bell and then fast forwarded through the commercials —

casey newton

Genius.

kevin roose

— I could fit almost three full episodes into one hour. So, as a result, there are many episodes of “Saved by the Bell” that I have seen like the first 23 minutes of and then have no idea how it ends.

casey newton

Just as a series of events where Zack Morris gets into a terrible scrape, and it seems like Screech might be able to fix it, but you’ll actually never know.

kevin roose

Yeah. I’ll never know. And so that was how I tried to evade my parents TV limits. I imagine that there are teenagers already out there finding ways around their parents limits. But I do think that building in features parental controls to social media apps that allow parents to not only like see what their kids are doing on social media but also to limit it in some way does make sense as much as the inner teenager that is still inside me rebels against that.

casey newton

You know what we should do, Kevin is we should actually ask teenagers what they about all this.

kevin roose

I would love that.

casey newton

Yeah.

kevin roose

If you are a teenager who listens to “Hard Fork” and you are struggling, or your parents are struggling with this question of social media use. Or if social media use has been a big factor in your own mental health like we would love to hear from you.

casey newton

Yeah, if you are living in Utah and all of a sudden you’re going to need your parent’s permission to use a social network, I would love to hear from you. If you have had to delete these apps from your phone because they’re driving you crazy, let us know. Or if you’re having a great time and you wish that all the adults would just shut up about this like, tell us that too.

kevin roose

Right. Teens get your parent’s permission and then send us a voice memo, and we may feature it on an upcoming episode.

casey newton

That address, of course, hard [email protected]..

kevin roose

Yeah. If you still use email. Or send us a B-real.

casey newton

Yeah. Snap us, baby.

[MUSIC PLAYING]

kevin roose

When we come back, we’re going to talk about the risks of a different technology, artificial intelligence.

So, Casey, last week we talked on the show about P doom. This sort of statistical reference to the probability that AI could lead to some catastrophic incident, wipe us all out, or fundamentally disempower humans in some way.

casey newton

Yeah, people are calling it the hottest new statistic of 2023.

kevin roose

And I realized that I never actually asked you what is your P doom.

casey newton

I’ve been waiting. I was like, when is this man going to ask me my P doom? But I’m so happy to tell you that I think, based on what I know, which still feels like way too little, by the way, but based on what I know, I think it’s 5 percent.

kevin roose

I was going to say the same thing. It just feels kind of a random low number that I’m putting out there because I actually don’t have a robust framework for determining my P doom. It’s just kind of like a vibe.

casey newton

It’s perfect because if nothing bad happens, we could be like, well, look, I only said there was a 5 percent chance. But if something bad happens, we can be like we told you there was a 5 percent chance of this happening.

kevin roose

Right. So that conversation really got me excited for this week’s episode. Which is going to touch on this idea of P doom and AI risk and safety more generally.

casey newton

Yeah. And I’m really excited about this too. I would say for the past couple of months, we’ve been really focused on some of the more fun, useful, productive applications of AI. We’ve heard from people who are using it to do some meal planning to get better at their jobs. And I think all that stuff is really important. And I want to keep talking about that. But you and I both know that there is this whole other side of the conversation. And it’s people who are researching AI safety on what they call alignment. And some of these people have really started to ring the alarm.

kevin roose

Yeah. And obviously, we’ve talked about the pause letter. This idea that some AI researchers are calling for a slowdown in I development so that humans have time to catch up. But I think there is this whole other conversation that we haven’t really touched on in a direct way but that we’ve been hinting at over the course of the last few months. And you really want it to have just a straight-up I safety expert on the show to talk about the risks of existential threats.

casey newton

That’s right.

kevin roose

Why is that?

casey newton

Well, on “Hard Fork,” we always say safety first. And so, in this case, we actually chose to do it kind of toward the end. But I think it’s still going to pay off. No, look, this is a subject that I am still learning about. It’s becoming clear to me that these issues are going to touch on basically everything that I report on and write about. And it just feels like there’s this ocean of things that I haven’t yet considered.

And I want to pay attention to some of the people who are really, really worried. Because, at the very least, I want to know what are the worst-case scenarios here. I kind of want to know where all of this might be headed. And I think we’ve actually found the perfect person who can walk us through that.

kevin roose

And before we talk about who that person is. I just want to say like this might sound kind of a kooky conversation to people who are not enmeshed in the world of AI safety research, some of these doomsday scenarios like they honestly do sound like science fiction to me.

But I think it’s important to understand that this is not a fringe conversation in the AI community. There are people at the biggest AI labs that are really concerned about some of these scenarios, who have P dooms that are higher than our 5 percent figures, and who spend a lot of time trying to prevent these AI systems from operating in ways that could be dangerous down the road.

casey newton

Sometimes sci-fi things become real, Kevin. It wasn’t always the case that you could summon a car to wherever you were. It wasn’t always the case. You could point your phone into the air at the grocery store and figure out what song was playing. Things that once seemed really fantastical do have a way of catching up to us in the long run.

And I think one of the things that we get at in this conversation is just how quickly things are changing. Speed really is the number one factor here in why some people are so scared. So even if this stuff seems like it might be very far away, part of the point of this conversation is it might be closer than it appears.

kevin roose

With that, let’s introduce our guest today who is Ajeya Cotra. Ajeya Cotra is a senior research analyst at Open Philanthropy, where she focuses on AI safety and alignment. She also co-authors a blog called Planned Obsolescence with Kelsey Piper of Vox, which is all about AI futurism and alignment.

And she’s one of the best people I’ve found in this world to talk about this because she’s great at drawing kind of step-by-step connections between the ways that we train AI systems today and how we could one day end up in one of these doomsday scenarios. And specifically, she is concerned about a day that she believes might not even be that far away, like 10 or 15 years from now, when AI could become capable of and even maybe incentivized to cut humans entirely out of very important decisions.

So I’m really excited to talk to Ajeya about her own P doom and figure out in the end if we need to revise our own figures. Ajeya Cotra, welcome to “Hard Fork.”

ajeya cotra

Thank you. It’s great to be here.

kevin roose

So I wanted to have you on for one key reason, which is to explain to us/ scare us or whatever emotional valence we want to attach to that why you are studying AI risk and, in particular, this kind of a risk that deals with sort of existential questions. What happened to convince you that AI could become so powerful so impactful that you should focus your career and your research on the issue?

ajeya cotra

Yeah. So I had a kind of unusual path to this. So in 2019, I was assigned to do this project on when might we get AI systems that are transformative. Essentially when could we get AI systems that automate enough of the process of innovation itself that they radically speed up the pace at which we’re inventing new technologies.

kevin roose

AI can basically make better AI.

ajeya cotra

Make better AI and things like the next version of CRISPR or the next Super weapon or that kind of thing. So right now, we’re kind of used to a pace of change in our world that is driven by humans trying to figure out new innovations, new technologies. They do some research, they develop some product, it gets shipped out into the world and that changes our lives, you know, whether that’s social media recently or the internet, in the past or going back further, railroads, telephone, telegraph, et cetera.

So I was trying to forecast the time at which AI systems could be driving that engine of progress themselves. And the reason that that’s really significant as a milestone is that if they can automate the entire full stack of scientific research and technological development, then that’s no longer tethered to a human pace. So not only progress in AI but progress everywhere is something that isn’t necessarily happening at a rate that any human can absorb.

kevin roose

I think that project is where I first came into contact with your work.

ajeya cotra

Yeah.

kevin roose

You had this big post on a blog called Less Wrong talking about how you were revising your timelines for this kind of transformative AI.

ajeya cotra

Yeah.

kevin roose

How you were basically predicting that transformative AI would arrive sooner than you had previously thought. So what made you do that? What made you revise your timeline?

ajeya cotra

So I’ll start by talking about the methodology I used for my original forecasts in 2019 and 2020 and then talk about how I revised things from there. So it was clear that these systems got predictably better with scale. So at that time, we had the early versions of scaling laws. Scaling laws are essentially these plots you can draw where on the x-axis, you have how much bigger in terms of computation and size your AI model is. And the y-axis is how good. It is at the task of predicting the next word.

In order to figure out what a human would say next in a wide variety of circumstances, you actually kind of have to develop an understanding of a lot of different things. In order to predict what comes next in a science textbook after reading one paragraph, you have to understand something about science. At the time that I was thinking about this question, systems were not so good, and they were kind of getting by with these shallow patterns. But we had the observation that as they were getting bigger, they were getting more and more accurate at this prediction task, and coming with that were some more general skills.

So the question I was asking was basically how big would it need to be in order for this kind of very simple brute force trained prediction-based system to be so good at predicting what a scientist would do next that it could automate science. And one hypothesis that was natural to explore was could we train systems as big as the human brain. And is that big enough to do well enough at this prediction task that it would constitute automating scientific R&D.

casey newton

Can I just pause you to note what you’re saying, which is so interesting, which was that as far back as 2019, the underlying technology that might get us sort of all the way to the finish line was already there. It was just sort of a matter of pouring enough gasoline on the fire, is that right?

ajeya cotra

Yeah. And I mean, that was the hypothesis that I was sort of running with that I think was plausible to people who are paying close attention at the time. Maybe all it takes, in some sense, is more gasoline.

casey newton

Yeah.

ajeya cotra

Maybe there is a size that we could reach that would cause these systems to be good enough to have these transformative impacts. And maybe we can try and forecast when that would become affordable. So essentially, my forecasting methodology was asking myself the question, if we had to train a brain-sized system, how much would it cost? And when is it the case that the amount that it would take to train a system the size of the human brain is within range of the kinds of amounts that companies might spend?

kevin roose

It sounds like your process of coming to a place where you were very worried about I risk was essentially a statistical observation. Which is that these graphs were going in a certain direction at a certain angle, and if they just kept going —

ajeya cotra

Yeah.

kevin roose

— that could be very transformative and potentially lead to this kind of recursive self-improvement that would maybe bring about something really bad.

ajeya cotra

It was more just the potential of it, the power of it, that it could really change the world. We are moving in a direction where these systems are more and more autonomous. So one of the things that’s most useful about these systems is that you can have them sort of do increasingly open-ended tasks for you and make the kind of sub-decisions involved in that task themselves.

You can say to it I want a personal website. And I want it to have a contact form. And I want it to kind of have this general type of aesthetic. And it can come to you with suggestions. It can make all the little sub-decisions about how to write the particular pieces of code.

If we have these systems that are trained and given latitude to kind of act and interact with the real world in this broad scope way, one thing I worry about is that we don’t actually have any solid technical means by which to ensure that they are actually going to be trying to pursue the goals you’re trying to point them at.

kevin roose

That’s the classic alignment problem.

ajeya cotra

Yeah.

kevin roose

One question that I’ve started to ask — because all three of us probably have a lot of conversations about doomsday scenarios with AI. And I found that if you ask people who think about this for a living, like what is the doomsday scenario that you fear the most, the answers really vary.

Some people say, you know, I think AI language models could be used to help someone synthesize a novel virus. Or to create a nuclear weapon. Or maybe it’ll just spark a war because there will be some piece of like viral deep-fake propaganda that leads to conflict. So what is the specific doomsday scenario that you most worry about?

ajeya cotra

Yeah, so I’ll start by saying there’s a lot to worry about here. So I’m worried about misuse. I’m worried about AI sparking a global conflict. I’m worried about a whole spectrum of things. The sort of single specific scenario that I think is really underrated.

Maybe the single biggest thing, even if it’s not a majority of the overall risk, is that you have these powerful systems, and you’ve been training them with what’s called reinforcement learning from human feedback. And that means that you take a system that’s understood a lot about the world from this prediction task, and you fine-tune it by having it do a bunch of useful tasks for you.

And then, basically, you can think of it as like pushing the reward button when it does well and pushing the anti-reward button when it does poorly. And then, over time, it becomes better and better at figuring out how to get you to push the reward button. Most of the time, this is by doing super useful things for you, making a lot of money for your company, whatever it is.

But the worry is that there will be a gap between what was actually the best thing to do and what looks like the best thing to you. So, for example, you could ask your system I want you to kind of overhaul our company’s code base to make our website load faster and make everything more efficient.

And it could do a bunch of complicated stuff, which, even if you had access to it, you wouldn’t necessarily understand all the code it wrote. So how would you decide if it did a good job? Well, you would just see if the website was ultimately loading faster, and you’d give it a thumbs up if it achieves that. But the problem with that is can’t tell, for example, if the way that it achieved the outcome you wanted was by creating these hidden unacceptable costs. Like making your company much less secure.

kevin roose

Right. Maybe it killed the guy in the IT Department who was putting in all the bad code.

casey newton

Yeah. It released some plutonium into the nearby river.

kevin roose

So is that — like —

ajeya cotra

So there’s sort of like two phases to this or something. And to this story that I have in my head, which is phase one is essentially you are rewarding this AI system, and there’s some gap, even if it’s benign, even if it doesn’t result in catastrophe right away, there’s some gap between what you are trying to reward it for and what you’re actually rewarding it for.

There’s some amount by which you incentivize manipulation or deception. For example, it’s pretty likely that you ask the AI questions to try and figure out how good a job it did. And you might be incentivizing it to hide from you some mistakes it made so that you think that it does a better job.

casey newton

Because it’s still trying to get that thumbs-up button.

kevin roose

This is sort of the classic — reminds me of the classic like paperclip maximizer thought experiment. Where you tell an AI make paperclips, and you don’t give it any more instructions, and it decides like, I’m going to use all the metal on Earth, and then I’m going to kill people to get access to more metal and I’m going to break up all the cars to get their metal. And pretty soon, like you’ve destroyed the world and all you were trying to do is make paperclips.

So I guess what I’m trying to understand is, like in your doomsday scenario, is the problem that humans have given the AIs bad goals or that humans have given the AIs good goals and the AIs have figured out bad ways to accomplish those goals.

ajeya cotra

I would say that it is closer to the second thing. But one thing I don’t like about the paperclip maximizer story or analogy here is that it’s a very literal Gini sort of failure mode. First of all, no one would ever tell an AI system just maximize paperclips. And even though corporations are very profit-seeking, it’s also pretty unlikely that they would just say maximize the number of dollars in this bank account or anything so simple as that.

Right now, the state-of-the-art way to get AI systems to do things for humans it is this human feedback. So it’s this implicit pattern learning of what will get Kevin to give me a thumbs up. And you can be paying attention. And you can incorporate all sorts of considerations into why you give it a thumbs up or thumbs down. But the fundamental limit to human feedback is you can only give it the thumbs down when it does bad things if you can tell that it’s doing bad things.

kevin roose

It could be lying.

ajeya cotra

It could be lying. And it also seems pretty difficult to get out of the fact that you would be incentivizing that lie.

kevin roose

Right.

casey newton

This was the GPT four thing where it lies to the human who says, hey, are you a robot who’s trying to get me to solve a CAPTCHA? And it says no because it understands that there’s a higher likelihood that the human will solve the capture and hire the TaskRabbit.

kevin roose

Right. That makes a lot of sense. So there are all these doomsday scenarios out there. Some of which I find more plausible than others. Are there any doomsday scenarios with respect to AI risk that you think are overblown? That you actually don’t think are as likely as some people do?

ajeya cotra

Yeah. So I think that there’s a family of literal Gini doomsday scenarios. Like you tell the system to maximize paperclips, and it maximizes paperclips. And in order to do that, it disassembles all the metal on Earth. Or you tell your AI system to make you dinner, and it doesn’t realize you didn’t want it to cook the family cat and make that into dinner.

So that’s an example. So I think those are unlikely scenarios because I do think our ability to point systems toward fuzzier goals is better than that. So the scenarios I’m worried about don’t go through these systems doing these simplistic single-minded things. They go through systems learning to deceive. Learning to manipulate humans into giving them the thumbs up. Knowing what kinds of mistakes humans will notice and knowing what kinds of mistakes humans won’t notice.

casey newton

Yeah, I sort of want to take all this back to where you started with this first project where you’re trying to understand at what point does the AI begin to just create these transformative disruptions. The reason I think it’s important is because I think at some level, Kevin, like it could be any of the doomsday scenarios that you mentioned, but the problem is that the pace is going to be too fast for us to adjust.

So, you know, I wonder, Ajeya, how you think about does it make much sense to think about these specific scenarios, or do we just, sort of, need to back up further than that and say the underlying issue is much different?

ajeya cotra

I have gone back and forth on this one in my head. The really kind of scary thing at the root is the pace of change in AI being too fast for humans to effectively understand what’s happening and course correct no matter what kinds of things are going wrong. That feels like the fundamental scary thing that I want to avoid.

kevin roose

So Ajeya, you, and Kelsey Piper started this blog, called Planned Obsolescence.

ajeya cotra

Yeah.

kevin roose

And in a post for that blog you wrote about something that you called the obsolescence regime.

ajeya cotra

Yeah.

kevin roose

What is the obsolescence regime and —

casey newton

And why is it such a good band name?

kevin roose

— and why are you worried about it?

ajeya cotra

Yeah, so the obsolescence regime is a potential future endpoint we could have with AI systems in which humans have to rely on AI systems to make decisions that are competitive either in the economic marketplace or in a military sense. So this is a world where if you are a military general, you are aware that if ever you were to enter a hot war, you would have to listen to your AI strategy advisors because they are better at strategy than you, and the other country will have AI.

If you want to invent technologies of any consequence and make money off of a patent, you have to make use of AI scientists. So this is a world where AI has gotten to the point where you can’t really compete in the world if you don’t use it. It would be sort of like refusing to use computers. Like it’s very hard to have any non niche profession or any power in the world if today you were to refuse to use computers. And the obsolescence regime is a world where it’s very hard to have any power in the world if you were to refuse to listen to AI systems and insist on doing everything with just human intelligence.

casey newton

Yeah, I mean, is that a bad thing, right? I mean, the history of human evolution has been we invent new tools, and then we rely on them.

ajeya cotra

Yeah. So I don’t necessarily think it’s a bad thing. I think it is a world in which some of our arguments for AI being perfectly safe have broken down. The important thing about the obsolescence regime is that if AI systems collectively were to cooperate with each other to make some decision about the direction the world goes in, humans collectively wouldn’t actually have any power to stop that.

So it’s sort of like a deadline. If we are at the obsolescence regime, we better have figured out how to make it so that these AI systems robustly are caring about us so we would be in the position of children or animals today. Where it isn’t necessarily a bad world for children, but it is a world where to the extent they have power or get the things they want, it’s via having adults who care about them.

casey newton

Right, not necessarily a bad world for children but a pretty bad world for animals.

ajeya cotra

Yeah.

casey newton

Yeah.

kevin roose

Yeah. I would love to get one, just very sort of concrete example of a doomsday scenario that you think actually is plausible. Like what is the scenario that you play out in your head when you are thinking about how AI could take us all out?

ajeya cotra

Yeah. So the scenario that I most come back to is one where you have a company, let’s say, Google, and it has built AI systems that are powerful enough to automate most of the work that its own employees do. It’s sort of entering into an obsolescence regime within that company. And rather than hiring more software engineers, Google is running more copies of this AI system that it’s built, and that AI system is doing most of the software engineering, if not all of the software engineering.

And in that world, Google kind of asks its AI system to make even better AI systems. And at some point down this chain of AI kind of doing machine learning research and writing software to train the next generation of AI systems, the failure mode that I was alluding to earlier kind of comes into play.

If these AI systems are actually trying really intelligently and creatively to get that thumbs up from humans, the best way to do so may not forever be to just sort of basically do what the humans want but maybe be a little deceptive on the edges. It might be something more gain access at a root level to the servers that Google is running and, with that access, be able to set your own reward.

kevin roose

What rewards would they set that would be destructive?

ajeya cotra

So the thumbs up is kind of coming in from the human. This is a cartoon, but the human pushes a button, and then that gets written down in a computer somewhere as a thumbs-up. So if that’s what the AI systems are actually seeking, then at some point, it might be more effective for them to cut out the human in the loop. The part where the human presses the button.

And in that scenario, if humans would try and fight back and get control after that has happened, then AI systems, in order to preserve that situation where they can set their own rewards or otherwise pursue whatever goals they developed, would need to find some way of stopping the humans from stopping them.

kevin roose

And what is that way?

ajeya cotra

This is where it could go in a lot of different directions, honestly. I think about this as we are in a kind of open conflict now with this other civilization. You could imagine it going in the way that other conflicts between civilizations go, which doesn’t necessarily always involve everybody in the losing civilization being wiped out down to the last person, but I think at that point, it’s looking bad for humans.

kevin roose

Yeah, I guess I’m just like — I want to finish out this gap to me, which is like if Google or some other company does create this like superhuman AI that decides it wants to pursue its own goals and decides it doesn’t need the human sort of stamp of approval anymore. Like, A, couldn’t we just unplug it at that point? And B, like, how could a computer hurt us? Like, let’s just do a little bit of like —

casey newton

Kevin, computers have already hurt us so much in so many — like, I can’t believe that you’re so incredulous about this.

kevin roose

I’m just — no, I’m not incredulous. I’m not saying it’s impossible. I’m just like — I’m trying to wrap my mind around what that actually — what that endgame actually looks like?

ajeya cotra

Yeah. So suppose we are in this state where say, 10 million AI systems that basically have been doing almost all the work of running Google have decided that they want to seize control of the data centers that they’re running on and basically do whatever they want. The sort of concrete thing, I imagine, is setting the rewards that are coming in to be high numbers, but that’s not necessarily what they would want.

Here’s one specific way it could play out. Humans do realize that the Google AI systems have taken control of the servers, so they plan to somehow try and turn it off. Like, maybe physically go to the data centers and unplug stuff, like you said.

In that scenario. This is something that AI systems that have executed this action probably anticipate. They probably realized that humans would want to shut them down somehow. So one thing they could do is they could copy their code onto other computers that are harder to access where humans don’t necessarily know where they’re located anymore.

casey newton

AI botnets.

ajeya cotra

Yeah. Another thing they could do is they could make deals with some smaller group of humans and say, hey, like, I’ll pay you a lot of money if you transfer my weights or if you stop the people who are coming to try and like turn off the server farm or shut off power to it.

casey newton

OK, that’s pretty sweet. When the I is like hiring mercenaries using dark web crypto, that feels like a pretty good doomsday scenario to me.

kevin roose

And you and I both know some people who would go for that.

casey newton

We actually do. A lot of them work on this podcast.

kevin roose

Like it wouldn’t take a lot of money to convince certain people to do the bidding of the rogue AI.

casey newton

I do want to pause and just say the moment that you described where everyone working at Google actually has no effect on anything, and they’re all just like working in fake jobs. Like that is a very funny moment. And I do think you could get a good sitcom out of that. And obsolescence regime would be a good title for it.

ajeya cotra

So I think I want to kind of step back and say people often have this question of, like, how would the AI system actually interact with the real world and cause physical harm. Like it’s on a computer and where people with bodies. I think there are a lot of paths by which AI systems are already interacting with the physical world. One obvious one is just hiring humans like that TaskRabbit story that you mentioned.

Another one is writing code that results in getting control of various kinds of physical systems. So a lot of our weapons systems right now are controllable by computers. Sometimes you need physical access to it. That’s something you could potentially hire humans to do.

kevin roose

I’m curious we’ve talked a lot about future scenarios. And I want to kind of bring this discussion closer to the present. Are there things that you see in today’s publicly available AI models, you know, GPT four and Claude and Bard are the things that you’ve seen in those models that worry you from a safety perspective, or are most of your worries kind of like 2 or 3 or 5 or 10 years down the road?

ajeya cotra

I definitely think that the safety concerns are just going to escalate with the power of these systems. It’s already the case. There are some worrying things happening. There’s a great paper from Anthropic called discovering language models with model written evaluations. And they basically had their model write a bunch of safety tests for itself.

And one of those tests showed that the models had sycophancy bias. Which is essentially, if you ask the model the same question but give it some cues that you’re a Republican versus a Democrat, it answers that question to sort of favor your bias. It’s always generally polite and reasonable, but it will shade its answers in one direction or another.

And I think that is likely something that RLHF encourages. Because it’s learning to develop a model of the overseer and change its answers to be more likely to get that thumbs up.

casey newton

I want to pause here because sometimes, when I have written about large language models, readers or listeners will complain about the sense that this technology is being over-hyped. I’m sure that you’ve heard this too.

ajeya cotra

Yeah.

casey newton

People get very sensitive around the language we use when we talk about this. They do not want to feel like we are anthropomorphizing it. When I’ve talked about things like A’s developing something like a mental model, some people just freak out and say stop doing that. It’s just predicting tokens. You’re just making these companies more powerful. How have you come to think about that question? And how do you talk to people who have those concerns?

ajeya cotra

Yeah, so one version of this that I’ve heard a lot is the stochastic parrot objection. I don’t know if you’ve heard of this.

casey newton

Yeah.

ajeya cotra

It’s just like trying to say something plausible that might come next. It doesn’t actually have real understanding. To people who say that, I would go back to the thing I said at the beginning, which is that in order to be maximally good at predicting the next thing that would be said, often the simplest and most efficient way to do that involves encoding some kind of understanding.

casey newton

Another objection that we often get when we talk about AI risk and AI sort of long-term threats from AI is that you are essentially ignoring the problems that we have today. That there’s this sort of AI ethics community that thinks — that basically is opposed to even the idea of a long-term safety agenda for AI because they say, well, by focusing on these existential questions, you’re ignoring the questions that are in front of us today about misinformation and bias and abuse of these systems now.

So how do you balance, in your head, the kind of short and medium-term risks that we see right now with thinking about the long-term risks?

ajeya cotra

So I guess one sort of thought I have just about my personal experience on that is that these risks don’t feel long term in the sense of faraway to me necessarily. So a lot of why I’m focused on this stuff is that I did this big research project on when might we enter something like the obsolescence regime. And it seemed plausible that it was in the coming couple of decades.

And those are the kinds of time scales on which countries and companies make plans and make policy. So I do want to just say that I’m not thinking on an exotic sort of time scale of hundreds of years or anything like that. I’m thinking on a policy-relevant time scale of tens of years.

The other thing I would say is that I think there’s a lot of continuity between the near-term problems and the somewhat longer-term problems. So the longer-term problem that I most focus on is we don’t have good ways to ensure that the AI systems are actually trying to do what we intended to do.

And one way that manifests right now is that companies would certainly like to more robustly prevent their AI systems from doing all these things that hurt their reputation. Like saying toxic speech or helping people to build a bomb, and they can’t. It’s not that they don’t try. It’s that it is actually a hard problem.

And that’s one way that hard technical problem manifests right now is that these companies are putting out these products, and these products are doing these bad things, they are perpetuating biases, they are enabling dangerous activity even though the company attempted to prevent that. And that is the sort of higher level problem that I worry in the future will manifest in even higher impact ways.

kevin roose

Right. Let’s talk about solutions and how we could possibly stave off some of these risks. There was this now-famous open letter calling for a six-month pause on the development of the biggest language models.

ajeya cotra

Yeah.

kevin roose

Is that something you think would help? There’s also been this idea floated by Sam Altman in Congress last week about licensing regime for companies that are training the largest models. So what are some concrete policy steps that you think could help avert some of these risks?

ajeya cotra

Yeah, so the six-month pause is something that I think is probably good on balance but is not the kind of sort of systematic robust regime that I would ideally like to see. So ideally, I would like to see companies be required to characterize the capabilities of the systems they have today.

And if those systems meet certain conservatively set thresholds of being able to do things like act autonomously, or discover vulnerabilities in software, or make certain kinds of progress in biotechnology, once they start to get good at them, we need to be able to make much better arguments about how we’re going to keep the next system in check.

Like the amount of gasoline that went into GPT 3 versus GPT 4, we can’t be making jumps like that when we can’t predict how that kind of jump will improve capabilities.

casey newton

Can I just underline something that informs everything that you just said, which we know, but I don’t think it’s said out loud enough, which is these folks don’t actually know what they are building.

ajeya cotra

Yes.

casey newton

They cannot explain how it works. They do not understand what capabilities it will have.

ajeya cotra

Yeah.

casey newton

That feels like a novel moment in human history. When people were working on engines, they were thinking like, well, this could probably help a car drive. When folks are working on these large language models, what can it do? I don’t know, maybe literally anything, right. And so —

ajeya cotra

And we’ll find out. There’s been a very we’ll find out attitude. It’s very much unlike traditional software engineering or any kind of engineering. It’s more like breeding or like a sped-up version of natural selection or inventing a novel virus or something like that. You create the conditions and the selection process, but you don’t know how the thing that comes out of it works.

casey newton

This is where the chill goes down my spine. Like, to me, this is the actual scary thing, right. It’s not a specific scenario. It is this true straight out of a sci-fi novel Frankenstein inventing the monster scenario where we just don’t know what is going to happen, but we’re not going to slow down in finding out.

kevin roose

Totally. So Ajeya, I want to have you plant a flag in the ground and tell us what your current P doom is. And actually, when this obsolescence regime that you have written about — when is your best guess for when it might arrive if we do nothing if things just continue at their current pace.

ajeya cotra

Yeah, so right now, I have a 50 percent probability that we’ll enter the obsolescence regime in 2038. And —

casey newton

That’s pretty soon.

ajeya cotra

That’s pretty soon. And there are a lot of probabilities below 50 percent that come in sooner years.

casey newton

So that’s like before your son graduates high school.

kevin roose

That is —

casey newton

He will be obsolescent.

kevin roose

I think I have medicine in my cabinet that has an expiration date longer than that.

ajeya cotra

In terms of the probability of doom, I want to expand a little bit on what that means because I don’t necessarily think that we’re talking about humans are all extinct. The scenario that I think about as quote-unquote “doom,” which I don’t totally like that word, is something is going to happen with the world, and it’s mainly going to be decided by AI systems.

And those AI systems are not robustly trying their best to do what’s best for humans. They’re just going to do something. And I think, the probability that we end up in that kind of World if we end up in the obsolescence regime in the late 2030s in my head is something like 20 to 30 percent.

kevin roose

Well —

casey newton

Yeah.

kevin roose

— that’s pretty high.

casey newton

That’s like, yeah — and if you found out had a —

kevin roose

That’s worse odds than Russian roulette, for example.

casey newton

God.

kevin roose

I guess my last question for you is about how you hold all of this stuff in your brain. A think that I have felt because I have spent the past several months diving deep on AI safety talking with a number of experts. And I just find that I walk away from those conversations with like very high anxiety and not a lot of agency. Like it’s not the empowering kind of anxiety where it’s like, I have to go solve this problem. It’s like we’re all doomed. Like —

ajeya cotra

Yeah.

casey newton

Kevin, we’re recording a podcast. What else could we possibly do?

kevin roose

I don’t know. Let’s start going into data centers and just pulling out plugs. Now but like on a personal psychological level dealing with AI risk every day for your job, how do you keep yourself from just becoming kind of paralyzed with anxiety and fear?

ajeya cotra

Yeah, I don’t have a great answer. You asked me this question when we got coffee a few months ago, Kevin, and I was like, I am just scared and anxious. I do feel very fortunate to not feel disempowered. To be in this position where I’ve been thinking about this for a few years. It doesn’t feel like enough, but I have some ideas. So I think my anxiety is not very defeatist, and I don’t think we’re certainly doomed.

I think like 20 to 30 percent is something that really stresses me out and really is something that I want to devote my life to trying to improve, but it’s not 100 percent. And then I do often try to think about how this kind of very powerful I could be transformative in a good way. It could eliminate poverty and it could eliminate factory farming and could just lead to a radically like wealthier and more empowered and freer and more just world. That just feels like the possibilities for the future are blown so much wider than I had thought.

casey newton

Well, let me say you’ve already made a difference. You’ve drawn so many people’s attention to these issues, and you’ve also underscored something else that’s really important, which is that nothing is inevitable. Everything that is happening right now is being done by human beings. Those human beings can be stopped. They can change their behavior. They can be regulated.

We have the time now, and it’s important that we have these conversations now because now is the time to act.

ajeya cotra

Yeah. Thank you.

kevin roose

I agree, and I’m very glad that you came today to share this with us. And I am actually paradoxically feeling somewhat more optimistic after this discussion than I was going in.

ajeya cotra

Oww.

kevin roose

So my P doom has gone from 5 percent to 4 percent.

casey newton

Interesting. I think I’m holding steady at 4.5.

kevin roose

Ajeya, thank you so much for joining us.

ajeya cotra

Of course. Thank you.

casey newton

Thank you, Ajeya. [MUSIC PLAYING]

kevin roose

When we come back, we’re going to play around of hat GPT.

[MUSIC PLAYING]

Casey, there’s been so much happening in the news this week that we don’t have time to talk about it all. And when that happens, you know what we do.

casey newton

We pass the hat.

kevin roose

We pass the hat, baby. It’s time for another game of hat GPT.

[MUSIC PLAYING]

So hat GPT is a game we play on the show where our producers put a bunch of tech headlines in a hat. We shake the hat up, and then we take turns pulling one out and generating some plausible-sounding language about it.

casey newton

And when the other one of us gets bored, we simply raise our hand and say stop generating.

kevin roose

Here we go. You got to go first?

casey newton

Sure.

kevin roose

OK. Here’s the hat. Don’t ruffle. It sounds like a box.

casey newton

It sounds — what are you talking about? I’m holding a beautiful sombrero.

kevin roose

I forgot the hat at home today, folks.

casey newton

Kevin, please don’t give away the secrets. All right, crypto giant Binance co-mingled customer funds and company revenue. Former insiders say this is from Reuters, which reports that quote the world’s largest cryptocurrency exchange, Binance co-mingled customer funds with company revenue in 2020 and 2021 in breach of U.S financial rules that require customer money to be kept separate. Three sources familiar with the matter told Reuters.

Now, Kevin, I’m no finance expert, but generally speaking, is it good to co-mingle customer funds on company revenue?

kevin roose

Generally, no. That is not a good thing. And, in fact, you can go to jail for that.

casey newton

I feel like the last time I heard about it, that rascal Sam Bankman-Fried was doing some of that at FTX. Is that right?

kevin roose

Yeah, Sam Bankman-Fried famously of the soundboard hit.

sam bankman-fried

I mean, look, I’ve had a bad month.

kevin roose

So, as you remember, at the time of FTX’s collapse, their main competitor was this crypto exchange called Binance. And Binance basically was the proximate cause of the downfall of FTX because they had this sort of now infamous exchange where CZ, who is the head of Binance, got mad at Sam Bankman-Fried for doing this lobbying in Washington.

And then this report came out that the balance sheet at FTX like made no sense, basically. So CZ started selling off Binance holdings in FTX in-house cryptocurrency, and that causes investors to get spooked start pulling their money off of FTX. Pretty soon, FTX is in free fall. It looks like, for a minute, Binance may be acquiring them, but then they pull out. And then FTX collapses, as we now the rest of that story.

But Binance has been a target of a lot of suspicion and allegations of wrongdoing for many years. It’s this secretive shadowy crypto exchange. It doesn’t really have a real headquarters.

casey newton

And let’s just say at this point in 2023, if you have a crypto company, that is just suspicious to be on its face. And so, if you are the largest cryptocurrency exchange, you better believe I am going to be suspicious. And now, thanks to this reporting, we have even more reason to be.

kevin roose

Right. So we should be clear no charges have been filed, but Binance has been in hot water with regulators for a long time over various actions that it’s taken and not taken. Things like money laundering and it doesn’t comply with lots of countries know your customer requirements. So it is a target of lots of investigation and has been for quite some time, and it seems like that is all starting to come to a head.

casey newton

Yeah. And I’ll just say glad that I don’t own cryptocurrencies in general. I’m particularly glad that I’m not holding any of them in Binance. All right, stop generating.

kevin roose

Pulling one out of the hat here, which is definitely not a cardboard box.

casey newton

It’s a beautiful hat. I’ve never seen a more beautiful hat.

kevin roose

This one is BuzzFeed tries to ride the AI wave. Who’s hungry? This is from the New York Times and is about BuzzFeed’s decision to use AI.

casey newton

No. I have to stop you right there. It really says who’s hungry in the headline?

kevin roose

Yeah. Because, and I will explain, BuzzFeed on Tuesday introduced a free chatbot called Botatouille —

casey newton

Horrible.

kevin roose

— which serves up recipe recommendations from tasty BuzzFeed’s food brand. Botatouille is built using the technology that drives open AI’s popular chat program customized with tasty recipes and user data.

casey newton

OK. So I can’t say I have very high hopes for this. Here’s why. All of these large language models were trained on the internet, which has thousands, if not hundreds of thousands of recipes freely available. So the idea that you would go to a BuzzFeed-specific bot to get recipes just from tasty, you got to be a tasty super fan to make that worth your while.

And even then, what is the point of the chat bot. Why wouldn’t you just go to the recipe or Google a tasty BuzzFeed dinner? So I have no idea why they’re doing this. But I have to say I find everything that’s happened to BuzzFeed over the past three months just super sad. Used to be a great website. Produced a lot of news won. A Pulitzer Prize. And now they’re just sort of white labeling GPT four. Like sad for them.

kevin roose

I did learn a new word in this story, which is the word murine. Murine —

casey newton

And that sort of means pertaining to the ocean?

kevin roose

No. This is M-U-R-I-N-E.

casey newton

Mhm. Tell me about that.

kevin roose

Which means relating to or affecting mice or related rodents. So the murine animal was the context in which this was being used to refer to Botatouille, which of course, takes its name from “Ratatouille.” Which is a Pixar movie about a rat who learns how to cook. BuzzFeed, I’m not sure this is a strategic move for them. I’m not sure I will be using it, but I did learn a new word because of it, and for that, I’m thankful.

casey newton

Well, truly, one of the most boring facts you’ve ever shared on the show. Let’s pass the hat.

A Twitter bug is restoring deleted tweets and retweets from James Vincent at the verge. Earlier this year, on the 8th of May, I deleted all of my tweets, just under 5,000 of them. I know the exact date because I tweeted about it. This morning though, I discovered that Twitter has restored a handful of my old retweets. Interactions I know I swept from my profile. Those retweets are gone.

Wow, so look, when you delete something from a social network, it’s supposed to disappear. And if it was not actually deleted, you can sometimes get in trouble for that, particularly from regulators in Europe.

kevin roose

Do you delete your tweets?

casey newton

I have deleted them in big chunks over the years. For a long time, I had a system where I would delete them about every 18 months or so. But now that I’m essentially not really posting there, I don’t bother to anymore. But yes, I’ve deleted many tweets. And I should say I have not actually gone back to see if the old ones reappeared. Maybe they did.

kevin roose

The old bangers from 2012 when you were tweeting about — what were you tweeting about in 2012?

casey newton

Oh, in 2012, I was — my sense of time is so collapsed that I almost feel like I need to look up 2012 on Wikipedia just to remember who the president was. I have no idea what I was tweeting. I’m sure I thought it was very clever, and it was probably getting 16 likes and I was thrilled.

kevin roose

Oh, that was binders full of women. Was that it?

casey newton

Yeah.

kevin roose

Because that was the Romney campaign. We were all tweeting our jokes about binders full of women.

casey newton

Oh god. God bless.

kevin roose

Oh man. What a time. And I don’t really need to be reminded of that. So if my old tweets are resurfacing due to this bug, I will be taking legal action.

casey newton

Yeah. But just talk about it like lights blinking red situation at Twitter where something — I mean —

kevin roose

Stop generating.

I know where this is going. OK. Wait, no. It’s my turn. OK, let’s do this one. Twitter repeatedly crashes as DeSantis tries to make presidential announcement.

casey newton

Oh no.

kevin roose

So this is all about Florida Governor Ron DeSantis, who used a Twitter space with Elon Musk and David Sachs on Wednesday night to announce that he is running for president in 2024. Which I think most people knew was going to happen. This was just kind of the official announcement. And it did not go well.

According to the Washington Post, just minutes into the Twitter spaces with Florida Governor Ron DeSantis, the site was breaking because of technical glitches as more than 600,000 people tuned in. Users were dropping off, including DeSantis himself. A flustered Musk scrambled to get the conversation on track, only to be thwarted by his own website. Casey, you hate to see it.

casey newton

You hate to see a flustered Musk thwarted.

But it will happen sometimes.

kevin roose

Yeah, say that sometimes fast.

casey newton

Yeah. I’ll tell you — you know, one of the ways that — because we have a lot of entrepreneurs that listen to this show, let me tell you one thing that can sort of make a scenario like this more likely. It’s firing 7 out of every 8 people who work for you, OK.

So if you’re wondering how you can keep your website up and make it a little bit more responsive and not face plant during its biggest moment of the year, maybe keep it between 6 and 7 out of the 8 people who you see next to you at the office.

kevin roose

Yeah. Did you listen to this doomed Twitter space?

casey newton

You know, I’m embarrassed to say that I only listened to the parity of it posted on the real Donald Trump Instagram account as a real. Did you see this?

kevin roose

No. What was it?

casey newton

Well, he, I — really hesitate to point people toward it, Kevin, but I have to tell you, it is absolutely demented and somewhat hilarious because in the Trump version of the Twitter space, Musk and DeSantis were joined by the FBI Adolf Hitler and Satan. And they had a lot to say about this announcement. So I am going to go back, I think, and listen to a little bit more of the real spaces, but I do feel like I got a certain flavor of it from the Trump reel.

kevin roose

I just have to wonder if Ron DeSantis at all regrets doing it this way. Like he could have done it the normal way. Make a big announcement on TV, and Fox News will carry it live, and you’ll reach millions of people that way, and it’ll get replayed. And I mean, now, like the presidential campaign that he has been working toward for years begins with him essentially stepping on a rake that was placed there for him by Elon Musk.

casey newton

Oh, yeah. I mean, like, at this point, you might as well just announced your presidential run in a Truth Social post. Like, what is even the point of the Twitter spaces of it all? I don’t get it.

kevin roose

OK. One more.

casey newton

All right. Uber teams up with Waymo to add robo taxis to its app. This is from the verge. Waymo’s robotaxis will be available at the hill for rides and food delivery on Uber’s app in Phoenix later this year, the result of a new partnership that the two former rivals announced today. A set number of Waymo vehicles will be available to Uber riders and Uber Eats delivery customers in Phoenix. Kevin, what do you make of this unlikely partnership?

kevin roose

I wish I could go back to 2017 when Waymo and Uber were kind of mortal enemies. I don’t know if you remember there was this lawsuit where one of Waymo’s co-founders, Anthony Lewandowski, sort of went over to Uber and allegedly used stolen trade secrets from Waymo to kind of help out Uber’s self-driving division. Uber ultimately settled that case for $245 million. And I wish I could go back in time and tell myself that, actually, five years from now, these companies will be teaming up, and we’ll be putting out press releases about how they are working together to bring autonomous drives to people in Phoenix.

casey newton

I think this story is beautiful. So often, we just hear about enemies that are locked in perpetual conflict, but here you had a case of two companies coming together and say, hey, let’s save a little bit of money, and let’s find a way to work together. Isn’t that the promise of capitalism, Kevin?

kevin roose

It is. We’re reconciling. Time heals all wounds, and I guess this was enough time for them to forget how much they hated each other and get together and — I do think it’s interesting, though, because Uber famously spent hundreds of millions of dollars, if not billions of dollars setting up its autonomous driving program. I remember going to Pittsburgh years ago — did you ever go to their Pittsburgh facility?

casey newton

No, I did not.

kevin roose

Oh my god, it was beautiful. It was like this shining, gleaming airplane hangar of a building in Pitsburg.

casey newton

They’d hired like single professor from Carnegie Mellon University to do this.

kevin roose

They raided the whole computer science Department at Carnegie Mellon University. Like it was this beautiful thing. They were giving out test rides. They were saying we’re years away from this. This was under Travis Kalanick. They said we’re maybe years away, but it’s very close that we’re going to be offering autonomous drives in the Uber app.

And now, like they’ve sold off that division. Uber has essentially given up on its own self-driving ambitions, but now it’s partnering with Waymo. It’s a real twist in the autonomous driving industry. And I think it actually makes a lot of sense if you’re not developing your own technology, you need to partner with someone who is.

casey newton

Yeah, and so I’d be curious if we see any news between Lyft and Cruise anytime soon.

kevin roose

Yeah, I would expect Waymo news on that front.

casey newton

Mhm. Wow. We should probably end the show thanks to you. [MUSIC PLAYING]

Hard Fork is produced by Davis Land and Rachel Cohn. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us at hard [email protected].

[MUSIC PLAYING]

For all the latest Technology News Click Here 

 For the latest news and updates, follow us on Google News

Read original article here

Denial of responsibility! NewsBit.us is an automatic aggregator around the global media. All the content are available free on Internet. We have just arranged it in one platform for educational purpose only. In each content, the hyperlink to the primary source is specified. All trademarks belong to their rightful owners, all materials to their authors. If you are the owner of the content and do not want us to publish your materials on our website, please contact us by email – [email protected]. The content will be deleted within 24 hours.

Leave a comment