The Future Could Be Blissful—If Humans Don’t Go Extinct First

Or we might want to have strong regulation around which AI systems are deployed, such that you can only use one if you really understand what’s going on under the hood, or such that it has passed a large number of checks to be sufficiently honest, harmless, and helpful. So rather than saying, “[We should] speed up or slow down AI progress,” we can look more narrowly than that and say, “OK, what are the things that may be most worrying?” you know? And then the second thing is that, as with all of these things, you’ve got to worry that if one person or one group just unilaterally says, “OK, I’m gonna not develop this,” well, maybe then it’s the less morally motivated actors that promote it instead.

You write a whole chapter about the risks of stagnation: a slowdown in economic and technological progress. This doesn’t seem to pose an existential risk in itself. What would be so bad about progress just staying close to present levels for centuries to come?

I included it for a couple of reasons. One is that stagnation has gotten very little attention in the long-termist world so far. But I also think it’s potentially very significant from a long-term perspective. One reason is that we could just get stuck in a time of perils. If we stayed at a 1920s level of technology indefinitely, that would not be sustainable: we would burn through all the fossil fuels and get a climate catastrophe. If we continue at current levels of technology, then all-out nuclear war is only a matter of time. Even if the risk in any given year is very low, a small annual risk adds up over time.

Even more worryingly, with engineered bioweapons, that’s only a matter of time too. Simply stopping technological progress altogether, I think, is not an option—actually, that would consign us to doom. It’s not clear exactly how fast we should be going, but it does mean that we need to get ourselves out of the current level of technological development and into the next one, in order to get to a point of what Toby Ord calls “existential security,” where we’ve got the technology and the wisdom to reduce these risks.

Even if we get on top of our present existential risks, won’t there be new risks that we don’t yet know about, lurking in our future? Can we ever get past our current moment of existential risk?

It could well be that as technology develops, there are these little islands of safety. One is if we’ve just discovered basically everything. In that case there are no new technologies that surprise us and kill us all. Or imagine we had a defense against bioweapons, or technology that could prevent any nuclear war. Then maybe we could just hang out at that point in time, with that technological level, so we can really think about what’s going to happen next. That could be possible. And so the way that you would have safety is by looking at what risks we face, how low we’ve managed to get those risks, and whether we’re now at the point at which we’ve figured out everything there is to figure out.
