Expert Voices

From Human Extinction to Super Intelligence, Two Futurists Explain (Op-Ed)

The future is uncertain, and that’s a problem. (Image credit: cblue98, CC BY-SA)

This article was originally published at The Conversation. The publication contributed the article to Live Science's Expert Voices: Op-Ed & Insights.

The Conversation organised a public question-and-answer session on Reddit in which Anders Sandberg and Andrew Snyder-Beattie, researchers at the Future of Humanity Institute at Oxford University, explored what existential risks humanity faces and how we could reduce them. Here are the highlights.

What do you think poses the greatest threat to humanity?

Sandberg: Natural risks are far smaller than human-caused risks. The typical mammalian species lasts for a few million years, which means that extinction risk is on the order of one in a million per year. Just looking at nuclear war, where we have had at least one close call in 69 years (the Cuban Missile Crisis), gives a risk many times higher. Of course, nuclear war might not be certain to cause extinction, but even if we assume it has just a 10% or 1% chance of doing so, that is still far above the natural extinction rate.
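To make that comparison concrete, here is a rough, purely illustrative back-of-envelope sketch in Python. It simply multiplies the crude rate implied by one close call in 69 years by the 10% and 1% figures above and compares the result with a natural rate of roughly one in a few million per year; the numbers are the ones quoted in the answer, not new estimates, and treating the close-call frequency as an annual war-risk proxy is a deliberate simplification.

```python
# Back-of-envelope comparison of the natural extinction rate with a crude
# nuclear-war estimate, using the figures quoted above. The 10% and 1%
# extinction probabilities are the illustrative values from the answer.

natural_rate = 1 / 3_000_000       # ~one in a few million per year (typical mammal species)
close_call_rate = 1 / 69           # at least one close call (Cuban Missile Crisis) in 69 years

for p_extinction in (0.10, 0.01):  # chance such a war actually ends humanity
    nuclear_rate = close_call_rate * p_extinction
    print(f"P(extinction | war) = {p_extinction:.0%}: "
          f"risk ~ {nuclear_rate:.1e} per year, "
          f"about {nuclear_rate / natural_rate:,.0f}x the natural rate")
```

Even with the more conservative 1% assumption, the implied annual risk comes out hundreds of times above the natural baseline, which is the point of the comparison.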

Nuclear war is still the biggest direct threat, but I expect biotechnology-related threats to increase in the near future (cheap DNA synthesis, big databases of pathogens, at least some crazies and misanthropes). Further along the line, nanotechnology (not grey goo, but “smart poisons” and superfast arms races) and artificial intelligence might be really risky.

The core problem is a lot of overconfidence. When people are overconfident they make more stupid decisions, ignore countervailing evidence and set up policies that increase risk. So in a sense the greatest threat is human stupidity.

In the near future, what do you think the risk is that an influenza strain (with high infectivity and lethality) of animal origin will mutate and begin to pass from human to human (rather than only animal to human), causing a pandemic? How fast could it spread and how fast could we set up defences against it?

Snyder-Beattie: Low probability. Some models we have been discussing suggest that a flu that kills one-third of the population would occur once every 10,000 years or so.

Pathogens face the same tradeoffs any parasite does. If the disease has a high lethality, it typically kills its host too quickly to spread very far. Selection pressure for pathogens therefore creates an inverse relationship between infectivity and lethality.

This inverse relationship is the byproduct of evolution though – there’s no law of physics that prevents such a disease. That is why engineered pathogens are of particular concern.

Is climate change a danger to our lives or only our way of life?

Sandberg: Climate change is unlikely to wipe out the human species, but it can certainly make life harder for our civilisation. So it is more of a threat to our way of life than to our lives. Still, a world pressured by agricultural trouble or struggles over geoengineering is a world more likely to get in trouble from other risks.

How do you rate the threat from artificial intelligence (something highlighted in the recent movie Transcendence)?

Sandberg: We think it is potentially a very nasty risk, but there is also a decent chance that artificial intelligence is a good thing. It depends on whether we can make it friendly.

Of course, friendly AI is not the ultimate solution. Even if we could prove that a certain AI design would be safe, we still need to get everybody to implement it.

Which existential risk do you think we are under-investing in and why?

Snyder-Beattie: All of them. The reason we under-invest in countering them is that reducing existential risk is an inter-generational public good. Humans are bad at accounting for the welfare of future generations.

In some cases, such as possible existential risks from artificial intelligence, the underinvestment problem is compounded by people failing to take the risks seriously at all. In other cases, like biotechnology, people confuse risk with likelihood. Extremely unlikely events are still worth studying and preventing, simply because the stakes are so high.

Which prospect frightens you more: a Riddley Walker-type scenario, where a fairly healthy human population survives, but our higher culture and technologies are lost, and will probably never be rediscovered; or where the Earth becomes uninhabitable, but a technological population, with cultural archives, survives beyond Earth?

Snyder-Beattie: Without a doubt the Riddley Walker-type scenario. Human life has value, but I’m not convinced that the value is contingent on the life standing on a particular planet.

Humans confined to Earth will go extinct relatively quickly, in cosmic terms. Successful colonisation could support many thousands of trillions of happy humans, which I would argue outweighs the mere billions living on Earth.

What do you suspect will happen when we get to the stage where biotechnology becomes more augmentative than therapeutic in nature?

Sandberg: There is a classic argument among bioethicists about whether it is a good thing to “accept the given” or to try to change things. There are cases where it is psychologically and practically good to accept who one is, or a not-very-nice situation, and move on… and other cases where it is a mistake. After all, sickness and ignorance are natural but rarely seen as something we ought to just accept – yet we might have to learn to accept that there are things medicine and science cannot fix. Knowing the difference is of course the key problem, and people might legitimately disagree.

The kind of augmentation that could really cause big cultural divides is augmentation that affects how we communicate. Making people smarter, longer-lived or able to see ultraviolet light doesn’t much change who they interact with, but something that lets them interact with new communities does.

The transition between human and transhuman will generally look seamless, because most people want to look and function “normally”. So except for enhancements that are intended to show off, most will be low-key. That does not mean they will not change things radically down the line, but most new technologies spread far more smoothly than we tend to think. We only notice the ones that pop up quickly or annoy us.

What gives you the most hope for humanity?

Sandberg: The overall wealth of humanity (measured in suitable units; there is a lot of tricky economic archaeology here) has grown exponentially over the past 3,000 years or so, despite the fall of the Roman Empire, the Black Death and World War II. Just because we also mess things up doesn’t mean we lack the ability to solve really tricky and nasty problems again and again.

Snyder-Beattie: Imagination. We’re able to use symbols and language to create and envision things that our ancestors would have never dreamed possible.

Anders Sandberg works for the Future of Humanity Institute at the University of Oxford.

Andrew Snyder-Beattie works for the Future of Humanity Institute at the University of Oxford.

This article was originally published on The Conversation. Read the original article. The views expressed are those of the author and do not necessarily reflect the views of the publisher. This version of the article was originally published on Live Science.
