10 Things You Should Know About the Future of Artificial Intelligence


We’re doing pretty well, we humans. As a species, we’ve been around for a good 200,000 years, surviving – if not evading – the full range of natural disasters, from volcanic eruptions to catastrophic impacts – K-T extinction events of the sort that ultimately did in the dinosaurs.

 

We’re also really rather creative as a species (putting the sapiens into Homo sapiens, if you like). And credit where credit’s due: we haven’t yet invented anything with the destructive power to eradicate our species from the face of the earth. I even count nuclear weapons within this bracket, as – needing notoriously rare raw materials such as plutonium, and a lot of space – nuclear facilities tend to be both costly and really rather difficult to hide, meaning few groups have them.

 

But creativity brings risk, and just because we haven’t discovered something that poses an existential threat to us yet doesn’t mean it’s not on the horizon. What might such a threat look like? Genetic modification technology for humans? A tool that enables global totalitarianism? Molecular nanotechnology? Something that could biochemically wipe us out, or – indeed, central to all of these things – machine intelligence?

 

Short of a freak, insurance-proof asteroid strike, if anything poses an existential threat in the next 100 years, it’s probably going to be technological. In fact, as we accelerate the growth of our artificial intelligence, we’re also seeing a growing number of experts who share this fear. Here are 10 things you should know about what any such threat might look like:

10. Dominance is human.

Humanity’s days of being dominant might be numbered.

Anyone who’s seen a Terminator film (which I’m guessing is most of us) will be aware of the series’ arch-antagonist, Skynet. Designed by the U.S. military, the AI hive-mind gradually gains consciousness and foresees that humanity will one day attempt to destroy it. In an act of self-preservation, Skynet therefore launches a preemptive, apocalyptic nuclear exchange between the U.S. and Russia; an event that comes to be known as “Judgment Day”.

 

The nuclear apocalypse scene that forms such a vivid and memorable part of Terminator 2 perfectly encapsulates our fear of the extremes to which AI could lead us. But is there any rationale behind the fear that we could one day see the fall of humanity and the “Rise of the Machines”?

 

According to Yann LeCun, the Director of AI Research at Facebook and a professor at NYU, we have nothing to worry about. For a start, if we were ever intelligent enough to program such powerful machines, it’s highly unlikely we’d program them with the capacity to do us that amount of damage. Second, LeCun argues, the will to dominate and destroy is intrinsically human – a reaction to perceived threats that’s been genetically coded into us over millennia of evolution. Its absence from artificial intelligence may be our “Salvation”.

 

It’s fair to say that if we decided to code anything of the sort into our AI, we’d arguably deserve our own destruction. There are problems with this view, however. The first and foremost is that it might not be AI’s will to dominate us that becomes the “Genesis” of this conflict. It could be its will to comply.

9. There can be such a thing as “too helpful”.

Cars that drive themselves. Still seems spooky.

The worry is not that AI will develop destructive desires; it’s that it will be overly competent and committed to fulfilling its task. To tease out this idea, it might be useful to think about a current piece of AI most of us couldn’t live without – our GPS systems.

 

Imagine we program our GPS to get us across a city as quickly as possible. Our GPS will perform this function, but in doing so it may take us through areas that aren’t in our interest – dangerous neighborhoods or recently changed one-way streets, for example.

 

Now imagine that we’re not driving one of our current manuals or automatics, but an autonomous vehicle, maybe something like one of the Google cars. And imagine that the AI is programmed to overrule any attempted intervention on our part because – you know – the AI knows best.

 

This could conceivably happen in our relationship with AI. If we program it to carry out a specific task but then try to interrupt it or pull it away from that task, the AI could perceive our intervention as interference. And, problematically, if we don’t put checks in place, it could treat such interference as an obstacle to be overcome in order to complete the task. The answer? Make sure that if we design AI of such capabilities, we put checks in place so that our intervention is never seen as the problem.

8. We could inadvertently turn our own weapons against ourselves.

Are we shooting ourselves in the foot with AI?

In a recent episode of the critically acclaimed TV show “Black Mirror”, “Men Against Fire”, we were offered a hypothetical glimpse into a terrifying future for the US military. In the episode, the military had designed a chip to be implanted into its soldiers. What it essentially did (**spoiler alert**) was rewire their perception so that they no longer saw their enemies as people but as hellish, zombie-like creatures (or “roaches”), making it an awful lot easier to pull the trigger.

 

As with pretty much all other episodes of the series, the real horror comes from the fact that such a future is remarkably easy to imagine. In fact, what makes “Men Against Fire” even more terrifying is that technology has already been invented that takes the power of life and death out of human hands. And it permeates our military today.

 

I’m referring to lethal autonomous weapons: the kind that frighten AI experts the most. This AI is capable of seeking out and identifying targets independently of human involvement. There are narrow, short-term benefits to such weapons from a military perspective – a reduction in friendly casualties, for example. But the long-term risks of these cheap and easy-to-produce weapons could be catastrophic, especially if they fall into the hands of terrorists or totalitarian states.

 

At least, this is the fear expressed in this Open Letter From AI & Robotics Researchers. Global experts and household names – Stephen Hawking, Elon Musk, Steve Wozniak and Noam Chomsky among them – are speaking out against this potential threat. We’d do well to listen.

7. We need a global safe word.

Orange. Orange is the safe word.

If we’re going to create technologies that threaten us existentially, we need to make sure we design them with a kill switch. Admittedly, it’s unlikely this will be anything as simple as those tiny, inaccessible reset buttons you used to get on remote controls – the ones you could only push by sticking a pen up there (no, just me?) – but we’re going to need something.

 

So thinks Cambridge theoretical physicist Stephen Hawking: unless we prepare ourselves to face the dangers AI could bring, he warns, it could spell the end of humanity. But Hawking isn’t the only respected academic leading the charge. In his article entitled The Strategic Implications of Openness in AI Development, Oxford academic Nick Bostrom suggests that extreme competition in the race to build the first super-intelligent AI could prove disastrous.

 

We have, Bostrom argues, an obligation to future generations to establish a series of checks that will slow the process down. Fortunately, it seems the process has already begun. Google’s DeepMind team has been designing a kill switch that allows for the “safe interruptibility” of robotic tasks. It doesn’t apply to all AI yet, but it’s at least a start.
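For the curious, here’s the idea at its most basic. The sketch below is purely illustrative Python – a hypothetical toy agent, not DeepMind’s actual implementation – showing the behavior “safe interruptibility” is after: a human interrupt simply halts the agent, and nothing in the agent’s objective treats that interruption as a problem to be routed around.

```python
# Toy sketch of "safe interruptibility" (hypothetical; not DeepMind's code).
# Principle: a human can stop the agent at any time, and the interruption is
# handled outside the agent's objective, so the agent has no incentive to
# resist the operator or learn to avoid being switched off.

class InterruptibleAgent:
    def __init__(self, task_length):
        self.task_length = task_length
        self.progress = 0
        self.interrupted = False

    def interrupt(self):
        """The 'big red button': a human operator asks the agent to stop."""
        self.interrupted = True

    def step(self):
        """Do one unit of work, unless a human has pressed the button."""
        if self.interrupted:
            return "halted"   # comply immediately; no penalty, no workaround
        if self.progress < self.task_length:
            self.progress += 1
        return "working" if self.progress < self.task_length else "done"


agent = InterruptibleAgent(task_length=10)
for t in range(10):
    if t == 4:                # a human intervenes mid-task
        agent.interrupt()
    print(f"step {t}: {agent.step()} (progress={agent.progress})")
```

The hard research problem, of course, is keeping a learning system indifferent to the button even as it optimizes for its task; the toy above only shows the behavior we want to end up with.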

 

Technology that could threaten us must be designed with a simple, fail-safe kill switch. Then again, any future AI engineers looking for inspiration don’t have to look far. Just look at the simplicity of the Death Star’s kill switch that Galen designed in Rogue One. The absolute gentleman.

6. We need to balance AI’s benefits and drawbacks.

We gotta get our priorities straight. Pump the brakes on AI.

Every aspect of human civilization derives from human ingenuity. Whether it’s the discovery of fire, the invention of the wheel, the inception of democracy or, indeed, the invention of the computer; all of these have contributed to our taxonomic dominance on this planet and all have come from our innate creativity.

 

Our species has – by and large – navigated its course through the hunter-gatherer, agricultural and industrial stages and is now sailing the murky seas of the age of technology. And the fact that we recognize the potential of this age is apparent from the phenomenal amount of money we’re pouring into AI research.

 

It’s well known that AI already dictates much of our economy. The fast-paced, drug-fueled, reactionary chaos of the stock market – as portrayed in “The Wolf of Wall Street” – is a remnant of yesteryear, with AI, through its complex algorithms, better able to assess and predict the markets than any human. But it goes further than that; even when we check hotel tariffs, plane fares, whatever, all of the prices are dictated by machines and algorithms.

 

But with all the benefits that AI can bring, there are a number of pitfalls, not least in terms of the future of the labor market. It’s for this reason that a number of academics have outlined a series of Research Priorities for Robust and Beneficial Artificial Intelligence.

5. AI could take all our jobs.

Sorry, chess masters, your job is over.

The implications AI could have for jobs are the topic of Volker Hirsch’s TED talk “AI & The Future of Work”. As Hirsch outlines, the efficacy of machines (and the fact they don’t tend to require a salary) makes them preferable to people in terms of a company’s end-of-year balance sheet.

 

It’s presumably for this reason that Apple’s supplier Foxconn decided to replace 60,000 workers with robots in one of its Chinese factories in May last year. And AI’s predominance isn’t limited to the factory floor. Builders had better beware too, as there’s a robot that can build a house from brick and mortar in two days flat (and without needing numerous tea breaks).

 

But it’s not just logistical, secondary-industry jobs that will come to be replaced. AI is also capable of performing diagnostic procedures we consider so technical and advanced that we still entrust them to highly paid professionals. Machines can recognize tumors more quickly than humans can, for example.

 

In essence, we must weigh the economic advantages of smarter AI against the adverse effects it could have on employment. And we must also consider that its potential to permeate our labor market and leave scores of people without jobs could lead to considerable social inequality. Sure, we’ll produce more, and it’ll cost less. But, without jobs, will there be anybody to sell to?

4. We are strong while our AI is weak.

Is it time to be nervous? Absolutely!

All the potential effects of AI on the future of humanity that we’ve outlined so far have one thing in common. They’re predictions based on our current, “weak” AI. If our AI were to become “strong”, the situation could become much more serious. But to understand this, we first need to differentiate between narrow (or weak) AI and general (or strong) AI.

 

“Weak” AI is fine-tuned to perform a specific task. It can drive a car, for example, be a bot in a videogame, or be that impossible-to-beat PC chess master on hard mode. It can perform many such tasks – playing chess, for example – better than most, if not all, humans. But it’s limited to a specific task. “Strong” AI, when invented, will be able to perform a wide range of tasks. And just as weak AI is currently able to perform individual tasks better than humans can, strong AI will be able to do the same with general tasks.

 

Even the current military drones we use, like the X-47B Pegasus drone, count as weak AI. And they’re capable of carrying out unpiloted missions and then landing themselves on an aircraft carrier – something that any pilot will tell you is the most difficult landing maneuver there is.

 

And this should worry us. Before pushing forward in our quest to design “strong” AI, we should first be clear about what we want from our artificial intelligence, and, second, protect ourselves from potential hijackings of “strong” AI by groups with other, more sinister motives. Reprogramming is a constant worry among AI scientists, confirming this article’s view that humans, not robots, are the real reason artificial intelligence is scary.

3. It could undo some of the damage we’ve already done.

Does AI offer the promise of amazing breakthroughs? Maybe.

Technological breakthroughs in 2016 showed us just how fertile the field of AI is. Considerable advances in data size and developments in algorithmic complexity led to AI that can outperform humans at a growing range of tasks. But it would be a mistake to look only at how AI can help us as individuals, in the minutiae of our day-to-day lives.

 

Globally, we’re seeing a frightening reduction in biodiversity, with wild animal populations expected to have declined by around two-thirds by 2020. Human activities are seriously accelerating the rate of extinction – to a baffling 1,000 times what it would be without our involvement. And, make no mistake, the effect this is having on our ecosystems is as dangerous for us as it is for their current – often furry, soon-to-be-extinct – residents.

 

AI is doing its part to help though. By collecting inconceivable amounts of data relating to every aspect of environmental systems, it’s allowing us to create a virtual dashboard outlining what we need to do to ensure these ecosystems are protected. It’s only a start – investment and further inter-governmental collaboration are needed to really start reversing the terrible impact we’ve had on global biodiversity. But it is, at least, a start.

 

And it’s not just damage to the planet, but also damage to ourselves that AI could help rectify. And it’s paramount that we focus on those technologies that are existentially beneficial – things that cure diseases, nourish us, or generally retard the ageing process – rather than those that pose a threat to our existence.

 

2. AI knows everything about us.

AI knows all.

There’s a good chance that, while you’re reading this, you have a couple of other tabs open. One of them might be YouTube, and assuming you’re using your own account, I’ll bet you have a bunch of recommended videos lined up that you didn’t pick but that you’ll (at least probably) enjoy. Another might be Amazon, and again I’ll bet there are a bunch of recommended items that – actually, on closer look – really would help you out and streamline various aspects of your life.

 

These are just a couple of examples of how we’ve become accustomed to AI collecting our data and using it to make our lives simpler and our tasks quicker. But – obviously – it goes beyond recommended videos and purchases. And the extent to which we’re observed, and data is collected on us, is deeply concerning.

 

Of all the data and information that’s held on us, Google holds the most. And this is in no small part because it has acquired the lion’s share of companies working in AI, some of which have a history of military research work for agencies such as DARPA. Should we be concerned about the monopoly of data they have on us? Probably, yes. But don’t type your concerns into Google.

1. By backing itself up, it could get our backs up.

AI will get us one way or another.

We’ve all been grateful to the great god Technology at some point or another for his merciful intervention when using Microsoft Office. You’re writing up an important document, maybe you’re so into it that you’re dead to the world around you, and suddenly everything freezes and you see the dreaded spinning beach ball of death (you got me, I’m a Mac user).

 

If you’re relatively computer-illiterate like me, you panic for a while, troubleshoot a few suggestions, but ultimately force-close the document, trusting that it’s either been autosaved or that you saved it recently enough that you won’t have a complete mental breakdown when you next open it to find a mere few lines of text. And the majority of the time, it works – you can go about your job confused about your computer but content about the survival of your document.

 

The benefits that come with AI backing itself up are offset by potential pitfalls. As Jay Tuck alludes to in his frankly terrifyingly entitled TED talk, “Artificial Intelligence: It Will Kill Us”, like Theseus making his way through the Minotaur’s labyrinth, AI leaves a trail of string so it can always work its way back if necessary.

 

Okay, Jay Tuck didn’t make a classical reference to Theseus. But the idea’s the same: in order to do what it’s programmed to do, AI backs itself up so that it can survive and reassemble itself if we destroy part of it.

 

Conclusion.

It’s not through our physical strength that humans have evolved to become the dominant species on earth, but through our cerebral superiority. And, setting aside certain issues – like the fact that we’re chemically killing the planet – the positive steps we’re currently taking to sustain our species’ existence are quite remarkable.

 

The way in which we’re developing AI for the benefit of humanity is also laudable. However, it’s also something in need of regulation and, above all, caution. In our quest to eradicate war, famine, disease and poverty, we must create AI with the capacity to do so. And in creating such artificial (super)intelligence, we need to make absolutely sure that we research AI safety in tandem and that the intelligence is working with us, not against us.

 

Because we shape almost every aspect of the landscape around us, and leave such a visible ecological footprint, we tend to think of our species as invincible. In reality, 99 percent of the species that have at one point walked, flown or swum on this earth no longer do so, and it’s perfectly conceivable that – despite our delusions of grandeur – if we don’t heed the advice of experts and start placing checks on some of our AI advances, we could end up following in their footsteps.