Big Data Flood, Part 2: The Pros, Cons, and Realities of Artificial Intelligence


This article is the second in a series on Everyday Implications of the Big Data Flood. See the first article here.

The conflict between Elon Musk and Mark Zuckerberg that was reported barely two months ago revolved around a difference of opinion on a topic that may seem ridiculous: whether AI is an existential threat to human civilization.

The dialogue partly served to promote the two businessmen’s personal brands and companies, but it also highlighted that there is no consensus on the topic, even among people who are closely involved with it and generally pretty smart.

This particular dialogue started when Musk made an appeal to the National Governors Association, during their annual meeting, asking them to impose proactive regulations – i.e. to act right now, without waiting for any problems to arise – on the development of artificial intelligence, which he argued poses a fundamental, existential risk to human civilization.

 

Many tech leaders seem to believe that AI is a legitimate threat.

Elon Musk has been talking the talk on the risks of AI for quite some time, and he also seems to be walking the walk. Consider:

  • He has asked for more government regulation. How often do you see a billionaire businessman ask public administrators to regulate something?

  • He has put a fair amount of his own money into OpenAI, a non-profit research company whose mission is to develop AI independently of any Silicon Valley giants and make all of its work publicly available, so that the benefits of AI do not accrue to a single company, institution, or even country.

Alongside Musk, many prominent tech experts share similar apprehension. And we’re not just talking about thought-leaders whose expertise lies in different fields, like Bill Gates or Stephen Hawking. The concerned also include a fair number of researchers who have devoted their careers to AI, such as Ray Kurzweil, Eric Horvitz, Stuart J. Russell, and multiple members of the Association for the Advancement of Artificial Intelligence (AAAI), one of the most influential organizations in this domain.

 

There are two common-sense facts about AI that should make you at least a little nervous.

Putting aside the sci-fi visions of killer robots that Elon Musk’s warnings bring to mind, there are a couple of more-or-less obvious aspects of AI technology that should make you think twice.

  • First, artificial intelligence does not have to reach the level of being general, i.e. artificial general intelligence (AGI), to become dangerous. Every software engineer knows that each and every IT system contains errors in code. According to the authors of the programming bible Code Complete, on average there are between 15 and 50 errors per one thousand lines of delivered code; a modest million-line system would, at that rate, carry on the order of tens of thousands of latent defects. Programs that implement AI algorithms are no exception. What is more, AI systems are often so complex that even their creators do not entirely know why they work the way they do; they are treated as black boxes, whose behaviour can be analyzed only through their inputs and outputs.

  • Second, today’s artificial intelligence doesn’t directly involve ethics or morals. It is centered around optimizing the values of particular parameters or metrics and trying to reach a goal without much regard for anything else. That is where we get the idea of the “callous” or “soulless” machine. AI is a very sophisticated machine, but like every machine it simply does what it is designed to do. If we make a mistake when specifying the goal, we can end up with entirely unintended consequences, as the sketch after this list illustrates.
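
To make the second point concrete, here is a minimal, purely hypothetical sketch in Python of goal mis-specification. The engagement and accuracy functions are invented toy models, not any real product’s algorithm: the optimizer is handed a single knob and a goal that mentions only engagement, so it drives the knob to an extreme that the goal’s author almost certainly did not intend.

    # A toy, hypothetical illustration of a mis-specified goal, not any real
    # system: the optimizer is told to maximize engagement and nothing else,
    # so it happily sacrifices accuracy to get there.

    def engagement(sensationalism: float) -> float:
        """Invented toy model: more sensational content gets more clicks."""
        return 1.0 + 4.0 * sensationalism

    def accuracy(sensationalism: float) -> float:
        """Invented toy model: accuracy drops as sensationalism rises."""
        return 1.0 - sensationalism

    def optimize(goal, steps=200, step_size=0.01):
        """Naive hill climbing over a single knob constrained to [0, 1]."""
        x = 0.5
        for _ in range(steps):
            up = min(x + step_size, 1.0)
            down = max(x - step_size, 0.0)
            x = up if goal(up) >= goal(down) else down
        return x

    # Goal as specified: engagement only. Nothing penalizes the loss of accuracy.
    x = optimize(lambda s: engagement(s))
    print(f"specified goal -> sensationalism={x:.2f}, accuracy={accuracy(x):.2f}")

    # Goal as intended: engagement, but not at the expense of accuracy.
    x = optimize(lambda s: engagement(s) + 5.0 * accuracy(s))
    print(f"intended goal  -> sensationalism={x:.2f}, accuracy={accuracy(x):.2f}")

Note that the “fix” in the last two lines is itself a design decision: someone has to remember to put accuracy into the goal, and to weight it sensibly, before the optimizer starts turning the knob.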

 

Fans of unregulated AI will worry when – and if – the threat actually materializes.

Enthusiasts of unbounded AI progress come up rather short when it comes to arguments of their own.

Mark Zuckerberg says that he is an optimist and simply leaves it at that. He does not understand “naysayers” who are “negative.”

And even the father of modern robotics, Rodney Brooks, unconvincingly points out that existing AI is quite primitive and that, based on his experience designing industry-grade robots, achieving anything that actually works in this domain is extremely hard.

Supporters of unregulated AI question whether we can really achieve AGI and, if so, how long it will take. Douglas Hofstadter, a professor of cognitive science and an AI pioneer, said that “Life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt [the singularity] will happen in the next couple of centuries.”

Jaron Lanier, a virtual reality pioneer and philosopher working at Microsoft Research, argues that humans are not merely biological computers and as such cannot simply be replaced by machines.

Gordon Bell, on the other hand, has stated that the human race will destroy itself before it can reach the singularity.

What to make of all these differing viewpoints? In my opinion:

 

AI fans might not win this debate, but they’ll still get their way.

It is hard to imagine how we could actually control or limit the development of AI, for three reasons:

  • First, country-level laws aren’t very effective, as the research can easily be moved abroad or conducted undercover.

  • Second, the domain is so complex that it would be hard to find people who can judge what is dangerous and what is not. Moreover, researching and developing AI doesn’t require any special equipment and can be done in a gifted student’s garage, however much of a cliche that might be.

  • Third, the major companies that sponsor and lead today’s AI research have the financial resources to influence political or legal decisions that may affect their work in the space. And as the tax avoidance strategies of Google and Apple show, following the spirit of the laws on the books is not always a priority for tech giants.

 

In conclusion, I believe that when it comes to the rapid expansion of AI, there are several legitimate reasons to be concerned and few well-argued reasons not to be. But that expansion won’t let up, whether we like it or not.
