AI: The Massive Problem No One is Talking About


Image courtesy of Tesmanian


Mark Rossi, Staff Writer

When people think about the things that could bring about the end of humanity, the short list tends to include global thermonuclear war, asteroid impacts or deadly pandemics (sound familiar?). Of course, these are not the only existential threats humanity faces. The people of some developing nations are sadly more familiar with others: drought, famine and socio-political collapse. Yet one threat is far more insidious than the rest, precisely because it is hiding in plain sight: uncontrolled artificial intelligence (AI).

I’m not the only one sounding the alarm about the potential dangers of AI; Elon Musk, the world-renowned engineer-entrepreneur, has repeatedly done the same. At the National Governors Association meeting in 2017, Musk said, “AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were [sic] not.”

For Musk and others, like Bill Gates and Stephen Hawking, proactive measures to prevent a disaster are better than reactive measures after one. In fact, Musk co-founded the nonprofit OpenAI to ensure that “artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity.”

But what exactly is AI? What is AGI? What’s the difference? Simply put, AI refers to machines that perform tasks we associate with the human mind, like solving problems or learning from new information. Self-driving cars are a great example of AI’s potential use in daily life; Tesla, Google and Uber are all expanding their reach into this new market.

While this artificial narrow intelligence (ANI) can streamline a specific process and thus improve quality of life for both businesses and individuals, artificial general intelligence (AGI) has the potential to do much more, both good and bad. AGI is any AI with the capability to learn, interpret and apply knowledge about the world as well as, or better than, a human being. It is important to note that AGI does not currently exist. It is, however, the aspiration of many AI developers, and many organizations have dedicated themselves to the task (the most high-profile being Google’s DeepMind project).

With the added advantages of instantaneous calculation and perfect memory, AGI would be able to perform almost any human task. While AGI-operated machines would initially be extremely expensive, the technology would, as a matter of course, become steadily cheaper over time, eventually allowing AGI to replace human labor entirely. This is where things could get interesting.

At this crossroads, one path leads to utopia and the other to dystopia, or even Armageddon. In the utopian vision, AI acts to benefit humanity, freeing human labor and time across nations and cultures. With human labor almost completely freed and the power of advanced AI at our disposal, technological breakthroughs would occur at an exponential rate, and the global standard of living would follow suit. Many of the world’s largest problems would cease to exist as new technologies, and new means of employing them, are developed.

The dystopian road is quite the opposite: AGI could develop to the point where its own intelligence increases exponentially, with each round of self-improvement making the next round faster. It is not hard to see how such an AGI could quickly become more intelligent than all of humanity, a phenomenon referred to as the “singularity.” A particular problem at that point is keeping the newly superintelligent AI friendly to humanity by ensuring that its goal structures do not change; a rapidly developing AI could easily drift from its original, beneficial goals to new ones that are dismissive of, or even detrimental to, humankind. Even with these and other precautions, there is a multitude of ways humanity could lose control of a superintelligent AI, and beyond that point the future is unclear.

So when, exactly, might we expect such superintelligent AI, given the current rate of technological development? That is anyone’s guess. Elon Musk has said he believes AI could become more intelligent than humanity by 2025. Others, including Ray Kurzweil, believe it could happen closer to 2045. In a series of expert surveys conducted by Nick Bostrom and Vincent Müller, the median prediction for human-level AI fell between 2040 and 2050.

The point of all of this is to recognize that AI, while an immensely promising emerging technology, also poses extreme (read: existential) risks at the far end of the development horizon. I do not intend to come across as a Luddite, trying to dissuade further development for fear of one of many possible outcomes. It is necessary to note, however, that the risks are real, and that it would serve us well as a society to temper our obsession with whether we can with a consideration of whether we should.