Interface versus face-to-face: AI to end humanity?

by PDBY Staff | Apr 9, 2019 | Features

COURTNEY TINK

In 2016, artificial intelligence was predicted to advance in leaps and bounds. While this creates anticipation for some, it also brings feelings of foreboding and negativity to others.

Artificial intelligence is defined as the area of computer science concerned with giving machines the ability to mimic intelligent human behaviour.

It has become increasingly clear in the modern world that man and machine have begun to merge, relying on one another to exist. According to English theoretical physicist, cosmologist and author Stephen Hawking, primitive forms of artificial intelligence (AI) have already proven very useful, as evidenced by his own use of AI communication technology. However, Hawking recently discussed his fear of the consequences of creating something that can match or surpass human intelligence. “The development of full artificial intelligence could spell the end of the human race. In theory, a hyper-intelligent form of AI would find the human race obsolete and unnecessary, or perhaps, as seen in numerous science-fiction works, may seek to save the human race from its greatest enemy: itself,” he said. Hawking clarifies that this would stem not from malice, but from competence. If humanity’s goals do not align with those of its machines, he warns, the result will be problematic for humans, turning The Terminator from far-out fiction into near-future fact.

Elon Musk, a South African-born engineer and inventor, has warned society about the dangers of AI weaponry. Musk tweeted in 2014 that AI is “potentially more dangerous than nukes”. This is based on the idea that if military weapons became autonomous, they could activate worldwide without any need for human intervention. Currently, bomb-disposal robots and drones can be remotely controlled. If these were to gain autonomy, they could harm a specific group based on criteria that no human had programmed.

Another potential issue with AI lies in the possibility of superintelligent weaponry being sold on the black market to terrorists and warlords, which could create a new wave of war and destruction unlike any before it. Recently, Musk and Hawking, along with over 1000 other leading scientists and business figures, signed an open letter released by the Future of Life Institute calling for a ban on lethal weapons controlled by artificially intelligent machines, in an attempt to highlight the dangers of AI.

While AI does have a negative stigma surrounding it, it has created many new possibilities that could benefit society. For example, the supercomputer Nautilus has access to a multitude of news articles, to which it applies sentiment analysis algorithms and place-name detection, creating “predictions” based on any patterns and common agendas it finds. In test environments, it managed to locate Osama bin Laden and predict the Arab Spring revolts. Nautilus could assist law enforcement agencies in finding criminals by using such patterns to locate them. Google is also currently working on a car that uses AI to drive itself to predetermined destinations. Its software, called “Google Chauffeur”, aims to stop fatal road accidents, such as those seen in South Africa during last year’s festive season. The car will assess the driver and the situation before allowing them to drive, which could make problems such as drunk driving a thing of the past.

AI has near-unlimited potential. However, with great power comes great responsibility. As Hawking states, depending on the manner in which it is developed, AI could either rebuild or destroy humanity.

Image: Ciske van den Heever
