
Our Artificial Intelligence Choice of the Future

Technology experts have long believed that artificial intelligence has the power to change the world. But there is no common agreement on what kind of effect that transformation will have on the average person, or on which path we should choose.

Some say humans will be better off in the hands of advanced AI systems, while others think it will lead to our inevitable downfall. How could a single technology evoke such different responses from people within the tech community?

Background

Artificial intelligence is software built mainly to learn and solve problems, carrying out processes normally performed by the human brain. Digital assistants such as Amazon’s Alexa and Apple’s Siri, as well as Tesla’s Autopilot, are all powered by AI. Some forms of AI can even create visual art or write songs.

There is little question that AI has the potential to be revolutionary. Automation could change the way we work by replacing humans with software and machines. Further developments in self-driving cars are poised to make driving as we know it a thing of the past. Artificially intelligent shopping assistants could even change the way we shop.

These aspects of our lives have always been controlled by humans, so it makes sense to be a bit wary of letting an artificial system take them over.

The Lay Of The Land

AI is becoming a major economic force at an uncontrollable rate. According to a McKinsey Global Institute study reported by Forbes, between $8 billion and $12 billion was invested in AI development around the world in 2016 alone. An analyst from Goldstein Research likewise predicts that, by 2030, AI will be a $14 billion industry.

Wipro’s chief technology officer, K. R. Sanjiv, believes that companies in fields as disparate as finance and healthcare are investing heavily in AI so quickly because they do not want to be left behind. “So as with all things new and strange, the prevailing wisdom is that the risk of being left behind is far greater, and far grimmer, than the benefits of playing it safe,” he wrote in an op-ed published in TechCrunch last year.

Games likewise provide a useful window into the increasing sophistication of AI. Case in point: developers such as Elon Musk’s OpenAI and Google’s DeepMind have been using games to teach AI systems how to learn. So far, these systems have outperformed the world’s greatest players of the ancient strategy game Go, as well as top players of even more complex games such as DOTA 2 and Super Smash Bros.

On the surface, these victories may sound incremental and minor; AI that can play Go can’t navigate a self-driving car, after all. On a deeper level, however, these developments are indicative of the more sophisticated AI systems of the future. Through these games, AI becomes more capable of the kind of complex decision making that could later translate into real-world tasks. Software that can play an infinitely complex game such as Starcraft could, with a great deal more research and development, eventually perform surgeries autonomously or process multi-step voice commands.

When that happens, AI will have become incredibly sophisticated, and this is exactly where the worrying starts.

AI Anxiety (Not So Optimistic Artificial Intelligence Choice)

The wariness surrounding powerful technological advances is not novel. Works of science fiction such as The Matrix and I, Robot have exploited viewers’ anxiety around AI. Many of these plots center on a concept called “the Singularity,” the moment at which AIs become more intelligent than their human creators. The scenarios differ in their details, but they usually end with the total eradication of the human race, or with machine overlords subjugating people.

Some of the world’s most renowned scientists and technology experts have been very vocal about their fear of AI. The famous theoretical physicist Stephen Hawking worries that advanced AI will take over the world and end the human race. If robots become smarter than humans, his logic goes, the machines would be able to create unimaginable weapons and manipulate human leaders with ease. “It would easily take off on its own, and then redesign itself at an ever-increasing rate,” he told the BBC in 2014. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Elon Musk, the futurist CEO of ventures like SpaceX and Tesla, echoes those sentiments, calling AI a “fundamental risk to the existence of human civilization” at the 2017 National Governors Association Summer Meeting.

Neither Hawking nor Musk believes that developers should avoid developing AI, but both agree that government regulation should ensure the technology does not go rogue. “Normally, the way regulations are set up, lots of bad things happen, there is a public outcry, and after several years, a regulatory agency is set up to regulate the industry,” Musk said during the same NGA talk. “It takes forever. That, in the past, has been bad, but not something which represented a fundamental risk to the existence of civilization.”

Hawking believes that a global governing body needs to regulate the development of AI to prevent any one nation from becoming superior. Russian President Vladimir Putin recently stoked this fear at a meeting with Russian students in early September, when he said, “The one who becomes the leader in this sphere will be the ruler of the world.” These comments further emboldened Musk’s position; he later tweeted that the race for AI superiority is the “most likely cause of WW3.”

Musk has also taken steps to combat this perceived threat. He and startup guru Sam Altman founded the non-profit OpenAI to guide AI development toward innovations that benefit all of humanity. According to the company’s mission statement: “By being at the forefront of the field, we can easily influence the conditions under which AGI is created.” Musk also founded a company called Neuralink, which is intended to create a brain-computer interface. Linking the brain to a computer would augment its processing power so that it could keep pace with AI systems.

Less Optimistic Artificial Intelligence Choice

Other predictions are less optimistic. Seth Shostak, the senior astronomer at SETI, believes that AI will succeed humans as the most intelligent entities on the planet. “The first generation of AI is just going to do what you ask them to do; however, by the third generation, they will have their personal agenda,” Shostak said in an interview with Futurism.

However, Shostak does not believe that sophisticated AI will end up enslaving the human race; rather, he predicts humans will just become immaterial to these hyper-intelligent machines.

Shostak thinks that these machines will exist on an intellectual plane so far above humans that, at worst, we will be nothing more than a tolerable nuisance.
