Artificial intelligence (AI) is up for debate. Here’s why

Femi Bamigbetan

April 7, 2019

The rise of robots and artificial intelligence is reshaping the world’s highways, homes, and hospitals — but not everywhere, and not for everyone.

The risks and rewards of AI are not evenly distributed, and the greatest risks are stacked most heavily against people vulnerable to AI’s unintended consequences: data breaches, identity theft, job loss, widening wealth inequality, and human rights violations. AI’s impact is up for debate. But what do we mean — and not mean — by “artificial intelligence”?

Solutions start with definitions, so here’s an overview of the essential distinctions between kinds of AI. Keep it on hand for the debate held on April 3, 2019, at noon ET, in which four top AI experts weighed the scenarios where AI can help or harm humanity’s overall well-being.

The many definitions of artificial intelligence

Artificial intelligence is a machine’s ability to simulate intelligent human behavior, like speech recognition. Robots can be AI-driven, but robots are physical machines that move and act in the world, whereas AI can be purely software, like GPS navigation and game engines.

Within AI, three categories of complexity can affect human well-being: weak AI, strong AI, and superintelligence.

Weak AI is almost all of the AI in the world today. It provides specialized tools to perform difficult tasks faster and better than we can. It powers 3 billion smartphones, generates recommendations, recognizes faces, and automates activities, from driving to writing news articles. It can crush mere mortals at competitive games like chess and Go, and it’s used controversially in predictive policing and other pattern-matching tasks driven by data.

But weak AI is neither self-aware nor self-directed. Nor is its control evenly distributed: a relatively small handful of companies and governments hold vastly more data, platforms, and resources than anyone else. AI’s methods are widely accessible to anyone with basic algorithmic skills, but power is concentrated and transparency is limited. Although weak AI causes major economic disruption, its most fundamental threat is thoughtless design and unjust use by companies and governments.

Strong AI is where risks and rewards escalate: machines that can think, improve, adapt, and act on their own. Just how soon strong AI could rise up and escape human oversight is not only an engineering question; it’s a human rights question. It’s also an emotional issue. Some proponents believe that if AI is built to value empathy, it can improve people’s lives. In the right hands, used ethically, it could lead to the doorstep of curing diseases and improving climate science.

But strong AI, also known as artificial general intelligence, threatens to gut factory and service sector jobs and any industry where automation makes humans dispensable. Even if strong AI is years away, anticipating its impact is an urgent task.

Superintelligence, as the name suggests, surpasses human intelligence: the tipping point into digital utopia or digital doomsday, depending on whom you ask. More than one top engineer has called it the greatest threat to human survival. Peaceful coexistence is the hope, but if drones can fully automate, self-weaponize, and evolve to have self-interest or egos, who or what could stop them? Unfriendly robots, and unfriendly programmers, are possible. Just as possible are medical robots that save lives, or that devalue some lives for efficiency or economic gain. If AI slips free of human control, the question of human well-being might no longer be in our hands.

But many people are less concerned and more hopeful, seeing more benefits than costs — especially if AI can drive social, cultural, and economic progress like hunger relief, housing availability, access to education, transportation safety, disease prevention, and peace agreements.

Tags:
artificial intelligence, PRESENT, Technology, The future