Good governance is the next big step in AI

As originally published in Engadget on November 23, 2016.

There’s no question that artificial intelligence systems are going to play an increasingly important role in daily life, in almost every domain. They will provide incredible benefits — and serious new risks.

But unlike previous eras, in which technology caused harm through misuse or bad human decisions (whether honest mistakes or malicious intent), AI has the potential to make bad decisions itself. The system-gone-amok scenario has been a staple of science fiction (think Skynet in “Terminator”). In a coming world in which missiles, medicines, high-speed transportation, and energy can be guided by machines, that scenario is quickly transitioning to science fact.

For that reason, it is important to begin exploring how society will set up safeguards to prevent or reduce AI-caused harm.

The Need for Standards in AI Governance

One of the most critical issues in AI ethics and governance is how we conceive of AI’s decisional authority. In order to design and implement these systems while minimizing risk, we must conceive of them not as decision makers but as decision supporters.

In this regard, AI can be viewed not as “artificial” intelligence — separate from and independent of human intuition and rationality — but as “augmented intelligence,” combining the best features of machines (massive data, speed, complex algorithms, and tirelessness) with human judgment and intuition. We are already used to this sort of pairing: consider an aircraft flown on autopilot, where the pilot retains the ability to disengage it and take manual control at any time.

This concept of augmented intelligence must be woven into robust frameworks and standards for AI, ensuring designs that mitigate risk. That will further stimulate and facilitate development in AI for two reasons. First, where there is a perception of low risk, there is investment. Second, for programmers, standards translate into the freedom to explore machine capabilities without the mental and moral burden of improvising risk-mitigation features as systems evolve.

Of course, these standards will have to be well-tested and robust. With broadband Wi-Fi and other internet connectivity becoming ubiquitous, preventing the hacking of AI systems is critical, so cybersecurity and user privacy should be tightly integrated alongside AI standards.

This early period of standards drafting is something society has been through before: as the internet became successful, issues and threats came to light, and now there are well-defined policies around its use. The same will happen with AI, but we’re early in that process because we still lack formal public policy and laws regarding automated decision-making by machines.

The Discussion Begins

Although this is all fairly new territory, the public discussion is well underway. Nick Bostrom, a philosophy professor at Oxford, has written extensively on these concerns, notably in his 2014 bestseller, “Superintelligence.” Bostrom hunts only the biggest game: his analyses focus primarily on the gravest threats from AI — doomsday scenarios (like Skynet) in which all human life is threatened — a category he calls “existential risk.”

Other writers have addressed smaller spheres of concern, such as the ethics of allowing driverless cars to make real-time decisions regarding human safety. This focused view of AI governance, in which we approach AI concerns on a case-by-case basis, is the most realistic and practical. AI systems will not be monolithic supercomputers. Instead, systems will be sized and purposed to their task domains. An AI that monitors body signals in real time to prevent seizures will be very different from one that controls environmental settings in the home, and it may require different governance.

The process of moving this issue from the sphere of public intellectuals to actual laws and standards should be interdisciplinary — not just hashed out by software companies. Experts from transportation safety, genetics, the military, medicine, security, philosophy, government, public health, and other domains will bring their strengths and perspectives to drafting the most effective standards for ensuring a world of safe augmented intelligence.

The Pace of AI Quickens

The AI future is in some ways already here. Just this March, Google’s AlphaGo system made international headlines by beating a top-ranked player at Go, the ancient board game long touted as the most difficult for machines to play and win. Driverless cars, guided drones, algorithmically enhanced songwriting, and even “Jeopardy”-winning computers are becoming familiar concepts in our world.

Luckily, no unfortunate event has yet forced the threat of bad AI decisions into public attention. Before one occurs, we should devote time and resources to discussing (and ultimately enacting) AI industry ethics standards. With proper governance in place, innovation will accelerate, and the benefits to our world will be introduced rationally and safely.
