Artificial Intelligence And Policy

Image courtesy of Alphabet Inc.

While we might be far off from artificial intelligence with the capability and malice of the world-ending “Skynet” of the Terminator universe, A.I. is moving right along.

Applied uses of machine learning, neural networks, and other forms of artificial intelligence are revolutionizing transportation, how we use big data, and how we interact with information more generally. As A.I. gets “smarter,” we will need to consider how to respond from a policy standpoint.

When discussing artificial intelligence, it is important to distinguish between general intelligence and specific intelligence. General intelligence is what it sounds like: a broad intelligence with the common sense and reasoning abilities to handle a wide range of basic tasks. Specific intelligence, on the other hand, is rooted in a single ability or skill.

So, for example, a chess app might have a very powerful algorithm that could beat even the best chess player in the world, but a toddler is better than it at pretty much everything else. The algorithm has specific intelligence for chess, but no general intelligence at all.

Smart policy will be necessary for regulating both general and specific intelligence applications. The immediate questions are rooted in specific applications of A.I. (think self-driving cars, kidney-distribution algorithms, and privacy issues), while the longer-term questions are rooted in the development of general intelligence (think neural networks, brain emulations, and other pathways to a seed artificial general intelligence).

For now, one compelling policy area is transportation technology policy, specifically self-driving cars. There are plenty of mind-bending questions to consider. Who is responsible when an accident happens: the car company or someone else? Should the cars be designed, in emergency situations, to save the most lives or to protect their occupants? If a driverless car malfunctions, is the car company always liable?

These questions become more relevant to the real world every day. Uber recently started testing self-driving cars in Pittsburgh, and Google, Tesla, and others have impressive projects underway. The sooner we work these questions out, the better.

A little further off, though not as far as one might think, policymakers and industry leaders must take much more seriously the methods by which we move toward artificial general intelligence. Because the potential value of such a system is so large, private researchers and corporate R&D departments will be in a race of sorts. If a team could reach general A.I. more quickly by cutting ethical and safety corners, a “race to the bottom” could develop, driven by a prisoner’s-dilemma structure: while it would be socially optimal for every research team to take precautions and follow ethical guidelines, that outcome is less likely when everybody is racing.
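
To make that game-theoretic intuition concrete, here is a minimal Python sketch of the dilemma. The payoff numbers are purely illustrative assumptions, not estimates of anything real; the point is only that “cut corners” can be the best response to either choice by the other lab, which is exactly what drives the race to the bottom.

```python
# A minimal sketch of the "race to the bottom" as a prisoner's dilemma.
# The payoff numbers are illustrative assumptions, not data.
# Each lab chooses to "take precautions" or "cut corners"; entries are
# (lab_a_payoff, lab_b_payoff), and cutting corners gives a speed edge.

payoffs = {
    ("precautions", "precautions"): (3, 3),  # socially optimal outcome
    ("precautions", "cut_corners"): (0, 5),  # the careful lab falls behind
    ("cut_corners", "precautions"): (5, 0),
    ("cut_corners", "cut_corners"): (1, 1),  # both race, safety suffers
}

strategies = ["precautions", "cut_corners"]

def best_response(opponent_choice):
    """Return the strategy that maximizes lab A's payoff against a fixed opponent choice."""
    return max(strategies, key=lambda s: payoffs[(s, opponent_choice)][0])

# Cutting corners is the best response no matter what the other lab does,
# so both labs end up at the (1, 1) outcome instead of the (3, 3) one.
for opponent in strategies:
    print(f"If the other lab chooses {opponent}: best response is {best_response(opponent)}")
```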

This race, believe it or not, has already begun. One organization, OpenAI, is attempting to mitigate the risk of this “race to the bottom” by encouraging researchers to build ethical and safety standards into their work.

While private efforts are to be encouraged, they might not be strong enough on their own. This is where policymakers could step in and set smart legal guidelines for what researchers can and cannot do in A.I. research.

While it might sound abstract or “out there,” there is a real need to grapple with these questions. We need to collectively decide what role we want artificial intelligence to play in our lives and in the lives of our children. Policymakers, researchers, nonprofits, and corporations should all have a seat at the table to determine how best to move forward.



