A Dark Side of Artificial Intelligence

Researchers at Nvidia (known for their graphics chips) have developed a unique autonomous car. Unlike most autonomous vehicles, it doesn’t follow instructions written out by an engineer or programmer. It relies instead on an algorithm that taught itself to drive by watching a human do it.

A network of artificial neurons processes data from the vehicle’s sensors and then delivers the commands that operate the steering wheel, brakes, and other systems. The result seems to match the responses you’d expect from a human driver. As impressive as this accomplishment is, there is an unsettling catch: it isn’t clear exactly how the car makes its decisions. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action.
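To make the idea concrete, here is a minimal sketch in PyTorch of the general approach the paragraph describes: a convolutional network maps a camera frame to driving commands and is trained to imitate logged human driving (often called behavioral cloning). The layer sizes, the 66x200 input, and the two-output control head are illustrative assumptions, not Nvidia’s actual design.

```python
import torch
import torch.nn as nn

class DrivingNet(nn.Module):
    """Toy end-to-end driving network: camera frame in, control commands out."""

    def __init__(self):
        super().__init__()
        # Convolutional layers extract visual features from the camera image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        # Fully connected layers map those features to two control outputs,
        # e.g. steering angle and brake pressure (hypothetical choice).
        self.control = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 3 * 20, 100),  # 64*3*20 = flattened size for a 66x200 input
            nn.ReLU(),
            nn.Linear(100, 2),
        )

    def forward(self, image):
        return self.control(self.features(image))

# "Watching a human": minimize the gap between the network's commands
# and the commands a human driver actually gave in the logged data.
model = DrivingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Dummy batch standing in for recorded camera frames and human controls.
frames = torch.randn(8, 3, 66, 200)   # 8 RGB frames, 66x200 pixels
human_controls = torch.randn(8, 2)    # recorded steering + brake values

optimizer.zero_grad()
loss = loss_fn(model(frames), human_controls)
loss.backward()
optimizer.step()
```

Even in this tiny sketch, the decision logic lives in thousands of learned weights rather than in rules anyone wrote down, which is exactly why tracing a single steering decision back to a cause is hard.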

This points to a dark side of artificial intelligence, and not just in vehicles. The car’s AI technology, known as deep learning, has proven to be very powerful at solving complex problems. The hope is that deep learning will be able to diagnose deadly diseases, make trading decisions, and help transform whole industries. But this won’t happen, or shouldn’t happen, unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. That’s one reason Nvidia’s car is still experimental.
