This month Apple became the 30th company to receive a permit to test autonomous vehicles on the mean streets of California. I can’t wait to have that ponytail guy at the Apple Store Genius Bar check my oil. Beyond a $150 permit fee, the Department of Motor Vehicles requires these businesses to report all traffic accidents involving their self-driving cars. I read all the reports; the accidents are mostly minor fender-benders.
Self-driving cars exist only because of artificial intelligence and machine learning. They aren’t so much programmed as trained: sophisticated pattern-recognition systems learn to identify oncoming traffic, road stripes and stop signs. Autonomous cars will eventually be safer than what we have today. A time of fewer accidents and saved lives is coming.
But these cars still have a lot to learn. Most of the posted accidents involve Google’s cars, which have clocked some two million street miles. Impressive, but figure a normal driver logs roughly 10,000 miles a year, and that’s still only the equivalent of what 200 drivers put on their vehicles in a single year. That’s a statistical blip given that there are more than 250 million cars and trucks on American roads. Artificial intelligence needs lots more data.
Google Photos, which uses similar machine learning for facial recognition, hosts billions, maybe even trillions, of pictures. It is wicked smart, a window into a fantastic, if not slightly creepy, future. You tag a face with a name. It then correctly finds that face in other photos—even if they’re a decade old and have 30 other people in them. If you ask Google how it works, the company will say machine learning. But no one really knows exactly.
Neural networks, one of many machine-learning techniques, are modeled on the human brain. Information passes between nodes that look for patterns by weighing signals among these artificial neurons. After devouring millions of deer pictures, the identifying signals become stronger and stronger until the machine can easily identify the animal.
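To make that concrete, here is a toy sketch in Python, entirely my own invention rather than anything Google or Tesla actually runs: a single artificial neuron that weighs two made-up numeric signals and nudges its connection strengths as it sees labeled examples, so the useful signals grow stronger with more data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: "deer" examples cluster around (2, 2), everything else around (0, 0).
n = 1000
labels = rng.integers(0, 2, size=n)                  # 1 = deer, 0 = not deer
features = rng.normal(size=(n, 2)) + 2.0 * labels[:, None]

weights = np.zeros(2)    # connection strengths start weak
bias = 0.0
lr = 0.1                 # how hard each example nudges the weights

for x, y in zip(features, labels):
    # Weigh the incoming signals and squash the sum into a 0-to-1 "deer score".
    score = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))
    # Nudge toward the right answer; over many examples the useful signals strengthen.
    weights += lr * (y - score) * x
    bias += lr * (y - score)

print("learned weights:", weights)   # both end up clearly positive
```

Feed it enough examples and the weights settle on whatever combination of signals separates deer from everything else. Nobody writes an explicit rule.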
Deep learning, which came of age in the past two years thanks to faster processor architectures, uses multiple layers of neural networks to intensify the training—patterns of patterns. As you go deeper down the stack of neural networks, signals emerge for patterns that humans don’t consciously sense. Maybe it is the distance between eyes or the tail-to-torso ratio. No one knows. As professor Tommi Jaakkola explained to the MIT Technology Review, once a neural network becomes extremely large, “it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” This can cause some trouble.
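Before getting to that trouble, here is an equally rough sketch of the stacking itself, again in Python with invented layer sizes and random, untrained weights; it shows only the shape of the computation, not a real vision system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented layer sizes: raw input signals -> several hidden layers -> one output score.
layer_sizes = [64, 128, 128, 32, 1]
weights = [rng.normal(scale=0.1, size=(m, k))
           for m, k in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(signal):
    # Each layer re-combines the previous layer's outputs: patterns of patterns.
    # These intermediate activations are the part nobody designed by hand
    # and nobody can easily read off later.
    for w in weights:
        signal = np.maximum(0.0, signal @ w)   # keep only the signals that fire
    return signal

image_features = rng.normal(size=64)   # stand-in for pixel data from one photo
print("output score:", forward(image_features))
```

Multiply that by thousands of units per layer and hundreds of layers and you get Mr. Jaakkola’s “quite un-understandable.” Now, the trouble.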
In 2015 Google Photos tagged two African-Americans as gorillas. The YouTube Kids app, meant for children 5 and under, suggested videos that included foul language and jokes about pedophilia. In March 2016, Microsoft released a chatbot named Tay.ai, letting people on the internet train it. The bot quickly turned into “a Hitler-loving, feminist-bashing troll,” according to TechRepublic. Microsoft shut it down.
Bad artificial intelligence can be deadly. There was the Tesla crash in Florida last year, when the car’s Autopilot sensors mistook a white truck trailer for the bright sky. The National Highway Traffic Safety Administration closed its investigation, stating that “a safety-related defect trend has not been identified at this time.” Four months before the Florida crash, 23-year-old Gao Yaning died in Handan, China, when his Tesla rear-ended a slow-moving road-cleaning truck. His family is suing Tesla.
I can already imagine the cross-examination: “So, Mr. Musk, can you show me the code that instructs the car to avoid trucks or deer or drunk spring breakers? No? Can you give me the name of the programmer who wrote the code?” Of course not. This kind of code doesn’t exist. If it did, someone holding up a picture of a deer could get your car to swerve. Treble damages.
In the future it will be hard to find a business that artificial intelligence hasn’t disrupted. But be ready for a mangy mop of mesothelioma lawyers rushing headfirst into the artificial-intelligence injury racket. The industry desperately needs a safe harbor—much like the Digital Millennium Copyright Act of 1998, which kept legal paws off the emerging web. AI also needs a framework for functionality and verification, plus clear legal and regulatory rules. Otherwise, trial lawyers will be happy to fill the void with lawsuits.
Mr. Kessler writes on technology and markets for the Journal.