Judging by the daily stream of automotive news, you'd imagine the self-driving car has been perfected and is 99.99 percent ready to roll. Many companies are indeed testing self-driving cars, but it's worth remembering that while many systems can be automated easily in a perfect, "clean" environment, driving often involves the unexpected, and that could delay the commercial introduction of autonomous vehicles.
In a recent Harvard Business Review article, writers Nick Oliver, Kristina Potočnik and Thomas Calvard noted that structured processes and environments are much easier to automate than unstructured ones: a straight road in good condition, for example, or a steady flow of traffic.
“Automated systems need to collect, classify, and respond to information, and this is easier to do in a clean, unambiguous environment — which is what many driving environments are not,” they wrote. “The designers of self-driving systems simply cannot foresee every possible combination of conditions that will occur on the road. (Though companies are trying: Google’s Waymo team deliberately subjects its cars to ‘pathological situations’ that are unlikely to happen, such as people hiding in bags and then jumping in front of the car.)”
Self-driving cars, of course, depend on cameras, sensors, lasers, inter-car communications, data collection and interpretation, and other technologies that may perform well on a dry, clear road in good shape. The same systems might become confused by less-than-optimal driving conditions.
“Cameras are challenged by strong, low-angle sunlight (important for reading traffic lights), and lasers can be confused by fog and snowfall,” wrote the HBR article authors. “Unusual, unfamiliar, and unstructured situations (so-called edge cases), such as accidents, road work, or a fast-approaching emergency response vehicle, can be hard to classify. And self-driving systems are not good at detecting and interpreting human cues, such as gestures and eye contact, that facilitate coordination between cars on the road.”
One conclusion is that even when self-driving cars are ready for the road, the road may not yet be ready for self-driving cars. While many elements of self-driving vehicles will do far better than human drivers ever could – reaction times, for example, or lasers that can see in the dark – there are still circumstances in which computers are not yet ready to replace human judgment. Machine learning – situations in which vehicles can "learn" fixes for unexpected problems and then share them across the network – will go a long way toward helping cars become more "human."
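That fleet-learning idea (one vehicle encounters an unfamiliar edge case, and the fix is then shared across the whole network) can be sketched in miniature. Everything below is hypothetical, from the class names to the "slow_and_yield" fallback; it is only meant to illustrate the pattern, not any real manufacturer's system:

```python
class Fleet:
    """Toy central service: collects edge cases, pushes fixes to all cars."""
    def __init__(self):
        self.vehicles = []

    def register(self, vehicle):
        self.vehicles.append(vehicle)

    def report_edge_case(self, situation, response):
        # Share the learned response with every vehicle in the network.
        for v in self.vehicles:
            v.known_fixes[situation] = response


class Vehicle:
    def __init__(self, fleet):
        self.known_fixes = {}   # situation -> learned response
        self.fleet = fleet
        fleet.register(self)

    def encounter(self, situation):
        """Use a learned response if one exists; otherwise fall back to a
        cautious default and report the edge case so the fleet can learn."""
        if situation not in self.known_fixes:
            self.fleet.report_edge_case(situation, "slow_and_yield")
        return self.known_fixes[situation]


fleet = Fleet()
car_a, car_b = Vehicle(fleet), Vehicle(fleet)

# car_a hits an edge case; its fallback response is recorded fleet-wide...
car_a.encounter("person_in_bag_jumps_out")
# ...so car_b already "knows" the situation without ever having seen it.
print(car_b.encounter("person_in_bag_jumps_out"))  # prints slow_and_yield
```

The point of the sketch is the last two lines: the second car benefits from the first car's experience, which is what would let a fleet handle Waymo-style "pathological situations" faster than any single human driver could.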