Driverless buses tested in Lyon, Paris and London, cars that park themselves… While the autonomous vehicle is already on the streets, hackers lie in ambush, ready to expose its security flaws.
We are gradually getting used to the idea of entrusting our lives to self-driving vehicles. More and more driverless buses are being tested in Lyon, Paris and London. A study published in 2016 predicts that by 2030, 15% of cars sold worldwide could be autonomous.
On March 14, 2017, a video posted online showed an autonomous car immobilized by a continuous white line traced on the asphalt. James Bridle’s Autonomous Trap 001 is conceived as a disenchantment ritual. The British hacktivist artist is concurrently developing a DIY autonomous vehicle system, which he has shared on Github.
“Autonomous Trap 001”, James Bridle, 2017:
Leading the self-driving frenzy is Google, which since 2009 has been developing autonomous cars via its subsidiary Waymo, from which Uber is accused of stealing technology to equip its own driverless fleet. But the entire auto industry is in the same boat. The common point shared by the majority of autonomous vehicles is the use of Lidar (Light Detection and Ranging) laser scanning to detect obstacles. It’s an $85,000 solution led by the Silicon Valley company Velodyne.
Elon Musk rejects Lidar
The only dissenting voice is that of Elon Musk, CEO of the electric carmaker Tesla, who considers Lidar too expensive. Even as the California start-up Quanergy promises a $250 Lidar, Musk prefers to invest in effectively processing the data transmitted by the combination of cameras, radar and ultrasonic sensors that already equips Tesla vehicles. Unfortunately, this autopilot mode, sometimes left on for hundreds of kilometers, has already caused at least one traffic fatality.
A Tesla Model X’s autopilot avoids an accident:
@elonmusk Finally the right one. pic.twitter.com/2fspGMUoWf
— Hans Noordsij (@HansNoordsij) December 27, 2016
Raspberry Pi tricks Lidar
In terms of accident statistics, autonomous vehicles should fare better than human drivers, although their self-driving systems are still vulnerable. In September 2015, Jonathan Petit, a French researcher at Security Innovation, claimed to have tricked a Lidar using the tiny Raspberry Pi computer. He exploited the laser’s unsecured, unauthenticated signals to simulate an obstacle that paralyzed the vehicle.
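The weakness Petit exploited is simple physics: a Lidar infers distance from the round-trip time of a laser pulse, and nothing in the signal proves the echo is genuine. A minimal sketch of the timing principle (the function names and the 10-meter example are illustrative, not taken from Petit's work):

```python
# Illustrative sketch of the Lidar-spoofing principle (not Petit's actual code).
# A Lidar computes distance from a pulse's round-trip time:
#   distance = (speed_of_light * time_of_flight) / 2
# Because echoes are unauthenticated, replaying a recorded pulse after a
# chosen delay makes the sensor register a phantom obstacle.

C = 299_792_458.0  # speed of light in m/s

def distance_from_echo(time_of_flight_s: float) -> float:
    """Distance the Lidar infers from an echo's round-trip time."""
    return C * time_of_flight_s / 2

def spoof_delay_for(phantom_distance_m: float) -> float:
    """Delay an attacker must add to a replayed pulse to fake an
    obstacle at the given distance."""
    return 2 * phantom_distance_m / C

# Faking a wall 10 m ahead requires a delay of roughly 67 nanoseconds,
# well within reach of cheap hardware like a Raspberry Pi driving a laser.
delay = spoof_delay_for(10.0)
print(f"{delay * 1e9:.1f} ns")                    # ≈ 66.7 ns
print(f"{distance_from_echo(delay):.1f} m")       # → 10.0 m
```

The point of the sketch is that the attack needs no cryptographic break, only precise timing, which is why Petit's hardware cost so little.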
Tesla was also forced to release a security update for its autopilot in 2016, after Chinese engineers took control of a Tesla S from some 20km away. Thanks, Internet.
Chinese engineers from Keen Security demonstrate the attack on the Tesla S:
Ethics in the engine
Even when properly secured, autonomous systems are no better off than human drivers if a pedestrian decides to cross the street without looking both ways first. As with drones, ethical issues arise: should an autonomous vehicle avoid a pedestrian at the risk of killing its passengers? Will the manufacturer, the insurer and humanity all give the same answer? In response, since 2016 MIT has been collecting judgments for a statistical database by inviting people to react to catch-22 traffic situations in its Moral Machine.
Presentation of MIT’s Moral Machine initiative:
Pedestrians, those behavioral hackers
Pedestrians in particular are the thorns in the tires of autonomous vehicles. One explanation comes from an article published in October 2016 by Adam Millard-Ball. In it, the U.C. Santa Cruz researcher develops a theory based on the game of chicken, in which vehicles speed toward each other to see which one will veer off course first to avoid the impending collision. Obviously, driverless vehicles systematically steer clear of the approaching obstacle. Millard-Ball imagines pedestrians developing a false sense of impunity toward autonomous vehicles, forcing them to brake abruptly. This potentially dangerous dynamic could undermine the improved traffic safety promised by driverless cars.
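Millard-Ball's argument can be summed up as a crosswalk version of the game of chicken in which one player, the autonomous vehicle, is programmed never to risk a collision. A minimal sketch (the payoff numbers are illustrative, not taken from the 2016 paper):

```python
# Illustrative payoff sketch of the "crosswalk chicken" dynamic
# (numbers are hypothetical, not from Millard-Ball's article).
# Payoffs are (pedestrian, vehicle) for each pair of moves.

payoffs = {
    ("cross", "yield"): (1, -1),       # pedestrian saves time, car brakes
    ("cross", "drive"): (-100, -100),  # collision: catastrophic for both
    ("wait",  "yield"): (0, 0),
    ("wait",  "drive"): (-1, 1),       # car keeps its priority
}

def av_best_response(pedestrian_move: str) -> str:
    """A safety-first AV picks whichever move maximizes its own payoff;
    since a collision scores -100, it always yields to a crossing pedestrian."""
    return max(["yield", "drive"],
               key=lambda move: payoffs[(pedestrian_move, move)][1])

print(av_best_response("cross"))  # → "yield"
print(av_best_response("wait"))   # → "drive"

# Knowing the AV will always yield, crossing (payoff 1) strictly beats
# waiting (payoff 0) for the pedestrian: impunity becomes the rational move.
```

The asymmetry is the whole point: a human driver might plausibly not stop, so pedestrians hesitate; a provably cautious machine removes that deterrent.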
This study shouldn’t weigh too heavily on current investments in the industry. When it comes to taking pedestrians to task, Belgian artist Dries Depoorter had already imagined a similar system with his installation Jaywalking, presented in 2015 at the IDFA documentary festival in Amsterdam. As an artificial intelligence pointed out jaywalkers in surveillance camera footage, visitors could choose to report them to the police. At least the machine leaves us the masochistic pleasure of punishing each other.