Self-Driven: Uber and Tesla


Self-driving cars have been in the news a lot in the past two weeks. Uber’s self-driving taxi hit and killed a pedestrian on March 18, and just a few days later a Tesla running in “Autopilot” mode slammed into a road barrier at full speed, killing the driver. In both cases, there was a human driver who was supposed to be watching over the shoulder of the machine, but in the Uber case the driver appears to have been distracted, and in the Tesla case the driver’s hands were off the steering wheel for the six seconds prior to the crash. How safe are self-driving cars?

Trick question! Neither of these cars was “self-driving” in at least one sense: both had a person behind the wheel who was ultimately responsible for piloting the vehicle. The Uber and Tesla driving systems aren’t even comparable. The Uber taxi does routing and planning, knows the speed limit, and should be able to see red traffic lights and stop at them (more on this below!). The Tesla “Autopilot” system is really just the combination of adaptive cruise control and lane-holding subsystems, which isn’t even enough to get it classified as autonomous in the state of California. Indeed, it’s the failure of the people behind the wheel, and the failure to properly train those people, that makes the pilot-plus-self-driving-car combination more dangerous than a human driver alone would be.


A self-driving Uber Volvo XC90, San Francisco.

You could still imagine wanting to dig into the numbers for self-driving cars’ safety records, even though the systems are heterogeneous and have people playing the Mechanical Turk. If you did, you’d be sorely disappointed. None of the manufacturers publishes its data unless it has to. Indeed, our glimpses into these companies’ autonomous vehicle data come from two sources: internal documents leaked to the press and carefully selected statistics from the firms’ PR departments. The state of California, which requires the most rigorous documentation of autonomous vehicles anywhere, is another source, but because Tesla’s car isn’t autonomous, and because Uber refused to admit to the California DMV that its car is, we have no extra insight into these two vehicle platforms.

Nonetheless, Tesla’s Autopilot now has three fatalities, and all have one thing in common — all three drivers trusted the lane-holding feature enough not to take control of the wheel in the last few seconds of their lives. With Uber, there’s very little autonomous vehicle performance history, but there are leaked documents and a pattern of behavior that make Uber look like a risk-taking scofflaw with sub-par technology and a vested interest in making it look better than it is. That these vehicles are being let loose on public roads, without extra oversight and with other traffic participants as safety guinea pigs, is giving the self-driving car industry, and the ideal itself, a black eye.

If Tesla’s and Uber’s car technologies are very dissimilar, the companies have something in common. They are both “disruptive” companies with mavericks at the helm who see their fates hinging on widespread deployment of self-driving technology. But what differentiates Uber and Tesla from Google and GM most is, ironically, their use of essentially untrained test pilots in their vehicles: Tesla’s in the form of consumers, and Uber’s in the form of taxi drivers with very little specific autonomous-vehicle training. What caused the Tesla and Uber accidents may have a lot more to do with human factors than with self-driving technology per se.

You can see we’ve got a lot of ground to cover. Read on!

The Red Herrings

But first, here are some irrelevant statistics you’ll hear bandied about in the wake of the accidents. The first is that “more than 90% of accidents are caused by human drivers”. This is unsurprising, given that all car drivers are human. When only self-driving cars are allowed on the road, they’ll be responsible for 100% of accidents. The only thing that matters is the relative safety of humans and machines, expressed per mile, per trip, or per hour. So let’s talk about that.


From the phenomenal RAND report on self-driving cars.

Humans are reportedly terrible drivers because 35,000 people died on US highways last year. This actually demonstrates that people are fantastically good at driving, despite their flaws: US drivers also covered three trillion miles in the process. Three relevant statistics to keep in mind as a baseline are 90 million miles per fatality, a million miles per injury, and half a million miles per accident of any kind. When autonomous vehicles can beat these numbers, which include drunk drivers, snowy and rainy weather, and cell phones, you can say that humans are “bad” drivers. (Source: US National Highway Traffic Safety Administration.)
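As a quick sanity check on those baseline figures, here is a back-of-the-envelope sketch in Python. The fatality and mileage numbers are the ones quoted above; the injury and crash counts are rough assumptions chosen to match the per-mile figures in the text, not official NHTSA tallies.

```python
# Back-of-the-envelope check of the human-driver baseline rates quoted above.
miles_driven = 3e12     # ~3 trillion vehicle-miles driven in the US per year
fatalities   = 35_000   # ~35,000 highway deaths per year

print(f"{miles_driven / fatalities:,.0f} miles per fatality")  # ~86 million -> "about 90 million"

# The injury and crash baselines work out the same way.  These counts are
# rough assumptions chosen to match the per-mile figures in the text,
# not official NHTSA tallies.
injuries = 3_000_000    # assumed ~3 million injuries per year
crashes  = 6_000_000    # assumed ~6 million crashes per year
print(f"{miles_driven / injuries:,.0f} miles per injury")   # ~1 million
print(f"{miles_driven / crashes:,.0f} miles per crash")     # ~500,000
```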

Finally, I’m certain that you’ve heard that autonomous vehicles “will” be safer than human drivers, and that some fatal accidents now are just an uncomfortable hump we have to get over to save millions of lives in the future. This is an “ends justify the means” argument, which puts it on sketchy ethical footing to start with. Moreover, in medical trials, patients are required to give informed consent to be treated, and the treatment under consideration has to be shown to be not significantly worse than a treatment already in use. Tests of self-driving technology aren’t that dissimilar, and we haven’t signed consent forms to share the road with non-human drivers. Worse, it looks very much like the machines are worse drivers than we are. The fig leaf is that a human driver is ultimately in control, so it’s probably no worse than letting that human drive without the machine, right? Let’s take a look.

Tesla: Autopilot, but not Autonomous


[Image: Electrek]

I reviewed Tesla’s “Autopilot” safety record just after the second fatal accident; nothing’s really changed. Tesla added a third fatality but could also have racked up another hundred million miles under Autopilot (they’re not saying, so we presume their average hasn’t improved dramatically). It’s likely that Tesla’s Autopilot has about the same average fatality rate as the general US population, around 90 million miles per fatality, but we won’t be able to establish statistical confidence until the cars have a few billion Autopilot miles on them.
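To see why it takes billions of miles, here is an illustrative sketch (Python with SciPy) of the exact Poisson confidence interval you get from only three fatalities. The 270 million miles of Autopilot driving is an assumed figure, consistent with the “around 90 million miles per fatality” estimate above, not a number Tesla has disclosed.

```python
# Illustrative only: how much uncertainty is there in a fatality rate
# estimated from just three events?  Assumes fatalities follow a Poisson process.
from scipy.stats import chi2

fatalities = 3
miles = 270e6  # assumed total Autopilot miles (consistent with ~90M miles/fatality); not a Tesla disclosure

# Exact (Garwood) 95% confidence interval for a Poisson count k:
#   lower = chi2.ppf(0.025, 2k) / 2,   upper = chi2.ppf(0.975, 2k + 2) / 2
lower = chi2.ppf(0.025, 2 * fatalities) / 2        # ~0.62 expected fatalities
upper = chi2.ppf(0.975, 2 * fatalities + 2) / 2    # ~8.77 expected fatalities

print(f"point estimate: {miles / fatalities / 1e6:.0f}M miles per fatality")
print(f"95% CI: {miles / upper / 1e6:.0f}M to {miles / lower / 1e6:.0f}M miles per fatality")
# Roughly 31M to 436M miles per fatality: a range that easily straddles the
# ~90M-mile human baseline, so no meaningful comparison is possible yet.
```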

This is not great. Autopilot is only supposed to be engaged on good sections of road, in good weather, and under strict supervision. Humans driving under those same optimal conditions would also do significantly better than average. Something like 30% of fatal accidents occur at intersections, which should make Autopilot’s typical cruising-down-the-highway use case look a lot better. An educated guess, based on these factors, is that Autopilot isn’t all that much worse than an average human driver under the same circumstances, but it certainly isn’t demonstrably better either.


Perhaps the public perception of “hands-on” driving technology needs adjustment

But Autopilot isn’t autonomous. According to the SAE levels of driving automation, the current Autopilot software is Level 2, “partial automation”, and that’s dangerous. Because the system works so well most of the time, users can forget that they’re meant to be in control at all times. This was part of the US National Transportation Safety Board (NTSB)’s conclusion on the 2016 accident, and it’s the common thread among all three Autopilot-related deaths. The drivers didn’t have their hands on the wheel, ready to take over, when the system failed to see an obstacle. They over-relied on the system to work.

It’s hard to blame them. After thousands of instances of the car doing the right thing, it would be hard to second-guess it on the 4,023rd. Ironically, this gets even harder if the system has a large number of close calls: if the system keeps making the right choice just after you would have, you learn to suppress your distrust. You practice not intervening with every near miss. And this is also the reason that Consumer Reports called on Tesla to disable the automatic steering function and drop the “Autopilot” name — it just promises too much.

Uber: Scofflaws and Shambles

You might say that it’s mean-spirited to pile on Uber right now. After all, they have just experienced one tragedy and it was perhaps unavoidable. But whereas Tesla is taking baby steps toward automation and maybe promoting them too hard, Uber seems to be taking gigantic leaps, breaking laws, pushing their drivers, and hoping for the best.

In December 2016, Uber announced that it was going to test its vehicles in California. There was just one hitch: they weren’t allowed to. The California DMV classified their vehicles as autonomous, which brought reporting requirements (more on this later!) that Uber was unwilling to accept, so Uber argued that its cars weren’t autonomous under California law and drove them anyway.

After just a few days, the City of San Francisco shut them down. In the single week that they were on the streets of San Francisco, their cars had reportedly run five or six red lights, with at least one getting filmed by the police (and posted on YouTube, naturally). Uber claims that the driver was at fault for not overriding the car, which apparently didn’t see the light at all.

In a publicity stunt made in heaven, Uber loaded its cars onto big “Otto”-branded trucks and drove south to Arizona. Take that, California! Except that the Otto trucks drove with human drivers behind the wheel, in stark contrast to that famous video of self-driving Otto trucks (YouTube), which was filmed in the state of Nevada. (During that filming, they also didn’t have the proper permits to operate autonomously, and were thus operating illegally.)

Uber picked Arizona, as have many autonomous vehicle companies, for its combination of good weather, wide highways, and lax regulation. Arizona’s governor, Douglas Ducey, made sure that the state was “open for business” and imposed minimal constraints on self-drivers as long as they followed the rules of the road. Notably, there are no reporting requirements in Arizona and no oversight once a permit is approved.

How were their cars doing in Arizona before the accident? Not well. According to leaked internal documents, they averaged thirteen miles between “interventions”, moments when the driver needed to take over. With Autopilot, we speculated about the dangers of being lulled into complacency by an almost-too-good experience, but this is the opposite. We don’t know how many of the interventions were serious issues, like failing to notice a pedestrian or a red traffic light, and how many were minor, like drifting slightly out of a lane on an empty road or just failing to brake smoothly. But we do know that an Uber driver had to be on his or her toes. And in this high-demand environment, Uber reduced the number of safety drivers per car from two to one to cut costs.


This guy will take the heat. Uber and Arizona are responsible.

Then came the accident. At the time, Uber’s cars had a combined three million miles under their belts. Remember the US average of roughly 90 million miles per fatality? Maybe Uber got unlucky, but given their track record, I’m not willing to give them the benefit of the doubt. Crediting even those three million fatality-free miles to the autonomous car is pushing it, though. Three million miles at thirteen miles per intervention means that a human took control around 230,000 times. We have no idea how many of those takeovers prevented a serious accident before the one time that a driver didn’t. This is not what better-than-human driving looks like.
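For what it’s worth, both numbers in that paragraph are easy to check with a quick sketch. The Poisson assumption and the comparison against the human baseline rate are my own illustration, not anything from Uber’s data.

```python
import math

# Intervention arithmetic from the leaked figures quoted above.
uber_miles = 3e6              # combined autonomous miles at the time of the crash
miles_per_intervention = 13   # from the leaked internal documents
print(f"~{uber_miles / miles_per_intervention:,.0f} human takeovers")  # ~230,000

# "Maybe Uber got unlucky": if Uber's cars were exactly as safe as the
# ~90-million-miles-per-fatality human baseline, how likely is at least
# one fatality in three million miles?  (Poisson assumption, mine.)
expected = uber_miles / 90e6
p_at_least_one = 1 - math.exp(-expected)
print(f"expected fatalities at the human rate: {expected:.3f}")   # 0.033
print(f"P(at least one fatality): {p_at_least_one:.1%}")          # about 3%
```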

Takeaway

Other self-driving car programs have significantly better performance records and no fatalities, and have subjected themselves to at least some public scrutiny by testing their vehicles in California, the only state with disclosure requirements for autonomous vehicles. And as a California rule that enables full deployment of autonomous vehicles in the state, rather than just testing, comes into effect (today!), we expect to see more data in the future. I’ll write up the good side of self-drivers in another article.

But for now, we’re left with two important counterexamples. The industry cannot be counted on to regulate itself if there’s no (enforced) transparency into the performance of its vehicles — both Tesla and Uber think they are in a winner-takes-all race to autonomous driving, and both are cutting corners.

Because no self-driving cars, even the best of the best, are able to drive more than a few thousand miles without human intervention, the humans are the weak link. But this is not for the commonly stated reason that people are bad drivers — we’re talking about human-guided self-driving cars failing to meet the safety standards of unassisted humans after all. Instead, we need to publicly recognize that piloting a “self-driving” car is its own unique and dangerous task. Airplane pilots receive extensive training on the use and limitations of (real) autopilot. Maybe it’s time to make this mandatory for self-driving pilots as well.

Either way, the public deserves more data. From all the evidence, the current state of self-driving car technology is nowhere near safer than human drivers by any standard, and yet potentially lethal experiments are being carried out on public streets. Obtaining the informed consent of all drivers to participate in this experiment is perhaps asking too much. But at the very least, we should be informed.


