Monday, October 17, 2016

The car reliably drives itself, except it doesn't

On June 30th, 2016, the National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla Motors due to a fatal crash involving a Tesla Model S.  The issue is that the car's Autopilot software was enabled and driving the car.  The autonomous system missed a truck crossing the highway perpendicular to the Tesla's direction of travel.  As a result, the Tesla passed under the truck's trailer.  Presumably, the parts of the trailer that went through the Tesla's windshield resulted in the driver's death.

Tesla's blog post on the incident states, in part:

This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.

These averages are so broad the comparisons are meaningless.

For example, Tesla quotes (without citation) that among all vehicles in the US, there is a fatality every 94 million miles.  The "all vehicles" category presumably includes buses and farm equipment.  Does comparing the fatality rates of mopeds and Tesla cars make sense?  Furthermore, Tesla's Model S has a suggested retail price of $70,000 USD, which is hardly representative of the average vehicle.  What is the fatality rate for vehicles comparable to Tesla's Model S?  Are the comparable vehicles similar enough to reach meaningful conclusions?  What do the fatality rates look like across different vehicle classes?  How are the associated driver populations for each category best described?  Are populations associated with higher fatality rates likely to adopt autonomous driving systems in the first place?

Recall the fatality-per-mile-traveled averages include the effects of negligent drivers.  In this light, Tesla's own numbers are questionable at face value.  Specifically, according to the CDC, 31% of US driving-related fatalities involve alcohol impairment.  Thus, sober human drivers cause one fatality per 136 million miles traveled, instead of the 94 million miles quoted.  The CDC data also indicates a further 16% of crashes involve drugs other than alcohol, but the number of resulting fatalities for those collisions cannot be determined from the available data.
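The adjustment is simple arithmetic: if 31% of fatalities involve alcohol, removing them stretches the interval between fatalities by a factor of 1/(1 - 0.31).  A quick sketch, using Tesla's quoted 94-million-mile figure and the CDC's 31%:

```python
# Fatality-per-mile arithmetic behind the sober-driver estimate.
# Inputs: Tesla's quoted 94 million miles per fatality (all US vehicles)
# and the CDC's 31% share of driving fatalities involving alcohol.
miles_per_fatality_all = 94e6
alcohol_share = 0.31

# Removing alcohol-involved fatalities leaves 69% of them over the same
# total miles, so the interval between fatalities grows by 1 / (1 - 0.31).
miles_per_fatality_sober = miles_per_fatality_all / (1 - alcohol_share)

print(round(miles_per_fatality_sober / 1e6))  # ~136 million miles
```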

Tesla's post compares fatality-per-mile averages.  However, Tesla's total of 130 million miles traveled pales in comparison to the number of miles driven to arrive at the CDC averages.  It looks like the sample sizes differ by several orders of magnitude.  Is Tesla's sample size really enough to reach accurate conclusions?  Tesla seems to think so, thus an assessment is fair game.  The above numbers show the Autopilot software compares favorably to driving under the influence of intoxicants.  Moreover, Autopilot is roughly equivalent to an average driver: likely speeding, perhaps reckless, in some cases impaired by drugs other than alcohol.  Altogether, Autopilot is not a good driver.

As a side note, when Tesla invokes fatality-per-miles-traveled averages, the implication is that Tesla's Autopilot is better than average and hence good.  But most drivers incorrectly believe themselves better than average.  It follows that the average driver underestimates what it takes to be a good driver.  Tesla's statement could be setting up average readers to deceive themselves by tacitly appealing to their illusory superiority.

But back to the story.  What are the self-driving car performance expectations?  This June 30th CNN article states, in part:

Experts say self-driving systems could improve safety and reduce the 1.25 million motor vehicles deaths on global roads every year. Many automakers have circled 2020 as a year when self-driving systems will be released on public roads.

The year 2020 is basically just around the corner.  Today, drivers feel compelled to forcibly take control from autonomous driving systems alarmingly often.  This LA Times article from January 2016 states, in part:

The California Department of Motor Vehicles released the first reports from seven automakers working on autonomous vehicle prototypes that describe the number of "disengagements" from self-driving mode from fall 2014 through November.  This is defined by the DMV as when a "failure of the autonomous technology is detected" or when the human driver needs to take manual control for safety reasons.

Google Inc. reported 341 significant disengagements, 272 of which were related to failure of the car technology and 69 instances of human test drivers choosing to take control. On average, Google experienced one disengagement per 1,244 miles. [total 424,331 miles traveled]

The average driver response time was 0.84 of a second, it said. [Who is "it"?  The DMV?  Google?]

Most of the cases in which drivers voluntarily took control of the car involved "perception discrepancy," or when the car's sensors did not correctly sense an object such as overhanging tree branches, Google said. 

Bosch recorded 625 disengagements, or about one per mile, and Delphi Automotive totaled 405, or one per 41 miles.  [Delphi total 16,662 miles traveled]

Nissan North America Inc. reported 106 disengagements, which breaks down to one per 14 miles; Mercedes-Benz Research and Development North America Inc. listed 1,031, or one every two miles; and Volkswagen Group of America Inc. totaled 260, or one every 57 miles.  [VW total 14,945 miles traveled]

Tesla Motors Inc. said it did not have any disengagements from autonomous mode. It did not report how many miles its self-driving cars had traveled. 

Both the information and the questions required for a good understanding are missing.  Would you be comfortable being driven by someone who misses branches, or simply fails to drive at all, as frequently as once a mile?  Are the driving conditions for the miles reported by Google and others realistic?  What proportion was driven in snow, ice, heavy rain, fog, or smoke?  Did the autonomous driving systems encounter geese, ducks, or deer on the road?  How do those systems handle emergency situations?
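For what it is worth, the per-mile rates in the article follow directly from the reported totals.  A quick cross-check of the three companies for which the article gives both disengagement counts and total miles:

```python
# Cross-check of the LA Times disengagement rates, limited to the
# companies whose reports include both counts and total miles traveled.
reports = {
    "Google": (341, 424_331),  # disengagements, miles traveled
    "Delphi": (405, 16_662),
    "VW":     (260, 14_945),
}

for name, (disengagements, miles) in reports.items():
    # One disengagement every (miles / disengagements) miles.
    print(f"{name}: one disengagement per {miles / disengagements:.0f} miles")
```

The output matches the article's figures of one disengagement per 1,244 miles (Google), 41 miles (Delphi), and 57 miles (VW).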

Suppose you will never let beta software drive you around.  What happens when you are affected by someone who does?  Back to the CNN article,

Experts have cautioned since Tesla unveiled autopilot in October that the nature of the system could lead to unsafe situations as drivers may not be ready to safely retake the wheel.

If Tesla's autopilot determines it can no longer safely drive, a chime and visual alert signals to drivers they should resume operation of the car. A recent Stanford study found that a two-second warning -- which exceeds the time Tesla drivers are sure to receive -- was not enough time to count on a driver to safely retake control of a vehicle that had been driving autonomously.

Given this expert assessment, what does the lack of Tesla disengagements in California DMV's report mean?  That Tesla's software is just better?  That the average Tesla driver is less engaged?  Does the Tesla crash suggest so-so software is duping drivers into not paying attention?

But even if a timely warning were possible, are driving autopilots a good idea in the first place?  In aviation, autopilots do most of the flying, and as a result human pilots' ability to fly by hand is compromised.  Recovering from in-flight emergencies often requires manual flying, which is not the time to discover those skills are lacking.  Specifically, the professional recommendation is:

The SAFO [Safety Alert for Operators], released earlier this month, recommends that professional flight crews and pilots of increasingly advanced aircraft should turn off the autopilot and hand-fly during "low-workload conditions," including during cruise flight in non-RVSM airspace. It also recommends operators should "promote manual flight operations when appropriate" and develop procedures to accomplish this goal.

"Autoflight systems are useful tools for pilots and have improved safety and workload management, and thus enabled more precise operations," the SAFO notes. "However, continuous use of autoflight systems could lead to degradation of the pilot's ability to quickly recover the aircraft from an undesired state."

The SAFO adds that, though autopilots have become a prevalent and useful tool for pilots, "unfortunately, continuous use of those systems does not reinforce a pilot's knowledge and skills in manual flight operations."

In contrast, driving autopilots are promoted for heavy use, and especially for low-workload driving scenarios.  The ideal situation often casts the driver as a self-absorbed mass transit passenger:

Note the irony of "progress" illustrated by reading a paper (!) book, comfortably sitting with all driving controls out of reach.  And there is more than one irony in play: studying from paper rather than a tablet is associated with better comprehension and retention of the material; paper also leads to better results than a Kindle.  Why does this picture associate technological improvement with paper books?

Of course the flying environment is very different from the driving environment.  Kids and pets don't run in front of the plane from behind a row of clouds.  Plane collisions are infrequent due to generous separation enforced by air traffic control with multiple radars.  And if you listen to recordings of airplane mishaps, you will notice bad situations develop over comparatively long periods of time.  Limited flight autopilot failures can be tolerated because the entire flying environment is engineered to catch problems before they become insurmountable.

In contrast, there is no car equivalent to the cockpit's emergency procedure binder.  The driving experience still requires quick, judicious action.  Experience with flight autopilots shows excessive dependency can result in compromised pilot skills.  So why should the professional advice for planes, with all the implied liability and gravitas, be any different in nature for cars?  And if inattentive drivers do not have the time to recover from an undesired state, why should drivers stop paying attention in the first place?  Tesla's own advice agrees: "be prepared to take over at any time".

For clarity's sake, maybe Tesla's "Autopilot" is better described in terms of "Driver Assist".

For time's sake, maybe traffic and community planning are a better way to curb hours wasted while driving.  As an example, shutting down container shipping terminals increases truck traffic.  Trucks disproportionately wear down roads because pavement damage is proportional to the fourth power of axle weight --- each truck axle carrying 18,000 pounds does roughly the damage of 10,000 cars.  So, more truck traffic means more road work, which in turn causes even more congestion.  Self-driving vehicles can be irrelevant to traffic.
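The fourth-power rule can be sketched as follows.  The 1,800-pound car axle load is an assumption (a 3,600-pound car on two axles), chosen here only to show how the commonly quoted ratio of about 10,000 falls out:

```python
# Fourth power law: pavement damage scales with (axle load)^4.
# Assumed figure: a 3,600-pound passenger car, i.e. 1,800 pounds per axle.
truck_axle_lb = 18_000
car_axle_lb = 1_800

# Damage ratio of one loaded truck axle relative to one car axle.
damage_ratio = (truck_axle_lb / car_axle_lb) ** 4

print(damage_ratio)  # 10000.0 -- one 18,000-lb truck axle ~ 10,000 car axles
```

A heavier car assumption lowers the ratio, but the fourth power keeps it in the thousands either way, which is the point.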

For everyone's sake, maybe characterizing the behaviors correlated with fatalities could lead to keeping drivers who exhibit those behaviors off the road.  Ignition interlocks prevent drunk driving 70% of the time according to the CDC, but of course this is invasive for the sober majority.  So instead, a reasonable non-invasive Driver Assist feature could detect unfit driving.  Critically, this approach could be, in some respects, just as helpful without requiring a fully fledged autonomous driving system.

Update: it looks like common sense is finally catching on --- the NTSB says fully autonomous cars probably won't happen, many proponents of the technology are running into trouble and/or scaling back expectations, and Tesla just disabled its Autopilot system.

1 comment:

Michael Klein said...

There is one automatic driving feature I would like, even in a buggy beta form:

Alert me that the traffic light has turned green after I stop for a red light.
Failure is defined as the car behind me honks its horn first.

Other than that, I'm happy if the computer continues its role as navigator/entertainer.

On second thought, I wouldn't mind a little help parallel parking.
It would also be cool to see where the computer thinks the lane is at night.. my eyes not so good.

But while the car is moving, my hand is on the wheel.