Monday, October 17, 2016

The car reliably drives itself, except it doesn't

On June 30th, 2016, the National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla Motors following a fatal crash involving a Tesla Model S.  At the time, the car's Autopilot software was enabled and driving the car.  The autonomous system missed a truck crossing the highway perpendicular to the Tesla's direction of travel.  As a result, the Tesla passed under the truck's trailer.  Presumably, the parts of the trailer that went through the Tesla's windshield caused the driver's death.

Tesla's blog post on the incident states, in part:

This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.

These averages are so broad the comparisons are meaningless.

For example, Tesla quotes (without citation) that among all vehicles in the US, there is a fatality every 94 million miles.  The "all vehicles" category presumably includes buses and farm equipment.  Does comparing the fatality rates of mopeds and Tesla cars make sense?  Furthermore, Tesla's Model S has a suggested retail price of $70,000 USD, which is hardly representative of average vehicles.  What is the fatality rate for vehicles comparable to Tesla's Model S?  Are the comparable vehicles similar enough to reach meaningful conclusions?  What do different vehicle classes look like?  How are the associated driver populations for each category best described?  Are populations associated with higher fatality rates likely to adopt autonomous driving systems in the first place?

Recall that the fatality per mile traveled averages include the effects of negligent drivers.  In this light, Tesla's own numbers are questionable at face value.  Specifically, according to the CDC, 31% of US driving related fatalities involve alcohol impairment.  Excluding those leaves 0.69 fatalities per 94 million miles, so sober human drivers cause one fatality per roughly 136 million miles traveled, instead of the 94 million miles quoted.  The CDC data also indicates a further 16% of crashes involve drugs other than alcohol, but the available data does not clarify how many fatalities resulted from those collisions.
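
For concreteness, here is that arithmetic as a minimal sketch:

    # One fatality per 94 million miles, all drivers included.
    miles_per_fatality_all = 94e6

    # Per the CDC, 31% of US driving fatalities involve alcohol impairment.
    alcohol_share = 0.31

    # Removing those fatalities over the same miles driven leaves 0.69
    # fatalities per 94 million miles, or one per about 136 million miles.
    miles_per_fatality_sober = miles_per_fatality_all / (1 - alcohol_share)
    print("%.0f million miles" % (miles_per_fatality_sober / 1e6))  # 136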

Tesla's post compares fatality per mile averages.  However, Tesla's total of 130 million miles traveled pales in comparison to the number of miles behind the CDC averages; the sample sizes differ by several orders of magnitude.  Is Tesla's sample size really enough to reach accurate conclusions?  Tesla seems to think so, thus an assessment is fair game.  The above numbers show Autopilot's one fatality per 130 million miles beats the 94 million mile all-driver average only because that average includes driving under the influence of intoxicants.  Against the roughly 136 million mile figure for sober drivers, Autopilot comes out slightly worse.  In other words, Autopilot is roughly equivalent to the average driver: likely speeding, perhaps reckless, in some cases impaired by drugs other than alcohol.  Altogether, Autopilot is not a good driver.

As a side note, when Tesla invokes fatality per miles traveled averages, the implication is that Tesla's Autopilot is better than average and hence good.  But most drivers incorrectly believe themselves better than average.  It follows that the average driver underestimates what it takes to be a good driver.  Tesla's statement could be setting up average readers to deceive themselves by tacitly appealing to their illusory superiority.

But back to the story.  What are the self-driving car performance expectations?  This June 30th CNN article states, in part:

Experts say self-driving systems could improve safety and reduce the 1.25 million motor vehicle deaths on global roads every year. Many automakers have circled 2020 as a year when self-driving systems will be released on public roads.

The year 2020 is basically just around the corner.  Today, drivers feel compelled to forcibly take control from autonomous driving systems alarmingly often.  This LA Times article from January 2016 states, in part:

The California Department of Motor Vehicles released the first reports from seven automakers working on autonomous vehicle prototypes that describe the number of "disengagements" from self-driving mode from fall 2014 through November.  This is defined by the DMV as when a "failure of the autonomous technology is detected" or when the human driver needs to take manual control for safety reasons.

Google Inc. reported 341 significant disengagements, 272 of which were related to failure of the car technology and 69 instances of human test drivers choosing to take control. On average, Google experienced one disengagement per 1,244 miles. [total 424,331 miles traveled]

The average driver response time was 0.84 of a second, it said. [Who is "it"?  The DMV?  Google?]

Most of the cases in which drivers voluntarily took control of the car involved "perception discrepancy," or when the car's sensors did not correctly sense an object such as overhanging tree branches, Google said. 

Bosch recorded 625 disengagements, or about one per mile, and Delphi Automotive totaled 405, or one per 41 miles.  [Delphi total 16,662 miles traveled]

Nissan North America Inc. reported 106 disengagements, which breaks down to one per 14 miles; Mercedes-Benz Research and Development North America Inc. listed 1,031, or one every two miles; and Volkswagen Group of America Inc. totaled 260, or one every 57 miles.  [VW total 14,945 miles traveled]

Tesla Motors Inc. said it did not have any disengagements from autonomous mode. It did not report how many miles its self-driving cars had traveled. 
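
Before going on, a quick sanity check (my own arithmetic, using only the totals quoted in the article) confirms the per-mile rates follow from the reported figures:

    # Disengagement rates implied by the totals quoted above; only the
    # makers whose total miles appear in the article are included.
    reports = {
        "Google": (341, 424331),
        "Delphi": (405, 16662),
        "Volkswagen": (260, 14945),
    }

    for maker, (disengagements, miles) in reports.items():
        print("%s: one disengagement per %d miles" % (maker, miles // disengagements))

    # Google: one disengagement per 1244 miles
    # Delphi: one disengagement per 41 miles
    # Volkswagen: one disengagement per 57 miles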

Both the information and the questions required for a good understanding are missing.  Would you be comfortable being driven by someone who misses branches, or just fails to drive at all, as frequently as once a mile?  Are the driving conditions for those driven miles reported by Google and others realistic?  What proportion were driven in snow, ice, heavy rain, fog, or smoke?  Did autonomous driving systems encounter geese, ducks, or deer on the road?  How do those systems handle emergency situations?

Suppose you will never let beta software drive you around.  What happens when you are affected by someone who does?  Back to the CNN article,

Experts have cautioned since Tesla unveiled autopilot in October that the nature of the system could lead to unsafe situations as drivers may not be ready to safely retake the wheel.

If Tesla's autopilot determines it can no longer safely drive, a chime and visual alert signals to drivers they should resume operation of the car. A recent Stanford study found that a two-second warning -- which exceeds the time Tesla drivers are sure to receive -- was not enough time to count on a driver to safely retake control of a vehicle that had been driving autonomously.

Given this expert assessment, what does the lack of Tesla disengagements in the California DMV's report mean?  That Tesla's software is just better?  That the average Tesla driver is less engaged?  Does the Tesla crash suggest so-so software is duping drivers into not paying attention?

But even if a timely warning were possible, are driving autopilots a good idea in the first place?  In aviation, autopilots do most of the flying, and as a result human pilots' ability to fly by hand is compromised.  Recovering from in-flight emergencies often requires manual flying, and an emergency is not the time to discover those skills are lacking.  Specifically, the professional recommendation is:

The SAFO [Safety Alert for Operators], released earlier this month, recommends that professional flight crews and pilots of increasingly advanced aircraft should turn off the autopilot and hand-fly during "low-workload conditions," including during cruise flight in non-RVSM airspace. It also recommends operators should "promote manual flight operations when appropriate" and develop procedures to accomplish this goal.

"Autoflight systems are useful tools for pilots and have improved safety and workload management, and thus enabled more precise operations," the SAFO notes. "However, continuous use of autoflight systems could lead to degradation of the pilot's ability to quickly recover the aircraft from an undesired state."
The SAFO adds that, though autopilots have become a prevalent and useful tool for pilots, "unfortunately, continuous use of those systems does not reinforce a pilot's knowledge and skills in manual flight operations."

In contrast, driving autopilots are promoted for heavy use, and especially for low-workload driving scenarios.  The ideal situation often casts the driver as a self-absorbed mass transit passenger:

[image: a "driver" comfortably reading a paper book, with all driving controls out of reach]

Note the irony of "progress" illustrated by reading a paper (!) book, comfortably sitting with all driving controls out of reach.  And there is more than one irony in play: studying from paper rather than a tablet is associated with better comprehension and retention of the material.  Paper also leads to better results than a Kindle.  Why does this picture associate technological improvement with paper books?

Of course, the flying environment is very different from the driving environment.  Kids and pets don't run in front of the plane from behind a row of clouds.  Plane collisions are infrequent thanks to generous separation enforced by air traffic control with multiple radars.  And if you listen to plane mishap recordings, you will notice bad situations develop over comparatively long periods of time.  Limited flight autopilot failures can be tolerated because the entire flying environment is engineered to catch problems before they become insurmountable.

In contrast, there is no car equivalent to the cockpit's emergency procedure binder.  The driving experience still requires quick, judicious action.  Experience with flight autopilots shows excessive dependency can result in compromised pilot skills.  So why should the professional advice for planes, with all the implied liability and gravitas, be any different in nature for cars?  And if inattentive drivers do not have the time to recover from an undesired state, why should drivers stop paying attention in the first place?  Tesla's own advice agrees: "be prepared to take over at any time".

For clarity's sake, maybe Tesla's "Autopilot" is better described as "Driver Assist".

For time's sake, maybe traffic and community planning are a better way to curb hours wasted while driving.  As an example, shutting down container shipping terminals increases truck traffic.  Trucks disproportionately wear roads because pavement damage grows with the fourth power of axle weight --- each truck axle carrying 18,000 pounds is equivalent to 10,000 cars.  So, more truck traffic means more road work, which in turn causes even more congestion.  Self-driving vehicles do nothing to address this kind of traffic problem.
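
To see where the 10,000 figure can come from, assume (my illustration, not a sourced number) a typical passenger car axle load of about 1,800 pounds:

    truck_axle_lb = 18000
    car_axle_lb = 1800  # assumed typical car axle load, for illustration only

    # Pavement damage grows with the fourth power of axle load.
    damage_ratio = (truck_axle_lb / car_axle_lb) ** 4
    print(damage_ratio)  # 10000.0 --- one loaded truck axle ~ 10,000 car axles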

For everyone's sake, maybe characterizing the behaviors correlated with fatalities could help keep drivers exhibiting those behaviors off the road.  Ignition interlocks prevent drunk driving 70% of the time according to the CDC, but of course this is invasive for the sober majority.  So instead, a reasonable non-invasive Driver Assist feature could detect unfit driving.  Critically, this approach could be, in some respects, just as helpful without requiring a fully fledged autonomous driving system.

Update: it looks like common sense is finally catching on --- the NTSB says fully autonomous cars probably won't happen, many proponents of the technology are running into trouble and/or scaling back expectations, and Tesla just disabled its Autopilot system.

Sunday, October 16, 2016

NSA's August 2016 puzzle periodical problem

You can see the statement of an interesting problem I heard recently here.  Basically, it has two parts: a simpler stage, and a more complex setup.

The simpler stage is as follows.  Players A and B both take a card from a standard 52-card French deck and put it face up on their foreheads.  A can only see B's card, and vice versa.  Their task is to guess the color of their own card.  They cannot communicate with each other, and must write down their guesses at the same time.  If at least one of them guesses correctly, they both win.  Is there a strategy that always wins?

The more complex setup has four players, and now they must guess the card's suit.  If at least one of them guesses correctly, they all win.  Is there a strategy that always wins in this case?


If you want to have a go at the problem, stop here.

So you are still there?  Did you really make an honest attempt at the problem?


Ok, so you want to read on.  Fine :).

First, the simpler stage.  Clearly, treating both players as identical doesn't go anywhere.  But if one considers that A's identity is different from B's, one can also assume they behave differently in response to each other's card.  That is, the card they see is a selector for some behavior.  Moreover, both players may have different responses to the same messages.

This looks a lot like error correction with XOR or parity --- think ECC memory and RAID disk arrays.  A and B could be stand-ins for error recovery mechanisms that try to reconstruct missing data.  If at least one guesses correctly, together they recover the unknown card.  This metaphor is not precise enough, but it suggests the following strategy.

For example, let's say A guesses that A's card is always the same color as B's card (which A can see).  Now if B behaves the same way, that's not good enough --- so let's have B always guess a color different from that of A's card (which B can see).  Let's say color black is 0, and color red is 1.  Moreover, let CX stand for player X's card color.  With this convention, the approach boils down to:
  • A plays 0 xor CB
  • B plays CA xor 1
A quick check (by hand, the truth table has 4 entries) shows the players always win with this strategy.
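
For completeness, here is that truth table; note that not just at least one, but exactly one player guesses correctly in every case:

    CA  CB | A plays 0 xor CB | B plays CA xor 1 | correct
     0   0 |        0         |        1         |    A
     0   1 |        1         |        1         |    B
     1   0 |        0         |        0         |    B
     1   1 |        1         |        0         |    A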

If that's what's going on for two colors and two players, what could be a reasonable guess for 4 suits and 4 players?  Well, let the suits be represented by 0, 1, 2 and 3, and further let CX now stand for player X's card suit.  Then,
  • A plays 0 xor CB xor CC xor CD
  • B plays CA xor 1 xor CC xor CD
  • C plays CA xor CB xor 2 xor CD
  • D plays CA xor CB xor CC xor 3
And again, a quick check (with code, the truth table has 256 entries) shows the players always win with this strategy too.
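
For those who want to run the check themselves, here is a minimal brute-force verification (a Python sketch of my own; it encodes player X's strategy as X xor the xor of everyone else's cards, which is exactly what the lists above spell out):

    from itertools import product

    def always_wins(n):
        # Exhaustively check all n**n assignments of card values 0..n-1.
        for cards in product(range(n), repeat=n):
            total = 0
            for c in cards:
                total ^= c  # xor of all n cards
            # Player i plays: i xor (xor of everyone else's cards),
            # which equals i xor total xor cards[i].
            guesses = [i ^ total ^ cards[i] for i in range(n)]
            if not any(g == c for g, c in zip(guesses, cards)):
                return False
        return True

    print(always_wins(2))  # True --- the 4-entry truth table above
    print(always_wins(4))  # True --- all 256 entries

The check also reveals why the strategy works: player i's guess matches i's own card exactly when i equals the xor of all the cards, and that xor is always one of the n possible player indices, so exactly one player is right every time.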

Monday, September 05, 2016

Smalltalks 2016 Invitation

The Fundación Argentina de Smalltalk (FAST) invites you to the 10th International Conference on Smalltalk Technologies (Smalltalks), to be held from November 9th through November 11th at the Universidad Tecnológica Nacional, Facultad Regional Tucumán, located in the city of Tucumán, Argentina. Everyone, including teachers, students, researchers, developers and entrepreneurs, is welcome as a speaker or attendee.

This year, we are extremely happy to announce Ralph Johnson and Gilad Bracha will attend our conference.

Ralph Johnson is part of the Gang of Four that gave us the language to talk about Design Patterns in software.  He taught Smalltalk at the University of Illinois with widespread influence on the community.  His students include Don Roberts and John Brant, authors of the Refactoring Browser.

Gilad Bracha's work as a language designer and implementer spans Strongtalk and the Animorphic VM, the Java HotSpot VM, and more recently Dart at Google as well as Newspeak.

1. Registration: Registration is free and now open at

Please make sure to register early to receive the conference's shirt, as well as to help us plan the conference's social events. We are accepting donations from participants to help fund the conference's costs. Please see the Donate section at FAST's website,

Contributions are greatly appreciated, and can be received both in pesos for local attendees, as well as via Paypal for those coming from abroad. Please note that donors, including those that have already sent us their contribution (thank you!), will receive a set of thank you gifts as well as the conference's shirt. For those of you that need a receipt, we can provide those on site.

2. Call for Participation.

Talk proposal submission for the Industry Track is now open at our website:

If you need special arrangements (e.g. because you would like to hold a workshop session), please indicate so in the abstract. The Industry Track's submission deadline is October 20th, 2016.

3. Related events: We will update related event information as we get closer to the conference, so please check for updates.

For any additional questions please write an email to

See you in Tucumán!
Fundación Argentina de Smalltalk (FAST)

Tuesday, August 09, 2016

Marquette Camp Smalltalk September 15-18th

Hi!  Your Camp Smalltalk crew has been at it for a while, and we are very happy to announce a Camp Smalltalk at Marquette, Michigan, on September 15-18th.  We are generously hosted by Northern Michigan University, where John Sarkela teaches Smalltalk courses.  Of course, NMU is also home to the Modtalk project.  You can help us provide a better experience for everyone by registering here.  Don't miss the date!

Saturday, July 16, 2016

Smalltalks 2015 videos now available

The videos from Smalltalks 2015 are now available here.

Stay tuned for coming information about Smalltalks 2016 :).

Sunday, June 26, 2016

Reliable email matters

Many of today's issues with software ultimately cause unreliable service.  Software's popularity does not seem greatly influenced by reliability, so audiences apparently tolerate the situation.  However, when unreliability becomes the norm, the resulting ecosystem is one in which nothing works as advertised.  You have effectively no recourse other than to roll your own, become a system administrator, or put up with it.

This kind of environment directly limits what you can accomplish in life.  Take for instance email.  Although delivery was never guaranteed, at least you had some chance to track down problems and there seemed to be a general willingness to ensure correct transmission.  Today, emails simply vanish with no explanation, and you're not supposed to know what has happened.  After some debugging, the best working hypothesis for the latest occurrence is as follows:

Comcast silently refuses to deliver emails that contain your own email address.

To verify this hypothesis, I sent myself emails with just "" in the message body.  These emails did not bounce, did not show up in a junk email folder, and were not delivered.  But emails reading "", with the last 't' missing, were delivered.
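
For anyone who wants to reproduce this kind of test, here is a minimal sketch (the address, server, and password below are hypothetical placeholders, not the actual values from my test):

    # Send yourself two messages whose only difference is whether the body
    # contains your full email address, then check which one arrives.
    import smtplib
    from email.message import EmailMessage

    ADDRESS = "user@example.com"    # your own address (placeholder)
    SMTP_HOST = "smtp.example.com"  # your provider's SMTP server (placeholder)

    for body in (ADDRESS, ADDRESS[:-1]):  # full address, then last character dropped
        msg = EmailMessage()
        msg["From"] = ADDRESS
        msg["To"] = ADDRESS
        msg["Subject"] = "delivery test: body is %r" % body
        msg.set_content(body)
        with smtplib.SMTP(SMTP_HOST, 587) as server:
            server.starttls()
            server.login(ADDRESS, "app-password-here")
            server.send_message(msg)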

The usual excuse, that aggressive spam filtering is a necessary evil, doesn't cut it in this case.  Someone replies to you, and the text says "at some point, wrote:".  Or someone comments on a forwarded email of yours that reads "From:".  These ubiquitous, well-established email body patterns are being dropped without notice.

This new form of unreliability started at least a few weeks ago.  Comcast's first approach to resolve the issue was to unilaterally reset my password on a Saturday, while stating the department taking action does not work on weekends.  When resetting a password predictably didn't fix the delivery problem, Comcast's final position was for me to complain to Thunderbird, GMail, and several other ISPs / email client software makers for their evident, collective, and synchronized fault.

The side effects of unreliable software are allowed to spread unchecked in part because, in an unknowable and incomprehensible software world, naturally there is no responsibility and thus no recourse.  Hence, the above diagnosis is merely a best working hypothesis.  Occam's razor suggests the email problem is Comcast's fault.  But how do you find where the problem actually is without access, much less appropriate support?

I don't think this will get any better as long as software and derived services can be sold without any form of accountability whatsoever.  Consequently, until then, protecting yourself from unreliability is up to you.  In the case of email, that means implementing and/or managing your own email server.  But where does that road end?  Email is hardly a top reliability concern.  The go-it-alone approach does not scale.

Tuesday, October 20, 2015

Smalltalks 2015 keynote speakers

Smalltalks 2015 is around the corner.  We're very pleased with this year's keynote presenters: John Brant, Damien Cassou, and Clément Béra.  Such quality speakers are possible thanks to our sponsors, community donors, and collaborators.  We really appreciate your support, thank you!

... I can't wait for the conference to start :).

Tuesday, October 06, 2015

Smalltalks 2009 videos online

I am very happy to report that Smalltalks 2009 videos are now online here, including the incredible talks by Dan Ingalls.

Monday, August 31, 2015

Smalltalks 2010 videos on YouTube

You can see the freshly uploaded playlist here!

Sunday, August 23, 2015

Camp Smalltalk PDX wrap up

We wrapped up Camp Smalltalk PDX tonight.  The Saturday barbeque with fresh Oregon food and the Flat Nines jazz band was really nice!  Thank you Instantiations, FAST, and others who contributed to make it such a great experience --- including the cooks Paul DeBruicker and Dave Caster.  The CTRL-H hackerspace was a welcoming venue.  Thank you CTRL-H!  We had room and amenities, they lent us their backyard for the barbeque, and we also heard about the hackerspace's member projects.  And we even got Camp Smalltalk shirts courtesy of Dave Caster.

Of course, there was a lot of Smalltalk.  I heard of people working on VA Smalltalk, Monticello, Squeak, Cuis, web frameworks, GemStone, and so on.  Personally, I had a lot of fun hacking some VM stuff until a while ago.  And it's not just the work itself --- it's also the people you meet, the connections you make, and the passion you can share.

Photos will become available, I am sure --- such as here.  In the meantime, enjoy this preview :).