Wednesday, June 13, 2018

Stuck keys since the 1990s

Recently, I ran into another instance of one of the shift, alt, or ctrl keys getting logically stuck.  That is, any keyboard press you make gets a gratuitous instance of shift, alt, or ctrl, as if they were held down continuously.  This is similar to the problem described here and elsewhere.

A solution to this problem is to press and hold the offending stuck key, then press the delete key (not backspace).  Releasing both restores the stuck key to its working state.  If you're not sure which key is stuck, just go through ctrl+del, alt+del, shift+del, and so on, perhaps trying both left and right keys, until you get good behavior.

I am at a loss on how to form an opinion on the problem.  I first observed this behavior in the early 1990s on machines running MS-DOS --- and the delete fix I discovered back then still works today.  What are some possible explanations for this phenomenon?

One is that keyboards today use the same hardware controller as in the 1990s, that this interface has some random race condition in it, and that nobody fixed it since then.  Another is that essentially the same keyboard software driver has been ported along since the MS-DOS era into Windows, bugs and all, and this is why the problem replicates in modern as well as museum configurations.

The software explanation seems more likely because I've seen this happen on virtualized Windows machines: while the guest OS sees stuck keys, the host keyboard behavior is correct.  It's really weird to have to type e.g. alt+del on your machine, which is working correctly, so that the guest OS also sees alt+del, so that the guest keyboard behavior is thus restored.

In other words, the museum configurations where this problem manifests itself may be real (as in the 1990s) as well as emulated (as in today).  Specifically, the latest instance of this problem occurred while running DOSBox inside Windows XP inside VMware inside OS X, and I was called to correct the issue via Skype with screen sharing.  The symptom was that pressing 'e' resulted in a new Windows Explorer window rather than the character 'e' appearing on the screen.

However, different software is no guarantee against stuck keys.  I see this weird keyboard behavior happen about once a year in Windows land.  I have also seen it on OS X, although much less frequently --- perhaps once every 5 years.  I saw it happen once on Linux, too.  I've searched for reasonable bug reports on this problem several times, but alas I haven't found a good explanation for why it happens (much less a patch or a definitive fix).

Could anyone please find and correct this bug?  I'd be also interested in a good diagnosis that determines where the problem actually is, even if the analysis does not come with a fix.

Wednesday, September 27, 2017

On the apparent novelty of today's "innovation"

These days, developments related to "technology" seemingly must be hyped as "disruptive innovations" that will "revolutionize the world", "change people's lives", etc.  It's easy to lose perspective and take these claims at face value.  How about a reality check?  For example, this modern wrist-gadget business of taking a pulse and counting calories is actually 1986-vintage quaint.

The modus operandi of dressing up decades-old, forgotten products in "technology" appears more like borderline plagiarism because sources are left uncited --- only be sure always to call it please, "research".  Taken in context, these technological novelties could appear ephemeral from the beginning.  And if that were so, technology hype would be appropriately reduced to merely enabling more efficient reproduction of sources of amusement and entertainment.  This usage of science hardly constitutes progress, much less a revolution.

As a suggestion, maybe taking a look at Status Update by Alice Marwick, or perhaps Amusing Ourselves to Death by Neil Postman, is in order.

Sunday, January 08, 2017

Smalltalks 2016 slides and photos now available!

You can now get the Smalltalks 2016 slides, as well as photos and reports, by visiting our website.  Enjoy!

Sunday, November 27, 2016

Smalltalks 2016 videos now available

You can see the videos from the Smalltalks 2016 conference here.  Enjoy!

Tuesday, November 01, 2016

First 2017 NA Camp Smalltalk announced

Please mark your calendars: a Camp Smalltalk is in the works for March 31st through April 2nd, to be held in Raleigh / Durham (North Carolina in the US).  More info soon!

Monday, October 17, 2016

The car reliably drives itself, except it doesn't

On June 30th, 2016, the National Highway Traffic Safety Administration (NHTSA) opened an investigation into Tesla Motors due to a fatal crash involving a Tesla Model S.  The issue is that the car's Autopilot software was enabled and driving the car.  The autonomous system missed a truck crossing the highway perpendicular to the Tesla's direction of travel.  As a result, the Tesla passed under the truck's trailer.  Presumably, the parts of the trailer that went through the Tesla's windshield resulted in the driver's death.

Tesla's blog post on the incident states, in part:

This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.

These averages are so broad that the comparisons are meaningless.

For example, Tesla quotes (without citation) that among all vehicles in the US, there is a fatality every 94 million miles.  The "all vehicles" category presumably includes buses and farm equipment.  Does comparing the fatality rates of mopeds and Tesla cars make sense?  Furthermore, Tesla's Model S has a suggested retail price of $70,000 USD, which is hardly representative of average vehicles.  What is the fatality rate for vehicles comparable to Tesla's Model S?  Are the comparable vehicles similar enough to reach meaningful conclusions?  What do different vehicle classes look like?  How are the associated driver populations for each category best described?  Are populations associated with higher fatality rates likely to adopt autonomous driving systems in the first place?

Recall that the fatality-per-mile averages include the effects of negligent drivers.  In this light, Tesla's own numbers are questionable at face value.  Specifically, according to the CDC, 31% of US driving-related fatalities involve alcohol impairment.  Thus, sober human drivers cause one fatality per roughly 136 million miles traveled, not the 94 million miles quoted.  The CDC data also indicates a further 16% of crashes involve drugs other than alcohol, but the number of resulting fatalities for those collisions could not be clarified with available data.
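
The adjustment is simple enough to reproduce:

```python
# Back-of-the-envelope adjustment: if 31% of fatalities involve
# alcohol, the remaining 69% define the "sober" fatality rate.
US_MILES_PER_FATALITY = 94e6      # Tesla's quoted US average
ALCOHOL_FATALITY_SHARE = 0.31     # CDC figure cited above

sober_miles_per_fatality = US_MILES_PER_FATALITY / (1 - ALCOHOL_FATALITY_SHARE)
print(round(sober_miles_per_fatality / 1e6))   # about 136 (million miles)
```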

Tesla's post compares fatality-per-mile averages.  However, Tesla's total of 130 million miles traveled pales in comparison to the number of miles behind the CDC averages.  It looks like the sample sizes differ by several orders of magnitude.  Is Tesla's sample size really enough to reach accurate conclusions?  Tesla seems to think so, thus an assessment is fair game.  The above numbers show the Autopilot software compares favorably only to an average that includes drivers under the influence of intoxicants.  Moreover, Autopilot is roughly equivalent to an average driver: likely speeding, perhaps reckless, in some cases impaired by drugs other than alcohol.  Altogether, Autopilot is not a good driver.

As a side note, when Tesla invokes fatality per miles traveled averages, the implication is that Tesla's Autopilot is better than average and hence good.  But most drivers incorrectly believe themselves better than average.  It follows the average driver underestimates what it takes to be a good driver.  Tesla's statement could be setting up average readers to deceive themselves by tacitly appealing to their illusory superiority.

But back to the story.  What are the self-driving car performance expectations?  This June 30th CNN article states, in part:

Experts say self-driving systems could improve safety and reduce the 1.25 million motor vehicles deaths on global roads every year. Many automakers have circled 2020 as a year when self-driving systems will be released on public roads.

The year 2020 is basically just around the corner.  Today, drivers feel compelled to forcibly take control from autonomous driving systems alarmingly often.  This LA Times article from January 2016 states, in part:

The California Department of Motor Vehicles released the first reports from seven automakers working on autonomous vehicle prototypes that describe the number of "disengagements" from self-driving mode from fall 2014 through November.  This is defined by the DMV as when a "failure of the autonomous technology is detected" or when the human driver needs to take manual control for safety reasons.

Google Inc. reported 341 significant disengagements, 272 of which were related to failure of the car technology and 69 instances of human test drivers choosing to take control. On average, Google experienced one disengagement per 1,244 miles. [total 424,331 miles traveled]

The average driver response time was 0.84 of a second, it said. [Who is "it"?  The DMV?  Google?]

Most of the cases in which drivers voluntarily took control of the car involved "perception discrepancy," or when the car's sensors did not correctly sense an object such as overhanging tree branches, Google said. 

Bosch recorded 625 disengagements, or about one per mile, and Delphi Automotive totaled 405, or one per 41 miles.  [Delphi total 16,662 miles traveled]

Nissan North America Inc. reported 106 disengagements, which breaks down to one per 14 miles; Mercedes-Benz Research and Development North America Inc. listed 1,031, or one every two miles; and Volkswagen Group of America Inc. totaled 260, or one every 57 miles.  [VW total 14,945 miles traveled]

Tesla Motors Inc. said it did not have any disengagements from autonomous mode. It did not report how many miles its self-driving cars had traveled. 

Both the information and the questions required for a good understanding are missing.  Would you be comfortable being driven by someone who misses branches, or simply fails to drive at all, as frequently as once a mile?  Are the driving conditions for those driven miles reported by Google and others realistic?  What proportion were driven in snow, ice, heavy rain, fog, or smoke?  Did the autonomous driving systems encounter geese, ducks, or deer on the road?  How do those systems handle emergency situations?
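
At least the per-mile figures in the quoted report can be sanity-checked against the totals it provides:

```python
# (disengagements, total miles) as reported in the LA Times article,
# for the makers whose mileage totals were given.
reported = {
    "Google": (341, 424331),
    "Delphi": (405, 16662),
    "VW":     (260, 14945),
}

# Integer miles per disengagement, matching the article's figures.
miles_per_disengagement = {
    maker: miles // count for maker, (count, miles) in reported.items()
}
print(miles_per_disengagement)   # Google: 1244, Delphi: 41, VW: 57
```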

Suppose you will never let beta software drive you around.  What happens when you are affected by someone who does?  Back to the CNN article,

Experts have cautioned since Tesla unveiled autopilot in October that the nature of the system could lead to unsafe situations as drivers may not be ready to safely retake the wheel.

If Tesla's autopilot determines it can no longer safely drive, a chime and visual alert signals to drivers they should resume operation of the car. A recent Stanford study found that a two-second warning -- which exceeds the time Tesla drivers are sure to receive -- was not enough time to count on a driver to safely retake control of a vehicle that had been driving autonomously.

Given this expert assessment, what does the lack of Tesla disengagements in California DMV's report mean?  That Tesla's software is just better?  That the average Tesla driver is less engaged?  Does the Tesla crash suggest so-so software is duping drivers into not paying attention?

But even if the timely warning were possible, are driving autopilots a good idea in the first place?  In aviation, autopilots do most of the flying, and as a result human pilots' ability to fly by hand deteriorates.  Recovering from in-flight emergencies often requires manual flying, and an emergency is not the time to discover those skills are lacking.  Specifically, the professional recommendation is:

The SAFO [Safety Alert for Operators], released earlier this month, recommends that professional flight crews and pilots of increasingly advanced aircraft should turn off the autopilot and hand-fly during "low-workload conditions," including during cruise flight in non-RVSM airspace. It also recommends operators should "promote manual flight operations when appropriate" and develop procedures to accomplish this goal.

"Autoflight systems are useful tools for pilots and have improved safety and workload management, and thus enabled more precise operations," the SAFO notes. "However, continuous use of autoflight systems could lead to degradation of the pilot's ability to quickly recover the aircraft from an undesired state."
The SAFO adds that, though autopilots have become a prevalent and useful tool for pilots, "unfortunately, continuous use of those systems does not reinforce a pilot's knowledge and skills in manual flight operations."

In contrast, driving autopilots are promoted for heavy use, and especially for low-workload driving scenarios.  The ideal situation often casts the driver as a self-absorbed mass transit passenger:

Note the irony of "progress" illustrated by reading a paper (!) book, comfortably sitting with all driving controls out of reach.  And there is more than one irony in play: studying from paper rather than a tablet is associated with better comprehension and retention of the material.  Paper also leads to better results than a Kindle.  Why does this picture associate technological improvement with paper books?

Of course, the flying environment is very different from the driving environment.  Kids and pets don't run in front of the plane from behind a row of clouds.  Plane collisions are infrequent due to spacious traffic control enforced with multiple radars.  And if you listen to plane mishap recordings, you will notice bad situations develop over comparatively long periods of time.  Limited flight autopilot failures can be tolerated because the entire flying environment is engineered to catch problems before they become insurmountable.

In contrast, there is no car equivalent to the cockpit's emergency procedure binder.  The driving experience still requires quick, judicious action.  Experience with flight autopilots shows excessive dependency can result in compromised pilot skills.  So why should the professional advice for planes, with all the implied liability and gravitas, be any different in nature for cars?  And if inattentive drivers do not have the time to recover from an undesired state, why should drivers stop paying attention in the first place?  Tesla's own advice agrees: "be prepared to take over at any time".

For clarity's sake, maybe Tesla's "Autopilot" is better described in terms of "Driver Assist".

For time's sake, maybe traffic and community planning is a better way to curb hours wasted while driving.  As an example, shutting down container shipping terminals increases truck traffic.  Trucks disproportionately wear down roads because pavement damage is roughly proportional to the fourth power of axle load --- each truck axle carrying 18,000 pounds is equivalent to about 10,000 cars.  So, more truck traffic means more road work, which in turn causes even more congestion.  Self-driving vehicles can be irrelevant to traffic.
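
The fourth-power rule makes the truck figure easy to check.  Assuming a typical passenger-car axle load of about 1,800 pounds (my assumption; the rule only fixes the exponent):

```python
# Pavement damage scales roughly with the fourth power of axle load.
TRUCK_AXLE_LB = 18_000
CAR_AXLE_LB = 1_800     # assumed typical passenger-car axle load

equivalent_car_axles = (TRUCK_AXLE_LB / CAR_AXLE_LB) ** 4
print(int(equivalent_car_axles))   # 10000
```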

For everyone's sake, maybe a characterization of the behaviors correlated with fatalities could lead to keeping drivers exhibiting those behaviors off the road.  Ignition interlocks prevent drunk driving 70% of the time according to the CDC, but of course this is invasive for the sober majority.  So instead, a reasonable non-invasive Driver Assist feature could detect unfit driving.  Critically, this approach does not require developing a fully fledged autonomous driving system to be, in some respects, perhaps just as helpful.

Update: it looks like common sense is finally catching on --- the NTSB says fully autonomous cars probably won't happen, many proponents of the technology are running into trouble and/or scaling back expectations, and Tesla just disabled its Autopilot system.

Sunday, October 16, 2016

NSA's August 2016 puzzle periodical problem

You can see the statement of an interesting problem I heard recently here.  Basically, it has two parts: a simpler stage, and a more complex setup.

The simpler stage is as follows.  Players A and B both take a card from a standard 52-card French deck and put it face up on their foreheads.  A can only see B's card, and vice versa.  Their task is to guess the color of their own card.  They cannot communicate with each other, and must write down their guesses at the same time.  If at least one of them guesses correctly, they both win.  Is there a strategy that always wins?

The more complex setup has four players, and now they must guess the card's suit.  If at least one of them guesses correctly, they all win.  Is there a strategy that always wins in this case?


if you want to have a go at the problem, stop here

So you are still there?  Did you really make an honest attempt at the problem?


Ok, so you want to read on.  Fine :).

First, the simpler stage.  Clearly, treating both players as identical doesn't go anywhere.  But if one considers that A's identity is different from B's, one can also assume they behave differently in response to each other's card.  That is, the card they see is a selector for some behavior.  Moreover, both players may have different responses to the same messages.

This looks a lot like ECC with XOR or parity, and RAID disk arrays.  A and B could be stand-ins for error recovery mechanisms that try to reconstruct missing data.  If at least one guesses correctly, together they recover the unknown card.  This metaphor is not precise enough, but it suggests the following strategy.

For example, let's say A guesses A's card is always the same color as B's card (which A can see).  Now if B behaves the same that's not good enough --- so let's have B always guess a color different than that of A's card (which B can see).  Let's say color black is 0, and color red is 1.  Moreover, let CX stand for player X's card color.  With this convention, the approach boils down to:
  • A plays 0 xor CB
  • B plays CA xor 1
A quick check (by hand, the truth table has 4 entries) shows the players always win with this strategy.

If that's what's going on for two colors and two players, what could be a reasonable guess for 4 suits and 4 players?  Well, let the suits be represented by 0, 1, 2 and 3, and further let CX now stand for player X's card suit.  Then,
  • A plays 0 xor CB xor CC xor CD
  • B plays CA xor 1 xor CC xor CD
  • C plays CA xor CB xor 2 xor CD
  • D plays CA xor CB xor CC xor 3
And again, a quick check (with code, the truth table has 256 entries) shows the players always win with this strategy too.
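
For the skeptical, here is that brute-force check for both cases.  Player i plays i xor (the xor of every card i can see), which equals i's own card exactly when the xor of all cards is i; since the xor of values below a power of two stays below it, exactly one player is always right.

```python
from itertools import product

def always_wins(n):
    """Brute-force the strategy for n players and n suits:
    player i plays i xor (xor of every card player i can see)."""
    for cards in product(range(n), repeat=n):
        winners = 0
        for i in range(n):
            seen = 0
            for j, c in enumerate(cards):
                if j != i:
                    seen ^= c
            if i ^ seen == cards[i]:    # player i guessed correctly
                winners += 1
        if winners == 0:
            return False
    return True

print(always_wins(2), always_wins(4))   # True True
```

Note the same xor trick fails for three players and three suits: the xor of values in {0, 1, 2} can be 3, in which case nobody wins.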

Monday, September 05, 2016

Smalltalks 2016 Invitation

The Fundación Argentina de Smalltalk (FAST) invites you to the 10th International Conference on Smalltalk Technologies (Smalltalks), to be held from November 9th through November 11th at the Universidad Tecnológica Nacional, Facultad Regional Tucumán, located in the city of Tucumán, Argentina.  Everyone, including teachers, students, researchers, developers, and entrepreneurs, is welcome as a speaker or attendee.

This year, we are extremely happy to announce Ralph Johnson and Gilad Bracha will attend our conference.

Ralph Johnson is part of the Gang of Four that gave us the language to talk about Design Patterns in software.  He taught Smalltalk at the University of Illinois with widespread influence on the community.  His students include Don Roberts and John Brant, authors of the Refactoring Browser.

Gilad Bracha's work as a language designer and implementer spans Strongtalk and the Animorphic VM, the Java HotSpot VM, and more recently Dart at Google as well as NewSpeak.

1. Registration: Registration is free and now open at

Please make sure to register early to receive the conference's shirt, as well as to help us plan the conference's social events. We are accepting donations from participants to help fund the conference's costs. Please see the Donate section at FAST's website,

Contributions are greatly appreciated, and can be received both in pesos for local attendees and via Paypal for those coming from abroad. Please note that donors, including those who have already sent us their contribution (thank you!), will receive a set of thank-you gifts as well as the conference's shirt. For those of you who need a receipt, we can provide one on site.

2. Call for Participation.

Talk proposal submission for the Industry Track is now open at our website:

If you need special arrangements (e.g. because you would like to hold a workshop session), please indicate so in the abstract. The Industry Track's submission deadline is October 20th, 2016.

3. Related events: We will update related event information as we get closer to the conference, so please check for updates.

For any additional questions please write an email to

See you in Tucumán!
Fundación Argentina de Smalltalk (FAST)

Tuesday, August 09, 2016

Marquette Camp Smalltalk September 15-18th

Hi!  Your Camp Smalltalk crew has been at it for a while, and we are very happy to announce a Camp Smalltalk at Marquette, Michigan, on September 15-18th.  We are generously hosted by Northern Michigan University, where John Sarkela teaches Smalltalk courses.  Of course, NMU is also home to the Modtalk project.  You can help us provide a better experience for everyone by registering here.  Don't miss the date!

Saturday, July 16, 2016

Smalltalks 2015 videos now available

The videos from Smalltalks 2015 are now available here.

Stay tuned for coming information about Smalltalks 2016 :).