Sunday, November 30, 2008

On implicit self, v2.0

Vassili Bykov posted an answer to my previous post regarding the use of implicit self on his blog, so I will answer his answer here on mine. I usually reread what I write and make corrections, so I have a feeling this post will be edited heavily. Please make sure you get the latest version.

Regarding implicit self in Newspeak, Vassili writes:

As far Newspeak is concerned, what we have is not “implicit self”, and its purpose is not saving keystrokes. What Newspeak has are implicit receivers. Because of class nesting, a message with an implicit receiver may really be sent to an object different from the “real” self (the receiver of the current message). This feature is very important in supporting the minimalist module system of Newspeak. Thus, an implicit receiver is not simply an omitted self, and inserting “self” into a message send with an implicit receiver is not a behavior-preserving transformation.

Ok, I did not understand implicit receivers in Newspeak, and therefore what I wrote regarding implicit self does not apply to it. However, how does the argument for consistency to which Vassili replied with the text above --- namely this, which I wrote:

I find that the consistency offered by a few keystrokes makes it easier for me to read and understand code faster and more accurately. Therefore, since we read code much more often than we write it, I think that favoring reading speed over typing speed is the right decision to make.

apply to implicit receivers in general? Doesn't the fact that the receiver is implicit become confusing over time? I think that, as Vassili says near the end of his post, perhaps experience will tell and unfortunately there is not a whole lot of it yet. On the other hand, maybe an example of what Vassili is referring to is in order.

The next piece from my post that Vassili quotes is this.

I’d rather see self than having to assume it by scanning the first token until the first occurrence of $: (or ‘::’) to only then be able to disambiguate between a receiver and a keyword.

In short: I prefer the work of my internal parser to be made easier by the use of prefixes, rather than to have to keep a stack that only goes down in the presence of a suffix.

Vassili's answer is the following.

This aurgmnet is falwed for the smiple raeson that our percpetion dose’nt wrok this way. We do’nt hvae an intarenl parser. What we rellay hvae culod be desrcbeid as a comlepx adpative, preditcive and bakcptaching paettrn recogznier. This is why we can still read the above even though most of the words are messed up.

Well, there is something worth noting: all the misspelled words have one thing in common --- their first letter, a prefix, is always correct!

I've always felt disappointed that VW's spell checker offers suggestions assuming that the first letter is never at fault and is always present. When that does not happen, lewl, higstn moceeb rmoe tlufcidif ot dratsnedun. Tralinen erspra ro ton, het klac fo trecocr gliaden sortacindi edos esem ot ucaes tiandadiol sharphid*.

Jokes aside, I would suggest that although our idea of a parser may be limited as implemented in a computer, we are quite able to draw distinctions in text based on a number of criteria. My observation was simply a matter of personal preference.

I prefer the work of my internal parser to be made easier by the use of prefixes, rather than to have to keep a stack that only goes down in the presence of a suffix.

I still think it's worthy of consideration though, particularly because I am not sure Vassili's argument holds in every case:

We don’t scan the text linearly one character and one token at a time. Words are pictures, not character arrays.

Sure, however sentences are read left to right, and the precedence of certain words does matter. My issue with $: (or '::') is that it is a suffix added to a word, and its presence can change the sentence being read quite strongly: it controls whether the first word is a receiver or not.
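To make the prefix-versus-suffix point concrete, here is a toy sketch (in Python, purely illustrative, not anything from Newspeak's actual parser) of the disambiguation my internal parser has to perform: with an explicit receiver the role of the first word is known immediately, while with implicit receivers its role is settled only once a trailing colon is (or is not) seen.

```python
def classify_first_word(sentence):
    """Toy model of the argument: decide whether the first word of a
    Smalltalk-like sentence is a receiver or a keyword.

    With explicit receivers, the first word is always the receiver, so no
    lookahead is needed.  With implicit receivers, we must scan the first
    token to its end: a trailing ':' retroactively turns it into a keyword,
    changing the shape of the whole sentence."""
    first = sentence.split()[0]
    if first.endswith(':'):
        return 'keyword'   # e.g. "label: fileName" -- send to an implicit receiver
    return 'receiver'      # e.g. "self label: fileName"

# The same leading word parses differently depending on a suffix:
assert classify_first_word('label: fileName') == 'keyword'
assert classify_first_word('label printString') == 'receiver'
```

The point of the toy is only that the decision hinges on a suffix seen later, rather than on a prefix seen first.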

To put it differently, when it comes to sentences that represent a message send, perhaps the issue here is that Smalltalk basically imposed that the most important thing is the receiver, and that is why it comes first in Smalltalk sentences. In fact, since receivers come first, it is not necessary to mark them with suffixes or anything else.

But in Newspeak this is not so. I was under the impression that implicit self (in Self) and implicit receivers (in Newspeak) were a matter of economy of typing. To some extent I get the same impression from Vassili's comment here:

On the other hand, there are situations when they improve readability by eliminating noise. A good example are DSLs embedded in Newspeak. So far we have two such languages widely used in the system: Gilad’s parser combinators and my UI combinators in Hopscotch. The feature common to both are definitions written in a declarative style combining smaller things into larger ones. Compare an example of such a definition the way it’s commonly written:
heading: (
    row: {
        image: fileIcon.
        label: fileName.
        [column: folderContents]})

with the same definition with explicit receivers:

self heading: (
    self row: {
        self image: self fileIcon.
        self label: self fileName.
        [self column: self folderContents]})

The first example has nothing but the structure it defines. It’s important what the expressions say. The fact that they are message sends is an implementation detail. The second example leaks this implementation, and it takes some effort to see what it really says in between all the “self”s.

The effect I can't help seeing though is that the receiver appears to stop being the most important element of a sentence, so much so that sometimes it is implicit and it is not equivalent to self --- even though in the code above the implicit receiver is self.

How does Newspeak disambiguate between an implicit receiver of "self" and some other implicit receiver? Is the disambiguation expense cheap? Perhaps part of the answer is in Vassili's comment:

Those left unconvinced should also consider that modern IDEs, Newspeak’s Hopscotch included, do the parsing for you by colorizing the source code.

However, I find this particular argument unconvincing because, even though I worked on more than one project that had syntax coloring, I found coloring most useful when the code was convoluted. So is the coloring good in its own right, or does it become valuable only when there are other things to deal with, such as the inherent entropy of each symbol being read?**

But I digress. Personally, I would prefer the receiver to always be explicit, or at least that the indication of whether there is a receiver be a prefix, but what can I say... that's my biased preference today. I do not have a good track record: 13 years ago I thought that programming in assembler on my 386 was the greatest thing since sliced bread, and yet here I am writing books about Smalltalk... Nevertheless, I hope that this is not seen in these terms:

There’s much to be said about the human nature and the tendency to instinctively resist change to something familiar while trying to rationalize that resistance.

Resistance to what? I am not observing the alleged change in my environment, so I cannot possibly be resisting it. The more interesting bit though is this.

It takes some time and experimenting to see a change for what it is and get a feel of the new tradeoffs.

So, I also hope that it is clear that some of the tradeoffs seen in Newspeak seem a bit strange to me at first sight. Not wrong, not incorrect, nor anything like that. Just not something I'd naturally think of today because my preferences are currently somewhere else, that's all.

Now, Gilad says in his presentations that one of the goals of Newspeak is to improve what was achieved with Smalltalk (and other languages such as Self). Well... perhaps the arguments are a bit too long to fit in 45 minutes or 2 hours, and so the essence behind them is missed. However, using implicit receivers for the sake of modularity (and to type less as a side effect)... it just makes me curious. What other alternatives were considered? What tradeoffs were attractive for this one as compared to the ones that were discarded?

To summarize: I think explicit receivers are better because sentences are less ambiguous, and because a key element of a sentence, the receiver of the message, is always present in the same place; it also seems fitting that it comes first, given its importance. On the other hand, Newspeak's use of implicit receivers has the advantage of making it easier to implement a minimalist modularity scheme, and as a side effect you type considerably less in some cases.

Is that a fair assessment? Where do we go from here?

*: When that does not happen, well, things become more difficult to understand. Internal parser or not, the lack of correct leading indicators does seem to cause additional hardship.

**: Now talking exclusively about Smalltalk for a moment: if coloring is there and I can manage the namespace of a rather complex method better, does that end up helping me? Or does it simply make it easier for poorly written code to live on, thus making syntax coloring necessary and apparently useful? If methods are short and no more than 5 lines long, like we always say they should be, do we really need syntax coloring? Would we even care much about formatting? Which one is the egg and which one is the chicken?

And now talking about C: coloring really helps, but I think the existence of large files with lots of code and little to no visual cues as to where the boundaries between each of the pieces are is what makes coloring helpful in the first place. Nevertheless, I'd rather have a browser.

On implicit self

I didn't quite like the implicit self in Self and Newspeak, but I couldn't quite put my finger on why. I just realized that the argument can be made quite concisely.

The implicit self is a special case in the grammar of the language that does not need to be there. In languages that rely strongly on message sends as the mechanism by which behavior occurs, message sends should be expressed as unequivocally as possible. At least from my Smalltalk-biased POV, the fact that all sentences are of the form

receiver message: withArguments

is a benefit because it clearly states what the receiver is. As such, I'd rather see self than having to assume it by scanning the first token until the first occurrence of $: (or '::') to only then be able to disambiguate between a receiver and a keyword.

In short: I prefer the work of my internal parser to be made easier by the use of prefixes, rather than to have to keep a stack that only goes down in the presence of a suffix.

Similar arguments dictate my preference to write [nil] instead of [], and to add ^self at the end of empty methods rather than seeing a "blank" text pane.

I find that the consistency offered by a few keystrokes makes it easier for me to read and understand code faster and more accurately. Therefore, since we read code much more often than we write it, I think that favoring reading speed over typing speed is the right decision to make.

Saturday, November 29, 2008

Looking for the source of a quote

Again, I must turn to the human Google for this one. What is the source of this quote?

In most languages where inheritance is singular, it's a card you only get to play once, so you'd better play it wisely.

Thanks in advance!

Friday, November 28, 2008

About the Fundamentals book

I just ran a page estimate for the Fundamentals book: it has reached 206 pages. Chapter 4 looks like it needs another 20. The rest of the material, from chapter 5 (on polymorphism) to chapter 9 (on optimization), seems to be enough to push the page count close to 800.

This is a huge problem for two reasons. First, Lulu only binds volumes of up to 740 pages. Second, it leaves zero room to write about Assessments!

So the plan of action is as follows: split off the material on Assessments into its own volume, and hope that I can shoehorn all the other material into 740 pages or less.

I feel better already. Eliminating the divisions that were cutting the Fundamentals book into two parts, one for the techniques and one for Assessments, gave me 4 more precious pages of space.

It seems that I have enough stuff to write books for the next 5 to 10 years now. Well, I better get going with it, or I won't be able to finish in time.

Smalltalks 2008 Coding Contest writeup

As in other years, this Smalltalk coding contest consisted of writing a program that would play a game, with each player ranked by comparing their scores in the game. The qualifier round, held before the conference, serves the purpose of bringing all the contestants to more or less the same level of proficiency, i.e.: they all have a program that plays and obtains some score. The finals change the rules of the game the participants have to play, but the nature of the changes is not communicated to them. It is up to those competing for a prize to determine what has changed, to adapt their program to the perceived changes, and to do so under time pressure. The idea of having to make changes within something like two hours is that, if a program is well designed, then changing it will not require an extraordinary amount of time and effort. If, on the other hand, a participant comes to the final round with a program too tightly coupled to the problem at hand, then changing it will be costly. Since business requirements change continuously in real life, this seems an appropriate way of measuring the quality of a contestant's submission.

The problem for the Smalltalks 2008 Coding Contest was again to play a game. This particular game was defined in terms of what happens in a software development team. In other words, participants had to create a program that would behave like a software developer in a game in which an application is being built. Some of the factors that contestants had to keep in mind are things that we all know very well. For example, a little stress is a good thing, because it may give us a sense of urgency that allows us to finish our tasks more quickly. On the other hand, too much stress eats into our productivity, and no matter how much we work we will not make much progress.

Moreover, interaction with other team members is a key factor for a project's success. Because of this, the game gave an advantage to those players that collaborate with each other by making it less likely that completed work units would cause bugs when integrated into the final delivery. However, this is not so easy to achieve all the time. It is for this reason that the game came with six autonomous players with different personalities, in order to stress the capabilities and flexibility of each of the competing programs. The autonomous players were played by the game server itself, and so the contestants did not have control over who they had to play with in a particular game.

In order to make it interesting and politically correct, the autonomous players were modeled after six Dilbert characters: Dilbert, Alice, Asok, Dogbert, Ratbert and Wally (which finally explains this earlier post). The implementation of the game itself was rich enough so that it was possible to implement a single strategy shared by all these different characters. What made Dilbert different from Dogbert was a personality object that provided 17 tuning parameters for aspects such as the amount of stress the player would tolerate, how much work it would be willing to accept at any one time, and so on.

Also modeled was how quickly players would behave counterproductively to retaliate for a perceived lack of collaboration from other players. For example, Dogbert is not usually very willing to help others. Since it is in his personality to put himself first, he may decide to accept work from others only to delete it and thus get rid of it. He does not care about the consequences: as long as the boss did not assign the work to him in the first place, he will delete it. What happens then is that if Dogbert deletes work sent by Alice, the boss will complain to Alice for letting work drop on the floor. This increases Alice's stress, so it is in her best interest to note that the complaint was related to a work unit sent to Dogbert. In turn, this will make it less likely for Alice to ask Dogbert for help later on in the project.
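None of the contest implementation is shown here, so the sketch below is entirely hypothetical: a minimal Python illustration of the shared-strategy-plus-personality design described above, with invented parameter names (the real game had 17 tuning parameters).

```python
from dataclasses import dataclass

@dataclass
class Personality:
    # Two hypothetical tuning parameters standing in for the real 17.
    helpfulness: float       # willingness to do work received from others
    stress_tolerance: float  # how much stress the player absorbs quietly

class Player:
    """All characters share one strategy; only the personality differs."""
    def __init__(self, name, personality):
        self.name = name
        self.personality = personality
        self.grudges = set()  # players whose requests we will avoid later

    def receive_work(self, sender, work):
        # A selfish player (low helpfulness) simply deletes the work,
        # letting the blame fall on the sender.
        if self.personality.helpfulness < 0.3:
            return 'deleted'
        return 'accepted'

    def note_complaint(self, about_work_sent_to):
        # Alice's best move: remember who dropped her work on the floor.
        self.grudges.add(about_work_sent_to)

dogbert = Player('Dogbert', Personality(helpfulness=0.1, stress_tolerance=0.9))
alice = Player('Alice', Personality(helpfulness=0.7, stress_tolerance=0.4))

# Dogbert deletes Alice's work; the boss complains to Alice; Alice learns.
outcome = dogbert.receive_work(alice, 'work unit')
if outcome == 'deleted':
    alice.note_complaint(about_work_sent_to=dogbert)
assert dogbert in alice.grudges
```

The design choice the post describes is that one strategy object serves every character, so adding a new personality costs only a new parameter set, not new code.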

Contestants had to deal with this counterproductive behavior as well. However, they were not told which player had which personality. All they saw were developers called names such as Smith, Jones, or Taylor. Part of the challenge was to make programs that would learn from their experience as projects progressed to completion.

You can see the official rules and qualifier server for the contest by selecting the coding contest section on the left here.

The winners of this year's contest were Guillermo Amaral and Guido Chari. The second best score was obtained by Hernán Wilkinson, one of the organizers; unfortunately for Hernán, he was barred from receiving prizes at the finals. Diego Geffner finished in third place at the final round. Prizes included an iPod Touch for each of the winners, courtesy of Instantiations and Caesar Systems. Diego Geffner obtained an MP4 player courtesy of GeoAgris. Snoop Consulting also provided bookstore gift cards.

Wednesday, November 26, 2008

Some sensible observations regarding multicore CPUs

Finally, somebody calls never-ending exponential growth what it is. Now we should recognize that exponential growth in core count is just as doomed as the GHz race and many other things (see also here).

One way or the other, the future is not yet clear.

Tuesday, November 25, 2008

Smalltalks 2008 makes the newspaper

The Smalltalks 2008 conference was mentioned in the newspaper La Razón. Here is the PDF with the scanned page. The caption on the photo states there was a large audience at the conference :).

Saturday, November 22, 2008

Smalltalks 2008 photos

Here are some photos I took while at the conference. Also, here's an album of photos taken at the social dinner event (courtesy of James Foster).

You do not want to miss out next year now, do you?

First prize of the Smalltalks 2008 Coding Contest

Well, Instantiations had generously provided us with an iPod Touch for the Smalltalks 2008 Coding Contest's first prize. But then a pair won the finals, so rather than an iPod to share, it became an iPod to divide. Fortunately, another one of our sponsors came to the rescue: CaesarSystems will provide a second iPod Touch for the winning pair.

Thank you Victor Koosh and CaesarSystems!

Sunday, November 16, 2008

Smalltalks 2008 video: A Reflective Reporting Tool

Gabriel Cotelli posted footage of his talk at Smalltalks 2008 here. Enjoy!

Saturday, November 15, 2008

Smalltalks 2008, Saturday notes

I just got back from the conference's last day. Here are the notes for today.

First, I gave my presentation on the implementation of the Coding Contest. For quite a while I had been waiting to tell everyone that the numerical model behind all this work, including the behavior of Dilbert, Alice, Asok, Dogbert, Ratbert and Wally, was...

... y = arctan(x)...

This function is used to model the progress of work, the quality of the perception of how much a work unit is done, the inspiration of programmers, the stress of programmers, the irritation of programmers... everything. I even found out later that it has been used to model characteristics of bipolar behavior disorder.
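Only the function itself is named above; the scaling and parameter names below are my own invention. A small sketch of how y = arctan(x) yields a bounded, saturating response, which is what makes a single function usable for modeling progress, stress, irritation and the rest:

```python
import math

def saturating_response(x, scale=1.0):
    """Map a non-negative input (effort, pressure, ...) to a value in
    [0, 1) using arctan, normalized so that 0 maps to 0 and the response
    flattens out as x grows.  The scale parameter (hypothetical) tunes
    how quickly the curve saturates."""
    return math.atan(scale * x) / (math.pi / 2)

# A little input produces a near-proportional response...
assert abs(saturating_response(0.1) - 0.0634) < 0.001
# ...but past a point, more input yields almost no further change:
assert saturating_response(100) > 0.99
assert saturating_response(1000) - saturating_response(100) < 0.01
```

This saturation is exactly the "too much stress eats into productivity" shape: early effort pays off almost linearly, while piling on more input barely moves the needle.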

But well, enough of that. The code will be made available shortly, and so you will be able to play with it. After that came Leandro Caniglia's presentation on instance behavior. I missed most of it because I had to leave the room after my presentation to talk to some people, so I cannot comment much on it. However, I did see that he got lots of questions regarding the applications, which shows there was plenty of interest.

After the break came a presentation by Gabriel Honoré, who wrote a Commodore 64 emulator in VisualWorks. Not only does it work --- it runs at 100% speed on a Core 2 Duo @ 2GHz (his machine). In fact, not only does it work... it works correctly!!! He played some games on screen; he even brought up the game Truco (for an explanation of how to play the game, see here). Yep, the one that uses the SID chip for speech synthesis. And it worked, and sounded, perfectly. At this point, Gabriel decided to open inspectors on the components of the C64. He brought up the VIC-II video chip and, with a simple message send (IIRC, self color: 6), changed the border of the screen to green while the game continued to run. A suggestion to spy on the computer's cards was made. In my opinion, the emulator was so good that the emulated C64 even reset itself faithfully... the particular way in which the video chip behaves while the machine reboots was reproduced correctly on the screen, as far as I could tell. Gabriel commented that the code will be available, in free form, in the Cincom public Store repository in a few months.

Finally, it was Gerardo Richarte's turn to show SqueakNOS. After booting from a USB device, he went on to show how hardware devices are programmed in simple terms. Although I do not remember the exact figures for each, the Smalltalk device drivers for things like network cards, the mouse, the keyboard and so on were at most 100 methods and 300 LOC, and most methods were one-liners. Hardware interrupts are served by the image. The slides for the presentation themselves ran in a SqueakNOS image with no OS under it. What is more, at the end of the presentation Gerardo told us he was going to write a device driver for the IDE hard drive controller at port 0x1F0 and read data from the hard drive. In about 5 minutes he was done typing something like 16 accessor-like methods, and then invoked a read for sector 0 from drive 0, head 0, with command 0x20. Back came 256 16-bit shorts, and the last 2 bytes were 0xAA55 --- just what is expected of a boot sector.
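The signature check at the end is easy to reproduce. A small sketch (in Python, not the SqueakNOS code) of validating the 0xAA55 signature of a 512-byte sector read back as 256 little-endian 16-bit words:

```python
import struct

def is_boot_sector(words):
    """Check the classic boot-sector signature: the 256th 16-bit word of a
    512-byte sector must be 0xAA55 (stored on disk as the bytes 0x55, 0xAA,
    since x86 is little-endian)."""
    return len(words) == 256 and words[-1] == 0xAA55

# Simulate what the IDE read command returned: 512 bytes unpacked into
# 256 little-endian shorts, with the signature in the last word.
sector_bytes = bytes(510) + b'\x55\xAA'
words = list(struct.unpack('<256H', sector_bytes))
assert is_boot_sector(words)
assert not is_boot_sector(words[:-1] + [0x0000])
```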

Something like 15 years ago I wrote a program to detect and mark bad and near bad clusters on FAT partitions. I did that in Pascal, and I can't tell you the trouble I had to go through to get that working right. Here, we have a device driver for the controller written in 5 minutes...

Then came the closing ceremony. While the votes for best talk were tallied, we ran a prize draw among the registered people still present. Lo and behold, the first person that came up was my sister! Sure enough, she was registered, but this was a ~1/200 chance... oh well, so we skipped her (besides, she wasn't there at the moment) and continued on. In this way we handed out 9 books: 3 books given by Pragma (one of our sponsors), and a set of 6 of my books (3 hashing books, 3 mentoring course books). As I stated at the conference, I'd like to thank ESUG for purchasing several of my books for their conference in Amsterdam.

Then came the prizes for the coding contest. Guillermo Amaral and Guido Chari claimed the iPod Touch given by Instantiations, and Diego Geffner claimed the MP4 player given by GeoAgris. Since the winners would have had to divide (I mean share) the iPod Touch, the winning pair also claimed some of the bookstore gift certificates given by Snoop Consulting.

Finally came the prizes to the best two talks of the conference. In second place came SqueakNOS by Gerardo Richarte, which was awarded half of the remaining bookstore gift certificates. In first place came Gabriel Honoré's Commodore 64 emulator, and he received the remainder of the bookstore gift certificates plus the original August 1981 Byte magazine donated by Diego Roig-Seigneur.

Well, we in the organization committee think this year's conference went quite well, but we are also sure there are things to improve. If you have any comments to make, please send them our way at smalltalks2008 at gmail dot com. We look forward to hearing from you.

See you next year at Smalltalks 2009!

Smalltalks 2008, Friday notes

Here are the notes for Friday at the conference. The day began with Hernán Wilkinson's Key Design Decisions presentation, in which he provocatively addressed a number of issues we are all very familiar with. He made the case for immutability of domain objects, for full initialization of objects before they can be used (that is, by the instance creation method as opposed to by the users of the class), and for a number of other practices. This made such an impact that the presentation was heavily discussed over lunch.

Then we saw Claudio Acciaresi and Nicolás Butarelli's work on a thorough refactoring of the Collection hierarchy using Traits. It is interesting that while they saw several advantages to this (such as the elimination of code duplication and the possibility to create more diverse collection classes easily), in the end they commented it was not a slam dunk as Traits do come with their own complexity.

After the break, Carlos Ferro showed how ExpertCare (initially described by Dan Rozenfarb on Thursday) manages to make good question suggestions for telephone operators receiving health-related phone calls. For example, it would be good if the system helped determine, in as few questions as possible, when to send an ambulance because of an emergency. It is not obvious how to do this because, as soon as one examines symptoms, the body systems they affect, and the syndromes they may imply, the choices are not clear cut, much less evident. Nevertheless, the strategies shown by Carlos allowed ExpertCare to detect an emergency with a median of 1 question, and with a maximum of slightly over 2 questions on average.

Then came Guillermo Amaral's talk on percolation. He did not just implement a few algorithms. Rather, he built a tool to model solids as a lattice of points connected by arbitrary edge patterns, and then used several algorithms and procedures to determine the probability with which the material thus defined would allow liquids to pass through. Most impressive. To begin with, the tool was graphical and included visual representations of the lattices, the connecting edges, and the probability graphs (including choosing the color of the curves and combining graphs)... a lot of serious work, which led to the verification of possibly original conjectures in certain scenarios.

After lunch we had a persistence block. We started with Esteban Lorenzano, Mariano M. Peck and Germán Palacios' talk on SqueakDBX, an interface to the open source database library OpenDBX. The idea of OpenDBX is to allow access to a multitude of relational databases via a common interface. SqueakDBX is the Squeak interface to OpenDBX, and as such Squeak can now talk to Oracle, MySQL, PostgreSQL, and so on. This works so well that SqueakDBX is faster than Squeak's own native driver for PostgreSQL.

Something that will be added to SqueakDBX is support for Glorp, which was quite fitting because the presentation naturally blended into Alan Knight's talk on Glorp. We saw many of the features that make Glorp nice. For example, the mapping model allows mapping an object to a row, inlining an object in the row of another object (e.g. for speed), or saving an object across many tables. Glorp can build queries by examining blocks such as [:person | person name = 'Alan'], and much more complicated block expressions are possible. On top of that, Alan described Hyperactive Records, which are used in Cincom's WebVelocity product.

After that, I ran the Coding Contest's final round. It went very well because this time, unlike last year, I had no trouble with multiple active HTTP servers in the same image. Maybe it's because the lesson I learned in 2007 made me put in a number of automatic measures to prevent that from happening...

  • Image packaging stops any existing HTTP server forcefully. If after a GC there are still instances of them, image building fails.
  • The packaged image startup sequence kills any existing HTTP server forcefully again.
Also, I was quite happy that the participants did not find any bug in the contest. This makes the finals stressful for the organizer as well: basically the finals are a software release, and the thing has to work. If that means you get to fix the bug right then and there, too bad. Fortunately it went smoothly.

And the participants? Their reactions to the changes in the finals were varied and interesting. Some were getting positive scores within 15 minutes. One finally got a positive score in the last minute. One cursed in frustration :). The results are as follows.
  1. Guillermo Amaral and Guido Chari, with over 26 million points.
  2. Hernán Wilkinson, with over 2 million points.
  3. Diego Geffner, with no certificate.
Congratulations to all of them!... although note that Hernán Wilkinson cannot get a prize due to being in the Organization Committee :). Therefore, the 2nd prize will be awarded to Diego Geffner.

See you in a bit for the last day of the conference!

Thursday, November 13, 2008

Smalltalks 2008, Thursday notes

Whoa, it's been an almost 20-hour day already, and I hadn't been able to sleep last night either. Organizer jitters, most likely. Here are some notes on Thursday's happenings at Smalltalks 2008.

The conference's opening was again in the hands of Hernán Wilkinson. The main point was that this event happens because our community shares the enjoyment of doing what we do. This goes from the UAI offering the conference's venue, to the sponsors offering the prizes for the contest (the finals are tomorrow!), to Diego Roig-Seigneur donating an original August 1981 Byte magazine to be given for the best presentation at the conference. There were plenty of jokes, and I was the receiver of one :)... since I will be the referee in the coding contest's finals, I became the infamous William Boo! We shall see if justice is served in the final round now...

Monty's keynote showcased a complete list of successful applications (where successful is defined as 10+ years in production) written in Smalltalk that have a profound effect on our lives, whether we are aware of them or not. Besides OOCL's container shipping application, Progressive's auto insurance rating, Adventa's chip manufacturing application, Key Technologies' food sorting machinery, Florida's power utility call center running in a state that goes through hurricanes every year, and many others that I do not recall, there was again mention of one that I remember fondly... JP Morgan's Kapital.

I really enjoyed Dan Rozenfarb's talk on his expert system for handling patients at a medical call center. He went through many of the attempts that did not work, which made the final achievements --- e.g. 99.3% correct seriousness evaluation --- all the more impressive (if a call is incorrectly assessed, the software can suggest not sending an ambulance when one is definitely needed).

Next was a follow up on Zafiro by Andrés Poncelas. Zafiro is InfOil's application framework, which was presented at Smalltalks 2007. With the new improvements, InfOil uses Zafiro as a means to easily express domain objects and their relationships in their applications, which are used to manage a significant fraction of all the oil and gas produced in Argentina.

After lunch, we saw Gabriela Arévalo's presentation on Moose, an application designed to enhance the way in which developers obtain a high-level view of software they do not yet know intimately. It was interesting to see how the 7 +/- 2 rule applies everywhere, even to the diagrams Moose produces --- for example, one can use colors in Moose to represent different metrics obtained from the code, but after 5 colors the diagrams become difficult to understand, because it is hard to concentrate on so many colors at the same time.

Gabriel Cotelli showed Mercap's reflective report tool, which is used in XTrade to allow power users to produce ad hoc reports in a controlled way and without having to write Smalltalk scripts.

Bruno Brassesco came from Uruguay to show how he used Dolphin to deal with a really convoluted XML, WebServices, .NET and C# development environment used to produce applications for banks in Central America. The core problem was that they had to use a C# framework that executed a WebServices stack. The stack was structured in 3 layers (the presentation layer, the business layer, and the system primitives layer) on top of a set of .NET libraries that called the back end. Each layer called services in the same layer or a lower one, starting from an original invocation from HTML. Eventually the back end was called, and the results went through XML transformation rules until the last transformation produced HTML from XML. The trouble began whenever there was a problem somewhere. Let's say we know the presentation level service didn't finish. The developers would insert debug steps before and after each service invocation to determine which service in the presentation layer failed. The debugger action of these debug steps? Send the developer an email.

Yes, you read that right.

No, I am not kidding.

So if you were debugging a web service with 3 service sub invocations, you'd add 4 email-sending debug steps. Let's say you find that service number 2 is broken (because you only get two emails: before step 1 and before step 2, and then the thing crashes). That's great, because now you have to go to the business model XML file, find the service definition for service number 2, and add more email debugging steps to that to see where the problem was.

Eventually you have 3 huge XML files open, with lots of copy/paste going on, missing service definitions, unsent services, broken XML, etc etc etc. Egad. Bruno showed us a definition of a service with well in excess of 100 sub invocations. And each time you make a change, you have to kill the server, recompile all the files, upload the files, restart the whole thing, and try your test case again by hand, hoping you'll be able to reproduce the failure. And, oh by the way, all without proper file locking in an environment with 150-200 developers.

What he did was use Dolphin to create an IDE for all this mess. Since the technology could not be changed, at least he was able to make the work far more bearable. His IDE brought things like senders and implementors to the XML files, automatic email probe management, and detection of problems before they actually happened, like missing service declarations, broken files, etc. It was a sorry state of affairs for those in the development project... attrition was horrendous, and developers lasted 1.5 projects. However, the mere insanity of the development process they had been forced to use had us laughing out loud many times during Bruno's presentation.
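To give a flavor of what "senders and implementors for XML" might mean, here is a minimal sketch. The XML element names (service, invoke) and the layer file are invented for illustration; Bruno's actual file formats and Dolphin implementation were surely different.

```python
# Hypothetical sketch of the idea behind Bruno's IDE: index XML service
# definitions, then answer "who sends this service?" and "which invoked
# services have no definition?" before anything fails at runtime.
# The <service>/<invoke> schema below is invented for this example.
import xml.etree.ElementTree as ET

LAYER_XML = """
<layer name="presentation">
  <service name="showAccount">
    <invoke service="fetchAccount"/>
    <invoke service="renderHtml"/>
  </service>
  <service name="renderHtml"/>
</layer>
"""

def index_services(xml_text):
    """Map each defined service name to the list of services it invokes."""
    root = ET.fromstring(xml_text)
    return {
        svc.get("name"): [call.get("service") for call in svc.findall("invoke")]
        for svc in root.iter("service")
    }

def senders_of(index, target):
    """All services that invoke `target` -- a 'senders' browser for XML."""
    return [name for name, calls in index.items() if target in calls]

def missing_implementors(index):
    """Invoked services with no definition anywhere: the kind of broken
    reference worth flagging before killing and restarting the server."""
    invoked = {c for calls in index.values() for c in calls}
    return invoked - set(index)

index = index_services(LAYER_XML)
print(senders_of(index, "renderHtml"))   # ['showAccount']
print(missing_implementors(index))       # {'fetchAccount'}
```

Even a toy index like this shows why the tool paid off: questions that previously required grepping three huge XML files by hand become one dictionary lookup.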

Then, Fernando Olivero and Juan Matías Burella showed us the master's thesis work they are preparing: using Croquet to assist in teaching object oriented programming. To do this, they have designed a language which is actually a subset of Self. They plan to introduce programming students to this language first, then progressively introduce additional concepts and techniques, ending with students learning something like Smalltalk. Most interesting!

After the last break, we saw roadmap presentations for Cincom and GemStone, given by Alan Knight and James Foster respectively. Not everybody is aware of the fact that Alan is an actual soccer referee, and since he knows his soccer, he added several photos and videos to his presentation --- one such video played in an ActiveX control embedded via an innocent-looking windowSpec inside a VisualWorks window!

James Foster followed with an impeccable roadmap presentation including several demos of working technology such as Seaside and a new scaffolding framework (see here). I write impeccable quite purposefully, as GemStone presentations are always flawless. This, despite the fact that there was a nasty snafu and James' laptop ended up being left behind in the US by accident! But no worries: with little to no time to recover, there was essentially no evidence that anything had gone wrong. Such nice guys, these GemStone folks :).

After that, we had dinner in San Telmo and now finally this long day has come to an end. To be continued tomorrow...

Wednesday, November 12, 2008

Round numbers

Well, we're just a few hours away from the Smalltalks 2008 conference, and we have reached 230 registrations. This is a good number (sure, just as 10, haha). See you tomorrow!

Tuesday, November 11, 2008

Smalltalks 2008 coming up

The Smalltalks 2008 conference will begin this Thursday. The schedule is packed with high quality talks and very interesting topics. Just as last year, the response from the community has been amazing. We're above 220 registrations now, and they are still coming in.

One thing that is different from last year is that registration is needed to guarantee entry. Registration at the conference's website is now closed, so if you have not registered yet, please do so by sending an email to smalltalks2008 at gmail dot com. The rest of the information, such as the schedule and the information for the social dinner event on Thursday night, is available from the conference's website.

See you on Thursday!

Sunday, November 09, 2008

Smalltalks 2008 Coding Contest Qualifier Round Deadline

The qualifier round of the Smalltalks 2008 Coding Contest, with final round prizes including an iPod Touch, ends in a little less than 10 hours. If you have not done so yet, this would be a good time to send your score certificate to the contest's mailing list.

Good luck!

Tuesday, November 04, 2008

Comment about design

Recently I saw this post that says that the ctrl+3 shortcut in Eclipse is a display of its good design. All I saw was some drop down with options in a screenshot. Since I could not leave a comment (the thing didn't work), I am commenting here.

It is not enough to say "this is good". One also has to be able to say why. For example, why is it that ctrl+3 is good design? It is good compared to what?

Without any rationale, such propositions become unfalsifiable, and then it is not rational discourse anymore.

So, my friends, could anyone tell me why ctrl+3 producing that screenshot is an example of good design? Maybe there's something to learn, and it will be easier to do that if it is spelled out explicitly.