Sunday, June 29, 2008

The Fibonacci showdown: F(a+b) vs Dijkstra

The comment thread in this post has over 30 entries now! One question that came up in those discussions was: why not cache the Dijkstra recursion so it runs really fast, and how does that compare to the F(a+b) calculator?

For a fixed F(k), it seemed that caching Dijkstra made it run faster than F(a+b), even when F(a+b) was allowed to cache everything. So, to determine whether this is true in general, I did a random F(k) test.

The task is to calculate the Fibonacci numbers F(k), with k taken from this array in the order shown:

#(1702587 1848601 135328 1144978 183508 1421520 734333 200747 1750499 1795081 307556 1726299 1911101 1997485 521440 1954962 979243 1793538 1633868 323412 1866722 612481 1155724 441908 1135563 1338921 815458 520627 873849 432030)
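An aside on the generator behind these indices: Park-Miller is just a Lehmer multiplicative congruential generator with a = 16807 and m = 2^31 - 1. A minimal Python sketch with the standard constants follows; how VW's MinimumStandardRandom seeds itself and scales its output into a given range is not shown here and would be an assumption.

```python
def park_miller(seed):
    """Lehmer generator x -> (a * x) mod m with the 'minimum standard'
    constants a = 16807, m = 2**31 - 1."""
    m, a = 2 ** 31 - 1, 16807
    x = seed
    while True:
        x = (a * x) % m
        yield x

g = park_miller(1)
next(g), next(g)   # (16807, 282475249)
```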

These numbers, between 0 and 2049151, were generated using the MinimumStandardRandom (Park-Miller) RNG in VW 7.4.1. So here we go. The first run time numbers are:

  • Non cached Dijkstra: 191.4 seconds.
  • Cached Dijkstra: 162.5 seconds.
  • Cached F(a+b): 158.2 seconds.
  • Cached and Greedy F(a+b): 156.2 seconds.
The cached Dijkstra and the greedy F(a+b) calculators cache every F(k), so they would take no time at all if asked to do this again. The non cached Dijkstra time will not change either, precisely because nothing is cached. Therefore, it only makes sense to run a second pass for the cached F(a+b) calculator, which caches only the F(k) triplets around power-of-two indices.
  • Cached F(a+b): 148.2 seconds.
So, performance-wise, if one is going to be calculating random F(k) numbers, the best choice is the greedy F(a+b) calculator. The next best choice is the cached F(a+b) calculator. After that comes the cached Dijkstra calculator, and finally the non cached Dijkstra calculator.

At this point, it may be useful to examine how many entries the cached calculators have in their caches.
  • Cached Dijkstra: 1116 entries.
  • Cached and Greedy F(a+b): 1117 entries.
  • Cached F(a+b): 107 entries.
So, the cached F(a+b) calculator beats Dijkstra's cached calculator using 10x fewer cache entries. While doing so, it also manages to stay very close to the greedy F(a+b) calculator.

As such I'd be inclined to say that, for general F(k) calculation use, the cached F(a+b) is a good first choice.

Update: I also tried HAKMEM item #12, but it seems considerably slower than both F(a+b) and Dijkstra's method.

Printing very large integers faster

The Fibonacci comment thread included some comments about how much time it takes to print very large integers in base 10. I went through the existing implementation and, using F(1953125) as a test case for the profiler, I managed to get printString about 8% faster than the base VW code. The same happens with F(390625).

The mechanism I used is basically the same as the existing one: split the receiver in half and recurse until you hit small integers. What I did, however, was to tune the integer division process, hold on to all the powers of 10 involved, use the stack as I did with CharacterArray>>match: in the mentoring course book, and abuse the heck out of polymorphism.

I think it's great, but certainly it's not your "margin note" implementation. Here we go...

    Integer>>printBaseTenOn: aStream

      self < 0
        ifTrue:
          [aStream nextPut: $-.
          0 - self printPositiveBaseTenOn: aStream]
        ifFalse: [self printPositiveBaseTenOn: aStream]

    Integer>>printString

      | answer |
      answer := (String new: 16) writeStream.
      self printBaseTenOn: answer.
      ^answer contents

    LargeNegativeInteger>>printBaseTenOn: aStream

      aStream nextPut: $-.
      0 - self printPositiveBaseTenOn: aStream

    LargePositiveInteger>>printBaseTenOn: aStream

      self printPositiveBaseTenOn: aStream

    LargePositiveInteger>>printPositiveBaseTenOn: aStream

      | initialChunkSize initialChunks |
      initialChunkSize := 10 raisedTo:
        (SmallInteger maxVal log: 10) truncated // 2.
      initialChunks := OrderedCollection
        with: initialChunkSize
        with: initialChunkSize squared.
      self
        printFirstChunkPositiveBaseTenOn: aStream
        chunkSizes: initialChunks
        chunkIndex: 2

    LargePositiveInteger>>printFirstChunkPositiveBaseTenOn: aStream
      chunkSizes: chunks
      chunkIndex: chunkIndex

      | chunkSize |
      chunkSize := chunks at: chunkIndex.
      chunkSize highBit * 4 < self highBit
        ifTrue:
          [chunks size = chunkIndex
            ifTrue: [chunks add: chunkSize squared].
          self
            printFirstChunkPositiveBaseTenOn: aStream
            chunkSizes: chunks
            chunkIndex: chunkIndex + 1]
        ifFalse:
          [| firstChunk |
          firstChunk := self // chunkSize.
          firstChunk
            printFirstChunkPositiveBaseTenOn: aStream
            chunkSizes: chunks
            chunkIndex: chunkIndex - 1.
          self - (firstChunk * chunkSize)
            printPositiveBaseTenOn: aStream
            chunkSizes: chunks
            chunkIndex: chunkIndex - 1]

    LargePositiveInteger>>printPositiveBaseTenOn: aStream
      chunkSizes: chunks
      chunkIndex: chunkIndex

      | chunkSize |
      chunkSize := chunks at: chunkIndex.
      chunkSize highBit * 4 < self highBit
        ifTrue:
          [chunks size = chunkIndex
            ifTrue: [chunks add: chunkSize squared].
          self
            printPositiveBaseTenOn: aStream
            chunkSizes: chunks
            chunkIndex: chunkIndex + 1]
        ifFalse:
          [| firstChunk |
          firstChunk := self // chunkSize.
          firstChunk
            printPositiveBaseTenOn: aStream
            chunkSizes: chunks
            chunkIndex: chunkIndex - 1.
          self - (firstChunk * chunkSize)
            printPositiveBaseTenOn: aStream
            chunkSizes: chunks
            chunkIndex: chunkIndex - 1]

    SmallInteger>>printPositiveBaseTenOn: aStream

      self < 10
        ifTrue: [aStream nextPut: (Character digitValue: self)]
        ifFalse:
          [self // 10 printPositiveBaseTenOn: aStream.
          self \\ 10 printPositiveBaseTenOn: aStream]

    SmallInteger>>printFirstChunkPositiveBaseTenOn: aStream
      chunkSizes: chunks
      chunkIndex: chunkIndex

      self printPositiveBaseTenOn: aStream

    SmallInteger>>printPositiveBaseTenOn: aStream
      chunkSizes: chunks
      chunkIndex: chunkIndex

      | chunk toGo |
      chunk := chunks at: chunkIndex + 1.
      toGo := self.
      [chunk = 1] whileFalse:
        [chunk := chunk // 10.
        aStream nextPut: (Character digitValue: toGo // chunk).
        toGo := toGo \\ chunk]
For some reason, the switch from printFirstChunk to normal printing makes me think about electron energy levels.
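For readers without a Smalltalk image handy, here is the core halving scheme transliterated into Python. This is only a sketch of the idea: it leaves out the chunk tuning, the stream protocol, and the polymorphism described above.

```python
def to_decimal(n):
    """Decimal digits of a nonnegative integer by splitting at powers
    10**(2**i) and recursing, instead of dividing one digit at a time."""
    if n < 10:
        return str(n)
    powers = [10]                         # powers[i] == 10 ** (2 ** i)
    while powers[-1] ** 2 <= n:
        powers.append(powers[-1] ** 2)
    def chunk(n, i):
        # digits of n, zero padded to exactly 2 ** (i + 1) places
        if i < 0:
            return str(n)                 # n < 10 here: a single digit
        hi, lo = divmod(n, powers[i])
        return chunk(hi, i - 1) + chunk(lo, i - 1)
    return chunk(n, len(powers) - 1).lstrip('0')

to_decimal(2 ** 64)    # same digits as str(2 ** 64), obtained by halving
```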


I just came back from looking at the sky in the middle of nowhere at 7000 feet of altitude, no moon. Fantastic. And... I did an experiment.

Pick a star that is brighter than the average, and focus your eyes on it. Do not move your eyes and do not blink. After a while, everything starts fading away. Maybe you lose track of your star. But keep still and do not blink. Hold that for about 30 seconds. Then, without moving your eyes, blink.

You are now aware of many more stars than you were before starting. The dark blue looks lighter, and you can also feel as if the sky were really a fabric rich with bright dots. It almost appears to be a piece of cloth with texture.

Then, your eyes move ever so slightly and you're back to "normal".

Why does this happen? Is the fade out some "dead pixel removal" done by the brain, which later realizes what's going on and finally lets you see the long exposure photo in your retinas?

In any case, it can be quite a surprise to get, for the briefest of moments, a much clearer picture of how many stars are really out there. Beautiful.

Saturday, June 28, 2008

Assessments status report

On Tuesday, I redid the SUnit Benchmarks support. It is much better now. I also found several minor bugs which were fixed. 130 classes, 816 methods, ~6.28 methods per class.

On Wednesday there was a mass deletion of 10 methods.

Nothing appeared to happen until today (Saturday). Then things changed.

First, an annoyance about exception handling got fixed, and the method count went down to 804 because I deleted dead code. Then I fixed another annoyance (with a pair of hash/= messages, no less), and the method count went back up to 806.

After these warmup exercises, I implemented SUnitBenchmarks in Assessments, so now there are AssessmentsBenchmarks too. This caused a lot of new code to be created, and this shows in the statistics: 141 classes and 886 methods. Interestingly, the average methods per class is still at ~6.28.

More to come...

More Fibonacci

Note: you may want to read the extensive comment chain as it has much more information than the original post.

So I went looking around to see what other people had done to calculate Fibonacci numbers exactly. I saw plenty of naive implementations (F(n) = F(n-1) + F(n-2)). Wikipedia has plenty of sample code, but none seemed interesting the first time I went through the page (although it does have valuable things, see below). Wolfram (Mathematica) didn't say much (or I missed it if it did). Then I ran into this page, and Dijkstra's recursion:

  • F(2n) = (2 F(n-1) + F(n)) * F(n)
  • F(2n-1) = F(n-1)^2 + F(n)^2
Ok Dijkstra, you're on now. Which is faster: Dijkstra's recursion, or the cached split recursion calculator based on F(a+b)?

As it turns out, Dijkstra's recursion is extremely fast. However, it is not as fast as the cached calculator can be. And in fact, the presence of the cache is the difference: by design, Dijkstra's recursion is difficult to cache because the indices that would be cached change with every F(k). On the other hand, by using F(a+b), one can choose which indices will be cached in advance, and therefore it is possible to use the same cache for all F(k).
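For concreteness, the recursion can be sketched in a few lines of Python as a "fast doubling" computation on the pair (F(n-1), F(n)); uncached, as just discussed, so the pairs visited depend on n:

```python
def fib_pair(n):
    """(F(n-1), F(n)) via Dijkstra's recursion:
       F(2k)   = (2 F(k-1) + F(k)) * F(k)
       F(2k-1) = F(k-1)**2 + F(k)**2"""
    if n == 1:
        return (0, 1)
    k = (n + 1) // 2
    a, b = fib_pair(k)            # a = F(k-1), b = F(k)
    odd = a * a + b * b           # F(2k - 1)
    even = (2 * a + b) * b        # F(2k)
    return (odd, even) if n == 2 * k else (even - odd, odd)

def fib(n):
    return 0 if n == 0 else fib_pair(n)[1]

fib(30)    # 832040
```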

Measurements show that F(15625) is calculated by the cached calculator about 2x faster than Dijkstra's method --- but only after the cache is filled. With F(390625), this advantage is reduced to a factor of about 1.4x.

How about F(1953125)? Dijkstra's method crunches this monster of a number, complete with its 408179 decimal digits, in 16.5 seconds. The cached calculator, after the cache is filled, does the same in 14.7 seconds. It appears that, as the Fibonacci index increases, the two methods become more and more similar. Maybe there is an asymptotic advantage for Dijkstra's method (for example, multiplication is most efficient when squaring), although it's hard to tell at first sight.

Where the cached calculator does leave Dijkstra's recursion in the dust is when calculating F(k) mod: m. There the caching proves decisive, and F(15625) reveals a runtime difference of 12x.

Fun, isn't it? :)

PS1: at first I thought Wikipedia's alternate expressions for Dijkstra's recursion (under J, double recursion) would be faster, but measurements reveal that they are slightly slower than Dijkstra's original recursion.

PS2: unhappy with a couple of things, I made a few changes to the cached calculator so its execution context is more aware of what is going on. This results in a speedup of a few percent compared to the code I posted earlier.

Friday, June 27, 2008


These days it seems we just censor ourselves whenever we run into difficult questions. But not this guy. Thank you! I had been meaning to write something like this, and you've done it better than I had planned.

With that, I will add my own bit to this. So every day now we hear that OMG gas prices are so high and how terrible that is. Really... ok, so assuming 80 million households, and $1k per household a year due to the increase in gas, we find that the gas price increase costs $80 billion a year.

That's a lot of money, to be sure. So why don't people say a word when the federal budget runs a deficit 5 times that much, some $400 billion, for several years in a row? That is $5k a year per household on top of everything else. Clearly it is not insignificant. Since the median salary is somewhere around $40k a year, that's equivalent to spending about 12% of the median yearly salary --- without actually having it. And OBTW, that's before taxes, so the percentage after taxes is actually larger.

But besides the precise amount down to the last cent, where do you think that money comes from? Either it's debt, which has to be paid back with interest just like any other credit card (and we all seem to know the credit card lesson by now), or, even better, we just print money and then we get inflation. In other words, every price goes up but our salary does not.

Isn't that great?

I think it's time we significantly reprioritize our problems and act on our findings.

Thursday, June 26, 2008

Source code for FibonacciCachedSplitRecursionCalculator

As requested, here is the source code for the fast Fibonacci calculator. The class has an instance variable called cache, accessed via accessors. The class side has new implemented as a singleton. There is also fibonacci: on the class side, which redirects as ^self new fibonacci: anInteger.

    FibonacciCachedSplitRecursionCalculator>>initialize

      super initialize.
      self cache: self newCache

    FibonacciCachedSplitRecursionCalculator>>newCache

      ^Dictionary new
        at: 0 put: 0;
        at: 1 put: 1;
        at: 2 put: 1;
        yourself

    FibonacciCachedSplitRecursionCalculator>>fibonacci: anInteger

      ^self cache
        at: anInteger
        ifAbsent: [self privateFibonacci: anInteger]

    FibonacciCachedSplitRecursionCalculator>>privateFibonacci: anInteger

      | firstHalf secondHalf answer |
      firstHalf := self firstHalfIndexFor: anInteger.
      secondHalf := anInteger - firstHalf.
      answer := (self fibonacci: firstHalf + 1) * (self fibonacci: secondHalf)
        + ((self fibonacci: firstHalf) * (self fibonacci: secondHalf - 1)).
      self cacheFibonacci: anInteger withValue: answer.
      ^answer

    FibonacciCachedSplitRecursionCalculator>>firstHalfIndexFor: anInteger
      "For example, 17 should be split into 9 + 8 instead of 16 + 1"

      ^1 bitShift: (anInteger - 2) highBit - 1

    FibonacciCachedSplitRecursionCalculator>>cacheFibonacci: anIndex withValue: aValue

      (self isCacheableIndex: anIndex) ifFalse: [^self].
      self cache at: anIndex put: aValue

    FibonacciCachedSplitRecursionCalculator>>isCacheableIndex: anIndex

      ^(anIndex + 1) highBit > anIndex highBit
        or: [(anIndex - 2) highBit < anIndex highBit]
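As a quick illustration of isCacheableIndex:, here is the same highBit test in Python (int.bit_length plays the role of highBit); it accepts exactly the triplet indices adjacent to powers of two:

```python
def cacheable(n):
    # isCacheableIndex: transliterated; true for 2**k - 1, 2**k, 2**k + 1
    return ((n + 1).bit_length() > n.bit_length()
            or (n - 2).bit_length() < n.bit_length())

print([n for n in range(3, 40) if cacheable(n)])
# [3, 4, 5, 7, 8, 9, 15, 16, 17, 31, 32, 33]
```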


Wednesday, June 25, 2008

Fun with Fibonacci

A couple nights ago I had the chance of showing Smalltalk to someone who had never seen it before. As it happens, we went over Fibonacci stuff. Why not, it's so much fun in Smalltalk because there's no overflow and things...

Some 10 years ago, I did Fibonacci stuff in Squeak, and I remembered that at one point I had abused the identity

F(a+b) = F(a+1) * F(b) + F(a) * F(b-1)

into a super fast Fibonacci number calculator because if F(k) had to be calculated, then the above could be used with a = k // 2 and b = k - a.
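The identity is easy to sanity check numerically against a naive reference implementation; a small Python sketch:

```python
def naive_fib(n):
    """Plain iterative F(n), used only as a reference."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# F(a+b) = F(a+1) * F(b) + F(a) * F(b-1), spot checked over a small grid
for a in range(1, 12):
    for b in range(1, 12):
        assert naive_fib(a + b) == (naive_fib(a + 1) * naive_fib(b)
                                    + naive_fib(a) * naive_fib(b - 1))
```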

In addition, one can take things further and make the whole thing run even faster. For example, an option is to split k along powers of two while caching the value triplets F(2^n - 1), F(2^n) and F(2^n + 1). But, since I did that 10 years ago, the code was horrible and because of that I had been neglecting to merge it into my dev image.

So today, and given that I had done Fibonacci stuff again the other night, I decided enough was enough and I reimplemented the calculators. I am happy to report that

FibonacciCachedSplitRecursion fibonacci: 15625

evaluates in 4 milliseconds starting from scratch. Also,

FibonacciCachedSplitRecursion fibonacci: 390625

evaluates in about 1.1 seconds starting from scratch, and about twice as fast the second time. Implementing it beautifully (including a fantastic check to detect triplet indices) took about 20-30 minutes.

Smalltalk is great.

PS1: did you know that F(5^k) = 5^k (mod 100)?
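PS1 can be checked by brute force modular arithmetic; a small Python sketch (note that 5^k mod 100 is simply 25 for every k >= 2):

```python
def fib_mod(n, m):
    """F(n) mod m by plain iteration; fine for these small indices."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

all(fib_mod(5 ** k, 100) == 5 ** k % 100 for k in range(1, 8))   # True
```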

PS2: I wrote the tests in SUnit and ran them in Assessments :).

Monday, June 23, 2008

Assessments today

I was at the point where I had to avoid the sun, and therefore I switched to indoor activities. Assessments now runs SUnit Benchmarks properly. No rough edges remain, and I also caught a few bugs which I fixed. The stats for today are below:

  • 128 classes.
  • 779 methods.
  • ~6.09 methods / class.

More Assessments progress

In about 2 hours I got Assessments to handle SUnit Benchmarks just like it handles SUnit: without any changes to existing code required. Some rough edges remain, particularly with print strings and that kind of stuff. I do not expect major trouble in what is left for this task to be completed. After this, it's just SUnit Based Validation to go and then I am done.

Here are the metrics du jour:

  • 128 classes
  • 770 methods
  • ~6.02 methods / class on average
All is going well.

Sunday, June 22, 2008

An update on sqrtRounded

Nicolas Cellier posted the following solution to the sqrtRounded question from a while ago.

    Integer>>sqrtRounded

      | x |
      x := self sqrtTruncated.
      ^x squared + x - self >= 0
        ifTrue: [x]
        ifFalse: [x + 1]

That is an interesting approach indeed. I did something similar to Nicolas' initial derivations... here it is.

    Integer>>sqrtRounded

      | sqrtFloor |
      sqrtFloor := (self * 4) sqrtFloor.
      ^sqrtFloor // 2 + (sqrtFloor \\ 2)

This can also be rewritten so that there are no multiplications or divisions. Isn't it beautiful? :)
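In Python terms, with math.isqrt standing in for sqrtTruncated/sqrtFloor, the two approaches look like this; note that the second really does need only a shift and a bit test in place of the multiplication and divisions:

```python
from math import isqrt

def sqrt_rounded_cellier(n):
    # Nicolas' version: round up unless x*x + x - n >= 0
    x = isqrt(n)
    return x if x * x + x - n >= 0 else x + 1

def sqrt_rounded(n):
    # isqrt(4n) == floor(2 * sqrt(n)); its low bit decides the rounding
    s = isqrt(n << 2)
    return (s >> 1) + (s & 1)

# There are no ties to worry about: sqrt(n) = x + 1/2 would require
# 4n = (2x + 1)**2, but 4n is never an odd square.
```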

I put this as an additional exercise in the hash book. There remain several issues to address, even if one is willing to believe the code is correct.
  • I gave no proof that it is correct. I can say however that if the code is beautiful, then the proof is doubly so.
  • With floating point numbers, the rounding behavior for numbers of the form k + (1/2) is an issue. Should the FPU round always up, always down, at random, or by some other criterion in those cases? This is a thorny issue that appears nowhere in the code. Why?

Seriously, who needs a DVD player on a long drive... I don't get it.


I can't believe I am on vacation. Finally, I am surrounded by still snowy mountains, pines, lakes, peace and quiet. At last, a while in which I will not think about integers, number theory, a coding contest, Assessments, books and all that fun stuff.

So what room number did I get at my new hotel?


It figures.

Friday, June 20, 2008

At STS 2008...

Now on the air from STS 2008... just a few comments here and there as I do not have lots of time.

  • Georg Heeg on finding Bach's house: amazing, fantastic. What a great story!
  • Leandro Caniglia on user changes: an illustration of how great it is to find the correct simple idea to take to its last consequences, and how fruitful this can be.
  • GemStone BOF: these guys simply rock. Maglev rocks. Their performance tuning rocks. GemStone itself seems to set the bar for everything else.
It feels bad not to talk about everything else that has been going on, but on the other hand I have some updates on my own work.
  • Assessments now supports SUnit test resources.
  • I got to talk about Minesweeper again, and I got some more ideas that now I need to track down. It is not fair, it really isn't. There is not enough time to think about everything!
Finally, I've been hearing very nice things about the books. This makes me happy because it is an independent confirmation that they are actually useful. Hopefully I will also hear about the errors I am sure they contain, once those become evident.

Gotta go!

Tuesday, June 17, 2008

Drove to Reno today

I drove for about 8 hours today. The trip was from home to the Smalltalk Solutions 2008 conference in Reno.

Sometimes I can't help wondering about the actual need to have things like DVD players in minivans and things like that. I guess people get bored too easily these days. To me, this trip was an opportunity to have fun.

I saw the changing scenery of roads I hardly ever travel on. I saw how the coast vegetation transformed into desert, then into the California valley, and then into the mountains again. Besides whistling to the new set of car CDs I made, calculating MPH averages every hour, setting the cruise control allowing for the 2% speed measuring error in my car, converting the thermometer readout from Fahrenheit to Centigrade and all that old school stuff, I also had a lot of time to think about an assortment of more involved matters.

One of those was integer arithmetic. Why not, right? And I am very happy I did that, because I realized how to implement Integer>>sqrtRounded using only Integer>>sqrtFloor. I did the proof in the car, and I implemented and tested it a while ago. I think the solution is beautiful... can you see how?

Also, I got frustrated once more because I could not find a beautiful and cost effective implementation of Point>>hash that I think will pass all the Hash Analysis Tool tests with flying colors. At least for me, the problem is harder than it seems. If you find a solution that is significantly less expensive than resorting to Bob Jenkins' lookup3 or something of that nature, please let me know.

I also thought about Assessments and exactly how to do SUnit prerequisites. Things should move forward on this soon.

This brings up something that I think is great and that sometimes I think is hard to explain or simply misunderstood. The nature of these problems allows work to be done on them without needing physical things. In a way, you carry fun stuff to do within you, wherever you go and no matter what you do. Plane trips, long drives, waiting for something to happen? They are all opportunities to have a bit of enjoyment. Being bored becomes a foreign state of affairs to you.

"Some say life is tough. That's nonsense. It is good to be alive. Life is exciting." --- The old man, The Village, Akira Kurosawa's Dreams.

Friday, June 13, 2008

Fortune cookie of the day

A mentor is one whose hindsight can become your foresight.

Thursday, June 12, 2008

More green

Really? It has been only one day since my last post? I can't believe it...

Well in any case, now the SUnit/SUnitVM bridge is actually mature. It detects errors and failures properly, and runs SUnit based test cases natively without needing to make any modifications to existing code whatsoever. This is extremely nice to watch, and the code is indeed beautiful (at least to me).

So far, 113 classes and ~6.2 methods per class on average.

Coming up next... support for SUnit test resources, and completing the SUnitVM bridge so Assessments can run both SUnit Based Validation and SUnit Benchmarks.

Success feels within reach.

Wednesday, June 11, 2008

First green light

A while ago, Assessments ran SUnit test cases natively for the first time. It resulted in 1040 passes because I had 1040 SUnit tests in my image. While not all the SUnit features are supported yet, it seems like it is just a matter of time and programming before everything is there. Things are going quite well.

Monday, June 09, 2008

More work, more progress

So let's see...

  • The new slides for Quality Measurements for Hash Functions, redone in Keynote (and looking very nice!) are done.
  • I began the SUnit bridge in Assessments. Again, class polymorphism really helps. It's going to take a while, but it is going well and no major trouble is in sight. The class count finally went past 100.
Only now I need to prepare for a trip... ah the joy of everyday life hassle, grrr!!!

Saturday, June 07, 2008

More work

Well well... just when I thought I'd be able to cope with my free time workload, more stuff got piled into the to do list.

  • Finish polishing the STS 2008 talk.
  • Study Stirling numbers while on vacation.
  • Finish Assessments before my talk at ESUG.
  • Write my books currently in progress (four of them).
Oh, and by the way, there is the coding contest for Smalltalks 2008. I wonder why I got assigned to do that...

Regarding the coding contest for Smalltalks 2008, I already know the problem I will implement. Actually, I was this close to using it for STS 2007, but chickened out. This time, however, it's back. And because I had time to let it simmer for a while, it will be much better too. I think it's a beautiful challenge, both from the point of view of the contestants and from my point of view as well. Hopefully you will like it :).

And well, on top of this, my experiences with these coding contests are piling up significantly. I hate to say this, but I think yet another book is in order.

Sigh... more work... I need 5 employees right now!!!

Assessments improvements

In the SUnit version I have, TestCase has a total of 60 methods (both class and instance side) and an instance variable. I finished cleaning up the things I didn't like in Checklist, and it now has just 44 methods and no instance variables.

Even though Checklist has less code, it already does much more than TestCase. Besides the new features, what I really like is that it now has absolutely no code related to the execution of checks. And since Assessments now does check execution externally, you can see where this is going: the ability to run any kind of tests by providing an execution interface. More to come in this department...

545 methods, 96 classes, ~5.7 methods per class on average.

And by the way...

I just updated the Hash Analysis Tool to version 3.15. This new version is now capable of sorting data sets by data set object kind, and so you see all Point data sets together, all Integer data sets together, and so on.

In Smalltalk everything is an object, so polymorphism applies to everything as well --- even to classes. This is good: creating more classes, whose instances behave like the classes you could not otherwise have without making a mess of the class hierarchy, made adding this new Hash Analysis Tool feature really easy.


Assessments has tools

I just finished the first set of Assessments tools, consisting of a checklist evaluator and a result viewer. The checklist evaluator is like the test runner, but it does not display results. Those are shown in a separate window, the result viewer.

What this allows is to bring up a result viewer that allows you to debug and run things to failure (Assessments' exception handling really shines here, you don't have to step through any of the evaluation machinery). It also remembers the assessment you ran, and so let's say you fix something. Then you don't need to go back to the checklist evaluator to start over --- you just click on a button that says [Reevaluate Assessment]. This updates the result viewer by evaluating the assessment again. Therefore, you can see your defect list going down without having to disturb any of your code browsers (either by clicking on things you want to test again, or even by having to move to a different package that has the root test class you need to run).

The beauty of all this is that, by design, remembering things like the assessment and the checks that ran cannot cause memory leaks. So all of this comes basically for free.

I also got to fix several small bugs along the way. I have to say that I am very happy about the amount of effort I put in so that the code is flexible, as well as with the results of this work. It is almost too easy to address seemingly problematic situations. For example, at one point I found that failures did not cause a debugger to pop up when rerunning failures to the first unhandled exception. Fixing this was a matter of adding a single message in the abstract class of failure notifications.

Also, at one point I noticed that assessment results were not counting the tests run properly. It turns out that, by design, some checks can produce more than one result, and this was throwing the count off. Fixing this was a simple matter of teaching the results about whether to count towards the total or not. And since there is one class per type of check result, it was resolved via polymorphism too. Piece of cake.

Here are the latest metrics.

Class count (including 4 class extensions): 94.

Method count (including 14 extension methods): 529.

Average methods per class: ~5.6.

What is interesting about the numbers above is that, before the introduction of UI classes, the average methods per class was about 5.1 instead. UIs cause this metric to rise quickly.

The next tasks in the to do list include a proper SUnit bridge that does not touch any of the existing SUnit code, and teaching Assessments to do SUnit Benchmarks and SUnit Validation.

Also in the queue is a task to streamline exception handling and remove as much code as possible from AbstractChecklist. While it is cleaner than TestCase even though it does more work (30 instance methods and no instance variables, versus 38 instance methods and one instance variable), it can be better still.

More to come...

Wednesday, June 04, 2008

Smalltalks 2008 - The Conference

This is the first invitation to participate in the second edition of the conference.

This year, the site chosen was the Universidad Abierta Interamericana.

November 13 through November 15.

Once again, entry, coffee and pastries will be completely free.

The conference will be divided into two modules, Scientific Research and Software Industry.

A coding contest will be held this year.

Guest lectures by outstanding personalities.

Tutorial sessions.

This is an international conference; Smalltalkers from everywhere will be attending.

Important Dates

Conference: November 13 through November 15.

Paper submission deadline (research): August 31.

Presentation submission deadline (software development): September 30.

Contest registration deadline: September 30.

Conference registration deadline: November 7.


Important Addresses

Universidad Abierta Interamericana: Av. Montes de Oca 745, Buenos Aires, Argentina.

Conference site:

Email contact: