Monday, December 24, 2012

Update on Fundamentals volume 2

I've had some free time lately, so I went back to the multithreading chapter of Fundamentals volume 2.  I wrote several pages, and made good progress towards finishing the chapter.  The draft is now 198 pages.

Update on memory manager work

A while ago we went over the memory manager changes I've been working on lately.  Among other things, I rewrote and optimized the object table (OT) and data compactors.  I knew the new code had to be significantly faster just from an algorithm analysis point of view, but we hadn't measured the actual impact until now.  Below are the run times, in seconds, for one of our memory policy stress tests.

  • Old VM: 550 seconds.
  • New VM: 452 seconds.
The new code runs through the test about 21.7% faster (550/452 ≈ 1.22).  Note this is just a preliminary result for code that has not been fully reviewed, much less integrated, at this time (and your mileage may vary, etc).  But still, that's yet another significant performance increase for the HPS memory manager on top of everything else...

Large integer primitive improvements

Recently we had to go through the large integer primitives because the C type "long" does not have the same size on all our 64 bit platforms.  This type was used by the GCD primitives, so we had to go audit that code. Right, that code...

  • The code uses a hybrid of two different multiprecision GCD algorithms from Knuth.
  • Although it seems to work, the code comments do not have a proof for the correctness of the code.
That was an interesting bit of code auditing.  The implementation uses Algorithm L, except that the multiprecision division is replaced with a customized implementation of Algorithm B.  It's quite complex because Knuth glosses over what happens when you use signed types to implement these algorithms.  So... why does all this stuff work, exactly?...
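To make this concrete, below is a minimal single-word sketch of Algorithm B (binary GCD) in C.  The function name and the restriction to one machine word are illustrative only; the actual multiprecision primitives are substantially more involved.  Note how fixed-width unsigned types from stdint.h sidestep both the "long" portability trap and the signed type subtleties mentioned above.

  #include <stdint.h>

  /* Single-word sketch of Knuth's Algorithm B (binary GCD). */
  uint64_t gcdBinary(uint64_t u, uint64_t v)
  {
      if (u == 0) return v;
      if (v == 0) return u;

      /* Factor out the largest power of two common to both arguments. */
      unsigned shift = 0;
      while (((u | v) & 1) == 0) { u >>= 1; v >>= 1; shift++; }

      /* Make u odd, then repeatedly subtract and strip factors of two. */
      while ((u & 1) == 0) u >>= 1;
      do {
          while ((v & 1) == 0) v >>= 1;
          if (u > v) { uint64_t t = u; u = v; v = t; }   /* keep u <= v */
          v -= u;                                        /* odd - odd = even */
      } while (v != 0);

      return u << shift;
  }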

In any case, we deleted the usual few hundred LOC, wrote a new ultra paranoid set of tests, and produced a proof that the code should work.  We didn't stop there either: we threw out a bunch of big endian related code, and we improved performance on some of our big endian platforms (up to a 25-30% speedup, depending on usage).

Moving along in this department, too...

Update: now up to 37% faster.

Sunday, December 23, 2012

More memory management work

Work on the HPS memory manager doesn't stop, or so it seems.  Earlier today I finished rewriting the object table compactor code.  The result is about 200 LOC gone, and a few kilobytes less executable code.  And since the code is written more clearly, it's far easier to produce a proof that says the code actually works --- and if the proof is wrong then it should be much easier to figure out why.  Other bits of work include refactoring the remember table implementation (more deleted code), and a fix / optimization for 64 bits.

By the way, we also had a cleanup pending for the large integer primitives.  We took advantage, improved big endian platform performance by double digit percentages, and deleted another few hundred LOC.

We also have a few hundred new tests for all this stuff.  Moving along...

Saturday, November 24, 2012

Smalltalks 2012 videos

See here for Smalltalks 2012's videos... they will be posted to the playlist as they become available.  Enjoy!

Wednesday, November 21, 2012

Smalltalks 2012 photos

Hello!... here are some photo sets from Smalltalks 2012.

In chronological order, first we have our usual visit to Tigre.  Then, we had a pre-Smalltalks event at Trelew.  After that, the conference proper on November 7, November 8, and November 9.  After the conference, some of us went on a day trip to explore a bit of Patagonia.  Finally, the day after, we had a small break before flying back to Buenos Aires.

Videos will be coming shortly...

Friday, October 12, 2012

Duff's device implementation details

Typically one writes a Duff's device to copy data as shown on Wikipedia, i.e. using *to++ = *from++ for each step.  Most likely, a compiler dealing with *to++ = *from++ will emit 4 instructions: a load, a store, and two additions for pointers likely kept in registers.  But, for example, if you have an 8 case device, you can arrange things so that the pointers are incremented by the slack iterations before the switch, and then do

  • *(to-8) = *(from-8);
  • *(to-7) = *(from-7);
  • ...
  • *(to-1) = *(from-1);
  • if not done, increment to and from by 8 and go to the top;
With the explicit offsets above, each step requires just a load and a store with fixed offsets, which a reasonable CPU calculates on the fly with addressing modes such as mov rax, [rsi+offset].  The resulting Duff's device is roughly half the size in assembler instructions.
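Here is a hedged C sketch of that arrangement.  The function name copy_words and the uint64_t element type are illustrative choices, not actual production code:

  #include <stddef.h>
  #include <stdint.h>

  /* Sketch of the offset-based Duff's device described above. */
  static void copy_words(uint64_t *to, const uint64_t *from, size_t count)
  {
      if (count == 0) return;

      size_t passes = (count + 7) / 8;   /* trips through the 8-copy block */
      size_t slack  = count % 8;

      /* Absorb the slack iterations into the pointers before the switch,
         so every copy below is a plain load/store at a fixed offset. */
      size_t bump = slack ? slack : 8;
      to += bump;
      from += bump;

      switch (slack) {
              do { to += 8; from += 8;   /* one bump per 8 copies */
      case 0:      to[-8] = from[-8];
      case 7:      to[-7] = from[-7];
      case 6:      to[-6] = from[-6];
      case 5:      to[-5] = from[-5];
      case 4:      to[-4] = from[-4];
      case 3:      to[-3] = from[-3];
      case 2:      to[-2] = from[-2];
      case 1:      to[-1] = from[-1];
              } while (--passes > 0);
      }
  }

As in the classic device, the switch jumps into the middle of the do-while; the only per-iteration pointer arithmetic left is the single bump by 8 at the top of the loop.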

Wednesday, October 10, 2012

GC improvements continue

I had a bit of time recently to go back to the HPS memory manager and deal with a few loose ends I had previously identified.  I overhauled the large and fixed space implementation for performance, with the goal of shaving constant background overhead off the VM.  Besides the usual code deletion (about 250 LOC gone so far), scavenges in the presence of numerous fixed space objects became measurably faster.  The same should be true of large space, but there are generally fewer large space objects than fixed space objects, so your mileage may vary.  Other operations that got a bit faster include mark/sweep GCs, and becomes in which at least one of the objects involved is in fixed or large space.

Wednesday, October 03, 2012

Regarding how to teach programming

Here's an awesome talk by Mark Guzdial recorded at the 2012 C5 conference.  Watch how non-CS students can pass programming courses more often than CS students when taught a different way.  The goal of this improved teaching is to allow everyone to take advantage of programming, not just CS oriented people.  In that way, we can all start using the computer for our own stuff as opposed to letting it be a glorified [buy now] button.  We need more people to do the kind of work Mark is doing.

And it doesn't stop there.  Watch what happens when the same techniques are applied to CS students themselves...

Sunday, September 02, 2012

Audio over USB: back to the age of V42bis modems?

A long time ago I used to run a BBS.  When I first started, I only had a 2400 bps modem.  This modem was plugged into an ISA slot on the motherboard, and you configured the COM port via jumpers.  Naturally, on the software side, the serial port was set to the maximum speed of the modem, i.e. 2400 bps.  Because of how the hardware interface worked, this meant the CPU was interrupted about 300 times per second during transmission (2400 bps / 8 bits per byte).  Bytes moved one IRQ at a time.

These old machines, basically a 386 running DOS, could deal with this for the most part.  Then came modems with a baseline 9600 bps transmission rate, plus two major improvements: V42 and V42bis.  The first introduced packetized transmission between the modems, with retransmissions and everything, so that line noise and data corruption would (for the most part) become invisible to the modem user.  The second introduced data compression of the Lempel-Ziv variety, with a maximum compression ratio of 4.  In other words, although the modems had a physical data rate of 9600 bps and would at first glance require a CPU interrupt rate of 1200 times per second, that many interrupts would be insufficient for compressed data because the CPU would not be able to take data off the modem's hands quickly enough.  Since the compression ratio could reach 4, you would set the modem's serial port to 38400 bps (4 x 9600) to avoid defeating V42bis.

Ah, those computers.  There was no way they could handle 4800 serial IRQs per second while also writing to disk, paying attention to the keyboard, or doing whatever else the software was doing.  Bytes started getting overwritten at the serial port interface, and data errors became visible to applications.  In other words, since the hardware wasn't fast enough, trying to avoid defeating V42bis with a higher serial port speed would defeat V42 as well.

The solution to this problem was to replace the 8250 UART with the 16550.  The 16550 came with a FIFO buffer capable of holding (OOoooohhh!) 16 bytes.  In other words, the modem could send up to 16 bytes to the FIFO buffer, and no bytes would be lost as long as the CPU got around to emptying the buffer before it filled up.  This required some changes in applications to take advantage of the new hardware, and it solved dropped byte problems completely.  Now you could set your serial port to 115200 or even 230400 bps if you wanted, and things would continue to work just fine.  Nice, isn't it?

Let's fast forward 20 years.  We have vastly more powerful computers these days, capable of emulating a whole DOS machine in software without breaking a sweat.  And yet we still seem not to have learned the FIFO lesson.  It's quite embarrassing, frankly.  The main symptom I see is that you cannot reliably use USB for audio.  But why?  USB 2.0 can easily move 30 megabytes per second to an external hard drive; what harm could a couple hundred kilobytes per second do?  Well, on OS X in particular, a lot.

When things don't work on OS X, what you hear are electronic glitches, cutoffs, pops and other artifacts that do not belong.  Sometimes audio gets into a state from which it doesn't even recover.  So, for example, if you are in a Skype conversation, your voice may become garbled until you hang up and call again.  Something similar may happen with USB microphones in e.g. GarageBand, where audio input will sound wrong no matter what you do until you unplug the USB device and plug it back in.  In everyday use, these problems come and go without an apparent pattern.  However, there are hints that what is behind these issues is the old serial FIFO buffer problem from 20 years ago, which we still have not learned to fix.  See for example this, this, this, and this.  Google has an exhaustive etcetera available as well.  In those, we see plenty of discussion about buffer sizes and buffer underruns.  We're still discussing how large the FIFO buffer should be.  In other words,

  • We still have not fixed the decades old FIFO problem, and
  • We're just asking the user to fix it instead of providing a proper solution.
The above means that at no point can you be absolutely sure that the magic number you put in some configuration dialog box will be enough to prevent the FIFO buffer problem.  This is even with today's super powerful machines.  Moreover, since you could conceivably experience artifacts that are not immediately obvious, and since there is no monitor that can tell you e.g. when and by how much the buffer was insufficient, you're left in the dark in two ways.
  • Even if you cannot prove beyond any reasonable doubt that a buffer issue occurred, you cannot conclude that none occurred, and
  • You are not given the tools to prove or disprove that there were buffer issues.
But it gets better.  Under OS X, it is up to applications to set their buffer size if they want something other than the default.  There is no configuration dialog to change the default buffer size which, according to pages such as this, is tiny: sometimes on the order of a few hundred bytes.  Consequently, that means sending hundreds of kilobytes per second a few hundred bytes at a time.  For scale, CD quality stereo audio (44100 samples per second x 2 channels x 2 bytes) is 176400 bytes per second; delivered 200 to 300 bytes at a time, that is roughly 600 to 900 deliveries per second, and higher sample rates or smaller buffers push the rate into the thousands.  Are we really talking about potentially interrupting the CPU several thousand times per second, something even MSDN says is not necessarily a good idea?

So what happens when you use GarageBand?  You get two options: "small buffer" and "large buffer".  But what is small, and what is large?  The answer to that question seems to be "well my dear, why would you care, just go buy another app at the app store, will you?".  And what if you use Skype?  As far as I know, Skype has no way to configure buffer sizes at all, so in that case you are completely out of luck.  And it's still your problem, because you are the one who buys the hardware and uses the software.

And there's more.  In OS X 10.4, the process priority model allowed you to use the nice and renice tools to set the priority of processes to whatever you wanted.  So, if you wanted the computer to do some background batch processing, you could set it to the lowest priority and it wouldn't interfere significantly with anything else you were doing.  This ability was removed in OS X 10.5, with the result that applications can no longer run at the equivalent of lowest priority.  They will consume significant time no matter what, and they will continue doing so even if you go to the command line and use the (now basically useless) renice commands.  After you modify a process's priority with renice, the OS X scheduler seemingly applies a balancing adjustment you have no access to, and in the end nothing changes.  You can see this in action with software such as BOINC.

Why is this important?  Because the audio buffer problem is exacerbated when the CPUs are busy.  So if you keep your cores super busy, like you should, then I'm sorry, but you will experience audio problems no matter what you do.  So really the only way to do USB audio is to keep your super fast machine idle(!).  And even that is not a guarantee of anything.  For emphasis: as I write this very paragraph, a short Firefox CPU spike caused the USB microphone I was monitoring to start producing static.  The machine has plenty of computing power, and is otherwise not doing anything of consequence.

And why would this be a problem?  Why can't OS X simply raise the priority of the processes that need it to do USB audio in tiny little packets?  Well, it could, and even then this approach wouldn't necessarily work.  This is because, according to the OS X docs (and the technical note linked above), the audio drivers provided by the OS predict when applications should be notified that something needs to be done with audio, and issue a software interrupt when action is needed.  The problem with these predictions is that they are guesses, and if a kernel service is forced to guess, then sometimes it is also forced to guess wrong.  So what happens when the machine is busy?  Well, basically, too bad.  But can you fix it?  No:
  • You cannot reconfigure the default buffer size used by the OS drivers.
  • Apparently, the OS drivers do not reconfigure themselves when there is a problem, as evidenced by failure modes that require resetting hardware to clean up.
  • Applications are supposed to specify buffer sizes, but most don't because the user is not expected to understand what is going on (a sketch of what such code looks like follows this list).
  • And even if applications do provide a way to configure things, it is up to users to guess numbers large enough that the problems seem to go away.  However, users cannot prove for a fact that their chosen number is large enough, because there are no diagnostic facilities to help determine the right course of action.
  • If there are systemic problems with the way the machine is operated and applications do not respond to the audio driver's attention request within the predicted (guessed) time, from the point of view of the OS it is always possible to blame applications for the problem.
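For reference, here is a hedged sketch of what "specifying a buffer size" looks like for an application on OS X, using the CoreAudio property API.  The 1024 frame request is illustrative, and is exactly the kind of magic number that ends up being guessed; error handling is minimal, and the sketch targets the default output device.

  #include <CoreAudio/CoreAudio.h>
  #include <stdio.h>

  /* Ask CoreAudio to use a specific buffer size, in frames, for the
     default output device.  Compile with -framework CoreAudio. */
  int main(void)
  {
      AudioObjectID device = kAudioObjectUnknown;
      UInt32 size = sizeof(device);
      AudioObjectPropertyAddress address = {
          kAudioHardwarePropertyDefaultOutputDevice,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMaster
      };
      if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &address,
                                     0, NULL, &size, &device) != noErr)
          return 1;

      UInt32 frames = 1024;   /* the magic number being guessed */
      address.mSelector = kAudioDevicePropertyBufferFrameSize;
      if (AudioObjectSetPropertyData(device, &address, 0, NULL,
                                     sizeof(frames), &frames) != noErr)
          return 1;

      printf("requested %u frames per device buffer\n", (unsigned)frames);
      return 0;
  }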
So that seems to be why you get hundreds of thousands of Google hits for "usb audio glitches".  Apparently, that's also why you shouldn't do audio over USB at all.  Can't we please fix this problem already?  Or at least provide the following, so we can do something about it?
  • Provide enough user visible diagnostic information to determine where the problem is.
  • Allow OS drivers to provide default configuration values determined by the user to applications that would otherwise not provide configuration capabilities.  This would be somewhat similar to the situation you have with graphics drivers in which the driver more or less imposes a particular configuration for applications at the user's request.
Would this be that hard?

Well, the above sounds authoritative and all that.  Nevertheless, it is merely the best estimate I've been able to come up with so far.  If you know a real solution to this problem, or you know something I missed, please let me know.  TIA!

Thursday, August 30, 2012

Smalltalks 2011 YouTube playlist

Videos are appearing as I write... check it out!

Smalltalks 2011 videos coming up shortly

It just took a long while.  It's a story of low bandwidth, scratched DVDs that couldn't be read (and the attempts at manual polishing that were only partially successful), FTP connections that kept failing... but now I finally have amassed all the files in a single place and will be publishing videos shortly.  Stay tuned, stuff is coming up!

Saturday, August 25, 2012

Smalltalks 2012: Call for Submission/Participation

The Fundación Argentina de Smalltalk (FAST, http://www.fast.org.ar) invites you to the 6th Argentine Smalltalk Conference, to be held on November 7, 8 and 9, 2012 at the Universidad Nacional de la Patagonia San Juan Bosco located in Puerto Madryn, Argentina.

Everyone, including teachers, students, researchers, developers and entrepreneurs, is welcome as a speaker or attendee. Registration is free and now open at http://www.fast.org.ar/smalltalks2012.

The goal of the Conference is to strengthen the bonds between the Argentine and the international Smalltalk community through the exchange of works, experiences and anecdotes connected with this technology and related matters.

As in the previous editions, renowned members of the international Smalltalk community will visit us, and like last year we will have a Research Session with publications reviewed by an international committee.

Also, the traditional Industry Track will be available for those who prefer a more practical environment. You can submit papers or talks now through our website at http://www.fast.org.ar/smalltalks2012/technical-session.

For more information about related events, such as a Pharo Sprint or related talks and presentations, please visit http://www.fast.org.ar/smalltalks2012/events.

If you have any questions please contact us at info@fast.org.ar.

See you there!

Tuesday, August 07, 2012

Airfares for Smalltalks

Planning to attend Smalltalks 2012?  If you will fly to the conference, please let us know right away (by e.g. sending an email to info at fast dot org dot ar).  It will help us with the work we're doing to get cheaper domestic fares.

Friday, August 03, 2012

Smalltalks 2012 invitation and registration

Hello!  You're invited to come over to the wonderful Argentine Patagonia to participate in Smalltalks 2012.  This year we will host the conference at a new venue for us: Puerto Madryn.  We're very pleased to see different sections of the Argentine Smalltalk community represented every year.

Go here and register.  See you in Puerto Madryn!

Monday, July 16, 2012

Just write less, will you?

There are Alan Kay's comments about how you cannot understand programs with O(10^7) lines of code, and Ian Piumarta's comments about how, ideally, programs would be written very succinctly, because then you have a chance to read and understand them within your lifetime.

And it's not just code.  There is also the problem of saying a lot of words without really saying anything useful.  A significant amount of social media traffic follows this pattern, and so do the media's comment sections.  Most such stuff is gibberish.  You shouldn't feel entitled to write such things and then think you did "something" useful just because *you can*.  The issue is that someone else will then have to read all of it and try to make sense of it.  Sifting through the garbage is extremely time consuming, particularly when there is a lot of it.  Faced with the impossibility of dealing with gibberish, there are two options: either you do not read it, or you delete it.

All of a sudden, doing "something" isn't that helpful, huh?  Either people read it without paying too much attention (perhaps adding their own layer of stuff on top of yours), or nobody reads it, or someone deletes it.  Gibberish is just profoundly useless.  So stop making so much of it, already!  For example...

  • When you write test suites, write the test suite so it enumerates all interesting cases itself.  If you cannot, at least be complete and write test cases for *all* cases.  If not, there's always the lingering suspicion the code is just junk.
  • When writing software in general, and C in particular, pay attention to *every single detail*.  Being productive is not a matter of producing a lot of code, it's more about writing something others can depend on.  If you don't do your homework, then others will have to clean up behind you at great expense of time.  Or will just not read it, or delete it...
  • Stop writing comments that anyone else could write.  Take a hint from G. H. Hardy already: by definition the majority's opinion already has enough exponents, so stop repeating the argument of the majority.  Similarly, follow Dijkstra's advice and solve problems that only you can solve.  If there's somebody else that could fix the problem, then don't do it: chances are many will get the idea to fix whatever at the same time, and it will result in duplication and loss of time.  Therefore: no more comments about what somebody else said, no more "sharing" google results for the sake of grabbing attention, and no more comments about the obvious.
We have an enormous capacity to produce stuff, but we should stop and think if we really should.  Otherwise, just dealing with what we produce will lead to stuff we don't understand because the junk overload makes it harder to concentrate.  And if we don't understand what we have, then we won't be able to use it to achieve our goals.  Even our goals will be distorted by the sheer amount of nonsense.  And who can succeed like that?

Really.  Take your time before you speak.  Make the thought worth sharing before you open your mouth or tap on the keyboard.  And if it's not worth it... just don't do it.  Please.

Sunday, April 01, 2012

New Smalltalk User Group in Vancouver, CA

Here's the link courtesy of Francois Stephany.  Check it out!

Friday, March 30, 2012

Smalltalks 2012 announcement

Hello there!  This would be a good time to get a calendar and try blocking November 7th through November 9th.  You see, we're inviting you to Puerto Madryn in beautiful southern Argentina so you can attend Smalltalks 2012! In the coming weeks we will send out research track calls for papers, calls for participation, and start the registration process. But get those presentations ready, Smalltalks 2012 will be here before you know it.  We look forward to seeing you there!

Sunday, March 25, 2012

Fitting sequel to high school math update

I've heard excuses a million times. Math is hard, some people are born with it and some are not... there is even the term innumeracy and the argument that nothing is done about this problem because being numerically challenged is socially acceptable.

Meh. How about this? A 16 year old designs and makes this monster of a computer all on his own. He would have been Charles Babbage 150 years ago; that's pretty good for a 16 year old. Among other things,

The video itself explains that its overall size is more than 5 million cubic meters --- just over 250 x 250 x 100 blocks. It provides 14 functions, BCD input, 2 BCD-to-binary decoders, 3 binary-to-BCD decoders, and 6 rapid BCD adders and subtractors. It also contains floor after floor of live decoders for quick conversions, a 20 bit (output) multiplier, 10 bit divider, a memory bank and additional circuitry for the graphing function.

So you see, when you want, you can.

Sunday, March 04, 2012

An update on high school math education

A while ago, I commented on official year 2000 stats that showed 75% of high school kids cannot use math in real life problems. Obviously this is bad. So what happened since then? I found a convenient 2009 data set in the 2010 education statistics digest published by the US government. You can see the table here.

There have been some changes since 2000. In the current table, math students are graded in 6 levels of proficiency, as opposed to 4 in the earlier tables. Here is a description of the levels, and some summarizing I did on my own.

Level 1: Able to answer questions involving familiar contexts where all relevant information is present and the questions are clearly defined.

There doesn't seem to be anything extraordinary here.

Level 2: Able to interpret and recognize situations in contexts that require no more than direct inference; extract relevant information from a single source; employ basic algorithms, formulae, procedures, or conventions; and employ direct reasoning for literal interpretations of results.

There is nothing interesting here either. Actually, it reminds me of rote training as opposed to education.

Level 3: Able to execute clearly described procedures, select and apply simple problem solving strategies, interpret and use representations based on different information sources, and develop short communications reporting one's interpretations, results, and reasoning.

It feels as if we are getting there, but note that people at level 3 tend to follow instructions and make, at most, simple decisions. This is not very good.

Level 4: Able to work effectively with explicit models for complex concrete situations that may involve constraints or call for making assumptions, select and integrate different representations, reason with some insight, and construct and communicate explanations and arguments based on one's interpretations and actions.

Finally, students are expected to reason with some insight about constraints and assumptions. This might be enough proficiency to attempt thinking more or less independently.

Level 5: Able to develop and work with models for complex situations, select and evaluate appropriate problem solving strategies, work strategically using broad, well-developed thinking and reasoning skills, and communicate one's interpretations and reasoning.

We have to wait until level 5 to see the first call for "well-developed thinking and reasoning skills". So, below level 5, you do not have well-developed thinking and reasoning skills. Ouch.

Level 6: Able to conceptualize, generalize, and utilize information; link different information sources and representations; perform advanced mathematical thinking and reasoning; develop new approaches and strategies for attacking novel situations; and formulate and precisely communicate actions and reflections regarding findings and interpretations.

And at level 6, students can think of solutions on their own. A true independent thinker.

So now, with these levels in mind, let's go over the table. First of all, if you go through the digest you will find several tables that assign scores to students. Looking at the scores alone, you'd think the US is doing pretty well because it scores ~265 of 300. But how do you know the tests actually measure something relevant? That is why we look at actual proficiency.

In the US, level 4 and above still accounts for only about 25% of students. In other words, 75% of students cannot reason with any insight about constraints and assumptions. Things like "can I buy and pay for house X, with the constraint that I have salary Y, and assuming I'm employed along the lines of Z?" are outside the reach of level 3 and below. They need to be told the procedure; they cannot figure it out for themselves. Are we surprised about the results?

But how does the US compare to other countries? First, some countries that are not doing very well. According to the tables, level 4 and above constitute less than 5% in Argentina. Peru for example, has 2.6% for level 4 and above, and 47.6% are below level 1. In Colombia, there's only 38.8% under level 1 so it seems comparatively better, but level 4 and above is only 1.7%. Brazil and Panama are about the same. Uruguay and Trinidad and Tobago are doing a bit better.

Well, so much for Latin America. How about Qatar? Aren't they full of oil? Well yes, but it looks like they are having some challenges of their own, because 51% are below level 1 and only 6% are at level 4 or above. Dubai is doing better. Other countries look similar to the US, like the UK, the Czech Republic, the Slovak Republic, Italy, Hungary, Portugal and Spain.

Ok, how about some serious competition? Let's see... in Australia, level 4 and above is close to 40%. Canada's level 4 and above is a bit over 40%. Something similar happens with Belgium, Germany, Japan, the Netherlands, New Zealand, Macao-China, etc.

There is a higher tier as well. Finland, for example, has about 50% at level 4 or above. Switzerland is close to 48%. Liechtenstein has about 49%. Chinese Taipei has about 50%. The Republic of Korea weighs in at about 52%. Hong Kong's level 4 or above is over 55%. Singapore has 58%.

The countries that are doing comparatively well (at least ~40% at level 4 or above) are just a roundup of the usual suspects. But who tops the list? According to the table, Shanghai-China has over 70%. In fact, their level 6 population is larger than that of any other level.

So, it could be done right in the US if we wanted to, but effectively we don't. For all we want to "have an intelligent discussion about our problems", exactly what are we going to propose without putting in the time to understand things first? We are the ones who spend an average of 30 hours a week in front of the TV and Angry Birds, so why don't we start putting in the time where it counts?

None of this should be surprising. In fact, it is well deserved.

Monday, February 20, 2012

Call for Submission on Special issue on "Advances in Dynamic Languages"

Special issue on "Advances in Dynamic Languages"


Special issue of Elsevier's Science of Computer Programming (SCICO)


Context

Over recent years we have seen an increased interest in dynamic programming languages such as Smalltalk, Lisp, Scheme, PHP, JavaScript, Self, Python, Ruby, and so on. These languages have taken a prominent role in teaching, web development, scripting, rapid prototyping, tool building, language engineering, and many other domains.

For this special issue we invite high-quality papers that focus on novel research related to dynamic programming languages and applications of these languages.

We are interested in research that uses dynamic languages in the context of, but not restricted to:

- Aspects, aspect Languages and applications.
- Ambient intelligence, ubiquitous / pervasive computing and embedded systems.
- Compilation technology, optimization, virtual machines.
- Language engineering, extensions.
- Model-driven engineering / development.
- Meta-modeling, reflection and meta-programming.
- Programming in the large; design, architectures and components.
- Programming environments, browsers, user interfaces, UI frameworks.
- Source-code analysis and manipulation (static analysis, refactoring, type inference, metrics).
- Testing, eXtreme Programming / practices.
- Web services, internet applications, event-driven programming.
- Experience reports.

The special issue is associated with the Smalltalks 2011 conference. The Smalltalks series of conferences (www.fast.org.ar) is a lively forum on Smalltalk-based software technologies that brings together more than 200 people from both academia and industry for a period of three days.

Submission Guidelines

Papers should be written in English, in PDF-format and should not exceed 25 pages (including references and figures), using the Elsevier journal format. The LaTeX template for this format can be found at http://www.elsevier.com/wps/find/authorsview.authors/latex

Papers must be submitted through the EES submission system located at http://ees.elsevier.com/scico/default.asp. When reaching the "Article type" step in the submission process, it is important to select "Special issue: Advances in Dynamic Languages".

Each paper will be reviewed by at least 3 experts within the domain. The accepted papers will be published in a special edition of Elsevier's Science of Computer Programming.

Papers submitted must not have been previously published (at least 30% new material) and must not be under review for publication elsewhere. Papers must strictly adhere to submission guidelines. If you have questions, please send an e-mail to Jannik Laval (jlaval@labri.fr) and Andy Kellens (akellens@vub.ac.be).


Important dates (tentatively)

- Submission round 1: March 16, 2012
- Feedback round 1: May 17, 2012
- Submission round 2: June 29, 2012
- Feedback round 2 (final notification): August 17, 2012
- Camera ready version: September 14, 2012

Guest editors

- Andy Kellens (Vrije Universiteit Brussel, Belgium)
- Jannik Laval (LaBRI Bordeaux, France)

Saturday, February 04, 2012

Update on Fundamentals vol 2

I rewrote all I had written for chapter 7, and finished off a chunk that I had not had a chance to complete earlier. The draft is now 178 pages.

Update on scavenger work

Ok, so now we put in another set of cleanups that deleted another ~400 LOC. The total code deletion from the scavenger work is now close to 1000 LOC. And note that this figure understates what truly happened, because I also had to add a significant amount of code to fix problems that will no longer occur...

Thursday, January 12, 2012

From the strange source code department

Download this source code, compile and execute. Betcha you didn't expect that result.

Friday, January 06, 2012

Update on the scavenger

Back in October I made some comments about the coming work on Cincom Smalltalk's new space generational scavenger. Today, the bulk of the work got integrated into our development branches. Here are some highlights.

  • The generational scavenger is essentially rewritten. Major wins include a net code loss of about 10% for the scavenger alone, elimination of all sorts of weird edge cases, and generally more efficient operation.
  • For 64 bit platforms, we now have a completely new, significantly better and far more concise class table management mechanism for the scavenger.
  • These changes come with about 1850 new VM tests.

In addition, we fixed several smaller bugs that will just never come back to bother us. There are also some performance and stability improvements for the GC / IGC in particular, and the memory manager in general. I still have a list of pending cleanup items, and we still have the opportunity to delete more code and extract more efficiency out of the code.

Moving along!

A couple Smalltalk apps on the web

Via German Arduino, check these two Smalltalk powered websites: Get It Made, and Airflowing. They look really nice, don't they?

Update... make that three Smalltalk powered websites: also check out Objectfusion.