Friday, October 29, 2010

Definition of negligence, please?

Check this out: you can sue a 4-year-old for negligence if the child, riding a bicycle with training wheels, crashes into a senior citizen and causes injuries that lead to death. If that is negligent enough to merit a lawsuit, then what escapes this criterion for what constitutes negligence (other than acts committed by a child younger than 4)?

Heart attacks become more likely during soccer matches. Are players negligent because their playing leads to the death of others? Are TV stations negligent for knowingly broadcasting the game? Are TV manufacturers negligent for creating devices so realistic that they lead to excessive stress and death? Or is the dead viewer negligent for, in effect, committing suicide?

What if the person suffering the heart attack was a train driver listening to the game on the radio? Is the deceased guilty of involuntary (negligent) mass manslaughter because the train did not stop at a signal? Or are the train line managers negligent for not installing proper fail-safe train stop devices?

Basically everything you eat or drink contains some substance that can make you sick. For example, tap water has traces of several prescription drugs. Are you negligent for drinking it? Are the drug companies negligent for not making the drugs biodegradable? Are the municipalities negligent for not filtering the water enough? What about the bottled water companies that merely bottle tap water: are they negligent too? Are those who throw out prescription drugs in the garbage (landfills) or down the toilet or sink (sewer) negligent?

Assume you vote for someone who does something that, in the end, turns out to be a bad idea. Are you negligent for voting "wrong"? In other words, let's say that in the 2100 elections the Politicians United Party's candidate is elected president, and that during the tenure the PUP president decides to raise taxes to 200%. Are all PUP voters negligent for voting in this president? What about the opposition, those who voted for the Allied Politician Party? Are all APP voters negligent for not winning the election against the PUP? Or, on the flip side, are APP voters now proven non-negligent because their candidate was better, even if the candidate never got a chance to perform and prove to be the better choice? Does that prove PUP voters were negligent? Or are PUP voters proven non-negligent because the APP's candidate had negligently promised to raise the retirement age to 100?

Hindsight is always 20/20, but there is more to it than that. The issue is one of expectations. In my experience, the more you learn about different things, the more you see that you cannot take things for granted. In particular, you cannot take impeccable performance as a given. We are merely human, and we make mistakes. So where does this expectation of perfect behavior come from? Is it even reasonable to begin with? And don't forget that the only way to stop making mistakes at your job is to stop doing your work.

But what about the senior citizen run over by a 4-year-old on a bicycle? Should we think the senior citizen was negligent for not having bodyguards? Why wasn't the senior citizen equipped with a siren / rotating light hat to make children riding bicycles aware of the road hazard? Why wasn't there a slow moving vehicle sign attached to the senior citizen's clothing? Why wasn't the estate of the senior citizen (the plaintiffs in the lawsuit) concerned enough with the senior citizen to provide a safe environment for the senior citizen to walk in?

The issue is that if you push the argument far enough, you prove everyone is negligent; the term "negligent" loses its meaning... and everyone can sue everyone else for negligence. That does not seem like a desirable conclusion, so maybe we need to look at things differently. Is the lawsuit's matter ultimately an issue of taking risks (getting run over) in exchange for quality of life (being able to walk on your own) instead? If so, why is negligence in the picture at all?

Thursday, October 28, 2010

20-25% discount at Lulu

Enter coupon code TRICK305 at checkout and receive 20% off your order. The maximum savings for this offer is $100. Enter coupon code TREAT305 at checkout and receive 25% off your order of $500 or more. The maximum savings for this offer is $500. Sorry, but these offers are only valid in US dollars and cannot be applied to previous orders. You can only use these codes once per account, and unfortunately you can't use these coupons in combination with other coupon codes. These great offers expire on November 1, 2010 at 11:59 PM, so don't miss out! While very unlikely, we do reserve the right to change or revoke this offer at any time, and of course we cannot offer this coupon where it is against the law to do so.

Tuesday, October 26, 2010

ReferenceFinder 1.25

I added the special object array as a scanning root. Enjoy!

Tuesday, October 19, 2010

Thank you, Benoît

Benoît Mandelbrot has passed away. In reading this article, I ran into the following.

He called himself a maverick because he spent his life doing only what he felt was right and never belonging to a particular scientific community.

Well, exactly what else would you do with your life, something you feel is wrong? It's so obvious that you should do what you feel is best, right? And yet, how many of us fall short of this standard for a multitude of excu... I mean, reasons? This statement reminds me of what Knuth said in an interview: that you should do what you think is valuable because then you will care and, in the long run, your efforts will show. And before you wonder whether Mandelbrot had issues with "communities", check the actual Knuth quote:

[...] too often [...] people will do something against their own gut instincts because they think the community wants them to do it that way, so people will work on a certain subject even though they aren't terribly interested in it because they think that they'll get more prestige by working on it. I think you get more prestige by doing good science than by doing popular science because if you go with what you really think is important then it's a higher chance that it really is important in the long run and it's the long run which has the most benefit to the world.

Do not simply go along with whatever the "majority" thinks, because clearly this "majority" is not producing original knowledge. Nobody will do your thinking and creating for you; that is something you must develop on your own. But, you know, the clock is ticking. Are you done yet?...

Thank you for fractals, Benoît!

Monday, October 18, 2010

Assessments 1.48

I added timestamps to the file logging result policies. Enjoy!

Saturday, October 16, 2010

Excellent

I had forgotten about this, but I think it's really cool.

So, if they can do that with Excel, then what can we do?

Friday, October 15, 2010

Argentine Smalltalk Community Interview at ESUG 2010

Sometimes people wonder why Smalltalk is so popular in Argentina. Hernán Wilkinson and Leandro Caniglia explain this at ESUG 2010 here.

Thursday, October 14, 2010

ReferenceFinder 1.24

I have been trying to track down some leak problems lately, so I went back to my ReferenceFinder package to find the source of the problem. In the process, I updated it to use the reflection API as much as possible so it can work around proxy objects without disturbing them. Also, for those interested, I updated the Distinctions package. This package implements Form and Distinction, the objects that Spencer-Brown talks about in his book Laws of Form. The reference finder is implemented on top of Form and Distinction, as per the mentoring course book's chapter 5. Enjoy!

Wednesday, October 13, 2010

That school that spied on students with their own laptops...

Remember that Philadelphia school that gave laptops to its students, only to turn on the machines' cameras to watch the kids in their own homes? And how one of the people involved, on hearing the footage described as the school's own "soap opera", replied "I know, I love it!"?

Well, here we go. First, the lawsuit was settled, apparently in everyone's best interest. However, what is not so clear to me is the conclusion that there was no criminal intent in spying on the kids. Really? It is a crime to possess a certain lobster, but it is not a crime to spy on students? Huh? I don't get it.

Monday, October 11, 2010

Deutsch's criteria for fixing bugs

Peter Deutsch did a lot of things, such as coming up with a JIT Smalltalk VM. There are other, smaller things, like the 8 fallacies of distributed computing, or his criteria for evaluating bug fixes. I couldn't find a reference for the bug fix criteria, though. So, in short, Deutsch states you have fixed a bug when:

  • you can completely explain how the bug occurs, and
  • you can prove the change you make addresses the verifiable cause of the bug.

Unfortunately, it is common to hear claims of having fixed a bug just because "I made a change and the bug went away, therefore I addressed the source of the bug". In other words, an instance of the post hoc ergo propter hoc logical fallacy.

This type of fallacy comes up quite often. For instance, when working with C, a pointer aliasing problem can be made evident by changing code that is far away from where the issue manifests. I found a case of this phenomenon not long ago. The inclusion of code that would be optimized away to nothing in one function affected register allocation in some other function that had nothing to do with the first one, and it was in the second function that the bug would appear.

Should you claim that changing the first function fixes the bug in the second one? Or that the change in the first function somehow controls a compiler bug? Or would it be better to determine the source of the bug, in which case you can be sure that the change you make actually addresses the problem?

Alas, since these investigations sometimes take a long time, you often see things like "well, I changed the compiler flags and that made the bug go away, therefore it's a compiler bug". Maybe, but you have to prove it, not merely state it. In the above case, the real issue was that the source code violated the C99 spec, which defines what the C language is to begin with. Clearly, as far as the compiler is concerned, the "bug" was a case of garbage in => garbage out. But, of course, as soon as you fix the source code so it does not rely on pointer aliasing, the compiler magically produces the intended code regardless of the optimization level. Sigh...
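
To make the distinction concrete, here is a minimal sketch of the kind of C99 aliasing violation involved (hypothetical code for illustration, not the actual code from the incident): type punning a float through an incompatible pointer type. Under C99's aliasing rules the compiler may assume the write through the unsigned pointer cannot modify the float, so at higher optimization levels the function may silently misbehave, and the failure can surface far from this code.

    #include <stdio.h>
    #include <string.h>

    /* Undefined behavior: accesses a float object through an incompatible
       pointer type, violating C99's aliasing rules (6.5p7). The optimizer
       may reorder or cache accesses on the assumption that an unsigned
       store cannot change a float. Assumes 32-bit float and unsigned. */
    float set_sign_bit_broken(float f)
    {
        *(unsigned *)&f |= 0x80000000u;
        return f;
    }

    /* Well-defined fix: copy the representation with memcpy. Compilers
       recognize this idiom and typically emit the same machine code the
       pointer cast was hoping for, without violating the spec. */
    float set_sign_bit_fixed(float f)
    {
        unsigned u;
        memcpy(&u, &f, sizeof u);
        u |= 0x80000000u;
        memcpy(&f, &u, sizeof f);
        return f;
    }

    int main(void)
    {
        printf("%f\n", set_sign_bit_fixed(1.0f)); /* prints -1.000000 */
        return 0;
    }

Fixing code of the first kind is what makes the "compiler bug" disappear at every optimization level.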

Sunday, October 10, 2010

Ten is a really good number

I've been told that ten is a really good number. Apparently I had missed this completely. But, yes. Today is 10/10/10. Long live the number ten!

Friday, October 08, 2010

Smalltalks 2010 declared of provincial interest

A bit ago, the city of Concepción del Uruguay declared Smalltalks 2010 of municipal interest. Then, the Science, Technology and Innovation Agency of Entre Ríos (ACTIER) declared Smalltalks 2010 of interest, as detailed here. Now, the Province (State) of Entre Ríos has just declared Smalltalks 2010 of provincial interest on the grounds that:

  • The faculty of the conference's venue, the Universidad Tecnológica Nacional campus at Concepción del Uruguay, proposed the motion to declare Smalltalks 2010, organized by the Fundación Argentina de Smalltalk (FAST) and to be held in Concepción del Uruguay from November 11th through November 13th, of provincial interest;
  • Said foundation is dedicated to supporting and communicating the work of Argentine Smalltalk developers, faculty and researchers within the international Smalltalk community;
  • Said event has been organized annually since 2007, with local and international attendance including students, developers, faculty and researchers, who offer Smalltalk technology presentations and tutorials of great interest to the audience.

Many thanks go to Entre Ríos' Governor Don Sergio Daniel Urribarri, and to Entre Ríos Secretary of State and Minister of Education and Justice Cr. Adán Humberto Bahl.

New Lulu coupon, expires October 11th

Enter coupon code EXPLORE305 at checkout and receive 14.92% off any order. Maximum savings is $50. Expires on October 11, 2010 at 11:59 PM. Enjoy!

Sunday, October 03, 2010

About Google's new WebP photo format

Google has recently released a new lossy photo format called WebP. The claim is that it compresses photos more effectively than JPG, thus reducing the file size, and yet the resulting file could be mistaken for a JPG file in terms of quality. Sometimes, size gains of about 40% are claimed. How much of this is actually an advantage of WebP?

Generally speaking, there are several problems with this type of assertion. First, over time it has become clear to me that several sources of JPG files do a terrible compression job. Usually the problem is that, after the lossy image representation is derived, this representation has to be packed losslessly to produce the JPG file, and the lossless compression method used is not very good. For example, let's take Photoshop. If I save an "optimized baseline" medium quality JPG file of the mentoring course book's cover, I get a 194kb file. If I look into it with a hex editor, I see several sections that are obviously not compressed very well. To prove the point, if I compress Photoshop's 194kb JPG file with rar, I get a 108kb archive. Similarly, digital cameras typically produce huge JPG files that, upon inspection, look as if the firmware prioritized coding speed over coding efficiency. In short, some JPG files are compressible enough that WebP's file size advantage may be a result of poor internal JPG lossless compression. But then, why not fix JPG's lossless compression?
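
You can run this redundancy check yourself. Here is a minimal sketch using zlib's compress2() that reports how much smaller a supposedly packed file becomes when deflated; a well-packed file should be nearly incompressible, so a large gain hints at a weak entropy coding stage (the file name is whatever you pass on the command line):

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>   /* link with -lz */

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s file.jpg\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        fseek(f, 0, SEEK_SET);

        unsigned char *in = malloc(size);
        if (!in || fread(in, 1, size, f) != (size_t)size) {
            perror("fread");
            return 1;
        }
        fclose(f);

        /* Deflate the whole file in one shot at maximum compression. */
        uLongf outSize = compressBound(size);
        unsigned char *out = malloc(outSize);
        if (!out || compress2(out, &outSize, in, size, Z_BEST_COMPRESSION) != Z_OK) {
            fprintf(stderr, "compress2 failed\n");
            return 1;
        }
        printf("original: %ld bytes, deflated: %lu bytes (%.1f%%)\n",
               size, (unsigned long)outSize, 100.0 * outSize / size);
        return 0;
    }

If a packed file deflates substantially, its entropy coding stage left something on the table; a well-packed file should come out at essentially 100% of its original size.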

Moreover, Google used specific JPG encoder libraries in their benchmarks. How do you know these are coding the image effectively? For instance, I remember that an old program called Image Alchemy had a normal JPG mode, and an "optimized Huffman" mode that regularly produced smaller files than the normal mode. The "Huffman optimization" was a secondary pass over the already lossy representation, so this optimization did not result in additional information loss. In addition, JPG provides for arithmetic coding, which should result in smaller files. However, arithmetic coding is not always used for compatibility reasons. And, even if it were, arithmetic coding's representation efficiency is critically dependent on the compression model driving the probability predictions. When Google compares effectively random JPG files from the web with their WebP counterparts, how much of the comparison is between a specific JPG encoder library and WebP, as opposed to between JPG's intrinsic efficiency and WebP's intrinsic efficiency? In the case of JPG with arithmetic coding, how much of the comparison is between a (probably unsophisticated) probability model driving the arithmetic coder and WebP's compression format?

Sometimes, recompressing a JPG file with a more efficient JPG compressor makes a huge difference. I have personally seen 7mb Photoshop JPG files go down to 1mb JPG files that cannot be distinguished from the original. Part of the problem with recompressing JPG files is, well, how do you know that the first JPG compression pass did not make it easier for the second compressor to produce a smaller file? Would the different compressors produce significantly different files if they started from the same original photo? This is a problem because Google used WebP to recompress existing JPG files. It would have been more interesting to, say, obtain a significant sample of photos stored in raw format first, and then compare the results of packing the raw files with JPG and WebP. As it stands, Google's presentation of WebP in contrast with JPG is not a true apples-to-apples comparison.

Finally, different JPG encoders have varying ideas of what "quality" means. For instance, if quality in encoder A is specified with a number between 0 and 100, is using 30 equivalent to encoder B's quality of 3 on a scale from 0 to 10? How do the "quality" settings of various JPG encoders compare to that of WebP? If this is not known, then what does it mean when Google claims WebP produces smaller files? Similarly, I do not think there is a clear notion of what "photo quality" means. How can you tell what JPG's SNR is compared to WebP's SNR if you do not have access to the original photo? And if you do not have SNR information, how do you know that WebP is doing a better encoding job, and how do you assess the smaller file size claim? For instance, the chroma information in the sample photo with the guy against the blue background is obviously very different in WebP. Why? Also, note that SNR is not the only way to measure picture quality. What about the psychovisual enhancements provided by, e.g., DivX and x264? Those can make photos and video (I get to say video because WebP is derived from VP8) look better, even if the PSNR is lower.
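
For reference, PSNR itself is easy to compute once you do have the original: it is just the mean squared error between the two images, expressed in decibels relative to the peak sample value. A minimal sketch for two same-sized 8-bit grayscale buffers (the function name is mine):

    #include <math.h>
    #include <stddef.h>

    /* PSNR = 10 * log10(peak^2 / MSE), with peak = 255 for 8-bit samples.
       Higher means closer to the original; identical buffers yield infinity. */
    double psnr(const unsigned char *a, const unsigned char *b, size_t n)
    {
        double mse = 0.0;
        for (size_t i = 0; i < n; i++) {
            double d = (double)a[i] - (double)b[i];
            mse += d * d;
        }
        mse /= (double)n;
        return mse == 0.0 ? INFINITY : 10.0 * log10(255.0 * 255.0 / mse);
    }

The catch, as noted above, is that without the uncompressed original there is nothing to feed the reference buffer, and even with it, two codecs can produce equal PSNR yet look very different to the eye.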

For these reasons, it's not clear to me that WebP is necessarily all that it's claimed to be. And if you cannot tell, then why is the introduction of another compression format preferable over providing improved JPG coders that produce results similar to WebP? For example, I took one of Google's comparison JPG photos and repacked it with a more efficient JPG encoder. I got a file size essentially equal to that of WebP. Why bother with WebP if the same results can be achieved with an existing format?

I wish Google would provide a more in-depth analysis on its WebP page. I do not mean to imply that WebP is not a better encoding mechanism than JPG. Given that it has the benefit of ~20 years of research compared to JPG's original early-1990s specification, it probably is better. However, the main claim of WebP is that it produces smaller files, thus alleviating the problem of transmitting JPG files over the internet, and I have not seen enough evidence to support this claim other than "we recompressed a random sample of photos already compressed with random JPG packers and we got smaller files". Sure, but you could have achieved that with a better JPG coder, or even gzip. Thus, I can't help wondering whether the format is really an attempt to further popularize their VP8 video codec, from which WebP is derived...

Speaking of video codecs, one criticism of JPG is that it introduces blockiness because it compresses the photo in 8x8 chunks. WebP seems to handle blockiness better. But WebP is derived from a video codec, and many modern video codecs apply deblocking when decompressing. If WebP is using a deblocker, then the comparison with JPG is further suspect. In other words, if you added a deblocker to your standard JPG decoder, would you achieve results comparable to those of WebP? Without technical details, how do you know?
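
To give an idea of how cheap such a post-processing step can be, here is a toy deblocker sketch (nothing like VP8's or x264's loop filters, just the basic idea): at each vertical 8x8 block boundary, smooth the two pixels straddling the edge, but only when the step across the edge is small, since a small step is more likely a coding artifact than real image detail.

    #include <stdlib.h>

    /* img: 8-bit grayscale image, width x height, rows stored contiguously.
       threshold: maximum edge step that is still treated as an artifact. */
    void deblock_vertical_edges(unsigned char *img, int width, int height,
                                int threshold)
    {
        for (int y = 0; y < height; y++) {
            for (int x = 8; x < width; x += 8) {  /* each block boundary */
                unsigned char *p = img + (size_t)y * width + x;
                int step = p[0] - p[-1];
                if (abs(step) <= threshold) {     /* soften small steps only */
                    p[-1] = (unsigned char)(p[-1] + step / 4);
                    p[0]  = (unsigned char)(p[0]  - step / 4);
                }
            }
        }
    }

A real filter also handles horizontal edges and adapts its strength to the quantization level, but even this crude version hides the 8x8 grid somewhat, which is exactly why a decoder-side deblocker muddies any JPG versus WebP comparison.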

For more details, see for example here (and make sure to read the comments!). Also, for an analysis of WebP seen as a VP8 I-frame, see here. Ouch!

Meanwhile, if the goal is to speed up the web, could we please have an HTTP extension such that loading a website requires only one request? Certainly, opening numerous connections, one for every single page element, introduces round trip latencies and other problems, such as suboptimal use of frames that never reach the MTU packet size. If we switched to a single connection, streaming the individual files over it with some sort of tar transport, then we could run even a very simple compression scheme, such as v.42bis or v.44, on the stream, so that all the easily compressible information is crunched on the fly for faster throughput. For compression examples, see here (although note this one does not seem to provide the tar capability) and Google's own research here.
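
As a rough sketch of the streaming half of that idea (using zlib's deflate in place of v.42bis/v.44, and omitting the per-file headers a real tar-style transport would need), here is how several page resources could be crunched on the fly into one continuous compressed stream over a single connection:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>   /* link with -lz */

    int main(int argc, char **argv)
    {
        z_stream zs;
        memset(&zs, 0, sizeof zs);
        if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK) return 1;

        unsigned char in[16384], out[16384];
        for (int i = 1; i < argc; i++) {  /* each argument: one page resource */
            FILE *f = fopen(argv[i], "rb");
            if (!f) { perror(argv[i]); return 1; }
            size_t n;
            while ((n = fread(in, 1, sizeof in, f)) > 0) {
                zs.next_in = in;
                zs.avail_in = (uInt)n;
                do {  /* emit compressed output as it becomes available */
                    zs.next_out = out;
                    zs.avail_out = sizeof out;
                    deflate(&zs, Z_NO_FLUSH);
                    fwrite(out, 1, sizeof out - zs.avail_out, stdout);
                } while (zs.avail_out == 0);
            }
            fclose(f);
        }
        int ret;  /* flush whatever deflate is still holding */
        do {
            zs.next_out = out;
            zs.avail_out = sizeof out;
            ret = deflate(&zs, Z_FINISH);
            fwrite(out, 1, sizeof out - zs.avail_out, stdout);
        } while (ret != Z_STREAM_END);
        deflateEnd(&zs);
        return 0;
    }

Because all the files share one compression context, repeated markup and script fragments across resources compress against each other, which is precisely what per-request compression cannot do.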

Friday, October 01, 2010

Lulu's October 10% off coupon

Lulu's 10% off coupon for October is HARVEST. Enjoy!

Enter coupon code "HARVEST" during checkout and save 10% off the purchase price. Discount cannot be used to pay for, nor shall be applied to, applicable taxes or shipping and handling charges. Maximum amount that may be applied to discount is $10.00 per account. Promotional codes cannot be applied to any previous orders. No exchanges or substitutions allowed. Only one valid promotional code may be used per order. Offer expires October 31, 2010 at 11:59 PM EDT. Lulu.com reserves the right to change or revoke this offer at any time. Void where prohibited.