Saturday, May 17, 2008

Progress on Assessments

The other night I was working on Assessments, and something about their implementation was bothering me. For the sake of simplicity, the reference implementation uses exceptions to notify the assessment engine that a check has passed. Running it through a profiler showed me once more that exception handling is comparatively expensive.

In other words, raising an exception should be something truly exceptional, for otherwise the performance hit associated with handling it can be quite stiff. This clearly shows up in profiler runs, particularly when the code being wrapped in exception handlers executes quickly. In these situations, exception handling can easily dominate the execution cost.
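This is not Assessments' own Smalltalk, but the effect is easy to sketch in Python with two hypothetical checks: one that signals success by raising (as in the reference implementation described above) and one that signals it with an ordinary return value. The names and the iteration count here are my own illustration, not anything from Assessments itself; on most runtimes the raise-and-catch path comes out noticeably slower.

```python
import timeit

class CheckPassed(Exception):
    """Hypothetical signal: the check passed (exception-based protocol)."""

def check_via_exception():
    # Signal success by raising; the caller's handler treats it as a pass.
    try:
        raise CheckPassed()
    except CheckPassed:
        return True

def check_via_return():
    # Signal success with a plain return value; no stack unwinding involved.
    return True

exc_time = timeit.timeit(check_via_exception, number=200_000)
ret_time = timeit.timeit(check_via_return, number=200_000)
print(f"exception-based: {exc_time:.3f}s  return-based: {ret_time:.3f}s")
```

Since every single check takes the raise-and-catch path, the cost is paid on each pass rather than in rare failure cases, which is exactly why it shows up so prominently in profiler runs.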

One of the design goals of Assessments is to allow a lot of flexibility in how assessments are evaluated --- or, in SUnitesque terms, how test cases are run. So I thought a good test of whether the implementation was actually as flexible as I had intended would be to address the performance issue by letting a different way of evaluating assessments coexist with the original one.

The implementation passed the stress test successfully. The change involved adding a new class and refining three methods. Best of all, thanks to inheritance and the use of parallel equivalent test hierarchies (see Chapter 3 of the mentoring book), the existing assessments for Assessments itself could easily be refined to exercise the new evaluation mechanism with minimal changes to the testing code. This is good because running the same assertions against both mechanisms shows that they behave in exactly the same way, despite handling passes quite differently.
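The shape of the arrangement can be sketched like this, again in Python rather than the actual Smalltalk. All class and method names here are hypothetical stand-ins: an exception-based evaluator, a refinement of it that signals passes with return values, and a test hierarchy whose subclass inherits every assertion while swapping in the new mechanism, so both mechanisms are proven to agree.

```python
import unittest

class ExceptionPassed(Exception):
    """Hypothetical signal used by the exception-based protocol."""

class ExceptionEvaluator:
    """Sketch of the reference mechanism: a pass is signalled by raising."""
    def run(self, check):
        try:
            check(self)
        except ExceptionPassed:
            return "pass"
        return "fail"
    def signal_pass(self):
        raise ExceptionPassed()

class ReturnEvaluator(ExceptionEvaluator):
    """Refined mechanism: a pass is an ordinary return, no unwinding."""
    def run(self, check):
        return "pass" if check(self) else "fail"
    def signal_pass(self):
        return True

class EvaluatorTests(unittest.TestCase):
    """Assertions written once, against whichever evaluator the hook answers."""
    def make_evaluator(self):
        return ExceptionEvaluator()
    def test_pass_is_reported(self):
        evaluator = self.make_evaluator()
        # The check delegates signalling to the evaluator, so the same
        # check body runs unchanged under either mechanism.
        self.assertEqual(evaluator.run(lambda e: e.signal_pass()), "pass")

class ReturnEvaluatorTests(EvaluatorTests):
    """Parallel subclass: inherits every assertion, swaps the mechanism."""
    def make_evaluator(self):
        return ReturnEvaluator()

suite = unittest.TestSuite()
for case in (EvaluatorTests, ReturnEvaluatorTests):
    suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(case))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The only thing the subclass overrides is which evaluator is under test; because every assertion is inherited, passing in both hierarchies is evidence that the two mechanisms behave identically.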

Proper implementation of the tests is just as important as proper implementation of the feature.
