It seems that I got too carried away writing about DHH’s keynote and forgot to mention the other sessions I attended that day. The first one was Hacking the Mid-End: Unobtrusive Scripting and Advanced UI Techniques in Rails, followed by Rails Software Metrics and Modeling Denormalization – The Speed You Need, the Order You Crave. I’m going to cover each one below.
Hacking the Mid-End
In Hacking the Mid-End: Unobtrusive Scripting and Advanced UI Techniques in Rails Michael Bleigh argued that there is a growing area between the back-end (the Model and Controller layers in MVC) and the front-end (the View) that he calls the Mid-End. In most Web 2.0 applications, the architecture is really something like MVC+I, where I stands for Interaction. This area contains all the non-trivial code in the presentation layer that usually can’t be written by HTML/CSS designers: various AJAX calls, progress bars, fancy file uploaders, drag’n’drop support and so on.
Michael said that the Mid-End developer facilitates cooperation between front-end and back-end, providing helpers and tools for the front-end designer and building on structures provided by back-end developers. The Mid-End developer’s goals are to make the application Fast, Accessible, Intuitive, and Responsive. He then showed two examples of what he means by that. The first was about making a slow action more responsive: in the original version, after clicking the link the user had to wait about 10 seconds before the resulting page loaded.
The second example was about making simple dynamic tabs. When the user has JS enabled, only one of the tabs is visible and clicking a tab header switches the visible part. With no JS, all the information is displayed as a list of sections with headlines, and clicking a tab header jumps to the appropriate section.
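The pattern Michael described can be sketched as a plain Ruby helper that emits the no-JS fallback markup: a row of anchor links followed by the full sections, each with an `id` the links jump to. The helper name and markup below are my own illustration, not code from the talk; a few lines of JavaScript (omitted here) would then hide all but one section and rewire the header links into tab switches.

```ruby
# Illustrative helper: renders sections so the page degrades gracefully.
# Without JS, the anchors simply jump to the matching section below.
def tabbed_sections(sections)
  headers = sections.keys.map { |title|
    %(<a href="##{title.downcase}">#{title}</a>)
  }.join(" ")

  bodies = sections.map { |title, body|
    %(<div class="tab" id="#{title.downcase}"><h3>#{title}</h3>#{body}</div>)
  }.join

  %(<div class="tabs">#{headers}#{bodies}</div>)
end
```

The key design choice is that the server never knows or cares whether JS is on: the same markup serves both cases, and the script only enhances it.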
Rails Software Metrics
The next session was Rails Software Metrics, presented by Roderick von Domburg. Roderick started by saying that he would talk only about tools, not prescribe best practices. The first tool he covered was, quite surprisingly, rake stats. The results this tool provides are not very interesting on their own; the key is to graph them over time, which makes them much more useful.
Roderick showed graphs of lines of code, test-to-code ratio, average methods and lines per class, and average lines per method, all derived from rake stats. Combining the graphs of these metrics can tell you many useful things about your code (you are testing too little or too much, your methods are too long) without installing any other tools.
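The “graph it over time” idea only needs a tiny bit of glue: parse the summary line that rake stats prints and append a dated row to a CSV you can chart later. This is a minimal sketch of that glue, assuming the standard “Code LOC: … Test LOC: …” summary format; the function names are my own.

```ruby
require "csv"
require "date"

# Parse the totals out of `rake stats` output into a dated row.
def stats_row(output)
  code = output[/Code LOC: (\d+)/, 1].to_i
  test = output[/Test LOC: (\d+)/, 1].to_i
  ratio = code.zero? ? 0.0 : (test.to_f / code).round(2)
  [Date.today.to_s, code, test, ratio]
end

# Append the row to a history file, e.g. from a nightly cron job.
def append_stats(row, path)
  CSV.open(path, "a") { |csv| csv << row }
end
```

Run something like `rake stats | tee` into this once a day and any spreadsheet can plot the trend.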
The next tool covered was flog, which measures code complexity. Flog works in a “decidedly unscientific” way, assigning arbitrarily chosen weights to various constructs (6 points per eval, 1.2 points per if) and reporting totals and averages for your classes and methods. Again, a single flog run is useful on its own (you can spot methods that need immediate refactoring), but it becomes really valuable when graphed over time: you can observe negative tendencies and take countermeasures when appropriate.
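To make the penalty idea concrete, here is a generic before/after sketch (my own example, not from the talk) of the kind of refactoring a high flog score tends to suggest: each branch adds to the score, so replacing a nested conditional with a lookup table lowers it while preserving behavior.

```ruby
# Branch-heavy version: the nested ifs each add to the flog score.
def shipping_cost(order)
  if order[:express]
    if order[:weight] > 10 then 30 else 20 end
  else
    if order[:weight] > 10 then 15 else 5 end
  end
end

# Flatter version: a lookup table keyed on the two conditions.
RATES = {
  [true,  true]  => 30, [true,  false] => 20,
  [false, true]  => 15, [false, false] => 5,
}

def shipping_cost_refactored(order)
  RATES[[!!order[:express], order[:weight] > 10]]
end
```

The point is not the exact numbers flog assigns but that its score moves in the same direction a reviewer’s gut feeling would.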
Rcov was next, but since we use it in all our projects, there was nothing new for me here. I’m always surprised when people say (as Roderick did) that it’s really hard to get 100% code coverage (I sometimes suspect they never really tried; they just assume it’s too hard), that you shouldn’t try too hard because it’s not worth it, and that 100% code coverage doesn’t prove anything anyway. They argue that with 100% coverage you test many trivial pieces of code that aren’t worth the effort, while many non-trivial pieces are covered only accidentally.
This is mostly true except for the “it’s too hard” part. My team uses TDD methodology and we have no problems with achieving 100% code coverage. We don’t find it too hard or too wasteful either. Oh, and by the way, it’s much easier to keep coverage at 100% if you have 100% from the start.
Roderick then briefly covered heckle, which mutates your code and checks whether your tests fail. Heckle is still in an experimental phase and it’s not something you would want to run on every build, but it’s fun to play with nonetheless. The next tool was saikuro, a cyclomatic complexity analyzer. It’s similar to flog but takes a more scientific approach. It also generates nice HTML reports similar to rcov’s, so you can inspect any code that generates warnings.
Modeling Denormalization
In Modeling Denormalization – The Speed You Need, the Order You Crave Duncan Beevers from Kongregate talked about how to use denormalization techniques to make data retrieval faster. Denormalization means duplicating data from one model to another and storing the result of a calculation in the database once you’ve computed it, instead of recomputing it on every read. This technique is present in Rails in the form of counter caches.
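In Rails a counter cache is a one-line declaration (`belongs_to :post, :counter_cache => true` plus a `comments_count` column). The plain-Ruby sketch below shows the mechanism behind it, outside of ActiveRecord: the parent keeps a precomputed count that is bumped on every child creation, so reads never need a COUNT(*) query. The class and attribute names are illustrative.

```ruby
# The denormalized side: Post carries a comments_count column
# so listings can show the count without touching the comments table.
class Post
  attr_accessor :comments_count
  attr_reader :comments

  def initialize
    @comments = []
    @comments_count = 0
  end
end

class Comment
  def initialize(post)
    @post = post
    post.comments << self
    post.comments_count += 1  # keep the cached counter in sync on write
  end
end
```

The trade-off is the classic one Duncan described: a little extra work (and risk of drift) on every write buys much cheaper reads.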
If your application is write-heavy, you should not go overboard with adding indexes, because each index slows down writes. Another piece of advice was that real tables with calculated data are better than triggers and views. The problem with triggers is that one update can fire several triggers and you have no control over it. It becomes a real issue when you have 20,000 users updating their statistics each second (if I remember the number Duncan gave correctly). In that case batch processing is much better.
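The batch-processing idea can be sketched in a few lines: instead of recomputing aggregates inside every write (as a trigger would), the write path only marks rows as stale, and a periodic job recomputes each dirty row once. This is my own illustration of the pattern; `recompute` stands in for whatever aggregate query the real system would run.

```ruby
require "set"

class StatsBatch
  def initialize
    @dirty = Set.new
  end

  # Called from the write path: cheap, no recalculation here.
  def touch(user_id)
    @dirty << user_id
  end

  # Called periodically: recompute each stale row exactly once,
  # no matter how many writes touched it since the last run.
  def run(scores)
    @dirty.each { |id| scores[id] = recompute(id) }
    processed = @dirty.size
    @dirty.clear
    processed
  end

  private

  # Stand-in for the real aggregate query.
  def recompute(id)
    id * 10
  end
end
```

The win over triggers is visible in the Set: a user who updates a hundred times between batch runs still costs only one recalculation.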
Duncan’s presentation had a somewhat unexpected ending: he fainted after showing the last slide. Some people from the first rows helped him up and gave him a glass of water, and he was OK within a minute. The incident earned him a much warmer reception from the audience.
I don’t know if this is of any interest to you, but most of this article was written while traveling back from Berlin to Wrocław after the conference. Ah, the joys of modern technology.