Tool Writing: A Forgotten Art?
Merely adding features does not make it easier for users to do things—it just makes the manual thicker. The right solution in the right place is always more effective than haphazard hacking.
— Brian W. Kernighan and Rob Pike
In 1994 Chidamber and Kemerer defined a set of six simple metrics for object-oriented programs. Although the number of object-oriented metrics swelled to more than 300 in the years that followed, I had a case where I preferred to use the original classic metric set for its clarity, consistency, and simplicity. Surprisingly, none of the six open-source tools I found and tried to use fitted the bill. Most calculated only a subset of the six metrics, some required tweaking to make them compile, others had very specific dependencies on other projects (for example, Eclipse), and others were horrendously inefficient. Although none of the tools I surveyed managed to correctly calculate the six classic Chidamber and Kemerer metrics in a straightforward way, most of them included numerous bells and whistles, such as graphical interfaces, XML output, and bindings to tools like ant and Eclipse.
As an experiment, I decided to implement a tool fitting my needs from scratch, to see how difficult the task would be. In the process I discovered something more important than what I had bargained for: writing standalone tools that can be efficiently combined with others to handle more demanding tasks appears to be becoming a forgotten art.
Going the Unix way
My design ideal for the tool I set out to implement was the filter interface provided by the majority of Unix-based tools. These are designed around a set of simple principles (see Kernighan and Pike's The Unix Programming Environment (Prentice-Hall, 1984) and Raymond's The Art of Unix Programming (Addison-Wesley, 2003)):
- Each tool is responsible for doing a single job well.
- Tools generate textual output that can be used by other tools. In particular this means that the program output will not contain decorative headers and trailing information.
- Tools can accept input generated by other tools.
- The tools are capable of stand-alone execution, without user intervention.
- Functionality should be placed where it will do the most good.
Apart from the temptations I will describe later on, these principles are very easy to adopt. The 1979 7th Edition Unix version of the cat command is 62 lines long; the corresponding echo command is 22 lines long (double the size of the 1975 6th Edition version).[1] Nevertheless, tools designed following these principles easily become perennial classics and can be combined with others in remarkably powerful ways. As an example, the 9-line, 30-year-old 6th Edition Unix version of the echo command could be used directly today, as a drop-in replacement, in 5705 places in the current version of the FreeBSD operating system source code; the 26-year-old and slightly more powerful 7th Edition version would be needed in another 249 instances.[2] Nowadays, tools following these conventions are also widely available in open-source implementations for systems such as Linux, Windows, *BSD, and Mac OS X.
Following the principles I described, the ckjm metric tool I implemented operates on a list of compiled Java classes (or pairs of archive names followed by a Java class) specified as arguments or read from its standard input. It prints on its standard output a single line for each class, containing the class name and the values of the six metrics. This design allows us to use pipelines and external tools to select the classes to process or to format the output; refer to the tool's web site for specific examples.[3] Given ckjm's simplicity and paucity of features, I was not surprised to find it was both more stable and more efficient than the tools I had been trying to use: by ignoring irrelevant interfacing requirements I was able to concentrate my efforts on the tool's essential elements.
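To give a flavor of this style of use, the following sketch selects the classes with find and post-processes ckjm's output with standard tools; the jar file name, the build directory, and the metric column positions are illustrative assumptions rather than documented details.

    find build -name '*.class' -print |   # select the classes to process
      java -jar ckjm.jar |                # one line per class: its name, then the metric values
      sort -n -k 2 |                      # order numerically by the metric in the second column
      tail -n 10                          # keep the ten classes with the largest values

Because ckjm is a plain filter, swapping the final stages for an awk filter, a join against another report, or a plotting tool requires no change to ckjm itself.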
Temptation calls
A month after I put the tool's source on the web I received an email from a brilliant young Dutch programmer colleague. He had enhanced the tool I wrote, integrating it with the ant Java-based build tool and adding an option for XML output. He also supplied me with a couple of XSL scripts that transformed the XML output into nicely formatted HTML. Although the code was well written and the new facilities appeared alluring, I am afraid my initial reply was not exactly welcoming.
The perils of tool-specific integration
Allowing the tool to be used from within ant sounds like a good idea, until we consider the kind of dependency this integration creates. With the proposed enhancements the ckjm tool's source code imports six different ant classes, and therefore the enhancements create a dependency between one general-purpose tool and another. Consider now what would happen if we also integrated ckjm with Eclipse and with a third package, say one for drawing graphics. Through these dependencies our ckjm tool would become highly unstable: any interface change in any of the three tools would require us to adjust ckjm correspondingly. The functionality provided by the imported ant classes is certainly useful: it gives us a generalized and portable way to specify sets of files. However, providing this functionality within one tool (ant) violates the principle of placing functionality where it will do the most good. Many other tools would benefit from this facility; therefore the DirectoryScanner class provided by ant should instead be part of a more general tool or facility.
In general, the ant interfaces provide services for performing tasks that are already supported reasonably well as general-purpose abstractions in most modern operating systems, including Windows and Unix. These abstractions include the execution of a process, the specification of its arguments, and the redirection of its output. Creating a different, incompatible interface for these facilities is not only gratuitous; it also relegates venerable tools developed over the last 30 years to second-class citizens. This approach simply does not scale. We cannot require each tool to support the peculiar interfaces of every other tool, especially when there are existing conventions and interfaces that have withstood the test of time. We have a lot to gain if the tools we implement, whether in C, Java, C#, or Perl, follow the conventions and principles I outlined in the beginning.
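To illustrate the point, the same needs can be expressed with the general-purpose facilities just mentioned: find describes the kind of include/exclude file set that DirectoryScanner would, and the shell handles process execution and output redirection (the paths and file names here are illustrative).

    find build -name '*.class' ! -path '*test*' -print |  # one include/exclude file set, usable by any tool
      java -jar ckjm.jar > metrics.txt                     # process execution and redirection handled by the shell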
The problems of XML output
Adapting a tool for XML output is less troublesome, because XML data solves some real problems. The typeless textual output of Unix tools can become a source of errors. If the output format of a Unix-type tool changes, tools further down a processing pipeline will continue to happily accept and process their input assuming it follows the earlier format. We will only realize that something is amiss if and when we see that the final results don't match our expectations. In addition, there are limits to what can be represented using space-separated fields with newline-separated records. XML allows us to represent more complex data structures in a generalized and portable way. Finally, XML allows us to use some powerful general-purpose verification, data query, and data manipulation tools.
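To see how this can bite, consider a downstream consumer keyed on column positions; the positions here are assumed only for illustration.

    java -jar ckjm.jar build/*.class |
      awk '{ print $1, $7 }'    # the class name and what we assume is the seventh column
    # If a later version of the producing tool inserted an extra column before
    # that field, this consumer would keep running and silently report the
    # wrong value; nothing in the pipeline would flag the change.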
On the other hand, because XML intermixes data with metadata and abandons the simple textual line-oriented format, it shuts out most of the tools that belong to a Unix programmer's tool bench. XSL transformations may be powerful, but because they are implemented within monolithic all-encompassing tools, any operation not supported becomes exceedingly difficult to implement. Under the Unix open-ended specialized tool paradigm, if we want to perform a topological sort on our data to order a list of dependencies, there is a tool, tsort, to do exactly that; if we want to spell-check a tool's output, again we can easily do it by adding the appropriate commands to our pipeline.
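For instance, both of the operations just mentioned are one pipeline step away (the file names are illustrative):

    tsort class-dependencies.txt                # topologically order a file of "A B" dependency pairs
    java -jar ckjm.jar build/*.class | spell    # spell-check a tool's output by appending one command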
Another problem with XML-based operations is that their implementation tends to be an order of magnitude more verbose than the corresponding Unix command incantations. As a thoroughly unscientific experiment I asked a colleague to rewrite into XSL an awk one-liner I used for finding Java packages with low abstractness and instability values. The 13-line code snippet he wrote was certainly less cryptic and more robust than my one-liner. However, within the context of tools we use to simplify our everyday tasks, I consider the XSL approach unsuitable. We can casually write a one-liner as a prototype and then gradually enhance it in an exploratory, incremental way if the initial version does not directly fit our needs (following the Pareto principle, 80% of the time it will). Writing 13 lines of XSL is not a similarly lightweight task. As a result we have fewer opportunities to use our tools and to become proficient in exploiting them.
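A hypothetical one-liner in the same spirit (not the original one; it assumes a per-package report whose first three columns are the package name, abstractness A, and instability I) might be:

    awk '$2 < 0.3 && $3 < 0.3 { print $1 }' package-metrics.txt   # packages with both low A and low I

Adjusting the threshold or piping the result through sort is a matter of a few keystrokes, which is exactly the kind of incremental exploration described above.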
Finally, although adding XML output to a tool may sound enticing, it appears to be a first step down a slippery slope. If we add direct XML output (ckjm's documentation already included an example of how to transform its output into XML using a 13-line sed script), why not also allow the tool to write its results into a relational database via JDBC? Surely the end result would be more robust and efficient than combining some existing tools. Then comes the database configuration interface, chart output, a GUI environment, a report designer, a scripting language, and, who knows, maybe the ability to share the results over a peer-to-peer network.
Realpolitik
The ant integration and XML output will be part of ckjm by the time you read these lines, probably as optional components. Emerson famously wrote that "A foolish consistency is the hobgoblin of little minds." Spreading ideology by alienating users and restricting a tool's appeal sounds counterproductive to me. Nevertheless, the next time you ask or pay for tighter integration or a richer input or output format for a given tool, please consider whether what you are asking for can already be accomplished in a more general fashion, and what this new feature will cost in terms of the stability, interoperability, and orthogonality of your environment.
[1] http://minnie.tuhs.org/UnixTree/
[2] The 7th Edition version supports an option for omitting the trailing newline character. I derived both numbers in less than a minute by combining seven Unix tools.
[3] http://www.spinellis.gr/sw/ckjm
* This piece has been published in the IEEE Software magazine Tools of the Trade column, and should be cited as follows: Diomidis Spinellis. Tool Writing: A Forgotten Art? IEEE Software, 22(4):9–11, July/August 2005. (doi:10.1109/MS.2005.111)