Test Coverage Reporting on Oracle 10g

February 17, 2009

In a previous post, I mentioned that my colleague Eduardo Morelli and I had made an effort to bring some of the standard quality tools from the software development world (automated tests, and test coverage reporting) over to the RDBMS world. In that post, I went into some detail on FIT4DBs, our automated unit testing framework. Now I’d like to reveal some of the secrets to our Oracle test coverage reporting tool, called OracleTC.

Test Coverage Reporting

OracleTC was more or less modeled on the Cobertura test coverage tool that we are using for our Java code. The basic idea of any tool like this is to perform the following steps:

  1. Somehow wrap your normal code (via an interceptor, a decorator, or some other pattern, usually in as unobtrusive a way as possible) with some special code that can observe what is being executed
  2. Execute your suite of tests
  3. Collect statistics regarding the number of lines/paths/functions executed during your test
  4. Generate a report comparing the executed code to the total set of code that SHOULD have (or at least could have) been executed during your tests

The result is what we call the “code coverage” of your test suite. For example, if you are measuring at the level of functions, and your suite executes 15 out of 20 possible functions, your test suite “covers” 75% of your total number of functions. Ideally, you want to be finer-grained than that. Just because a function has been executed once doesn’t mean it’s been fully tested. A good test suite should cover all possible variations on the outcome of a method or function, including edge cases (errors, exceptions, unusual inputs and so on). So, many test coverage suites report on line, or statement, coverage (did your tests execute every single statement in the function?).

The more sophisticated ones will actually try to capture path coverage, which is even more comprehensive than line coverage. For example, imagine you have the following code:

myObj = new Obj()
if (a == true) {
    myObj = null
}
if (b == true) {
    myObj.doSomething()
}

In the above example, we could get 100% line coverage by running one test with a=true and b=false, and another with a=false and b=true. Does that mean we’ve tested everything? No. We’ve missed the case where both a and b are true, which would be a disaster.

Observing Tests in Oracle

Anyway, back to OracleTC. We wanted to see if we were covering all our bases with our automated tests. To do this, we had to find a solution for the first step above: how could we slip in some code to actually observe what was going on inside Oracle? The usual solution to this problem, when you don’t have access to the guts of your system, is to write some sort of “pre-compiler” that modifies your code (in this case, the stored procedures and triggers and such) to add some sort of tracing to the works (INSERT INTO ORACLETC_TRACE (PROCNAME, START_TIME, END_TIME, PARAMS) VALUES (blah blah…)). This can be pretty intrusive, and will probably only get you as far as procedure-level coverage reporting. It’s more work than we were ready to put into the tool, and I would rather throw in the towel than go through that.

Fortunately, Eduardo (and Oracle) came to the rescue! He found a package called DBMS_PROFILER, which is provided with Oracle. It’s meant to be used as a tool for performance profiling. But look what it can tell you (text copied from reference link):

  1. Total number of times each line was executed.
  2. Time spent executing each line. This includes SQL statements.
  3. Minimum and maximum duration spent on a specific line of code.
  4. Code that is actually being executed for a given scenario.

There you have it! Test coverage reporting nearly out of the box! The package unobtrusively wraps your code executions, once you’ve enabled it for your session, and reports all its data, line-by-line, into some reporting tables that it sets up. We did have to figure out how to enable the profiling for ALL sessions, since our tests are multi-threaded, but the solution is to use a simple trigger (note that we created our own utility package called PKG_PROFILER to make things easier, but you get the idea):

 CREATE OR REPLACE TRIGGER trg_start_profiler
 AFTER LOGON ON schema
 BEGIN
   -- the trigger and procedure names here are our own;
   -- PKG_PROFILER wraps the standard DBMS_PROFILER calls
   pkg_profiler.prc_start_profiler(
     sys_context('USERENV', 'OS_USER') || '_' ||
     sys_context('USERENV', 'SID') || '_' ||
     sys_context('USERENV', 'TERMINAL'));
 END;
 /
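For a single session, without the trigger, a profiler run boils down to the standard DBMS_PROFILER calls (the run comment is free-form text you can use to identify the run later; the one below is just an example):

```sql
-- Start collecting line-level statistics for this session
BEGIN
  DBMS_PROFILER.START_PROFILER(run_comment => 'nightly test suite');
END;
/

-- ... execute the code under test ...

-- Write the collected statistics to the PLSQL_PROFILER_* tables and stop
BEGIN
  DBMS_PROFILER.FLUSH_DATA;
  DBMS_PROFILER.STOP_PROFILER;
END;
/
```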

Querying the Results

Once you execute your tests, reports can be easily generated by running queries on this data. As an example, the following query displays which lines took the longest execution time:

select * from (
 select u.unit_name, d.line#, s.text,
        numtodsinterval(round(d.total_time/1000000000, 2), 'SECOND') seconds
 from plsql_profiler_data d inner join plsql_profiler_units u
 on (d.runid = u.runid and d.unit_number = u.unit_number)
 inner join user_source s on (s.name = u.unit_name and s.type = u.unit_type and s.line = d.line#)
 where u.runid = 1
 order by 4 desc)
 where rownum <= 5;
Top 5 most consuming statements


We weren’t able to go the whole 9 yards and figure out path coverage, but thanks to DBMS_PROFILER, OracleTC easily supports line-level coverage reporting. Here’s an excerpt of a procedure we created to report the line coverage at a package-level granularity:

 /*
  | Generates a Test Coverage report based on a range of runids.
  | Before activating this procedure, do not forget to SET SERVEROUTPUT ON
  | Parameters:
  |       I_RUNID_START: bottom run id
  |       I_RUNID_END: top run id
  | Example:
  | @TODO:
  |       Create a temporary table to store results
  |       Deal with anonymous blocks
  */
 ...
 DBMS_OUTPUT.PUT_LINE(RPAD(REG.UNIT_NAME, 30, ' ') || '-------------' ||
                      TO_CHAR(V_COVERAGE, '999.99'));
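The heart of that procedure is a query over the profiler tables that, for each unit, compares the lines the profiler saw executed with all the lines it reported. A simplified sketch (the filtering of irrelevant lines like the package declaration is omitted here):

```sql
SELECT u.unit_name,
       ROUND(100 * SUM(CASE WHEN d.total_occur > 0 THEN 1 ELSE 0 END)
                 / COUNT(*), 2) AS coverage_pct
  FROM plsql_profiler_units u
  JOIN plsql_profiler_data d
    ON d.runid = u.runid
   AND d.unit_number = u.unit_number
 WHERE u.runid BETWEEN :i_runid_start AND :i_runid_end
 GROUP BY u.unit_name
 ORDER BY u.unit_name;
```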

The results of executing this procedure look more or less like so:

PKG_MY_INTERFACE         ------------- 100.00
PKG_MY_RUN               -------------   8.35
PKG_CALC_HIST            -------------  33.33
PKG_COPY_STUFF           -------------  47.89
PKG_PT_OUTPUT_CALC       -------------  16.29
PKG_DATA_VALIDATION      -------------  18.03
PKG_DENORMALIZE_DATA     -------------  42.48

This only shows those packages that were actually used during our test suite. What about those that we missed (i.e. with 0% coverage)? Here’s another query that can list those:

select distinct p.object_name
 from plsql_profiler_units u
 right outer join user_procedures p
 on p.object_name = u.unit_name
 where u.unit_name is null;

We can run similar queries to report on triggers as well.
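For example, the never-executed-triggers query just swaps in the user_triggers view (a sketch, assuming trigger names match the profiler’s unit names):

```sql
select distinct t.trigger_name
  from plsql_profiler_units u
  right outer join user_triggers t
    on t.trigger_name = u.unit_name
 where u.unit_name is null;
```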

Shiny Happy Reports

But as I said before, our inspiration was the Cobertura project, and they set the bar pretty high when it comes to coverage reporting. Our next step was to try to replicate their web-based reports, which show an overview of the results (package-level), and then let you drill all the way down to the code itself!


Our first task was to figure out how to get all this data out in a format that could be chewed up and munged by simple scripts until it looks and acts like well-behaved HTML. We chose XML, which has tool support in nearly every scripting and programming language around, and is good for structured data. Once again, Oracle comes to the rescue with their package DBMS_XMLGEN, which can format query result sets into a canonical XML format.

We wanted the whole enchilada, including the source code for our packages, along with the line-by-line statistics. So Eduardo created some procedures that:

  1. Join the data from the profiling stat tables to the Oracle user_source view
  2. Store the results in a “staging table”
  3. Convert the rows to XML-formatted CLOBs (one per module or package – equivalent to one per original source file in our code base)
  4. Spit them out into a folder as XML-formatted text files, using the package name as the file name

Here’s the code that populates the staging table for procedures which will later be converted to XML (note that this work must be repeated with some differences to capture trigger executions as well; the way these are reported in USER_SOURCE is actually a bit tricky):

-- XML_CLOBS will store one CLOB per row. Each row will correspond to a package, procedure or trigger
 create table xml_clobs (name varchar2(30), result clob);
-- Staging area for procedures and package bodies
create table xml_stage as
 select name, line, max(total_occur) total_occur, sum(total_time) total_time, text from
 (select s.name name, s.line line, p.total_occur total_occur, p.total_time total_time, s.text text
 from user_source s,
 (select u.unit_name, u.unit_type, d.line#,
 d.total_occur, d.total_time/1000000 total_time
 from plsql_profiler_data d, plsql_profiler_units u
 where u.runid = d.runid
 and u.unit_number = d.unit_number
 and u.unit_type in ('PROCEDURE','PACKAGE BODY')) p
 where s.name = p.unit_name (+)
 and s.line = p.line# (+)
 and s.type = p.unit_type (+)
 and s.type in ('PROCEDURE','PACKAGE BODY'))
 group by name, line, text
 order by 1,2;

Here’s the code for creating the XML CLOBs:

-- XML_CLOBS loading
 declare
  q dbms_xmlgen.ctxHandle;
  result clob;
  cursor cModules is select distinct name from xml_stage order by 1;
 begin
  for reg_module in cModules loop
   q := dbms_xmlgen.newContext('select line, total_occur, total_time, text from xml_stage where name = '
        || '''' || reg_module.name || '''' || ' order by line');
   dbms_xmlgen.setRowTag(q, 'LINE');
   result := dbms_xmlgen.getXML(q);
   dbms_xmlgen.closeContext(q);
   insert into xml_clobs values (reg_module.name, result);
  end loop;
  commit;
 end;
 /

 call pkg_utils.PRC_DROP_INDEX('xml_clobs','ix_xml_clobs_01');
 create index ix_xml_clobs_01 on xml_clobs (name);

I’ll skip the code for exporting the CLOBs to the file system, which isn’t too complicated, but a little bit bulky to reproduce here. Once we’re done, we should have a folder full of XML files, each looking something like this (abbreviated; DBMS_XMLGEN wraps every row of xml_stage in the row tag we configured, and lines the profiler never saw have no stats entries):

 <?xml version="1.0"?>
 <ROWSET>
  <LINE>
   <LINE>41</LINE>
   <TOTAL_OCCUR>1</TOTAL_OCCUR>
   <TOTAL_TIME>52</TOTAL_TIME>
   <TEXT>PROCEDURE PRC_DROP_TABLE(I_TABLE_NAME IN VARCHAR2,</TEXT>
  </LINE>
  <LINE>
   <LINE>42</LINE>
   <TEXT>                           I_FORCE_DROP IN VARCHAR2 := 'FALSE') AS</TEXT>
  </LINE>
 </ROWSET>


Next, I created an Ant script to read each of the XML files, and perform an XSLT transformation on each to produce equivalent HTML files. The XSL script itself is too gruesome to reproduce here (if anyone wants a copy, I’ll be glad to oblige), but essentially it does the following:

  1. Write the HTML headers
  2. Go through the list of procedures, calculate the coverage of each, and generate a table of the totals
  3. Show all the source code, with counts of the number of times each line was executed and how long the total executions took

The transformation script also sprinkles the output with some useful eye candy: it decorates executable lines with either a red or a green background, depending on whether or not the line was executed in testing. It highlights the start and end of each individual procedure in bold, and provides links from the scores at the top down to the specific code listings. Since it’s a simple text transformation, it must do this based on whatever information is contained in the text file itself. This means we had to make certain assumptions about the source code: a line that begins with “PACKAGE BODY” must be the start of a package definition, “PROCEDURE FOO” or “PROCEDURE FOO(” must be the start of the declaration of a procedure named “FOO”, “END;” is the end of a procedure or block, and so on. It’s not infallible, so a little bit of standardization in code conventions can go a long way here.

Example of a package source report


Calculating the coverage was much more challenging than it sounds. It’s based on line coverage, which means you should be able to calculate a percentage by taking [100 * lines executed / total executable lines]. The problem is with the word “executable”. In Java, you generally have a 1-to-1 correspondence between lines of code and executable lines (this isn’t strictly true, but you could probably get close enough by making that assumption). In PL/SQL, you may have SELECT clauses that span tens of lines (we have queries that actually span hundreds), but count as only a single executable statement. Fortunately, as you may see in the XML example above, not every line contains an entry for the values <TOTAL_OCCUR> and <TOTAL_TIME>. It was almost sufficient to count only those lines that contain these tags as executable. The problem is that the profiler provides execution data for some irrelevant lines, such as the package declaration and “END;”. Again, we resorted to a hard-coded list of values that could be ignored for this purpose.
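In terms of the staging table from earlier, counting executed versus executable lines looks roughly like this (a sketch; the hard-coded ignore list is abbreviated to two entries here):

```sql
SELECT name,
       SUM(CASE WHEN total_occur > 0 THEN 1 ELSE 0 END)        AS lines_executed,
       SUM(CASE WHEN total_occur IS NOT NULL THEN 1 ELSE 0 END) AS lines_executable
  FROM xml_stage
 -- lines the profiler reports but that aren't really executable
 WHERE UPPER(TRIM(text)) NOT IN ('END;', 'BEGIN')
 GROUP BY name;
```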


Now our coverage reports were *almost* complete, but for one detail. As I mentioned above, the profiler doesn’t report stats for packages that aren’t executed during testing. We were able to extract their source code into the XML reports, but without stats, there was no way to determine which lines are executable and which aren’t. As a result, our reports were showing a score of “0 out of 0”, which doesn’t quite compute. For this, we came up with what is so far our most inelegant solution, given that it requires manual intervention to execute and maintain. The trick is to force the profiler to execute the packages without confusing these results with the real stats generated during testing. For each non-executed package, Eduardo created a “wrapper” script which exercises the package somehow (it doesn’t matter how; the results will be ignored), converts the profiling data to invalid scores (-1 for execution counts and times) so they won’t be confused with real results, then merges this data into the “xml_stage” table where the real test results have already been stored so that they may be extracted in the final reports. Unfortunately, this approach requires an extra post-execution step before the report is generated, and requires a custom script for each package that you know isn’t being tested. If anyone has a better solution, please let us know!


The cherry on top of our coverage reports is the high-level index page, listing all the packages and summing the scores across packages. At this point, I’d had it with SQL and XSLT, and fell back on my trusty Java knowledge. I created a class that reads all the HTML files spit out by the XSL transformation, greps out the total score from the report, sums them up, and generates an HTML file with all totals, plus links to the individual source file reports.

Example report index page


Tying all this together is a Maven project which compiles the Java class, and packages it up with an Ant project containing all the scripts, images, stylesheets and so on needed to generate the reports. Unfortunately, the harness doesn’t automate the Oracle part of the report generation, but once the XML has been exported to files, a single call to “ant” is enough to generate the HTML reports.


It sounds like a lot of work, but we were able to create the initial reports in a couple of days, part-time. The project came together really by accident as a thought experiment that unexpectedly worked. For that, we owe our thanks to Oracle for providing so many out-of-the-box tools and features, to Cobertura for providing an excellent example of coverage reporting for Java, and to Scott Ambler for throwing down the gauntlet in the first place.

Although this tool may seem trivial at first glance, it turns out we now have our hands on something quite valuable. The profiling can be enabled for analysis at any time, and with OracleTC, we can quickly report on the results. While it was created with our automated unit test suite in mind, it is by no means limited to this. For example, we have a type of exercise that we call a “blitz test”, in which we try to simulate realistic production situations. The test entails running some automated scripts while anywhere from 5 to 20 people simultaneously log in to the system and begin executing a variety of test scripts based on different user profiles. With OracleTC, we are able to show afterwards just how much of our PL/SQL was executed during the blitz, allowing us to identify gaps and better focus our tests. You could also conceivably enable this profiling in production for a period of time in order to try and identify functionality that isn’t being used by customers, although I wouldn’t do this if your system is sensitive to performance impacts.

I hope you find this information useful and that some of you are encouraged to try this out for yourselves. I tried to provide enough information to give you a big head start, but this article is already too big for a typical blog post, and couldn’t show you everything. Rather than leave you hanging by breaking this up into multiple posts, I did my best to be brief when possible. My hope is that I’ll have the time (and the permission) to open source the code that we have, and provide adequate documentation. Maybe if enough of you show support, it’ll help get the ball rolling. In the meantime, if anyone out there needs a hand, drop me a line (either reply here, or use the email address in the About page) and let me know. But if you are writing to tell me that Oracle (once again) already has a tool that does this, please be gentle!

Unit tests for the database

December 19, 2008

A while back (actually, over 2 years ago now!), Scott Ambler was here in Rio for the Rio Java Summit that we were hosting. In his usual controversial way, he made the point that people on the database side of things are way behind the curve on the latest advances in software development, especially agile practices and techniques. Of course, he was promoting his then-recently-released book “Refactoring Databases”. But he was certainly spot-on about the fact that people that generally work directly with databases, including programming stored procedures and triggers, don’t follow the same practices as other software developers, especially those in the object-oriented paradigm. Perhaps this is because database schemas are more risky and slow to change than software (his Database Refactoring book shows how hard this is – even “agile” schemas are refactored on a time scale of weeks, months or even years, rather than minutes for software). Or, more likely, it’s because there’s something of an organizational and historical wall between the two camps.

Whatever the cause, one thing is clear: there are very few tools available to database developers for things that software developers have come to take for granted: code metrics and analysis, IDEs with support for refactoring, and frameworks for unit testing. What is available is generally either commercial and highly proprietary (Oracle and Quest (the Toad guys) have some fantastic tools for profiling, analysis and even unit testing), or so limited as to be almost trivial. Even when the tools exist, almost no one uses them. Ok, I admit I have no data to support that statement, but I’ve never met a pure DBA, Data Architect or database developer that has ever written a unit test harness in the way that software developers know them (have you?). The point is that unit testing and related practices just haven’t made their way into the database community, and as a result there are almost no tools to support them.


As part of Scott Ambler’s presentation, he sent out a challenge to anyone interested to start making these tools on their own. As it so happens, my own team had just recently started a project to adapt the FIT testing framework for use with SQL and stored procedures. We’d looked at DbUnit and some others, but the XML-based test and data declarations seemed somewhat unwieldy to us. The result was a framework I called FIT4DBs. We had plans to open source it, but unfortunately the time just hasn’t materialized. I’ll let you know if we ever get around to it, but for now I’d like to just mention a few aspects of the project.

FIT4DBs, as I already mentioned, is based on the FIT framework for testing. FIT comes with its own philosophy about testing in which the business analysts themselves can define requirements in simple HTML or Excel (or some other table-related format). The developers are then able to write “Fixtures” which essentially “glue” these requirements to the system code. The result is requirements that are actually testable. Sounds fantastic, and when it works, it really is.

Simple FIT test for a web site shopping cart


For FIT4DBs, however, it wasn’t this philosophy that interested us – it was the table-driven format for declaring tests. It occurred to me that this format is perfect for when you have an algorithm or procedure that is highly data-driven. If you’ve written a lot of unit tests, I’m sure you’ve dealt with these cases before: some methods only require a few tests to cover all the variations, but there are some methods where, even after 10 or 15 or 20 variations, you keep coming up with more (I find I do this a lot when testing regular expressions, for example).

    public void testCalculateLineItemPriceQty1() {
    	ShoppingCart cart = new ShoppingCart();
    	Item item = new Item(101, "The Best of Wonderella", 10.99F);
    	cart.add(item, 1);
    	assertEquals("Price of one item should equal the unit price", 
    			10.99F, cart.calculateLineItemPrice(item), 0.001F);
    }

    public void testCalculateLineItemPriceQtyMany() {
    	ShoppingCart cart = new ShoppingCart();
    	Item item = new Item(693, "1001 Yo Mama Jokes", 43.00F);
    	cart.add(item, 5);
    	assertEquals("Price of multiple items should follow simple multiplication", 
    			215.00F, cart.calculateLineItemPrice(item), 0.001F);
    }
...and so on...

The problem here is that you have a lot of excess code for what is essentially repetitive work – all you’re doing is trying out different inputs and expecting different outputs. If you come up with a new combination of inputs, you have to copy and paste all these lines of code, just to modify one or two parameters. And after a while, it becomes nearly impossible to see exactly which tests you have already created (not to mention all the ridiculous method names). FIT provides a very convenient alternative to this pattern. All you have to do is list the variations of inputs and expected outputs in a table, and voilà! If you want to add a new variation, you just have to add a new row in the table.

Same test with some new exception cases


So, the first realization was that database procedures tend to be, well, data-driven. But not in the way that we were talking about here. Usually, what matters is the data that already exists in the database… in the form of tables. And really, that’s the hardest part about testing code that depends on the database: setting up the data the way you want it, and making sure it’s there in a pure and unadulterated form (otherwise, you may get incorrect results). What better way to declare your test data than to put it in tables, just the way it should look in the database? So, we made special set up fixtures that let you declare the name of a table and the columns into which you want to insert your test data. Each row in the HTML or spreadsheet declares a row that will be inserted into the database before the test begins. When the test has completed, the tables that were set up are either truncated (the tests assume they were empty prior to initialization), or the transaction is rolled back. In 9 out of 10 cases, this is sufficient for removing any side effects of the test and keeping each test independent of the others.

A set up table for Items in the database


Next, we needed some Fixtures for testing stored procedures and free-form SQL. This was relatively simple. A plain-text SQL statement or stored procedure call is transformed into a JDBC call through Java.

Commands to execute directly in the database


The hard part is then comparing the results. While simple JUnit tests can usually limit the scope of their actions to the simple input and output of a single method (some tests require mock objects to test side-effects), stored procedures can affect any number of tables in any number of ways (they can even affect the schema itself). Not only that, but triggers may affect even tables not explicitly included in the scope of the procedure itself (and we DO want to test triggers, right?). This is not an easy thing to test, but FIT4DBs makes a heroic effort to make it simple for you.

What it comes down to in the simplest of terms is looking for “diffs” on the data contained in the tables. This is done by first “registering” the tables and the starting data via the set up tables. That’s right, the data goes not only into the database, but into in-memory hash maps that keep track of what the “before” data looked like. Once the test has been executed, the tables can be re-queried to see if their data has changed in any way. A “SELECT *” is executed on each table, and the results are sucked into “after” hash maps. There are then Fixtures that can be used to declare tables for diff comparisons. Each row that is declared in the test is an expected difference. The differences can be of type “inserted”, “deleted” or “updated”. Comparisons are based on the primary keys, which are declared as part of the table syntax, so if a primary key changes as part of the test, it will look like the row has been deleted and a new one has been inserted.

Testing expected changes in the items table


And that’s pretty much it! There are also facilities for things like declaring your connections, and managing suites of tests by, for example, reusing set up data via a custom “include” facility. DBUnit and similar frameworks work in some of the same ways, but they usually declare their set up data via XML. That just never felt right to me – it’s too hard to edit and to read what you have. Additional benefits of FIT itself include the ability to throw in comments wherever you want (anything that’s not in a table is basically considered a “comment” and ignored – this lets you add formatting and even images to your tests if it can help readability), and the fact that the results show up as exactly your input HTML, with green, red and yellow colors to show passes, fails and errors.

Test Coverage

One other piece that is typically missing from the database development suite is tooling for reporting on the test coverage of procedural code by the unit test suite. This more or less goes without saying, since unit testing in general is rather neglected. Apparently Quest Software offers a rare tool that can provide this sort of information if you happen to have written your tests with their suite, but most of our tests are defined with the FIT4DBs framework. We also have other automated tests not specific to FIT4DBs which exercise our PL/SQL code, so checking coverage only with that tool would be insufficient.

Eduardo Morelli, our Data Architect, has come to the rescue with an excellent, thorough and flexible solution to this problem. Unfortunately, it’s Oracle-specific, so it will only help you if you work with Oracle. I’ll save the details for another post – stay tuned!

So, Mr. Ambler, I hope this all comes as some encouragement to you. It’s been a long road to rise to your challenge, and it’s still ongoing, but Sakonnet at least is one shop that prefers to use agile development practices on both the software and database sides. I hope that some time soon we can get the resources to open source our FIT4DBs framework for others to use. In the meantime, you can give DBUnit and other tools a shot, or just test your code in an integration environment through JUnit, FIT and other automated harnesses. If you use Oracle, you can even use the techniques I’ll describe in my next post to report on your test coverage. It’s all about the data, so get out there and test the code that’s mucking around with it!

Legaciness revealed!

October 1, 2008

It looks like my neverending quest for a Legaciness metric has already come to a satisfyingly swift end! I was just browsing some blogs on agile techniques when I came across a post about fixing bad developer habits on a blog called Agile Tips. Waaaaay down in the footnotes is a link to a handy little tool in the Google Code site called the Testability Explorer.

I couldn’t believe my eyes when I saw it. I felt like Francisco Orellana stumbling across El Dorado. Could this really be the fabled Legaciness metric after all? I have to investigate more, but what I’ve read so far seems to be exactly what I’ve been searching for: a tool designed to calculate the difficulty of testing Java code based on its static dependencies! And from what I’ve read so far, it looks like I was on the right track with the algorithm:

  1. Each class attribute has to be considered by how “injectable” it is (their term, and I like it)
  2. “Global” variables are considered bad if they’re accessible, but not injectable
  3. There’s a transitive penalty for methods which call methods which are not injectable (the “inheritance” penalty that I talked about)

They don’t directly address a number of concerns I have, such as:

  1. System calls that make testing difficult (e.g. I/O calls)
  2. Calls to “new” within a method
  3. The ability to “inject” something using the Legacy Code Patterns like “Extract and Override Factory Method” (if the class itself is considered injectable in a method, at least that method won’t get a penalty)

I’m guessing that some of these concerns, like the calls to “new”, would be covered by the transitivity of the complexity scores. I also like the touch that they’ve included cyclomatic complexity into the mix, although I don’t have a feel yet for whether that would outweigh the injectability problems – my guess is no. I also don’t know if they take into account 3rd-party libraries. All in all, it looks like their metric is pretty straightforward, easy to understand and useful. I like the simplicity of their approach.

All that’s left now is to try it out! I’ll give it a shot tomorrow and see what it thinks about a certain 550-line file-reading Excel-interpreting monster of a method that’s been keeping me up at night…


July 29, 2008

I’ve been looking at software metrics recently as a way to track how we’re doing with our code quality. Of course, not all metrics are equal, and they are often about as helpful and accurate as a 7-day weather report… Neal Ford from Thoughtworks recently gave a presentation on his take of software metrics, which he said was something like “reading tea leaves”. A single metric on its own, out of context, may not tell you anything at all (what does a cyclomatic complexity rating of 25 mean, anyways? Is that good or bad? Hell, this rating doesn’t even have UNITS!). It’s only when grouping them together with other metrics and comparing them relatively against other pieces of code that you start to get some sort of idea.

I consider the CRAP4J crappiness rating to be something of an exception. The people who came up with it had a great idea: what if we take two metrics that offset one another, compare them and boil it down into a new value? In other words, what if we do some of the tea reading for you? The two metrics they are using are McCabe’s cyclomatic complexity rating, which gives you a measure more or less of how many different paths there are through a given method (the more paths, the more complex), and a path coverage measure of their own, something like Cobertura, which measures how many of those paths are covered by unit tests. This makes perfect sense: if you have a lot of paths, that’s bad, but if you’re testing all the paths, that should be perfectly OK. If you were trying to read these leaves on your own, you would have to paw through all the parts of code that have the highest complexity rating, then try to look up a corresponding report on test coverage…

So my hat’s off to the guys that did CRAP4J.

I’ve also been considering coming up with a metric of my own that I call “Legaciness”. I get my inspiration from Michael Feathers’ fantastic book “Working Effectively with Legacy Code”. In his book, he defines legacy code as code which isn’t covered by unit tests. That in itself is a great definition. But the focus of my metric wouldn’t be on the test coverage itself (which we can already measure). The majority of the book focuses on all the tricky issues that can make “legacy” code hard to test in the first place. He lists all sorts of bad practices, like static method calls, doing Bad Things in the constructor, and so on. Then the second half of the book enumerates a series of patterns that can be used in Java, C++ and other languages to safely and slowly refactor your code JUST ENOUGH to be able to slip it into some unit tests, after which you should be able to redesign your code as you wish.

The key thing I’d like to measure is exactly HOW HARD it would be to get some unit tests around your code. In other words, what would be the amount of effort necessary to refactor a class for unit testing; how many times would the Legacy Patterns have to be invoked in order to get 100% test coverage for that class? This measure is what I would call “Legaciness”.

I’ve gone through a lot of the thinking process as to how this sort of thing would be measured, and a number of complications have crept up:

  1. Not all static calls are an obstacle to unit testing. For example, a utility method that converts a String to all caps wouldn’t get in the way. So, the metric ideally should take that sort of thing into account.
  2. Explicit calls to “new” are generally a bad thing, since there’s no way to substitute the object being instantiated with a mock object. But, again, if the class being instantiated is a simple Data Transfer Object (DTO) with a bunch of getters and setters, what’s the problem?
  3. Even if a call to “new” is a problem, one of the patterns to remove the obstacle is to simply extract the line of code into a protected method which can then be overridden in a test class. So the metric would have to somehow take into account when one of the legacy code patterns has been implemented to surmount these obstacles.
  4. The Legaciness score would have to be crafty enough to take into account certain levels of indirection. For example, it might be slightly bad that a method instantiates a class via “new”. But it should be considered MUCH WORSE if the class it instantiates also has a really bad Legaciness rating via, say, direct calls to I/O methods and threading and so on and so forth. So, there seems to be some need for an “inheritance” of Legaciness through these dependencies.
  5. When you get down to it, a lot of these bad dependencies are due to I/O calls, threading, container dependencies and so on. These dependencies are very likely outside your own code base (for example, in the base Java APIs). The only realistic measure I could think of is to come up with a base Legaciness score for all known API methods. This seems like more work than it’s worth (actually, Neal Ford mentioned to me that someone did this very thing with Ruby: assigned a score to each base API method for some nefarious metric of their own). Also, I don’t know if the same would be needed for other 3rd-party libraries, or if it would be enough to cover just the java.* and javax.* packages and then analyze those libraries’ Legaciness based on their use of the core Java APIs.
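To illustrate complication #3, here’s a minimal sketch of the extract-and-override move (what Feathers calls Subclass and Override Method). All the class names here are hypothetical examples of mine, not from any real code base:

```java
// Hypothetical example: imagine OrderProcessor used to call "new MailSender()"
// directly inside process(), which made it impossible to test without
// triggering real I/O.
class MailSender {
    void send(String message) {
        // imagine real SMTP I/O here
    }
}

class OrderProcessor {
    // The "new" call is extracted into a protected factory method...
    protected MailSender createSender() {
        return new MailSender();
    }

    public String process(String order) {
        createSender().send("processed " + order);
        return "OK:" + order;
    }
}

// ...so a test subclass can override it and substitute a harmless fake.
class TestableOrderProcessor extends OrderProcessor {
    @Override
    protected MailSender createSender() {
        return new MailSender() {
            @Override
            void send(String message) {
                // record the message instead of sending it
            }
        };
    }
}
```

A Legaciness metric would want to recognize that the “new” inside createSender() is no longer an obstacle, because the seam for substitution already exists.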

It seems to me that Legaciness would be another very useful metric to have, much like CRAP4J. If the latter tells you where your code is complex and untested, the former could tell you just how hard it would be to give it the test coverage it needs. I think this metric would also be useful for educating developers about some of the same best practices that TDD is supposed to encourage.

Unfortunately, I don’t have the time to work out the details of such a metric. If anyone out there’s interested, or has any ideas, let me know!