Redmine Arch Decisions 0.0.9 released

March 1, 2010

Just a quick note to let you know that version 0.0.9 of the Redmine Arch Decisions plugin has just been released. There is no new functionality in this release. Instead, I have taken the time to work on the recently-promised compatibility with Redmine 0.9.x (more specifically, I worked on “trunk”, which is currently 0.9.2). It was hell to get all the tests working (one of those cases when they are more of a pain in the butt than a help), and there were some other changes that had to be made, so I’ve given up on the idea of trying to maintain backwards compatibility. Instead, I have created a separate branch for the 0.8.4 version of Redmine (which I may or may not try to maintain).

More information about the plugin and this release can be found below:


Announcing the Arch Decisions plugin for Redmine

February 23, 2010

I’ve been silent for a long time on this blog for two important reasons:

  1. I’ve decided not to post anything unless I really have some value to add
  2. I’ve been spending my spare time working on an open source plugin for the Redmine platform

So, without further ado, I’d like to announce the release (of version 0.0.8!) of the Redmine Arch Decisions plugin! At Sakonnet, my previous gig, they were using Quickbase to track tasks, specs, and just about everything. It was a snap to add in a new feature to track “architecture” (or technical) decisions, configure notifications for collaboration, and hook them up to our issue trackers for reference and follow-up. I wrote about this tool in a previous blog post, and I’ve said before that I couldn’t imagine working on software again without it. Well, when the time came to move on, guess what? No tool for tracking my “arch decisions”.

Fortunately, my current employers at Integritas are open to trying out new ideas, and are using the Rails-based Redmine for their issue tracking. Redmine, as with Rails in general, has a fairly usable plugin framework, and it was a great opportunity for me to get my hands dirty with RoR, so I jumped to it. Now, on the date of the release of the 8th version of my plugin (which we have been using for our projects), I feel I’m ready enough to announce it to anyone who’s looking for a way to record their technical decisions (and discuss them before they get made) without the overhead of stiff formal documents.

The following is a very brief overview of what you get in Redmine Arch Decisions 0.0.8:

Arch Decisions

Listing of Arch Decisions

The plugin includes a listing of the Arch Decisions themselves, which are currently limited to the scope of a single project. The ADs have an ID, a status, a summary, and a “Problem Description” field for more detailed information on the context of the decision. ADs currently follow a very simple workflow that isn’t being enforced, but is still useful:

  1. Not Started
  2. Under Discussion
  3. Decision Made
  4. Work Scheduled (implies that issues and/or tasks have been registered to track the implementation)
  5. Implemented (implies that all said issues and/or tasks have been completed, or at least to the satisfaction of the scope of the decision)
  6. Canceled
  7. Deprecated (implies that there’s another AD out there somewhere to replace it)

Arch Decisions also have a text field called “Resolution” that should be filled out when the status is changed to “Decision Made”. The resolution should explain what the final decision was, summarize why that decision was made, and provide any additional guidance to any developers who will be making sure the AD gets implemented.

Basic information for an Arch Decision

In addition to those basic text fields, there are also important supplemental elements embedded within the decisions that play an important role in the documentation and decision-making process (note that these are a new feature that I didn’t have in the old Quickbase version):

Factors

Factors associated with an AD

One of the most important benefits of tracking technical decisions in this way is the possibility of making all decision points and trade-offs explicit. There are many reasons why this is important:

  • You can see in one place all the reasons for which a decision was made
  • You can weigh them against one another so that no one gets fixated on a single reason
  • You can truly validate your assumptions by making them visible and discussing them individually
  • If any of these reasons change in the future, you can go back and check to see if your decision is still valid

Taking a cue from Craig Larman and others, I call these reasons “Factors”. A factor can be just about anything – a requirement, a hunch, a feature, a factoid – that can be used as a justification for a particular decision. In my personal experience, I have seen these factors tossed about with reckless and wanton abandon, littering the sacred grounds of a design discussion. The RAD plugin attempts to put a little order to this chaos by giving you one place to record this information. In general, it can be detrimental to the flow of a discussion to continuously stop to record these factors, but it can be extremely productive to let the fur fly in the heat of the moment, and then carefully pick out the key factors afterwards when you’re ready to clean house.

Factors have a status, which is important in showing which ones have been “challenged” (by marking them as “Validated” once the discussion has completed), including ones that were later shown to be incorrect assumptions (“Refuted”). There is even a text field called “Evidence” wherein the user can record exactly how they came to the conclusion regarding the validity (via external URLs, quotes from a discussion, or even a lame but honest “because Tim said so”).

Also importantly, factors can be reordered on the AD view page by simply dragging a row and placing it in the desired order. This allows you to explicitly declare which factors carry greater weight or priority, which comes in handy when a trade-off must be made.

One interesting thing to note about factors is that they may have varying scopes. Some may be very specific to the Arch Decision at hand (e.g. “We will get a big bonus if we pick Strategy A!” or “The coin said ‘heads’”). Some may relate to more than one AD (e.g. “The company has mandated that we use open source tools for this project”). Still others may be “global truths” that can even be applied across multiple projects (e.g. “Amazon EC2 does not support multicast between instances” (can this one be refuted yet?)). Factors can be created on their own (via the separate Factors tab), or right in the AD itself. In the latter case, they are automatically given a scope of “Arch Decision”, but this can be changed to something a little broader. When that happens, the Factor can then be added to multiple ADs as appropriate.

Strategies

Strategies for an AD

What’s a decision without options to choose from? As with factors, my experience has been that people are good at tossing out ideas, but less good at remembering what they were later on. Or at understanding anyone’s ideas but their own. So the RAD plugin also separates out a section just to track the alternatives that everyone proposed. Each one has a “short name”, which is useful as a reference (a little better than “wait, are you talking about the one where the command comes in as a message which is then republished, or the one where you stick the command in the database and then have a periodic task to look them up?”), plus a slightly longer summary. Then there is a detailed description of what that strategy would really entail.

Importantly, strategies can then be officially “rejected”, with an explanation as to why (in the future, it might be interesting to point to the key Factors). When this happens, they show up at the bottom of the list, with a big red “X” so that no one is confused as to whether or not that possibility is still being discussed (nor why it was rejected).

In some cases, you have a “there can only be one” situation, where a decision could only be considered to have been made when all the other competing strategies have been rejected. In this case, the Resolution will really just be a rewrite of the surviving strategy and its implications. In other cases, you might have multiple winners, each of which composes a part of the final resolution. I find this is especially the case when you are making decisions regarding standards – some will be rejected, while others will be accepted and adopted.

Tracking

An Issue with two related ADs

With this release, ADs can finally be associated with Redmine Issues. This is very important for tracking and governance (making sure the decision gets carried out, and that it is still followed in later implementations). It’s also true that during the course of making a decision, work has to be done on the side. Thus, the association between ADs and Issues includes the “type” of relationship that an Issue bears to the AD:

  • Task – the work is a task related to making the decision (e.g. for research)
  • Proof of Concept – partial implementation projects that are required to prove whether or not a particular strategy is viable
  • Implementation – software development work intended to implement a decision (e.g. the creation of a framework according to the design specifications stipulated by the resolution)
  • Governed – implementation of the issue is expected to follow the guidelines laid out by a (possibly previously-existing) decision

Since I often work with issue trackers other than Redmine (and have been too lazy to implement a real integration), it’s also possible to define an Issue by an external URL rather than via a Redmine ID. Although the external tracker won’t have a back reference to the AD, and the AD won’t be able to report on the status of the issue, it’s certainly better than having no link at all.

Collaboration

The heart of the original idea for Arch Decisions was the ability to provide a voice to everyone involved in a decision. Ivory tower type architects would do well to take heed and use this tool. Developers don’t always like to have their instructions handed to them on a silver platter (especially when they think a bowl would be better for the soup they’re expected to eat). The RAD plugin gives developers the chance to speak up by posting comments in the Discussion sections (in fact, there’s one for each Factor and Strategy as well as the main AD itself, for those times when you need to focus on a specific subject). It also gives other project members a chance to respond, since there is a “watch” feature, and change notifications can go out via email.

In the previous incarnation of Arch Decisions, there was also a button on each issue so that a developer could raise a red flag whenever there was an implementation detail that needed to be discussed. Thus, the discussion could go both ways, so that architects are not always kept in the dark about what the developers are doing, and what they need to know. This worked very well at my last place of work. Unfortunately, I haven’t implemented this feature yet, but I’m sure it won’t be long before I do.

Final Details

Installing the plugin is very straightforward: just download Redmine and follow its basic instructions, then download the plugin, stick it in the vendor/plugins folder, and run “rake db:migrate_plugins” to set up the database. I’ll provide a more extensive guide in another post, but hopefully that’s enough to get you started. Unfortunately, the plugin only works with version 0.8.4 of Redmine. I’d like to get it working for 0.9.x soon, so if that’s important to you, give me a holler to get me off my butt.

I’ve got more tips and details to discuss about the plugin, so I’ll try to get around to that as soon as possible. Until then, let me know if you have any feedback, and I really wish you the best in your future decisions!


Do we need Yet Another Method to communicate?

March 10, 2009

Lately I’ve been looking into Web 2.0-type tools to improve our communication and workflow at work. There’s been some Enterprise 2.0 buzz about Twitter as a possible tool for communication on projects, and I’ve been scratching my head trying to figure out exactly how. One problem with Twitter is that it is wide out in the open, for anyone to read. If loose lips sink ships, Twitter is the Bermuda Triangle of company secrets. It hardly seems wise to encourage your employees to discuss the intricacies of internal projects in such an open forum.

I recently came across Yammer, which is an enterprise-friendly version of Twitter. Basically, it’s the same thing, but your comments are only visible to members of your “company” (currently defined by the domain name of your email address). There are other features like private groups, but you get the idea.

Now, there’s been a lot of talk recently about the importance of quiet time for productivity (cf. The Productive Programmer). To many, “yammers” (or “yams”…?) might appear to be nothing more than an acronym for “Yet Another Method to Molest Everyone Relentlessly”. If there’s nothing more to this Yammer thing than posting your whims to the world, I wholeheartedly agree. We already use email for correspondence, IM for more immediate communication, Skype for voice and text conferences, web forums for context-related discussions, wikis, and on and on. So who needs yet another open window on our desktop, yet another way to have your train of thought interrupted, one more place you have to look to find a comment or note?

First, let’s take a look at the characteristics of the Yammer style of communication:

  1. It’s written (online)
  2. Very brief (about the size of an IM message)
  3. Can include attachments, but not common
  4. Visible to all in the enterprise (unless posted to private group)
  5. Messages can be tagged for organization/filtering by keywords
  6. Only your “followers” (or followers of a tag used) are “notified” – except for messages targeted at an individual, the author doesn’t choose the audience
  7. Mode of notification is configurable (from in-your-face to I’ll look it up when I remember)
  8. Messages are searchable by all

Twitter is billed as a microblog, where users post random thoughts and actions of the day (the form for posts asks you “What are you doing?”). The main benefit touted by users is that out of this chaos, you end up learning things about your friends you never knew. Yammer seems to be aiming for the same thing in a business setting (they ask “What are you working on?” in the desktop app. Online, they are more open-ended: “Share something with My Colleagues”). Unlike IMs and email, Twitter doesn’t seem to be for everyone; only those with a yen for connectivity really seem to grok it. In a business setting, particularly in a small company such as mine, it seems unlikely that we could achieve the critical mass necessary to make this work.

But there are some problems with communication out there in the corporate world that I have been looking to solve. The biggest issue I have is the need for a bottom-up, “news from the trenches”-style way for developers to let me know what they know and I don’t. The types of things that may get mentioned at the water cooler and then forgotten; the fleeting idea that even the developer soon forgets; the magic shortcut command that only one guy in the corner is using; the little detail in the fine print of the new product we’re buying that means it’s not going to work with our software.

One would think that forums would work for this type of communication, but they don’t. Perhaps the problem is that there’s a certain sense of responsibility when you post to one: what if no one responds to my idea? What if people think it’s crap? A lot of people with really good ideas would rather remain anonymous than stick their necks out for what’s no more than a fleeting thought. But I think another part of the problem is just the sheer overhead involved in using forums: first you have to open the page, look to see if anyone’s posted something similar, create a new topic, write up some sort of background story to fill up the big textarea… And who goes around browsing the forums for new posts, anyway?

Here’s where I think Twitter/Yammer-style communication may come in handy. It takes all of 10 seconds to post a single sentence, you can tag it so that it gets found by the people who care (#yammer #productivity hey! That chain icon on the plugin pastes a link to the current page), and there’s no expectation whatsoever that people will respond. The messages are searchable later, meaning that someone that wasn’t previously watching the tag could go and grab all the #productivity tips if they want to.

The other area I’m looking at is with project coordination and communication. An interesting thing that has evolved here at work is the use of Skype chats created specifically for members of a project team to coordinate their actions, ask questions, and so on. The one complaint I’ve heard about this approach is that people on the “outside” (not added to the chat) have no way of knowing what’s going on in the chat, and no way of looking at the history when they finally are. Yammer could be a solution to that problem, although it’s a terrible medium for active discussions.

So will it work? Will people use it freely, without putting a gun to their heads? Will it increase the sharing of knowledge at work and capture those important but elusive thoughts, or will it be just another icon on the desktop and another distraction from real work? I’m just starting to test it out here where I work, so I don’t have a conclusive answer yet, but I can make some initial observations.

First of all, it’s not a no-brainer. The best ideas catch on without any explanation, but with Yammer, people don’t seem to get it right away (me included). Twitter is the same way, so this doesn’t necessarily spell doom for this little experiment. Still, I sent out a general announcement to about 20 people, and got exactly one person to sign up (who hasn’t posted a single message since). I next took the approach of asking people one at a time to join, explaining the basic concept to them in the process. While they seemed interested in the idea, I find I’m still pretty much the only one posting. Now I’m trying a two-pronged approach: getting all the members of a small project to try it out for project-related communication, while encouraging developers to post some very specific ideas (around #productivity tips) when they have them.

I find that a few good examples help, and as I myself gain experience with it, I’m getting a feel for when to use it. Just today, I was in a meeting for one project, and had a quick thought about an unrelated subject. I grabbed my iPod and quickly “yammered” the idea out to the related tag. Too bad no one out there is listening…

So, perhaps this is an idea that will work, once people get used to it. Or perhaps Yammer’s value is no more than that of Twitter: good for connecting the Web 2.0 addicts of your business in chaotic and unpredictable ways. Which is great for a Fortune 500 enterprise, but sure feels lonely for us small guys.


Test Coverage Reporting on Oracle 10g

February 17, 2009

In a previous post, I mentioned that my colleague Eduardo Morelli and I had made an effort to bring some of the standard quality tools from the software development world (automated tests, and test coverage reporting) over to the RDBMS world. In that post, I went into some detail on FIT4DBs, our automated unit testing framework. Now I’d like to reveal some of the secrets to our Oracle test coverage reporting tool, called OracleTC.

Test Coverage Reporting

OracleTC was more or less modeled on the Cobertura test coverage tool that we are using for our Java code. The basic idea of any tool like this is to perform the following steps:

  1. Somehow wrap your normal code (via an interceptor, a decorator, or some other pattern, usually in as unobtrusive a way as possible) with some special code that can observe what is being executed
  2. Execute your suite of tests
  3. Collect statistics regarding the number of lines/paths/functions executed during your test
  4. Generate a report comparing the executed code to the total set of code that SHOULD have (or at least could have) been executed during your tests

The result is what we call the “code coverage” of your test suite. For example, if you are measuring at the level of functions, and your suite executes 15 out of 20 possible functions, your test suite “covers” 75% of your functions. Ideally, you want to be finer-grained than that. Just because a function has been executed once doesn’t mean it’s been fully tested. A good test suite should cover all possible variations on the outcome of a method or function, including edge cases (errors, exceptions, unusual inputs and so on). So, many test coverage suites report on line, or statement, coverage (did your tests execute every single statement in the function?).
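
As a trivial illustration of that arithmetic (a sketch in Java, with invented names), function-level coverage is just the ratio of functions your tests touched to the functions that exist:

import java.util.Set;

// Toy illustration of function-level coverage: 3 of 4 known functions executed = 75%.
public class FunctionCoverage {

    static double coverage(Set<String> allFunctions, Set<String> executedFunctions) {
        if (allFunctions.isEmpty()) {
            return 0.0;
        }
        // Only count executed functions that actually belong to the code base.
        long hit = executedFunctions.stream().filter(allFunctions::contains).count();
        return 100.0 * hit / allFunctions.size();
    }

    public static void main(String[] args) {
        Set<String> all = Set.of("f1", "f2", "f3", "f4");   // pretend these came from the schema
        Set<String> run = Set.of("f1", "f2", "f4");         // pretend these were observed during the tests
        System.out.printf("Function coverage: %.2f%%%n", coverage(all, run));   // prints 75.00%
    }
}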

The more sophisticated ones will actually try to capture path coverage, which is even more comprehensive than line coverage. For example, imagine you have the following code:

myObj = new Obj();
if (a == true) {
    myObj = null;
}
if (b == true) {
    myObj.doSomethingInteresting();
}

In the above example, we could get 100% line coverage by running one test with a=true and b=false, and another with a=false and b=true. Does that mean we’ve tested everything? No. We’ve missed the rather unfortunate case where both a and b are true, which would be a disaster (a null pointer exception, in this case).

Observing Tests in Oracle

Anyway, back to OracleTC. We wanted to see if we were covering all our bases with our automated tests. To do this, we had to find a solution for the first step above: how could we slip in some code to actually observe what was going on inside Oracle? The usual solution to this problem, when you don’t have access to the guts of your system, is to write some sort of “pre-compiler” that modifies your code (in this case, the stored procedures and triggers and such) to add some sort of tracing to the works (INSERT INTO ORACLETC_TRACE (PROCNAME, START_TIME, END_TIME, PARAMS) VALUES (blah blah…)). This can be pretty intrusive, and will probably only get you as far as procedure-level coverage reporting. It’s more work than we were ready to put into the tool, and I would rather throw in the towel than go through that.

Fortunately, Eduardo (and Oracle) came to the rescue! He found a package called DBMS_PROFILER, which is provided with Oracle. It’s meant to be used as a tool for performance profiling. But look what it can tell you (text copied from reference link):

  1. Total number of times each line was executed.
  2. Time spent executing each line. This includes SQL statements.
  3. Minimum and maximum duration spent on a specific line of code.
  4. Code that is actually being executed for a given scenario.

There you have it! Test coverage reporting nearly out of the box! The package unobtrusively wraps your code executions, once you’ve enabled it for your session, and reports all its data, line-by-line, into some reporting tables that it sets up. We did have to figure out how to enable the profiling for ALL sessions, since our tests are multi-threaded, but the solution is to use a simple trigger (note that we created our own utility package called PKG_PROFILER to make things easier, but you get the idea):

CREATE OR REPLACE TRIGGER On_Logon
  AFTER LOGON ON schema
DECLARE
  V_WHO VARCHAR2(200);
BEGIN
  SELECT sys_context('USERENV', 'OS_USER') || '_' ||
         sys_context('USERENV', 'SID') || '_' ||
         sys_context('USERENV', 'TERMINAL')
    INTO V_WHO
    FROM dual;
  PKG_PROFILER.PRC_START_PROFILING('STARTING PROFILER FOR (OS_USER, SID, TERMINAL): ' || V_WHO);
END;
/

CREATE OR REPLACE TRIGGER On_Logoff
  BEFORE LOGOFF ON schema
BEGIN
  PKG_PROFILER.PRC_END_PROFILING;
END;
/
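
If you don't want (or aren't able) to install logon triggers, the profiler can also be driven explicitly from the test harness itself. Here's a minimal sketch of what that looks like over JDBC; the connection details are placeholders, and this isn't part of OracleTC, just the standard DBMS_PROFILER calls:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch: start DBMS_PROFILER for the current session, run the code under test,
// then flush the collected statistics and stop the run.
public class ProfiledTestRun {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger")) {

            // Begin a profiler run for this session, tagged with a comment.
            try (CallableStatement start = conn.prepareCall(
                    "begin dbms_profiler.start_profiler(run_comment => ?); end;")) {
                start.setString(1, "unit test run");
                start.execute();
            }

            // ... execute the tests / stored procedures to be measured here ...

            // Persist the statistics to the plsql_profiler_* tables and end the run.
            try (CallableStatement stop = conn.prepareCall(
                    "begin dbms_profiler.flush_data; dbms_profiler.stop_profiler; end;")) {
                stop.execute();
            }
        }
    }
}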

Querying the Results

Once you execute your tests, reports can be easily generated by running queries on this data. As an example, the following query displays which lines took the longest execution time:

select * from (
 select u.unit_name, d.line#, s.text , numtodsinterval (round(d.total_time/1000000000,2), 'MINUTE') minutes
 from plsql_profiler_data d inner join plsql_profiler_units u
 on d.unit_number = u.unit_number
 inner join user_source s on (s.name = u.unit_name and s.type = u.unit_type and s.line = d.line#)
 where u.runid = 1
 order by 4 desc)
 where rownum <= 5

Top 5 most consuming statements

We weren’t able to go the whole 9 yards and figure out path coverage, but thanks to DBMS_PROFILER, OracleTC easily supports line-level coverage reporting. Here’s an example of a procedure we created to report the line coverage at a package-level granularity:

PROCEDURE PRC_TEST_COVERAGE(I_RUNID_START NUMBER, I_RUNID_END NUMBER) IS
 /*
 |
 | Generates a Test Coverage report based on a range of runids.
 | Before activating this procedure, do not forget to set serveroutput
 |
 | Parameter:
 |       I_RUNID_START: bottom run id
 |       I_RUNID_END: top run id
 |
 | Example:
 |       PKG_PROFILER.PRC_TEST_COVERAGE (I_RUNID_START => 1, I_RUNID_END => 862);
 | @TODO:
 |       Create a temporary table to store results
 |       Deal with anonymous blocks
 |
 */
 CURSOR CUNITS IS
 SELECT DISTINCT UNIT_NAME
 FROM PLSQL_PROFILER_UNITS
 WHERE UNIT_NAME LIKE 'PKG_%'
 ORDER BY 1;
 V_COVERAGE NUMBER(5, 2);
 BEGIN
 FOR REG IN CUNITS LOOP
SELECT ROUND(EXEC.NBR / TOTAL.NBR * 100, 2) COVERAGE
 INTO V_COVERAGE
 FROM (SELECT COUNT(*) NBR
 FROM PLSQL_PROFILER_DATA D, PLSQL_PROFILER_UNITS U
 WHERE D.RUNID = U.RUNID
 AND D.UNIT_NUMBER = U.UNIT_NUMBER
 AND D.RUNID BETWEEN I_RUNID_START AND I_RUNID_END
 AND U.UNIT_NAME = REG.UNIT_NAME) TOTAL,
 (SELECT COUNT(*) NBR
 FROM PLSQL_PROFILER_DATA D, PLSQL_PROFILER_UNITS U
 WHERE D.RUNID = U.RUNID
 AND D.UNIT_NUMBER = U.UNIT_NUMBER
 AND D.RUNID BETWEEN I_RUNID_START AND I_RUNID_END
 AND U.UNIT_NAME = REG.UNIT_NAME
 AND D.TOTAL_OCCUR > 0) EXEC;
 DBMS_OUTPUT.PUT_LINE(RPAD(REG.UNIT_NAME, 30, ' ') || '-------------' ||
 TO_CHAR(V_COVERAGE, '999.99'));
 END LOOP;
 END;

The results of executing this procedure look more or less like so:

SQL> EXECUTE PKG_PROFILER.PRC_TEST_COVERAGE (1,862);
PKG_MY_INTERFACE         ------------- 100.00
PKG_MY_RUN               -------------   8.35
PKG_CALC_HIST            -------------  33.33
PKG_COPY_STUFF           -------------  47.89
PKG_PT_OUTPUT_CALC       -------------  16.29
PKG_DATA_VALIDATION      -------------  18.03
PKG_DENORMALIZE_DATA     -------------  42.48
...

This only shows those packages that were actually used during our test suite. What about those that we missed (i.e. with 0% coverage)? Here’s another query that can list those:

select DISTINCT p.object_name
 from plsql_profiler_units u
 right outer
 join user_procedures p on p.object_name = u.unit_name
 where u.unit_name is null
 ORDER BY 1;

We can run similar queries to report on triggers as well.

Shiny Happy Reports

But as I said before, our inspiration was the Cobertura project, and they set the bar pretty high when it comes to coverage reporting. Our next step was to try to replicate their web-based reports that show an overview of the results (package-level), and then lets you drill all the way down to the code itself!

EXTRACTING THE RESULTS

Our first task was to figure out how to get all this data out in a format that could be masticated and munged by simple scripts until it looks and acts like well-behaved HTML. We chose XML, which has tool support in nearly every scripting and programming language around, and is good for structured data. Once again, Oracle comes to the rescue with its DBMS_XMLGEN package, which can format query result sets into a canonical XML format.

We wanted the whole enchilada, including the source code for our packages, along with the line-by-line statistics. So Eduardo created some procedures that:

  1. Join the data from the profiling stat tables to the Oracle user_source view
  2. Store the results in a “staging table”
  3. Convert the rows to XML-formatted CLOBs (one per module or package – equivalent to one per original source file in our code base)
  4. Spit them out into a folder as XML-formatted text files, using the package name as the file name

Here’s the code that populates the staging table for procedures which will later be converted to XML (note that this work must be repeated with some differences to capture trigger executions as well; the way these are reported in USER_SOURCE is actually a bit tricky):

-- XML_CLOBS will store one CLOB per row. Each row will correspond to a package, procedure or trigger
 create table xml_clobs (name varchar2(30), result clob);
-- Staging area for procedures and package bodies
create table xml_stage as
 select name, line, max(total_occur) total_occur, sum(total_time) total_time, text from
 (select s.name name, s.line line, p.total_occur total_occur, p.total_time total_time, s.text text
 from user_source s,
 (select u.unit_name, u.unit_type, d.line#,
 d.total_occur, d.total_time/1000000 total_time
 from plsql_profiler_data d, plsql_profiler_units u
 where u.runid = d.runid
 and u.unit_number = d.unit_number
 and u.unit_type in ('PROCEDURE','PACKAGE BODY')) p
 where s.name = p.unit_name (+)
 and s.line = p.line# (+)
 and s.type = p.unit_type (+)
 and s.type in ('PROCEDURE','PACKAGE BODY'))
 group by name, line, text
 order by 1,2;

Here’s the code for creating the XML CLOBs:

-- XML_CLOBS loading
declare
 q dbms_xmlgen.ctxHandle;
 result clob;
 cursor cModules is select distinct  name from xml_stage order by 1;
 begin
 for reg_module in cModules loop
 q := dbms_xmlgen.newContext ('select line, total_occur, total_time, text from xml_stage where name = '
 || '''' || reg_module.name || '''' || ' order by line');
 dbms_xmlgen.setRowTag(q, 'LINE');
 result:= dbms_xmlgen.getXML(q);
 insert into xml_clobs values (reg_module.name, result);
 dbms_xmlgen.closeContext(q);
 commit;
 end loop;
 end;
 /
 call pkg_utils.PRC_DROP_INDEX('xml_clobs','ix_xml_clobs_01');
 create index ix_xml_clobs_01 on xml_clobs (name);

I’ll skip the code for exporting the CLOBs to the file system, which isn’t too complicated, but a little bit bulky to reproduce here. Once we’re done, we should have a folder full of XML files looking something like this:

<?xml version="1.0"?>
 <ROWSET>
 <LINE>
 <LINE>1</LINE>
 <TEXT>PACKAGE BODY PKG_UTILS AS
 </TEXT>
 </LINE>
 <LINE>
 <LINE>2</LINE>
 <TEXT>
 </TEXT>
 </LINE>
 <LINE>
 <LINE>3</LINE>
 <TOTAL_OCCUR>0</TOTAL_OCCUR>
 <TOTAL_TIME>0</TOTAL_TIME>
 <TEXT>  PROCEDURE PRC_DROP_TABLE(I_TABLE_NAME VARCHAR2,
 </TEXT>
 </LINE>
 <LINE>
 <LINE>4</LINE>
 <TEXT>                           I_FORCE_DROP IN VARCHAR2 := 'FALSE') AS
 </TEXT>
 </LINE>
 ...
 <LINE>
 <LINE>293</LINE>
 <TOTAL_OCCUR>42</TOTAL_OCCUR>
 <TOTAL_TIME>.212</TOTAL_TIME>
 <TEXT>    IF V_PARTITION_EXISTS = 1 THEN
 </TEXT>
 </LINE>
 <LINE>
 <LINE>295</LINE>
 <TOTAL_OCCUR>23</TOTAL_OCCUR>
 <TOTAL_TIME>4782.023</TOTAL_TIME>
 <TEXT>      EXECUTE IMMEDIATE 'ALTER TABLE ' || I_TABLE_NAME ||
 </TEXT>
 </LINE>
 ...
 <LINE>
 <LINE>1129</LINE>
 <TEXT>END;</TEXT>
 </LINE>
 </ROWSET>
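
Just to give an idea of the shape of that export step I skipped: a small JDBC client that dumps each CLOB in xml_clobs to its own file would do the job. This is only a sketch, not the code we actually used:

import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: write each row of XML_CLOBS out as <name>.xml in an output folder.
public class ExportXmlClobs {
    public static void main(String[] args) throws Exception {
        Path outDir = Path.of("coverage-xml");
        Files.createDirectories(outDir);

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select name, result from xml_clobs order by name")) {

            while (rs.next()) {
                String name = rs.getString("name");
                try (var reader = rs.getClob("result").getCharacterStream();
                     Writer writer = Files.newBufferedWriter(outDir.resolve(name + ".xml"))) {
                    reader.transferTo(writer);   // stream the CLOB straight into the file
                }
            }
        }
    }
}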

TRANSLATING TO THE WEB

Next, I created an Ant script to read each of the XML files and perform an XSLT transformation on each to produce equivalent HTML files. The XSL script itself is too gruesome to reproduce here (if anyone wants a copy, I’ll be glad to oblige), but in essence it does the following:

  1. Write the HTML headers
  2. Go through the list of procedures, calculate the coverage of each, and generate a table of the totals
  3. Show all the source code, with counts of the number of times each line was executed and how long the total executions took

The transformation script also sprinkles the output with some useful eye candy: it decorates executable lines with either a red or a green background, depending on whether or not the line was executed in testing. It highlights the start and end of each individual procedure in bold, and provides links from the scores at the top down to the specific code listings. Since it’s a simple text transformation, it must do this based on whatever information is contained in the text file itself. This means we had to make certain assumptions about the source code: a line that begins with “PACKAGE BODY” must be the start of a package definition, “PROCEDURE FOO” or “PROCEDURE FOO(” must be the start of the declaration of a procedure named “FOO”, “END;” is the end of a procedure or block, and so on. It’s not infallible, so a little bit of standardization in code conventions can go a long way here.
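
In Java terms (shown this way just for readability; the real script does the equivalent with XSLT string functions), those heuristics amount to a handful of string checks per source line:

// Rough Java equivalent of the line-classification heuristics used by the XSL script.
// They assume our coding conventions (declarations start at the beginning of a line).
final class PlsqlLineHeuristics {

    static boolean isPackageStart(String line) {
        return line.trim().toUpperCase().startsWith("PACKAGE BODY ");
    }

    static boolean isProcedureStart(String line) {
        return line.trim().toUpperCase().startsWith("PROCEDURE ");
    }

    // Extracts "FOO" from "PROCEDURE FOO" or "PROCEDURE FOO(...)".
    static String procedureName(String line) {
        String rest = line.trim().substring("PROCEDURE ".length());
        int paren = rest.indexOf('(');
        return (paren >= 0 ? rest.substring(0, paren) : rest).trim();
    }

    static boolean isBlockEnd(String line) {
        return line.trim().toUpperCase().startsWith("END;");
    }
}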

Example of a package source report

Calculating the coverage was much more challenging than it sounds. It’s based on line coverage, which means you should be able to calculate a percentage by taking [100 * lines executed / total executable lines]. The problem is with the word “executable”. In Java, you generally have a 1-to-1 correspondence between lines of code and executable lines (this isn’t true, but you could probably get close enough by making that assumption). In PL/SQL, you may have SELECT clauses that span tens of lines (we have queries that actually span hundreds), but count as only a single executable statement. Fortunately, as you may see in the XML example above, not every line contains an entry for the values <TOTAL_OCCUR> and <TOTAL_TIME>. It was almost sufficient to count only those lines that contain these tags as executable. The problem is that the profiler provides execution data for some irrelevant lines, such as the package declaration and “END;”. Again, we resorted to a hard-coded list of values that could be ignored for this purpose.
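
Expressed as code, the counting rule looks roughly like this (a simplified sketch; LineStat is a hypothetical holder for one parsed <LINE> element, and the ignore list is abbreviated):

import java.util.List;
import java.util.Set;

// Simplified sketch of the coverage calculation: a line is "executable" if the profiler
// reported stats for it and it isn't on the hard-coded ignore list; it counts as
// "executed" if it ran at least once.
public class LineCoverageSketch {

    // Hypothetical holder for one <LINE> element parsed from the XML export.
    record LineStat(String text, Integer totalOccur) {}

    // Lines the profiler reports on but which we don't consider real statements.
    private static final Set<String> IGNORED_PREFIXES = Set.of("PACKAGE BODY", "END;");

    static boolean isExecutable(LineStat line) {
        if (line.totalOccur() == null) {
            return false;   // no profiler stats: comment, blank line, middle of a multi-line statement...
        }
        String trimmed = line.text().trim().toUpperCase();
        return IGNORED_PREFIXES.stream().noneMatch(trimmed::startsWith);
    }

    static double coverage(List<LineStat> lines) {
        long executable = lines.stream().filter(LineCoverageSketch::isExecutable).count();
        long executed = lines.stream()
                .filter(LineCoverageSketch::isExecutable)
                .filter(l -> l.totalOccur() > 0)
                .count();
        return executable == 0 ? 0.0 : 100.0 * executed / executable;
    }
}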

TYING UP SOME LOOSE ENDS

Now our coverage reports were *almost* complete, but for one detail. As I mentioned above, the profiler doesn’t report stats for packages that aren’t executed during testing. We were able to extract their source code into the XML reports, but without stats, there was no way to determine which lines are executable and which aren’t. As a result, our reports were showing a score of “0 out of 0”, which doesn’t quite compute. For this, we came up with what is so far our most inelegant solution, given that it requires manual intervention to execute and maintain. The trick is to force the profiler to execute the packages without confusing these results with the real stats generated during testing. For each non-executed package, Eduardo created a “wrapper” script which exercises the package somehow (it doesn’t matter how – the results will be ignored), converts the profiling data to invalid scores (-1 for execution occurrences and times) so they won’t be confused with real results, then merges this data into the “xml_stage” table where the real test results have already been stored so that they may be extracted in the final reports. Unfortunately, this approach requires an extra post-execution step before the report is generated, and requires a custom script for each package that you know isn’t being tested. If anyone has a better solution, please let us know!

BRINGING IT ALL TOGETHER

The cherry on top of our coverage reports is the high-level index page, listing all the packages and summing the scores across packages. At this point, I’d had it with SQL and XSLT, and fell back on my trusty Java knowledge. I created a class that reads all the HTML files spit out by the XSL transformation, greps out the total score from the report, sums them up, and generates an HTML file with all totals, plus links to the individual source file reports.
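
That class is nothing fancy. Here's a stripped-down sketch of the idea; the totals marker and the generated markup are just placeholders for the example:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Sketch of the index generator: scan the per-package HTML reports, pull out the
// executed/executable totals, and write a summary page with links to each report.
public class CoverageIndexGenerator {

    // Hypothetical marker embedded in each report, e.g. <!-- coverage 120/345 -->
    private static final Pattern TOTALS = Pattern.compile("<!-- coverage (\\d+)/(\\d+) -->");

    public static void main(String[] args) throws IOException {
        Path reportDir = Path.of("coverage-html");
        long executed = 0, executable = 0;
        StringBuilder rows = new StringBuilder();

        try (Stream<Path> files = Files.list(reportDir)) {
            for (Path report : files.filter(p -> p.toString().endsWith(".html"))
                                    .filter(p -> !p.getFileName().toString().equals("index.html"))
                                    .toList()) {
                Matcher m = TOTALS.matcher(Files.readString(report));
                if (m.find()) {
                    long exec = Long.parseLong(m.group(1));
                    long total = Long.parseLong(m.group(2));
                    executed += exec;
                    executable += total;
                    rows.append(String.format("<tr><td><a href=\"%s\">%s</a></td><td>%.2f%%</td></tr>%n",
                            report.getFileName(), report.getFileName(),
                            total == 0 ? 0.0 : 100.0 * exec / total));
                }
            }
        }

        String html = "<html><body><h1>OracleTC coverage</h1>"
                + String.format("<p>Overall: %.2f%%</p>", executable == 0 ? 0.0 : 100.0 * executed / executable)
                + "<table>" + rows + "</table></body></html>";
        Files.writeString(reportDir.resolve("index.html"), html);
    }
}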

Example report index page

Tying all this together is a Maven project which compiles the Java class, and packages it up with an Ant project containing all the scripts, images, stylesheets and so on needed to generate the reports. Unfortunately, the harness doesn’t automate the Oracle part of the report generation, but once the XML has been exported to files, a single call to “ant” is enough to generate the HTML reports.

Conclusion

It sounds like a lot of work, but we were able to create the initial reports in a couple of days, part-time. The project came together really by accident as a thought experiment that unexpectedly worked. For that, we owe our thanks to Oracle for providing so many out-of-the-box tools and features, to Cobertura for providing an excellent example of coverage reporting for Java, and to Scott Ambler for throwing down the gauntlet in the first place.

Although this tool may seem trivial at first glance, it turns out we now have our hands on something quite valuable. The profiling can be enabled for analysis at any time, and with OracleTC, we can quickly report on the results. While it was created with our automated unit test suite in mind, it is by no means limited to this. For example, we have a type of exercise that we call a “blitz test”, in which we try to simulate realistic production situations. The test entails running some automated scripts while anywhere from 5 to 20 people simultaneously log in to the system and begin executing a variety of test scripts based on different user profiles. With OracleTC, we are able to show afterwards just how much of our PL/SQL was executed during the blitz, allowing us to identify gaps and better focus our tests. You could also conceivably enable this profiling in production for a period of time in order to try and identify functionality that isn’t being used by customers, although I wouldn’t do this if your system is sensitive to performance impacts.

I hope you find this information useful and that some of you are encouraged to try this out for yourselves. I tried to provide enough information to give you a big head start, but this article is already too big for a typical blog post, and couldn’t show you everything. Rather than leave you hanging by breaking this up into multiple posts, I did my best to be brief when possible. My hope is that I’ll have the time (and the permission) to open source the code that we have, and provide adequate documentation. Maybe if enough of you show support, it’ll help get the ball rolling. In the meantime, if anyone out there needs a hand, drop me a line (either reply here, or use the email address in the About page) and let me know. But if you are writing to tell me that Oracle (once again) already has a tool that does this, please be gentle!


Unit tests for the database

December 19, 2008

A while back (actually, over 2 years ago now!), Scott Ambler was here in Rio for the Rio Java Summit that we were hosting. In his usual controversial way, he made the point that people on the database side of things are way behind the curve on the latest advances in software development, especially in relation to agile practices and software development techniques. Of course, he was promoting his then-recently-released book “Refactoring Databases”. But he was certainly spot-on about the fact that people that generally work directly with databases, including programming stored procedures and triggers, don’t follow the same practices as other software developers, especially those in the object-oriented paradigm. Perhaps this is because database schemas are more risky and slow to change than software (his Database Refactoring book shows how hard this is – even “agile” schemas are refactored on a time scale of weeks, months or even years, rather than minutes for software). Or, more likely, it’s because there’s something of an organizational and historical wall between the two camps.

Whatever the cause, one thing is clear: there are very few tools available to database developers for things that software developers have come to take for granted: code metrics and analysis, IDEs with support for refactoring, and frameworks for unit testing. What is available is generally either commercial and highly proprietary (Oracle and Quest (the Toad guys) have some fantastic tools for profiling, analysis and even unit testing), or so limited as to be almost trivial. Even when the tools exist, almost no one uses them. OK, I admit I have no data to support that statement, but I’ve never met a pure DBA, Data Architect or database developer who has ever written a unit test harness in the way that software developers know them (have you?). The point is that unit testing and related practices just haven’t made their way into the database community, and as a result there are almost no tools to support them.

FIT4DBs

As part of Scott Ambler’s presentation, he sent out a challenge to anyone interested to start making these tools on their own. As it so happens, my own team had just recently started a project to adapt the FIT testing framework for use with SQL and stored procedures. We’d looked at DbUnit and some others, but the XML-based test and data declarations seemed somewhat unwieldy to us. The result was a framework I called FIT4DBs. We had plans to open source it, but unfortunately the time just hasn’t materialized. I’ll let you know if we ever get around to it, but for now I’d like to just mention a few aspects of the project.

FIT4DBs, as I already mentioned, is based on the FIT framework for testing. FIT comes with its own philosophy about testing, in which the business analysts themselves can define requirements in simple HTML or Excel (or some other table-oriented format). The developers are then able to write “Fixtures” which essentially “glue” these requirements to the system code. The result is requirements that are actually testable. Sounds fantastic, and when it works, it really is.

Simple FIT test for a web site shopping cart

For FIT4DBs, however, it wasn’t this philosophy that interested us – it was the table-driven format for declaring tests. It occurred to me that this format is perfect for when you have an algorithm or procedure that is highly data-driven. If you’ve written a lot of unit tests, I’m sure you’ve dealt with these cases before: some methods only require a few tests to test all the variations, but there are some methods that after 10 or 15 or 20 variations, you keep coming up with more (I find I do this a lot when testing regular expressions, for example).


    public void testCalculateLineItemPriceQty1() {
    	ShoppingCart cart = new ShoppingCart();
    	Item item = new Item(101, "The Best of Wonderella", 10.99F);
    	cart.add(item, 1);
    	
    	assertEquals("Price of one item should equal the unit price", 
    			10.99F, cart.calculateLineItemPrice(item), 0.001F);
    }


    public void testCalculateLineItemPriceQtyMany() {
    	ShoppingCart cart = new ShoppingCart();
    	Item item = new Item(693, "1001 Yo Mama Jokes", 43.00F);
    	cart.add(item, 5);
    	
    	assertEquals("Price of multiple items should follow simple multiplication", 
    			215.00F, cart.calculateLineItemPrice(item), 0.001F);
    }

...and so on...


The problem here is that you have a lot of excess code for what is essentially repetitive work – all you’re doing is trying out different inputs and expecting different outputs. If you come up with a new combination of inputs, you have to copy and paste all these lines of code, just to modify one or two parameters. And after a while, it becomes nearly impossible to see exactly which tests you have already created (not to mention all the ridiculous method names). FIT provides a very convenient alternative to this pattern. All you have to do is list the variations of inputs and expected outputs in a table, and voilà! If you want to add a new variation, you just have to add a new row in the table.
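
For the curious, the fixture behind a table like that is tiny. Here's a sketch using FIT's standard ColumnFixture and reusing the ShoppingCart and Item classes from the JUnit example above (it's not the actual fixture from the screenshots): the table's input columns map onto the public fields, and the expected-result column maps onto the lineItemPrice() method.

import fit.ColumnFixture;

// Sketch of a FIT column fixture: for each row of the HTML table, FIT sets the public
// input fields, calls lineItemPrice() and compares the result to the expected column.
public class LineItemPriceFixture extends ColumnFixture {
    public int itemId;
    public String title;
    public float unitPrice;
    public int quantity;

    public float lineItemPrice() {
        ShoppingCart cart = new ShoppingCart();
        Item item = new Item(itemId, title, unitPrice);
        cart.add(item, quantity);
        return cart.calculateLineItemPrice(item);
    }
}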

Same test with some new exception cases

So, the first realization was that database procedures tend to be, well, data-driven. But not in the way that we were talking about here. Usually, what matters is the data that already exists in the database… in the form of tables. And really, that’s the hardest part about testing code that depends on the database: setting up the data the way you want it, and making sure it’s there in a pure and unadulterated form (otherwise, you may get incorrect results). What better way to declare your test data than to put it in tables, just the way it should look in the database? So, we made special set up fixtures that let you declare the name of a table and the columns into which you want to insert your test data. Each row in the HTML or spreadsheet declares a row that will be inserted into the database before the test begins. When the test has completed, the tables that were set up are either truncated (the tests assume they were empty prior to initialization), or the transaction is rolled back. In 9 out of 10 cases, this is sufficient for removing any side effects of the test and keeping each test independent of the others.

A set up table for Items in the database

Next, we needed some Fixtures for testing stored procedures and free-form SQL. This was relatively simple: the plain text of the SQL statement or procedure call declared in the test is executed against the database as a JDBC call from Java.
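
Under the hood, that amounts to little more than this (a sketch; the SQL and the procedure call in the comments are just examples):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.Statement;

// Sketch: the text declared in the test table is executed more or less verbatim over JDBC.
public class SqlExecutionSketch {

    // e.g. "{call pkg_orders.prc_recalc_totals(42)}" -- a hypothetical procedure call
    void executeProcedure(Connection conn, String callText) throws Exception {
        try (CallableStatement stmt = conn.prepareCall(callText)) {
            stmt.execute();
        }
    }

    // e.g. "update items set unit_price = unit_price * 1.1" -- free-form SQL from the test
    void executeSql(Connection conn, String sqlText) throws Exception {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(sqlText);
        }
    }
}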

Commands to execute directly in the database

The hard part is then comparing the results. While simple JUnit tests can usually limit the scope of their actions to the simple input and output of a single method (some tests require mock objects to test side-effects), stored procedures can affect any number of tables in any number of ways (they can even affect the schema itself). Not only that, but triggers may affect even tables not explicitly included in the scope of the procedure itself (and we DO want to test triggers, right?). This is not an easy thing to test, but FIT4DBs makes a heroic effort to make it simple for you.

What it comes down to in the simplest of terms is looking for “diffs” on the data contained in the tables. This is done by first “registering” the tables and the starting data via the set up tables. That’s right, the data goes not only into the database, but into in-memory hash maps that keep track of what the “before” data looked like. Once the test has been executed, the tables can be re-queried to see if their data has changed in any way. A “SELECT *” is executed on each table, and the results are sucked into “after” hash maps. There are then Fixtures that can be used to declare tables for diff comparisons. Each row that is declared in the test is an expected difference. The differences can be of type “inserted”, “deleted” or “updated”. Comparisons are based on the primary keys, which are declared as part of the table syntax, so if a primary key changes as part of the test, it will look like the row has been deleted and a new one has been inserted.
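
In code, the diffing boils down to snapshotting each registered table by primary key and comparing the before and after maps. A simplified sketch of the idea (not the framework's actual fixtures):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

// Sketch of the before/after diffing: snapshot a table keyed by its primary key,
// then compare the snapshots to find inserted, deleted and updated rows.
public class TableDiffSketch {

    // Map of primary key value -> (column name -> value)
    static Map<String, Map<String, Object>> snapshot(Connection conn, String table, String pkColumn)
            throws Exception {
        Map<String, Map<String, Object>> rows = new HashMap<>();
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from " + table)) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                Map<String, Object> row = new HashMap<>();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.put(md.getColumnName(i), rs.getObject(i));
                }
                rows.put(String.valueOf(row.get(pkColumn.toUpperCase())), row);
            }
        }
        return rows;
    }

    static void diff(Map<String, Map<String, Object>> before, Map<String, Map<String, Object>> after) {
        for (String pk : after.keySet()) {
            if (!before.containsKey(pk)) {
                System.out.println("inserted: " + pk);
            } else if (!before.get(pk).equals(after.get(pk))) {
                System.out.println("updated:  " + pk);
            }
        }
        for (String pk : before.keySet()) {
            if (!after.containsKey(pk)) {
                System.out.println("deleted:  " + pk);
            }
        }
    }
}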

Testing expected changes in the items table

And that’s pretty much it! There are also facilities for things like declaring your connections, and managing suites of tests by, for example, reusing set up data via a custom “include” facility. DBUnit and similar frameworks work in some of the same ways, but they usually declare their set up data via XML. That just never felt right to me – it’s too hard to edit and to read what you have. Additional benefits of FIT itself include the ability to throw in comments wherever you want (anything that’s not in a table is basically considered a “comment” and ignored – this lets you add formatting and even images to your tests if it can help readability), and the fact that the results show up as exactly your input HTML, with green, red and yellow colors to show passes, fails and errors.

Test Coverage

One other piece that is typically missing from the database development suite is tooling for reporting on the test coverage of procedural code by the unit test suite. This more or less goes without saying, since unit testing in general is rather neglected. Apparently Quest Software offers a rare tool that can provide this sort of information if you happen to have written your tests with their suite, but most of our tests are defined with the FIT4DBs framework. We also have other automated tests not specific to FIT4DBs which exercise our PL/SQL code, so checking coverage only with that tool would be insufficient.

Eduardo Morelli, our Data Architect, has come to the rescue with an excellent, thorough and flexible solution to this problem. Unfortunately, it’s Oracle-specific, so it will only help you if you work with Oracle. I’ll save the details for another post – stay tuned!

So, Mr. Ambler, I hope this all comes as some encouragement to you. It’s been a long road to rise to your challenge, and it’s still ongoing, but Sakonnet at least is one shop that prefers to use agile development practices on both the software and database sides. I hope that some time soon we can get the resources to open source our FIT4DBs framework for others to use. In the meantime, you can give DBUnit and other tools a shot, or just test your code in an integration environment through JUnit, FIT and other automated harnesses. If you use Oracle, you can even use the techniques I’ll describe in my next post to report on your test coverage. It’s all about the data, so get out there and test the code that’s mucking around with it!


Lessons from JBoss Envers

October 31, 2008

My good pal Daniel Mirilli just sent me a link to the JBoss Envers project. The idea behind it is that you can automatically track version changes of persistent fields using a new annotation called “@Versioned”. This makes something that is otherwise pretty complex – saving a whole history of changes for an object – as simple as declaring that you need it.

This is a great example of what makes for elegant design: you abstract a concrete business need into a reusable concept, then turn it into an “aspect” to get it out of the immediate code. You can now “take it for granted”. Before I got to know AOP, I wrote a framework that did all this the hard way: I had parent classes for “entities”, inherited attributes like “ID”, and template methods for things like type-specific attributes: PKs, unique keys (more than one allowed), “natural” ordering, and so on. Three years ago, I would have done this differently: annotations, or AspectJ aspects, or something along those lines. Then, more recently, EJB 3.0 provided this out of the box with its @Entity annotations and so on.

The beauty behind AOP, no matter what the technology, is that it allows you to design elegantly through abstractions – and yet make them concrete.

Anyway, back to JBoss Envers. I haven’t looked at the details, but it seems like an elegant solution to a problem that I’ve seen more than once: “entities” that need to be “versioned”. That is, you want an audit trail for everything you do to them. And you can never delete them (logical delete, maybe). The product I currently work on has some pretty intense versioning requirements, which is why Mirilli sent me the link to this project (thanks, buddy!). But I know enough about the specific requirements of my product to suspect that Envers wouldn’t be able to do it for us.

In fact, my product’s business rules are convoluted enough that I actually spent about a week doing what the creators of Envers did: trying to wrap my head around it all by turning it into generalized abstractions. In the process, I actually created my own UML 2.0 extension language in order to diagram these versions and explain it all to myself. It’s one thing to keep an audit trail, but what if you can look up previous versions? And what if those previous versions relate to other versioned entities? I realized that in such cases there can be more than one type of “relate”: specific version relationships (Foo version 12 refers to Bar version 5), general version relationships (all versions of Foo refer to the latest version of Bar), and even rules-based general version relationships (all versions of Foo refer to the active or editable version of Bar, which may not be the latest)! Also, note I said “refer” here, implying a uni-directional reference. The relationship can be bi-directional, but what if the back reference follows different rules from the forward reference?

Sorry to muck around in the details like that. It’s not your problem (I hope). What I wanted to highlight here is that this process of abstraction and creating a new “language” around some tricky business rules can actually become something practical when whittled with the tools of an AOP framework, annotations, or what have you. And Envers, whether it works or not, has reminded me of that: when Foo refers to Bar, why not do something like this:

public class Foo {
    @Versioned(type = VersionReferenceType.GENERAL, rule = VersionReferenceRule.LATEST)
    private Bar bar;
}

Instead, I’m currently relying on inheritance from base classes and custom code in various places in the persistence and domain layers that are only loosely tied to the object references. But now Envers has gotten me thinking again…


Legaciness revealed!

October 1, 2008

It looks like my neverending quest for a Legaciness metric has already come to a satisfyingly swift end! I was just browsing some blogs on agile techniques when I came across a post about fixing bad developer habits on a blog called Agile Tips. Waaaaay down in the footnotes is a link to a handy little tool in the Google Code site called the Testability Explorer.

I couldn’t believe my eyes when I saw it. I felt like Francisco Orellana stumbling across El Dorado. Could this really be the fabled Legaciness metric after all? I have to investigate more, but what I’ve read so far seems to be exactly what I’ve been searching for: a tool designed to calculate the difficulty of testing Java code based on its static dependencies! And from what I’ve read so far, it looks like I was on the right track with the algorithm:

  1. Each class attribute has to be considered by how “injectable” it is (their term, and I like it)
  2. “Global” variables are considered bad if they’re accessible, but not injectable
  3. There’s a transitive penalty for methods which call methods that are not injectable (the “inheritance” penalty that I talked about)

They don’t directly address a number of concerns I have, such as:

  1. System calls that make testing difficult (e.g. I/O calls)
  2. Calls to “new” within a method
  3. The ability to “inject” something using the Legacy Code Patterns like “Extract and Override Factory Method” (if the class itself is considered injectable in a method, at least that method won’t get a penalty)

I’m guessing that some of these concerns, like the calls to “new”, would be covered by the transitivity of the complexity scores. I also like the touch that they’ve included cyclomatic complexity into the mix, although I don’t have a feel yet for if that would outweigh the injectability problems – my guess is no. I also don’t know if they take into account 3rd-party libraries. All in all, it looks like their metric is pretty straightforward, easy to understand and useful. I like the simplicity of their approach.
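
To make that concrete, here's a toy example of the kind of code I'd expect the metric to punish and reward (my own reading of the approach, not code from the tool):

// Hard to test: the dependency is created with "new" inside the method, so a test
// cannot substitute it -- exactly the kind of non-injectable work the metric penalizes.
class ReportMailerHardWired {
    void send(String report) {
        SmtpMailer mailer = new SmtpMailer("smtp.example.com");   // non-injectable
        mailer.send("boss@example.com", report);
    }
}

// Cheap to test: the same dependency is injected, so a test can hand in a fake.
class ReportMailerInjectable {
    private final Mailer mailer;

    ReportMailerInjectable(Mailer mailer) {   // injectable via the constructor
        this.mailer = mailer;
    }

    void send(String report) {
        mailer.send("boss@example.com", report);
    }
}

interface Mailer {
    void send(String to, String body);
}

class SmtpMailer implements Mailer {
    SmtpMailer(String host) { /* connect to the host */ }
    public void send(String to, String body) { /* the real I/O lives here */ }
}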

All that’s left now is to try it out! I’ll give it a shot tomorrow and see what it thinks about a certain 550-line file-reading Excel-interpreting monster of a method that’s been keeping me up at night…