Complexity Creep: Data Scavenger

May 17, 2009

You’re given a simple task: get some XML data from a URL or web service, convert it to something else, and send it off downstream to some other system. Easy enough, right? Somewhere in the middle of your “80 percent done” report, you realize that the original XML is missing one silly little field – the customer’s middle name, the timezone on the date, the preferred nickname for their online avatar, whatever. Unfortunately, this little detail is critical for completing the task at hand.

If there’s no way to get this information, there’s very little you can do besides ask for an enhancement and wait. But very often, the data’s there for the taking; you just have to go out and get it from somewhere else. Just as commonly, however, that “somewhere else” is not as accessible as you’d like. If you’re lucky, you can just pull another object or rowset out of the same database. If you’re not, good luck with that “80 percent” thing…

Here are some of the complications you may come across in a data scavenger hunt:

  • The data’s there, but you have to torture it out of data structures not meant to provide it (e.g. it’s buried in a string of text with no standard format, or in a numerical field with no clear designation for varying units)
  • The data’s on a remote server: performance may be impacted, you have to deal with the problem of making the call (if at all possible), handling errors, and so on
  • You don’t have the privilege to access the data
  • The data is not guaranteed to be transactionally consistent (you may be getting some stale data, or the new data reflects changes that aren’t seen by the rest of your data set)
  • The data is in a log file, system configuration file, admin-only database tables, or some other unholy “do not touch this!” artifact
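The first complication above is worth a sketch. Suppose (a hypothetical example, not from any real system) the weight we need is buried in a free-text “notes” field, sometimes in kilograms, sometimes in pounds, with no clear designation of units:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical scavenger hunt: torture a weight out of a free-text field
// that was never meant to provide it.
public class WeightScavenger {

    private static final Pattern WEIGHT =
            Pattern.compile("(\\d+(?:\\.\\d+)?)\\s*(kg|lb)", Pattern.CASE_INSENSITIVE);

    /** Returns the weight in kilograms, or -1 if it can't be scavenged. */
    public static double extractWeightKg(String notes) {
        Matcher m = WEIGHT.matcher(notes);
        if (!m.find()) {
            return -1; // the data simply isn't there
        }
        double value = Double.parseDouble(m.group(1));
        // No standard format in the source, so we guess the units from
        // whatever suffix happens to be present.
        return m.group(2).equalsIgnoreCase("kg") ? value : value * 0.45359237;
    }

    public static void main(String[] args) {
        System.out.println(extractWeightKg("fragile, approx 2.5 kg, leave at door"));
        System.out.println(extractWeightKg("no weight info here"));
    }
}
```

The regex works until the day someone types “two and a half kilos” – which is exactly the fragility the bullet is warning about.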

Each one of these little beauties is a mini rat’s nest of complexity creep in its own right. What do you do if your application doesn’t have privileges to get the data? Implement a “run-as” feature just to get the one field? Hard-code the root user password? Convince the guys on the other side of the fence to give you a user, and wait?

In some cases, you may actually be able to modify the code itself to fit your needs. But that requires a new release of the software, which, if you’re accessing it from the outside, may not be an option (I’ll be posting another Complexity Creep article on this one some time soon). Also, this can lead to problems of its own: the “universal interface”, or the one-size-fits-all façade. I’ve worked with interfaces like this, where every new feature begets a new variation on the same old methods, just to avoid the need for scavenger hunting. It happens with database views, and in XML, too: to avoid making multiple remote requests, you keep adding new fields to your XML entities, until your Company XML document includes a list of all employees barfed up somewhere in the nested tags, complete with their “Address2” field and their in-flight meal preferences. This solution is the evil twin of the Data Scavenger: the Data Tar Baby.

Data scavenging is a common problem in integration projects, which is one of the reasons they can be so tough. But it can happen when building monitoring utilities, reporting features, or just about anywhere information is required. Unfortunately, when working in the “information technology” field, the odds are pretty high you’ll come across this more often than you’d like. And yet, when you do, it always seems to be that one last insignificant detail that turns your routine query into a scavenger hunt.


Complexity Creep: Aspect Infiltration

May 15, 2009

The other day, I was working on enhancements to a type of “job execution harness” that executes parallel tasks in our system. We had started out with the concept of “jobs”, essentially individual commands in a Command design pattern, and had recently evolved the idea of “groups” of jobs for managing dependencies between sets of parallel tasks. (note: just for fun I’m testing out yUML for the images here)

Harness schedules Groups and executes Jobs

As with pretty much any complexity creep story, this one starts out with a pretty simple and elegant design. You basically had an execution harness, which took care of scheduling the jobs to be executed, and the jobs themselves, which were totally independent POJOs with no knowledge of each other, nor of the harness. The harness also provided monitoring capabilities, reporting the start and end of the groups and of the individual jobs.

Harness executes Jobs and reports to Harness Monitor
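In code, the original design looked roughly like this (a sketch with invented names, not the actual system’s API): the jobs are plain commands, and only the harness knows about scheduling and monitoring.

```java
import java.util.List;

// Sketch of the original harness-agnostic design. All names are invented
// for illustration.
interface Job {
    void execute(); // a Command: no knowledge of the harness or other jobs
}

class Harness {
    // The harness schedules groups of jobs; here a "group" is just a list.
    void run(List<Job> group) {
        for (Job job : group) {
            System.out.println("monitor: job started");
            job.execute();                       // the job knows nothing about us
            System.out.println("monitor: job finished");
        }
    }
}

public class HarnessDemo {
    public static void main(String[] args) {
        new Harness().run(List.of(
                () -> System.out.println("loading data"),
                () -> System.out.println("crunching numbers")));
    }
}
```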

We were living in separation-of-concerns bliss until the day we were given a new requirement: to support monitoring of job “steps”. “Um, what’s a job step?” we asked. It turns out that some of these jobs can take a looong time to run (several hours). Users wanted a way to see what was going on during these jobs in order to get a feel for when they would be done, and if everything was going ok.

Harness executes Jobs which contain Steps...?

We wanted at all costs to preserve the harness-agnostic nature of our jobs. We thought about breaking the bigger jobs up into smaller jobs, but unfortunately, it wasn’t possible since they are essentially atomic units of work. We considered solutions for providing some sort of outside monitor which could somehow generically track the execution, but these steps were basically execution milestones that only the job itself could know. Finally, we knew we were defeated, and gave in: because of one little logging requirement, we were going to have to introduce some sort of callback mechanism to let the once harness-agnostic jobs signal to the harness whenever a step was completed.

Harness and Job both report to Harness Monitor

From a pure layering perspective, you can see that we are in trouble if we are trying to keep all the harness code (in orange) in a top layer, and the business code below. So, what are some possible solutions to the problem? We could:

  1. Let the monitor know about the progress of the Job steps through some indirect method (e.g. through special text log statements, or indirect indications via data in the database). While it would avoid placing any explicit compile-time dependencies on the Job class to the harness, it would create a very fragile “know without knowing” relationship that the Jobs would have with the harness. Nasty.
  2. Create a special “StepLoggingJob” abstract class that these Jobs would extend in order to gain access to some special logging facilities. Basically, these Jobs would no longer be POJO classes, in the sense that I used the term, since they would have to extend a harness-specific framework. Unfortunately, this introduces a circular dependency.
  3. Inject a special “StepLogger” utility class into the Jobs, either as a class member, or as a parameter on their “execute()” (or whatever) method.
Option 1: Job writes special logging messages to a common store

Option 2: Job extends StepLoggingJob

Option 3: Job calls a StepLogger which reports to the Monitor

Note that we still haven’t really solved the problem… the Job class still requires something in the “harness layer”. If we were using a dynamically typed language, we could do something of a mix between option 1 and option 3 by using duck typing (the Job would know it was getting SOMETHING that could log, but wouldn’t have to know it’s from the harness layer). In order to really separate the dependencies in Java, which we use, we have to create a new layer, the “harness API layer”, and place only the StepLogger interface there:

Job knows only about the StepLogger interface
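A minimal sketch of that final layering (all names invented for illustration): the StepLogger interface lives in a thin “harness API” layer, so the Job depends only on that interface, never on the harness implementation itself.

```java
// --- harness API layer: the only thing the business code may see ---
interface StepLogger {
    void stepCompleted(String stepName);
}

// --- business layer: the Job stays free of any harness implementation ---
class LongRunningJob {
    void execute(StepLogger logger) {
        logger.stepCompleted("extracted source data");  // execution milestones
        logger.stepCompleted("transformed records");    // only the job itself
        logger.stepCompleted("loaded results");         // could possibly know
    }
}

// --- harness layer: supplies an implementation that reports to the monitor ---
public class StepLoggerDemo {
    public static void main(String[] args) {
        // Here the "monitor" is just stdout.
        new LongRunningJob().execute(step -> System.out.println("step done: " + step));
    }
}
```

The callback runs in the opposite direction from every other dependency in the system, which is exactly why it forced a new layer into existence.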

So, what happened? To summarize: because we wanted just a little more logging, we were forced to introduce a whole new layer into our application, and break the concept of 100% harness-agnostic commands. Is this an isolated case? Of course not. You see it all over the place, and logging is a great example of this. Have you ever heard someone talk about aspect-oriented programming (AOP), and give logging as an example? It’s PERFECT! With some simple configuration, you can automatically enable logging on all your methods without a single line of code. So, you can get rid of a ton of boilerplate code related to logging, and focus on just the business logic, right? Wrong. If that were true, we all would have thrown our logging libraries in the garbage years ago. Instead, Log4J is still one of the most heavily-used tools in our library. Why? Because aspects work by wrapping your methods (and so on) with before and after logic, but they can’t get INSIDE the methods themselves.

The really useful logs are written by the method itself
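You can see the limitation concretely with a JDK dynamic proxy, which is one simple way to implement a logging “aspect” (all names here are invented): the wrapper sees the method enter and leave, but the milestone inside the method body is invisible to it.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hypothetical example: a dynamic-proxy "aspect" that logs around a method.
interface Importer {
    void importFile(String name);
}

public class AspectDemo {
    @SuppressWarnings("unchecked")
    static <T> T withLogging(T target, Class<T> type) {
        InvocationHandler h = (proxy, method, args) -> {
            System.out.println("entering " + method.getName()); // the aspect sees this...
            Object result = method.invoke(target, args);
            System.out.println("leaving " + method.getName());  // ...and this
            return result;
        };
        return (T) Proxy.newProxyInstance(type.getClassLoader(), new Class<?>[]{type}, h);
    }

    public static void main(String[] args) {
        Importer importer = withLogging(name -> {
            // ...but only the method itself can write the genuinely useful log:
            System.out.println("parsed 14,302 rows from " + name);
        }, Importer.class);
        importer.importFile("customers.xml");
    }
}
```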

I call this Aspect Infiltration: when your non-business infrastructure creeps into your business code. You can see this elsewhere, as well: in the J2EE container whose transaction control isn’t fine-grained enough for you (introduce a UserTransaction interface); in the security service that isn’t sufficiently detailed (give the code access to a security context), and so on. It’s a common issue for any container that wants to do all the work for you. There will come a time when the business code itself just knows better. And you’d better be ready to give it the API that it needs.

Complexity Creep

May 14, 2009

I’ve been silent in this blog for a while now not only because I’ve been busy with family, work, and organizing IASA encounters, but also because I’m reluctant to rehash anything that’s been written before. Fortunately, I think I’ve come across something worth the pixels it’s written on. While working on software design over the years, I’ve noticed a common pattern, no matter how unrelated the design tasks are: every design starts out really simple and elegant, but at some point can grow into a warty, perturbed perversion of the original idea. In some cases, in response to a new requirement or complication, a new solution latches on to the side of the core design, like a barnacle on the keel of a ship. Other times, the whole design can be turned belly-up, like the Poseidon after a storm. What’s fascinating to me is how the great upheavals in design are often caused by the smallest additional requirements.

Everyone has heard of the concept of “scope creep”, when the requirements seem to grow faster than you can code. I want to write about what I’d like to call “complexity creep”, those moments when a tiny little requirement can mean a whole lot of extra work for the development team, or even turn your basic design concepts on their head.

Since this will be the beginning of a series of posts, inspired by some of the most agonizing moments from my ongoing work as a software architect, I won’t post any examples here. Look instead for my next posts, coming soon, on two “patterns” (oh no! YADSDPC – Yet Another Damned Software Design Pattern Catalog*) of complexity creep: “Aspect Infiltration” and “Data Scavenger”. If you come across any moments of your own, please let me know!

* Note that I use the term “pattern” here loosely, as in a group of unrelated issues with recognizable commonalities. I’m not really planning on documenting these like software design patterns.

Causes of Decay: Mutating Design

February 23, 2009

AKA “Partial Refactor”

AKA “Good Ideas”

I have discussed in the past a phenomenon I call “Architecture by Accident”, in which the clarity of the design of a system may be ruined by rampant inconsistencies caused by a lack of attention to standards and reuse as the system evolves. But you don’t have to rely on chance to get there – we can achieve the same results absolutely intentionally.

Let’s say you have a system with a catalog of products, and that each of these products has a listing of parts. It’s probably a common pattern in the system to do something with the product itself, then go through the parts one by one and do a related activity. For example, the web page for the product probably lists the product’s name, code, and a description, and then shows each of the parts one by one in a similar fashion. The printed-out invoice may do the same. And let’s say the order fulfillment workflow does all sorts of funky calculations based on summing up the individual parts for things like calculating shipping weight, checking inventories, provisioning, whatever.

So the system designer goes ahead and says, “Hey everybody! Let’s create an iterator for products and their parts. From now on, whenever you need to do something to products, use a loop with the iterator.” Great. So, the team goes ahead and implements the web page and the invoice sheet using the really fancy iterator, with just a slight change to the contents of the “while” statement. So far, so good.

After a while, this “slight change to the contents” starts giving off a distinct copy-paste smell to the designer. So, one bright day, while browsing through their dog-eared copy of the GoF, they come across the Visitor pattern. “Aha! THIS is what we need!” exclaims our designer. The team has just been asked to implement that product-is-the-sum-of-its-parts weight algorithm I mentioned, and the designer decides it’s a good time to try out the pattern. What do you know?! It’s a fantastic improvement to the way they do things. “From now on, team, we use the Visitor pattern!” And it was so.
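The designer’s Visitor approach can be sketched like this (all class and method names are invented for illustration): one visitor interface, so each new “do something with a product and its parts” feature becomes a new visitor instead of another hand-rolled loop.

```java
import java.util.List;

interface ProductVisitor {
    void visitProduct(Product product);
    void visitPart(Part part);
}

class Part {
    final String name;
    final double weightKg;
    Part(String name, double weightKg) { this.name = name; this.weightKg = weightKg; }
}

class Product {
    final String name;
    final List<Part> parts;
    Product(String name, List<Part> parts) { this.name = name; this.parts = parts; }

    void accept(ProductVisitor visitor) {
        visitor.visitProduct(this);
        for (Part part : parts) {
            visitor.visitPart(part);   // the "related activity", part by part
        }
    }
}

// The product-is-the-sum-of-its-parts shipping weight calculation:
class ShippingWeightVisitor implements ProductVisitor {
    double totalKg;
    public void visitProduct(Product product) { /* nothing to weigh on the product itself */ }
    public void visitPart(Part part) { totalKg += part.weightKg; }
}

public class VisitorDemo {
    public static void main(String[] args) {
        Product desk = new Product("desk",
                List.of(new Part("top", 12.0), new Part("legs", 8.0)));
        ShippingWeightVisitor weigher = new ShippingWeightVisitor();
        desk.accept(weigher);
        System.out.println(weigher.totalKg + " kg");
    }
}
```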

Time passes, and after a lot of summing up product parts in all sorts of incredibly meaningful ways, the designer starts to realize that their code base is lousy with one-hit-wonder Visitor classes that are created for some special purpose and are never used again. Fortunately, they are reading a book on the wonders of closures in Groovy. “Aha! THIS is what we need! We can just pass the code to be executed, without having to create a whole new class every time!” The team is all for it (all except one member, who’s forced to quit due to some unfortunate flashbacks to the ’60s inspired by the new language – especially tragic to happen to a young man of only 25), and goes about messing around with their products in Groovy.
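The closure-based third design can be sketched in Java lambda terms (the team actually used Groovy closures; I’m keeping the examples in one language, and all names are invented): instead of a one-hit-wonder Visitor class per calculation, callers pass the per-part code inline.

```java
import java.util.List;
import java.util.function.ToDoubleFunction;

public class ClosureDemo {

    static class CatalogPart {
        final double weightKg;
        CatalogPart(double weightKg) { this.weightKg = weightKg; }
    }

    // One generic "sum over the parts" operation replaces a family of
    // special-purpose Visitor classes.
    static double sumParts(List<CatalogPart> parts, ToDoubleFunction<CatalogPart> f) {
        double total = 0;
        for (CatalogPart part : parts) {
            total += f.applyAsDouble(part);
        }
        return total;
    }

    public static void main(String[] args) {
        List<CatalogPart> parts = List.of(new CatalogPart(12.0), new CatalogPart(8.0));
        // The closure is the only per-feature code:
        System.out.println(sumParts(parts, part -> part.weightKg) + " kg");
    }
}
```

Of course, this is the third way of doing the same thing in one code base – which is precisely the Mutating Design problem.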

Eventually, the team is able to hire a Groovy-compatible replacement for their fallen comrade. On the newbie’s first day on the job, she turns to one of her new coworkers and says, “Hey! I thought you said there was a full-time architect on this system.” Confused, he responds, “There is! Why?” “Well, then, why is this system such a mess? You said I’m supposed to be coding this product stuff in Groovy, but there’s a ton of these Visitor classes, brute-force loops, and all this other copy-pasted code. What up?”

From an outsider’s perspective, there’s little difference between “Architecture by Accident” (a lack of standards) and “Mutating Design” (too many standards). The result is pretty much the same: a patchwork quilt of approaches to solving the same problem in myriad ways. An architect or designer (or team) should strive for clarity in their designs. A system should speak for itself, but not if it’s going to say something different every time it opens its mouth.

So how does one avoid creating a system with a Mutating Design? There are only a few things you can do:

  1. Never change your design. Once you make a decision, write it in stone. This way, it will be easy for everyone to know how things are meant to be done. If anyone strays from the beaten path, it should be easy to identify and put things back on track. Unfortunately, this puts quite a burden on you to get things right from the beginning. This is basically synonymous with “waterfall methodology”, and has about the same chances of succeeding. However, it is worth noting that there may be times where the gain to be had by improving a design is outweighed by the damage the change would do to the clarity of the system.
  2. Refactor everything. The devil in a Mutating Design lies in inconsistency. You can exorcise it by going through a rigorous ritual of refactoring everything that had previously been implemented so that the whole system reflects the new design. This could mean a whole lot of work (and risk of introducing new bugs into previously working code) in the name of clarity.
  3. Isolate the changes. Again, the problem is with clarity, which can be occluded by inconsistency. So is there a way to provide clarity even when the design is inconsistent? There is… if you’re clear about scope, and you provide a roadmap.

This last point is not obvious, but worth trying to understand and put into practice. The question you should ask yourself is: if the design keeps changing, how can developers know which pattern to use, and where? Ideally, the system should “speak for itself”, which means developers should be able to infer the design from existing implementations. Therefore, if you wish to change the design, do it in a way that can be consistent within the scope in which developers tend to work. If development teams are divided up by ownership of subsystems, for example, you can experiment with a new design in one of the subsystems – but then change the design for that whole subsystem. It may be inconsistent across the whole system, but in general, developers won’t feel the pain. Even if developers work on the whole system, it may be possible to choose a scope that makes sense to them. If the system is divided by modules, you can choose to change the design for one (entire) module. But then you must make it clear to developers that they should use whichever pattern is appropriate for the particular module they are working on.

This last approach can go really wrong if you don’t provide clear signals to developers as to where they are in the design. Because of this, I am working on a series of techniques (and blog posts) that I call “Visible Architecture”. The idea here is that the development team should be able to see the architecture relative to their code at any time. So, for example, if they are working on a module in which the Visitor pattern must be implemented to work with products, a document on this technique should “present itself” to the developers from within their IDE. If they then switch to a module using the new Groovy approach, the document will switch as well.

There aren’t very many tools that provide this type of functionality. I’m working with one called Structure101 which lets you do just that for layer diagrams. You can define dependency rules for a project, and they will actually show up as diagrams (with enforcement via compilation errors) in either an Eclipse or an IntelliJ IDE. You can publish a different set of diagrams for each Eclipse or IntelliJ project, which means if you wish to change these rules, it’s easy to do it for one project, and leave the old rules in effect everywhere else. I have also written a plug-in of my own for these two IDEs called “Doclinks” which doesn’t enforce any rules, but allows you to link URLs to source code based on a wide variety of rules. This, together with a wiki-based architectural documentation, is another way to provide a context-specific roadmap to developers, reducing the confusion that can be caused by a Mutating Design.

I’ve previously shown you how a system can lose its clarity due to a lack of architecture. Now I’ve presented how the same thing can happen when it has too much architecture. As an architect or designer, you need to recognize the importance of standardization, but you also shouldn’t freeze your design in time. What’s important is to recognize that the evolution of the system is best done in stages, rather than through kaleidoscoping changes with no regard to what came before. Before you know it, your code may look like it’s from a B-Movie: The Attack of the Mutating Design!

Lessons from JBoss Envers

October 31, 2008

My good pal Daniel Mirilli just sent me a link to the JBoss Envers project. The idea behind it is that you can automatically track version changes of persistent fields using a new annotation called “@Versioned”. This makes something that is otherwise pretty complex – saving a whole history of changes for an object – as simple as declaring that you need it.

This is a great example of what makes for elegant design: you abstract a concrete business need into a reusable concept, then turn it into an “aspect” to get it out of the immediate code. You can now “take it for granted”. Before I got to know AOP I wrote a framework that did all this the hard way: I had parent classes for “entities”, inherited attributes like “ID” and template methods for things like type-specific attributes: PKs, unique keys (more than one allowed), “natural” ordering, and so on. Three years ago, I would have done this differently: annotations, or AspectJ aspects, or something along those lines. Then, more recently, EJB 3.0 provided this out of the box with their @Entity annotations and so on.

The beauty behind AOP, no matter what the technology, is that it allows you to design elegantly through abstractions – and yet make them concrete.

Anyway, back to JBoss Envers. I haven’t looked at the details, but it seems like an elegant solution to a problem that I’ve seen more than once: “entities” that need to be “versioned”. That is, you want an audit trail for everything you do to them. And you can never delete them (logical delete, maybe). The product I currently work on has some pretty intense versioning requirements, which is why Mirilli sent me the link to this project (thanks, buddy!). But I know enough about the specific requirements of my product to suspect that Envers wouldn’t be able to do it for us.

In fact, my product’s business rules are convoluted enough that I actually spent about a week doing what the creators of Envers did: trying to wrap my head around it all by turning it into generalized abstractions. In the process, I actually created my own UML 2.0 extension language in order to diagram these versions and explain it all to myself. It’s one thing to keep an audit trail, but what if you can look up previous versions? And what if those previous versions relate to other versioned entities? I realized that in such cases there can be more than one type of “relate”: specific version relationships (Foo version 12 refers to Bar version 5), general version relationships (all versions of Foo refer to the latest version of Bar), and even rules-based general version relationships (all versions of Foo refer to the active or editable version of Bar, which may not be the latest)! Also, note I said “refer” here, implying a uni-directional reference. The relationship can be bi-directional, but what if the back reference follows different rules from the forward reference?
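To make those three kinds of version references concrete, here is a sketch (the enum and lookup logic are invented for illustration; this is not Envers’ actual API, nor my product’s):

```java
import java.util.List;

public class VersionRefDemo {

    enum RefType { SPECIFIC, LATEST, ACTIVE }

    static class BarVersion {
        final int number;
        final boolean active;
        BarVersion(int number, boolean active) { this.number = number; this.active = active; }
    }

    /**
     * Resolves which version of Bar a reference from Foo points at.
     * Assumes the list is ordered oldest-to-newest.
     */
    static BarVersion resolve(List<BarVersion> versions, RefType type, int specificNumber) {
        switch (type) {
            case SPECIFIC: // Foo version 12 refers to Bar version 5, say
                return versions.stream()
                        .filter(v -> v.number == specificNumber).findFirst().orElseThrow();
            case LATEST:   // all versions of Foo refer to the latest Bar
                return versions.get(versions.size() - 1);
            default:       // rules-based: the active version, which may not be the latest
                return versions.stream().filter(v -> v.active).findFirst().orElseThrow();
        }
    }

    public static void main(String[] args) {
        List<BarVersion> bars = List.of(
                new BarVersion(4, false), new BarVersion(5, true), new BarVersion(6, false));
        System.out.println("active Bar is version " + resolve(bars, RefType.ACTIVE, 0).number);
    }
}
```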

Sorry to muck around in the details like that. It’s not your problem (I hope). What I wanted to highlight here is that this process of abstraction and creating a new “language” around some tricky business rules can actually become something practical when whittled with the tools of an AOP framework, annotations, or what have you. And Envers, whether it works or not, has reminded me of that: when Foo refers to Bar, why not do something like this:

public class Foo {

    @Versioned(type = VersionReferenceType.GENERAL, rule = VersionReferenceRule.LATEST)
    private Bar bar;

    // ...
}

Instead, I’m currently relying on inheritance from base classes and custom code in various places in the persistence and domain layers that are only loosely tied to the object references. But now Envers has gotten me thinking again…

Causes of Decay: Architecture by Accident

September 4, 2008

AKA “Copy-paste Architecture”

I recently read a great answer to the question: “What is architecture?” Or rather, “What elements are architectural?” It’s an important question, and one seemingly impossible to answer: where do you draw the line between architecture and design? Architecture and just good practices? Architecture and “that’s just the way we did it”?

The answer is both obvious and disappointing: architecture is in the eye of the beholder. To an Enterprise Architect, the broad business concepts (domain entities) and system capabilities are architectural. To an Applications Architect, modules and layers, component interfaces and connection protocols are all architectural. To a Network Architect, network topologies and hardware are architectural. There is some overlap in what each cares about, but the point is that each type of architect, each software designer and each developer has their own opinion about what is important to know.

But don’t despair! There is a way of defining exactly what is architectural to YOU! You see, what an architect does in the end is specify enough of the elements, patterns, technologies and structure of the system (or systems, or subsystem…) to guarantee that the implementation work that follows will satisfy the functional and non-functional requirements that it is supposed to. The rest is just design details that you can let a designer or developer decide for themselves – what’s important is that the performance, scalability, extensibility and all the other -ilities have been taken care of.

And that’s a healthy thing: to make all decisions up front for everyone else is not only boring, it would take forever, and take up volumes of hard-to-use and impossible-to-maintain documentation. What’s more, you run a big risk of overdesign.

Another definition, or characteristic, I have heard of architectural elements is that they are not generally one-off decisions regarding a specific implementation. What separates an architectural element from a design element is that the former represents a pattern that is meant to instruct and guide the design and implementation. Architecture in this sense is “reusable design”.

So what does all of this have to do with the decay of an application’s code base? Well, let’s say you’ve gone ahead and done your duty to specify those elements of your application that are architectural. Now it’s time for the team to write the software according to their whims, but within the constraints you have imposed. During this time, developers will naturally make decisions about the implementation. “Where should I put this class?” “Which package should I use?” “What should I call the class?” “Should I use a design pattern here? And which?”

Of course, none of these decisions are (you hope) important to the overall architecture of the system. But many of them are to solve common problems, and will likely need to be made again by other developers as they create and maintain the application. So, let’s say we have a simple “8-ball decision” (cf the commandment “Be decisive”) that a developer is making regarding the name of their Data Access Object for saving “Foo” to the database. It has an interface, but let’s say this particular implementation uses Hibernate to get the job done. Our hero could name the class:

  1. FooHibernateDAO
  2. FooDAOHibernate
  3. HibernateFooDAO
  4. HibernateFooDao
  5. FooHibernateDataAccessObject
  6. and so on…

So, our hero decides he likes option #4 because it sounds like an answer to the question “WHICH FooDao?”, and because all-caps acronyms run the risk of run-on acronymtences (Note: it’s always good to keep a justification for a Magic 8-ball Decision in your back pocket, because it ends arguments quicker than, “The 8-ball said its sources say yes.”). The next time our hero makes a DAO to save Bars, you can bet he’ll call it a “HibernateBarDao”.

In the following iteration (your process has iterations, right?), our heroine finds that she, too, must write a DAO to save (you guessed it) Bazzes. Fortunately, she noticed that our hero has already done this before, so she copies over the basic code, kindly extracts any commonalities to a base class (you go, girl!), and calls her new implementation a “JdbcBazDao” (say it 10 times fast!). Although this was never specified in the original architecture, you now have the beginnings of a standard AND a framework:

  • All Data Access Objects should follow the naming convention: Implementation approach + Object type persisted + “Dao”
  • Common data management code is implemented in a BaseDao class, which all other DAOs should extend
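The accidental standard looks something like this in code (a sketch with invented names; the “persistence” here is just an in-memory list standing in for the real Hibernate or JDBC work):

```java
import java.util.ArrayList;
import java.util.List;

// The naming convention plus the extracted base class together form a
// small framework that nobody ever officially specified.
abstract class BaseDao<T> {
    // Common data-management code, extracted by our heroine, lives here.
    private final List<T> saved = new ArrayList<>();

    public void save(T entity) {
        saved.add(entity);   // stand-in for the real persistence logic
    }

    public int savedCount() {
        return saved.size();
    }
}

// Implementation approach + object type persisted + "Dao":
class HibernateFooDao extends BaseDao<String> {}
class JdbcBazDao extends BaseDao<String> {}

public class DaoDemo {
    public static void main(String[] args) {
        HibernateFooDao foos = new HibernateFooDao();
        foos.save("foo #1");
        System.out.println("saved " + foos.savedCount() + " foo(s)");
    }
}
```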

Remember the previous definition that the architecture is “reusable design”? This is more or less what is happening here, even though it wasn’t intentionally specified by the architect. I call this “Architecture by Accident.” I also call this “Copy-paste Architecture” due to the way these implicit conventions often occur. But don’t get me wrong: there’s absolutely nothing wrong with the example I just gave you. It’s fantastic when standards and architecture evolve properly on their own.

But let’s say a third developer, our sidekick, needs to write a DAO to save XPTOs. Because these “standards” were never made official, it’s quite possible the sidekick never got the memo. And after consulting his own Magic 8-ball (the official High School Musical edition! Wow!), decides to call his DAO “XPTOJDBCDAO” (DON’T try to say that one out loud). And one more for good measure: what better for saving XYZPDQs than an “XYZPDQJDBCDAO”?

Now enters into the story our villain. Actually, he’s not such a bad guy, but he does have his own incredibly important and convincing reasons for why his DAO must be called a YoMamaDataAccessObjectHibernate.

Our simple application now sports the classes:

  • HibernateFooDao
  • HibernateBarDao
  • JdbcBazDao
  • YoMamaDataAccessObjectHibernate

Confused yet? This is the problem with a “Copy-paste Architecture”. It isn’t so much who is making the decisions, or even why. The problem is that they aren’t being made consistently. As a result, your code base starts to rot. Clarity goes out the window. It’s hard to find classes you know are there because you can’t begin to guess what they’re called. You can’t do any generalized analysis, conversions or rule enforcement because there’s no one rule that can cover all the possibilities. And we’re just talking about the class names!

Of course, Architecture by Accident isn’t limited to just naming conventions. Design patterns, layering structures, library classes and infrastructure solutions are all at risk of having their wheels reinvented if there isn’t someone looking out to identify these patterns and maintain consistency. We’re all familiar with the results: increased complexity, reduced clarity and diminished productivity.

The only solution to this problem is collaboration. The whole team needs to recognize common patterns and solutions as they evolve, make them explicit, and let everyone know. But take my word for it: it won’t happen by accident.

Confessions of an overdesigner

September 2, 2008

I just read an interesting blog post from Guilherme Chapiewski on keeping software design simple (for those of you who don’t speak Portuguese, this might be a good time to try out that translation feature on Ubiquity. For those of you that are native Portuguese speakers, let this be my chance to set the record straight before bad practices become permanent: it’s pronounced “you-BIH-quih-tee”). This is obviously not the first time the need for design simplicity has been discussed in our field, and books are finally starting to come out regarding HOW to do this.

I personally am a confessed overdesigner, and I think it’s worthwhile discussing why this happens so often. No one goes out of their way to create more work for themselves or for others, at least not explicitly (well, there are exceptions to this, as you will see). Below are the top reasons I’ve seen that cause a software developer or designer to go overboard on their design:

  1. Inexperience: developers who are just getting their feet wet with design patterns naturally want to sample the new patterns they have learned. There is a bit of healthy curiosity at work here, and a desire to get some real practice in implementing the pattern. But when I look at code like this, I can almost hear it screaming, “Look, ma! New pattern!!” This is a tough problem to deal with, because developers really need to have practical experience to fully grok a pattern, but using it where it’s not needed isn’t necessarily what developers should learn, either. So, consider responding to this code with a firm, but loving, “That’s very nice, dear. Now go clean your room.”
  2. Too much experience: developers who have been around the block a few times, like myself, like to think we can “see” where the software is going. Using design patterns becomes automatic. They become so natural that the cost of implementing the pattern is almost the same as using a simpler implementation. But the cost to others may be much more significant in terms of understanding and maintaining the code, especially if there is no corresponding requirement for the flexibility (etc.) that the pattern affords. One team that I’ve been working with has adopted what they call the “red card” for me, whenever they see that I’m getting carried away with a design. Although it can be pretty embarrassing, it’s a good opportunity to be humble, and to learn from your own mistakes. “That’s very nice, dear. Now go to your room.”
  3. To impress others: there can be a certain amount of prestige in saying that you used design patterns in your implementation. The same goes for coming up with “clever” (aka counterintuitive) solutions. This can be between developers, but it can crop up in other situations as well. I’ve personally never had to do this, but I have heard anecdotes of people being required to justify their designs to management by listing the patterns they are using. I have personally answered RFPs which ask for this. To me, this is ridiculous. Pretty much every large application I’ve worked on uses almost every design pattern in the GoF book in one way or another, and any design pattern without context is meaningless. Don’t let pride be a deciding factor in your design, nor should you use patterns gratuitously to fill up your forms.
  4. Because someone said to do it: when someone else does the design for you, or just provides some “helpful suggestions”, there is a risk that they will over-specify (cf. the next two entries). This is partly because the person doing the design doesn’t have to sully their hands with unpleasant details like implementing and maintaining their idea. A person who works only with abstractions will also naturally tend towards creating more abstractions, and always look to generalize concepts that currently apply to just a specific situation. It’s just what they DO. This is a fundamental problem with separating the act of design from the act of development. If your team finds this separation of duties useful, just make sure the designers (or architects, or whoever) are there to collaborate and see it through to the end.
  5. Fear of paying the cost later on: I often see myself leaning towards more complex designs to implement extra flexibility in the application when I am afraid that if we don’t do it now, there may not be enough time to do it later. This is silly, of course, since we will have to pay for it now, at a time when it’s not even needed. This fear is less ridiculous when the simple approach differs dramatically from the more complex implementation (e.g. when the two approaches require different architectures). A judgement call must be made in these cases, but very often the simple approach is enough if you can build in a single point of change where the more complex solution can be swapped in later. This is, after all, one of the main purposes of encapsulation and abstraction, and is the essence of the Open-Closed Principle.
  6. Fear that if you don’t do it, someone else will: this is one underlying cause of overdesign that I only recently discovered after many years of therapy and self-reflection. Somewhere in my subconscious was the little nagging question, “if not me, then WHO? If not now, then WHEN?” More than just an affirmation of my willingness to take on the challenge, there is also a hint of mistrust that others WON’T be so willing, or so capable. Better to overdesign now than to underdesign later, right? This is a tough one to recognize, but in any case the “why” is less important than just recognizing that a solution can be simplified. If you find that this is one of the reasons why you are tending towards a big up-front design, remember the commandment: Empower developers. If you don’t trust your own team, you should figure out why and fix it, not work around it.
  7. Because it’s fun: OK, let’s just admit it. Programmers, architects, designers and the lot are all in it for the joy of puzzle-solving. A simple solution, like a video game that ends too soon, is just a let down. If you often find yourself caught up in the thrill of the sheer possibilities, go ahead and let your mind wander – that’s what brainstorming is all about. But make sure to timebox your thinking outside the box. When you’ve had your fun, root your feet solidly on the ground and pick a solution that actually fits your problem.
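The “single point of change” idea from item 5 can be sketched in code. Below is a minimal, hypothetical Python example (the names `FileStore`, `InMemoryStore`, and `archive` are my own, not from any post referenced here): the calling code depends only on a small abstraction, so the simple implementation can be swapped for a more complex one later without touching the callers.

```python
class FileStore:
    """The single point of change: a minimal interface (hypothetical example)."""

    def save(self, name: str, data: str) -> None:
        raise NotImplementedError

    def load(self, name: str) -> str:
        raise NotImplementedError


class InMemoryStore(FileStore):
    """The simple implementation that is enough for today's requirements."""

    def __init__(self) -> None:
        self._files: dict[str, str] = {}

    def save(self, name: str, data: str) -> None:
        self._files[name] = data

    def load(self, name: str) -> str:
        return self._files[name]


def archive(store: FileStore, name: str, data: str) -> str:
    # Calling code depends only on the FileStore abstraction, so a
    # disk-backed or remote implementation can be swapped in later
    # without modifying this function (open for extension, closed
    # for modification).
    store.save(name, data)
    return store.load(name)
```

A later `TempFileStore(FileStore)` that writes to disk would plug into `archive` unchanged; that deferred flexibility is the point of keeping the simple version behind one abstraction now.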

One quote in Guilherme’s post is certainly appropriate for an overdesigner like myself:

Beware though, keeping a design simple is hard work.

I am a natural when it comes to seeing abstractions and generalizing problems. Whenever I’m approached with a new problem we haven’t solved before, I immediately try to name it. “Hmm… you say you need to temporarily store a file to keep it out of memory? Looks like we need a TemporaryMemoryManagementRepository component.” When I do this, I’m looking to shine some light on the problem itself that we’re trying to solve, rather than look immediately at the solution that was pulled out of a hat. This helps the “thinking outside the box” part of the exercise (“is this the ONLY way we can do this?”). I’m also looking for a way to encourage reuse, or at least to make sure that we only solve this problem once (check out “Causes of Decay: Copy-paste Architecture”, when I finally publish that article). But by doing so, I may have already created some confusion (“Hey, where’s that class that saves and reads temp files for us? It’s called WHAT?”).

To me, overdesigning is the easy part. What really takes work is whittling that down to something not only reusable and extensible, but also easy to maintain. But the first step is admitting you have a problem…