Monday, January 25, 2010

My TDD Struggle

I'm a huge fan of the concept of TDD (Test Driven Development).  I've done it a few times with varying success, but I intend to make it a constant practice on anything new I write.  If you want to see people doing it right, go watch some videos.  Now on to the words!

TDD is the red/green/refactor process.  Write the test, watch it fail.  Go write the bare minimum of code possible to make it pass.  Refactor the code, and refactor the tests.  Repeat.
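
As a minimal sketch of one cycle (the `cart_total` example is my own, not from any particular kata):

```python
# Step 1 (red): write a test for behavior that doesn't exist yet.
# Running this before cart_total is defined would fail: that's the point.
def test_empty_cart_total_is_zero():
    assert cart_total([]) == 0

# Step 2 (green): write the bare minimum of code to make it pass.
def cart_total(items):
    return 0  # deliberately naive; the next test will force more

# Step 3 (refactor): with the test passing, clean up the code and the
# tests, then repeat the cycle with the next test (say, a one-item cart).
test_empty_cart_total_is_zero()
```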

This process lends itself to what people call "emergent design."  This is the concept that you don't stress out trying to devise some all-encompassing design before you begin coding.  You sit down, you write tests, and you let the design emerge from the process.  The reasoning here is that you'll end up with the simplest possible design that does exactly what you need and nothing more.

That point hits home very strongly for me because of my experience with applications and code that were over-designed and ended up causing all sorts of long-term problems.  So the call for simplicity is one I am very eager to answer.

BUT.  Clearly you can't just close your eyes, code away, and assume it will all work out.  There is an interesting tightrope walk happening here.  As you code you have to constantly evaluate the design and refactor to represent the solution in the simplest possible way.  But what TDD is really trying to get you to do is not think too much about what's coming next, and instead pass the current test as though there weren't going to be a next test.

It's that "ignoring the future" part that I really struggle with.  The knee-jerk negative reaction is that this will cost you time because you're constantly redoing work.  There are times when this is probably true, but in general the tests lead you through the solution incrementally, affirming you're on the right track at each step of the way.  And when you suddenly discover something that forces you to backtrack, you've got all the tests ready to back you up.

But there are a few things that I don't think this technique is good for.  One is "compact" algorithms; the other is large systems.  We'll take them one at a time.  I was recently practicing the Karate Chop Kata, which is a binary search.  My testing process went like this:

  1. When array is null or empty it should return negative one
  2. When array has one item 
    1. it should return zero if item matches
    2. it should return negative one if item does not match
  3. When array has two items
    1. it should return zero if first item matches
    2. it should return one if second item matches
    3. it should return negative one if nothing matches
  4. When array has three items
    1. ...

Numbers 1-3 were all implemented in the straightforward way you would expect.  But when I get to #4, now I have to actually write the binary search algorithm.  So now I have to decide whether I'm going to write it with a loop, with recursion, with some form of "slices", etc.  I also have to figure out what the terminating conditions are and verify that my indexes and increments are all correct.  In other words, I have to do all the work after writing that one test.
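
For reference, here's a rough sketch of where that one test forces you to go: a loop-based `chop` (my own naming; the kata's contract is just "return the index of the match, or negative one"), with the tests above translated into asserts.

```python
def chop(target, arr):
    """Iterative binary search: return the index of target in the
    sorted list arr, or -1 if it's absent (or arr is empty/None)."""
    if not arr:
        return -1
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # the index/increment bookkeeping
        if arr[mid] == target:        # that all arrives with one test
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Tests 1-3 from the list above:
assert chop(3, []) == -1              # 1. null/empty array
assert chop(3, [3]) == 0              # 2.1 one item, match
assert chop(1, [3]) == -1             # 2.2 one item, no match
assert chop(1, [1, 3]) == 0           # 3.1 two items, first matches
assert chop(3, [1, 3]) == 1           # 3.2 two items, second matches
assert chop(5, [1, 3]) == -1          # 3.3 two items, no match
```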

And worse, these tests are stupid.  What am I going to do, write a test for every array length and every matching index?  I refactored the tests later to be a bit more generic and more focused on the edge cases of the algorithm in question.  If you'd like to see what I ended up with, you can check out the code on Bitbucket.
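
To give a flavor of what "more generic" can mean (this is just a sketch, not the Bitbucket code; the `chop` here is a stdlib-backed stand-in so the asserts run), you can sweep the interesting edge cases instead of enumerating array lengths one by one:

```python
from bisect import bisect_left

def chop(target, arr):
    # Stand-in implementation via the stdlib, just so the tests run;
    # the point here is the shape of the tests, not the search itself.
    i = bisect_left(arr, target)
    return i if i < len(arr) and arr[i] == target else -1

# One sorted array with gaps covers hits at every position, misses
# between every pair, and misses off both ends.
arr = list(range(0, 20, 2))           # [0, 2, 4, ..., 18]
for i, v in enumerate(arr):
    assert chop(v, arr) == i          # every present value is found
    assert chop(v + 1, arr) == -1     # every gap between values misses
assert chop(-1, arr) == -1            # below the range
assert chop(99, arr) == -1            # above the range
```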

In general, writing your tests with knowledge of the implementation you're writing is bad, bad, bad.  As @mletterle reminded me on Twitter, tests should test the behavior of the code, not its implementation.  Bob Martin recently wrote a post making the same kind of argument in regards to mocks.

Now don't get me wrong.  The tests are still valuable in this example; they're just not as useful in an "emergent design" kind of way.

Moving on, the second thing the emergent design mindset isn't very good for is complex system design: systems complicated enough to warrant DDD (Domain Driven Design).  In this case you really want to step back, take a big-picture view, and build a real Domain Model.  The emergent design approach may lead to a design with fewer objects or something, but that may not be a good thing if you're interested in a design that excels at communication.

With these systems you'd do your Domain Driven Design first, then drop into TDD and let it "emergently" design the actual implementation code of your model.  You're kind of getting the best of both worlds this way.  But it's important to recognize that TDD is still an important part of this, even though you didn't let it guide the ENTIRE design.

So TDD and emergent design might not be the answer in all circumstances.  But I still think you'll find a strong place for it even in these circumstances.

Anybody in the blogosphere strongly disagree with this?  Or perhaps, dare I ask, agree?


  1. You definitely need some starting point when it comes to TDD and emergent design. Granted, I've just been reading up on this kind of stuff recently, so my thoughts should be taken with a grain of salt. However, from what I've been reading, it seems like you don't just start coding right away. You still need a rough temporary design so you know where to start with your tests. You don't spend a lot of time on it, since it's going to change rapidly as you start developing your tests.

    Time for a metaphor (probably a poor one. They never seem to work out for anyone on the interwebs)! It's like writing an essay. You usually don't start writing an essay without having some ideas of what you're talking about. You don't just start typing out paragraph after paragraph and have your paper structure itself. You figure out a list of things you'd want to talk about and then you go from there. You'll change that list of things as you write, but as you write, you'll start to see things coming together.

    As far as the Karate Chop Kata goes, you don't really need to figure out the algorithm for doing the search. Whichever is the simplest to implement is the one you would choose. As the requirements change (say you have to take processor cycles into account, or the sheer speed of the search), your algorithm will have to change. Of course, that probably just ends up changing acceptance tests rather than unit tests (maybe?).

    One of the big things I'm still trying to figure out is how to use TDD when you're working on pretty much all GUI stuff. I mean, there's business logic there that we make calls to, but we generally don't alter the business logic since it isn't in our lane.

  2. That's actually a great metaphor! Mind if I borrow it in the next post I'm intending to write on BDD?

    Also, the Karate Chop Kata tells you what algorithm to write. That's part of what makes it kind of weird. Your tests just care that you're doing a search, but your actual "assignment" is much more specific than that.

  3. Of course you can borrow it. I'm sure you can make it a little more eloquent than I did.

    I suppose I should have read the Karate Chop Kata a little more closely than just glancing at the page. I'm not really sure how you would test an algorithm requirement. I guess we just have to accept that certain requirements aren't testable.

  4. I run into this problem a lot when I think too much about a problem before I start writing code. You are right, TDD helps drive you to write the simplest code to solve the problem. It doesn't mean it is the most elegant code or exactly what you were picturing beforehand, but it works. This is a problem when you are trying to drive out a specific implementation, but it can still be helpful to make sure you are on the right track and not breaking it later.

    The real skill, which I am by no means perfect at, is keeping that feedback loop as small and inexpensive as possible. Fail small and fast, and learn from the good AND the bad. TDD is about discipline in practice, but that doesn't mean you don't need design. Design what is directly in front of you and what you know you need.

    Granted, that doesn't make it easy, but it's a good practice. It was really interesting to see how different takes on the RPS challenge produced very different code, all with TDD.

  5. If you write the provably correct algorithm on the first test, then your pair shouldn't have any other test to write since none of them would fail.

    The TDD folk would tell you that you can't just write the full, correct solution right out of the gate since that's not the *simplest* thing that could possibly work.

    So TDD is really a game. Your partner gets to serve first, and it's their job to write a test that will break some as-yet-unspecified branch of your code, and it's your job to fix that without preemptively passing future tests. When there are no more future tests that could be written, your pair stops and you breathe a sigh of relief.

    Actually, TDD sounds kinda messed up when you describe it that way.