Friday, June 27, 2008

IoC and DI complexity

Inversion of Control containers (IoC), Dependency Injection (DI), and the Dependency Inversion Principle (DIP) are huge blogosphere topics these days.

Briefly, Dependency Injection (DI) is a pattern of "injecting" a class's dependencies into it at run time. This is done by defining the dependencies as interfaces, then passing a concrete class implementing that interface to the constructor. This allows you to swap in different implementations without having to modify the main class. As a side effect, it also pushes you toward the Single Responsibility Principle (SRP), since your dependencies become individual objects that perform discrete, specialized tasks.
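
To make that concrete, here's a minimal sketch in Java. All the names (MessageSender, EmailSender, ReportService) are invented for illustration; they aren't from any particular framework:

```java
// Constructor injection: the dependency is declared as an interface and
// handed in from outside, rather than constructed inside the class.
interface MessageSender {
    void send(String message);
}

class EmailSender implements MessageSender {
    public void send(String message) {
        System.out.println("Emailing: " + message);
    }
}

class ReportService {
    private final MessageSender sender;

    // The dependency arrives through the constructor as an interface, so a
    // different implementation can be swapped in without touching this class.
    ReportService(MessageSender sender) {
        this.sender = sender;
    }

    void publish(String report) {
        sender.send(report);
    }
}

class Demo {
    public static void main(String[] args) {
        ReportService service = new ReportService(new EmailSender());
        service.publish("Q2 numbers");
    }
}
```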

The Dependency Inversion Principle (DIP) is a design principle that is, in some ways, related to the Dependency Injection pattern. The idea here is that "high" layers of your application should not depend directly on "low" layers. Instead, the high layers should define interfaces for the behavior they expect (their dependencies), and the low layers come along and implement those interfaces. The benefit of following this principle is that the high layers become somewhat isolated from the low layers. This means that an arbitrary change in a low layer is less likely to propagate up through all the layers. Note that Dependency Inversion does not imply Dependency Injection. The principle says nothing about how high layers know which low layer to use. That could be done by simply using the low layer directly in the code of the high layer, or through Dependency Injection.
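
A sketch of the same idea, again with invented names: the high layer owns the abstraction, and the low layer implements it, so the dependency arrow points upward, hence the "inversion":

```java
// High layer: defines the interface it needs and depends only on that.
interface OrderStore {
    void save(String order);
}

class OrderProcessor {                      // high layer
    private final OrderStore store;
    OrderProcessor(OrderStore store) { this.store = store; }
    void process(String order) { store.save(order); }
}

// Low layer: implements the high layer's abstraction.
class SqlOrderStore implements OrderStore {
    public void save(String order) {
        System.out.println("INSERT INTO orders ... " + order);
    }
}
```

Note that OrderProcessor could just as well new up a SqlOrderStore itself and still honor the principle, as long as the rest of its code talks only to the OrderStore interface; injection is one way to wire it together, not the only way.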

The Inversion of Control container (IoC) is a pattern that supports Dependency Injection. In this pattern you create a central container which defines which concrete classes should be used for which dependencies throughout your application. Your DI classes then determine their dependencies by looking in the IoC container. This removes any specification of a default dependency from the classes themselves, and it makes it much easier to change which dependencies are used on the fly.
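
At its core, the idea reduces to a map from interfaces to factories. Here's a deliberately tiny, hand-rolled sketch of that idea (the Container and Clock types are invented; real containers such as Spring or Castle Windsor add auto-wiring, lifetimes, and configuration on top of this):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy container: one central registry deciding which concrete class
// backs each interface across the application.
class Container {
    private final Map<Class<?>, Supplier<?>> registry = new HashMap<>();

    <T> void register(Class<T> type, Supplier<? extends T> factory) {
        registry.put(type, factory);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> type) {
        return (T) registry.get(type).get();
    }
}

interface Clock { long now(); }

class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

class ContainerDemo {
    public static void main(String[] args) {
        Container container = new Container();
        // Swapping implementations means changing this one line,
        // not every class that uses a Clock.
        container.register(Clock.class, SystemClock::new);

        Clock clock = container.resolve(Clock.class);
        System.out.println(clock.now());
    }
}
```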

Clearly, these are some very powerful patterns and principles. Basically, DI and IoC remove the compile-time definition of the relationships between classes and instead define those relationships at runtime. This is incredibly useful if you think you may need to modify the way your application behaves in different scenarios.

However, if you pay attention to why these patterns are primarily used by the various people talking about them in the blogosphere, you'll see that it's for unit testing. The reason people are bothering is that they want to create mocks and stubs of their objects so they can write unit tests; the typical move looks like the sketch below. The Dependency Inversion and Single Responsibility principles that arise from this are certainly an added bonus, but not the primary goal. And the ability to swap in different REAL dependencies is not one that anyone planned to use.
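
For illustration, here is that testing move with a hand-rolled stub, reusing the ReportService sketch from above (written without a test framework to stay self-contained; in practice this would likely be a JUnit test with a mocking library):

```java
// A stub standing in for the real MessageSender so the test never
// actually sends anything; it just records what it was asked to send.
class RecordingSender implements MessageSender {
    String lastMessage;
    public void send(String message) { lastMessage = message; }
}

class ReportServiceTest {
    public static void main(String[] args) {
        RecordingSender stub = new RecordingSender();
        ReportService service = new ReportService(stub);

        service.publish("report body");

        if (!"report body".equals(stub.lastMessage)) {
            throw new AssertionError("message was not sent");
        }
        System.out.println("test passed");
    }
}
```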

Let’s be realistic. How many applications really need to be able to do the same thing in two different ways at the same time? That’s what DI is for, but I don’t think many people really need that capability. It’s much more likely that your application will evolve from doing THIS to doing THAT. DI will make this migration simpler, but only because it forces you into following SRP and DIP. You could have followed those principles without using DI.

The question is, “If your application doesn’t require DI (except for unit tests), should you use DI?”

That question leads to another: “What's the harm in using DI for unit testing?”

The answer to that is: complexity. Using DI adds complexity to your application. IoC adds even more complexity.

Where does the complexity come from?
  • There are more pieces and components to keep track of
  • It’s harder for a person to understand how everything fits together into a functioning whole
  • There are more restrictions on what you can do in your code: you can’t just new up a dependency; you can’t pass required fields through a dependency’s constructor; etc
  • Interfaces can’t strictly define everything (will it throw an exception, will it return null, will it display its own error dialogs, etc); see the sketch after this list
  • With some IoC tools, I have to maintain an XML configuration file…
  • There are simply more lines of code
  • It is harder to browse and debug the code (because there are more layers and more indirection)
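
To make the interface point concrete, here's a hedged sketch (the User and UserLookup types are invented): two perfectly legal implementations of the same interface that disagree about the very things a caller most needs to know:

```java
class User {
    final String id;
    User(String id) { this.id = id; }
}

interface UserLookup {
    // Does this return null when the user is missing? Throw? Show its own
    // error dialog? The interface can't say, so callers can't rely on it.
    User find(String id);
}

class NullReturningLookup implements UserLookup {
    public User find(String id) { return null; }                 // one answer
}

class ThrowingLookup implements UserLookup {
    public User find(String id) {
        throw new IllegalArgumentException("no user: " + id);    // another
    }
}
```
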
When I brought this up in comments on the YTechie blog, everyone told me the problem wasn’t with the patterns; it was with my IDE, or it was because I wasn’t commenting my interfaces well enough… This was mostly because the examples I was using to demonstrate the complexity were trivial.

The point I’m actually trying to make is just that there is more complexity! I need a better IDE because of it! I have to write more detailed, implementation-specific comments because of it! It doesn't really matter whether doing those things is a good idea anyway. The point is that now it's complicated enough that I have to do them; I can't get by without them like I could before.

To put it simply, I have to do more work. That’s the harm in DI and IoC (and, to a lesser extent, DIP): complexity -> more work -> more confusion -> more potential for error -> more chaos over time

The next question is, “Is this added complexity enough of a downside to make DI/IoC not worth it?”

This is the real question everyone should ask themselves before they dive head first into the newest latest thing. Unfortunately, you’ll find a surprising lack of thought about this, or even willingness to think about it. When people find something new they like, they don't like to admit it may come with some downsides too, however minor they may be. Don't get me wrong! Some people are thinking about it, like Dave^2. But in the blog world, it's always a struggle to get past the "It's awesome" to the "but...".

The answer to our question is: It Depends. That’s the Computer Engineer’s motto. And it’s hugely important to remember that it always depends. It depends on your circumstances, on the complexity of your application or component, and on an innumerable list of other factors. It's not an all-or-nothing answer either: it could be no problem here and a total disaster there.

Is having unit tests worth the added complexity for you? As long as you recognize the complexity, you’re fully qualified to make that decision. Personally, I've found many circumstances in which it was worth it, and a few others where it was just too much overhead. But let me know what you decide, and how it works out for you.

11 comments:

  1. I agree. I get attacked every time I say that sometimes it's easier to "git 'er done".

    I have started to see the great power of DI when used in the right places. Please follow your own advice, and realize when it DOES make sense to use it.

  2. Thanks for the comment. I've even updated the post to indicate that I do use DI in many places, because I definitely didn't want this to be a post arguing against the use of DI.

  3. Thanks for adding to a more nuanced debate, Kevin. There are a lot of people screaming "Dependency Injection" these days without necessarily considering alternatives or the added complexity involved (example: http://tech.puredanger.com/2008/06/29/javaone-2008-design-patterns-reconsidered/#comments)

    I have experienced issues with naïve injection (i.e. JSR-296 resource injection) where the added magic does not lend itself to more maintainable and readable code.

    So to quote David Wheeler: "Any problem in computer science can be solved with another layer of indirection. But that usually will create another problem".

  4. You are missing the point totally ... DI is not about unit testing. The fact that you cannot test your classes shows you that you have a dependency issue, which in turn causes maintenance issues.

    DI is not a way to test your classes, it is a way to reduce complexity, and achieve Separation of Concerns, SRP, and OCP

    If you can achieve those without DI, then I have yet to see a way of doing it.

  5. @casper: Thanks Casper, that's a fantastic quote! And it's dead on, and it's frequently forgotten I think.

  6. @casey: I think perhaps you are misunderstanding my point.

    If you take another read through the post you'll see I'm not saying that the only reason to use DI is to unit test. I'm actually saying the opposite. What I did say was that most people who are talking about DI aren't talking about how it can serve to help structure code. They are talking about unit testing, and their only reason for using DI is to allow them to unit test.

    As for achieving Separation of Concerns, SRP, and OCP without DI... You don't need to inject dependencies to create objects that have a single well defined responsibility.

    You can have many classes which directly depend on each other and still achieve Separation of Concerns, SRP, AND even OCP.

    The only thing you can't do without DI is arbitrarily swap in different concrete classes.

    Still, thanks for the comment!

  7. I noticed the heavy blogging on unit testing in relation to DI, IoC, etc. also. I would however think that the reason isn't that people aren't focusing on the actual pattern and such, but instead that it is really hard in many situations to come up with an "elegant" way to unit test these patterns.

    ...at least, hard for many to test these patterns. Much of the time, even when one does get to testing the patterns, it just seems that the testing is not elegant or all that useful.

    ...thus, you have tons of blogging about it.

  8. Great post. I want to introduce an IoC container into a large project at work, but I want to be sure I know all the potential disadvantages first. It seems a lot of articles on Dependency Injection tell you when to use it, but not many will help you with when not to use it.

  9. I am very interested in the way you typically write code. Your arguments about how DI makes code more complex seem to be arguments against any polymorphism. Do you really never use interfaces or subclasses?

  10. @David interesting point. You're right that in some ways the points I've made here are also points against polymorphism.

    But that doesn't mean you don't use polymorphism. It's the same reason I actually DO use DI and IoC in my code. The point, though, is that all these techniques do have a downside, and so they shouldn't be used blindly.

    So I don't create interfaces for every class I write. I only create interfaces for the ones that NEED interfaces (for one reason or another).

    1. @Kevin (and David Stanek):

      It isn't necessarily intuitive for all programmers to write highly polymorphic code. For some people (and I adopt this reasoning myself sometimes) the reason for having different classes is that they're different. If the structural parts of such an application had commonalities, they'd just be the same class. Sometimes, particularly in legacy codebases, classes are used as containers for data, and the shared behaviour between them isn't particularly obvious.

      I think it takes a mindset of encapsulating behaviours as types to make deep polymorphism work. Otherwise what you'll end up with is myriad interfaces repeating the sales pitch of a small number of monolithic classes, which adds no value at all.
