Inversion of Control Containers (IoC), Dependency Injection (DI), and the Dependency Inversion Principle (DIP) are huge blogosphere topics these days.
Quickly:
Dependency Injection (DI) is a pattern of “injecting” a class’s “dependencies” into it at run time. This is done by defining the dependencies as interfaces, then passing a concrete class implementing that interface into the constructor. This lets you swap in different implementations without modifying the main class. As a side effect, it also pushes you toward the Single Responsibility Principle (SRP), since your dependencies become individual objects that perform discrete, specialized tasks.
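For illustration, here's a minimal constructor-injection sketch in Java (all the names here, MessageSender, SmtpSender, OrderNotifier, are hypothetical):

```java
// The dependency, defined as an interface.
interface MessageSender {
    void send(String to, String body);
}

// One concrete implementation of that interface.
class SmtpSender implements MessageSender {
    public void send(String to, String body) {
        // real SMTP work would go here
    }
}

// The main class receives ("is injected with") its dependency
// through the constructor instead of constructing it itself.
class OrderNotifier {
    private final MessageSender sender;

    OrderNotifier(MessageSender sender) {
        this.sender = sender;
    }

    void orderShipped(String customer) {
        sender.send(customer, "Your order has shipped.");
    }
}
```

Swapping SmtpSender for some other implementation is now a one-line change at the call site, and OrderNotifier never has to know.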
Dependency Inversion (DIP) is a design principle which is in some ways related to the Dependency Injection pattern. The idea here is that “high” layers of your application should not directly depend on “low” layers. Instead, the high layers define interfaces for the behavior they expect (their dependencies), and the low layers come along and implement those interfaces. The benefit of following this principle is that the high layers become somewhat isolated from the low layers. This means that if some arbitrary change is made in a low layer, it is less likely to propagate up through all the layers above it. Dependency Inversion does not imply Dependency Injection. This principle doesn’t say anything about how the high layers find out which low layer to use. That could happen by simply using the low layer directly in the code of the high layer, or through Dependency Injection.
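A hedged sketch of that distinction, again with hypothetical names: the high layer owns the interface, and it can satisfy DIP while still constructing the low layer itself, with no injection involved:

```java
// High layer owns the abstraction it depends on.
interface RateProvider {
    double rateFor(String customerId);
}

// Low layer depends "upward" by implementing the high layer's interface.
class SqlRateProvider implements RateProvider {
    public double rateFor(String customerId) {
        return 42.0; // imagine a database query here
    }
}

// DIP without DI: the high layer still news up the low layer directly,
// but the dependency arrow in the source code points low-to-high.
class InvoiceCalculator {
    private final RateProvider rates = new SqlRateProvider();

    double total(String customerId, double hours) {
        return rates.rateFor(customerId) * hours;
    }
}
```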
The Inversion of Control Container (IoC) is a pattern that supports Dependency Injection. In this pattern you create a central container which defines what concrete classes should be used for what dependencies throughout your application. Your DI classes then find their dependencies by looking in the IoC container. This removes any specification of a default dependency from the classes themselves, and it makes it much easier to change which dependencies are used on the fly.
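Here's a toy container sketch in Java, reusing the hypothetical types from above; real containers add lifetimes, auto-wiring, configuration files, and so on:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy container: a central map from interface type to a factory
// that produces the concrete class registered for it.
class Container {
    private final Map<Class<?>, Supplier<?>> bindings = new HashMap<>();

    <T> void bind(Class<T> iface, Supplier<? extends T> factory) {
        bindings.put(iface, factory);
    }

    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> iface) {
        return (T) bindings.get(iface).get();
    }
}

class Wiring {
    public static void main(String[] args) {
        // All "which concrete class for which interface" decisions live here.
        Container container = new Container();
        container.bind(MessageSender.class, SmtpSender::new);

        // Classes get their dependencies by resolving them from the container.
        OrderNotifier notifier =
            new OrderNotifier(container.resolve(MessageSender.class));
        notifier.orderShipped("customer@example.com");
    }
}
```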
Clearly, these are some very powerful patterns and principles. Basically, DI and IoC remove the compile-time definition of the relationships between classes and instead define those relationships at runtime. This is incredibly useful if you think you may need to modify the way your application behaves in different scenarios.
However, if you pay attention to why these patterns are primarily used by the various people talking about them in the blogosphere, you’ll see that it’s for unit testing. The reason people are bothering is that they want to create mocks and stubs of their objects so that they can write unit tests. The Dependency Inversion and Single Responsibility principles that come along with this are certainly an added bonus, but not the primary goal. And the ability to swap in different REAL dependencies is not one that anyone planned to use.
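That usually looks something like this: a hand-rolled stub, no mocking framework, using the hypothetical types from the earlier sketch:

```java
// A stub implementation that just records what it was asked to do.
class RecordingSender implements MessageSender {
    String lastTo;
    String lastBody;

    public void send(String to, String body) {
        this.lastTo = to;
        this.lastBody = body;
    }
}

// The "unit test": exercise OrderNotifier without ever touching SMTP.
class OrderNotifierTest {
    public static void main(String[] args) {
        RecordingSender stub = new RecordingSender();
        new OrderNotifier(stub).orderShipped("a@example.com");

        if (!"a@example.com".equals(stub.lastTo)) {
            throw new AssertionError("notifier did not send to the customer");
        }
    }
}
```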
Let’s be realistic. How many applications really need to be able to do the same thing in two different ways at the same time? That’s what DI is for, but I don’t think many people really need that capability. It’s much more likely that your application will evolve from doing THIS to doing THAT. DI will make this migration simpler, but only because it forced you into following SRP and DIP. You could have followed those principles without using DI.
The question is, “If your application doesn’t require DI (except for unit tests), should you use DI?”
That leads to the next question: “What’s the harm in using DI just for unit testing?”
The answer to that is: complexity. Using DI adds complexity to your application. IoC adds even more complexity.
Where does the complexity come from?
- There are more pieces and components to keep track of
- It’s harder for a person to understand how everything fits together into a functioning whole
- There are more restrictions on what you can do in your code: you can’t just new up a dependency; you can’t require fields through a dependency’s constructor; etc.
- Interfaces can’t strictly define everything (will it throw an exception, will it return null, will it display its own error dialogs, etc.)
- With some IoC tools, I have to maintain an XML configuration file…
- There are simply more lines of code
- It’s harder to browse and debug the code, because there are more layers and more indirection (see the sketch after this list)
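To make the “more lines, more indirection” point concrete, compare the no-DI version of the hypothetical notifier with what the DI/IoC version requires:

```java
// Without DI: one class, dependency fixed at compile time.
class SimpleNotifier {
    private final SmtpSender sender = new SmtpSender();

    void orderShipped(String customer) {
        sender.send(customer, "Your order has shipped.");
    }
}

// With DI + IoC, the same behavior is spread across an interface,
// a concrete class, a constructor parameter, a container binding,
// and a resolve call (see the earlier sketches). Each piece is
// simple, but a reader now has five places to look instead of one.
```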
When I brought this up in comments on the YTechie blog, everyone told me the problem wasn’t with the patterns; it was with my IDE, or it was because I wasn’t commenting my interfaces well enough… This was mostly because the examples I was using to try to illustrate the complexity were trivial.
The point I’m actually trying to make is just that there is more complexity! I need a better IDE because of it! I have to write more detailed, implementation-specific comments because of it! It doesn’t really matter whether doing those things is a good idea anyway. The point is that now it’s complicated enough that I have to do them; I can’t get by without them like I could before.
To put it simply, I have to do more work. That’s the harm in DI and IoC (and to a lesser extent DIP): complexity -> more work -> more confusion -> more potential for error -> more chaos over time
The next question is, “Is this added complexity enough of a downside to make DI/IoC not worth it?” This is the real question that everyone should ask themselves before they dive head first into the newest latest thing. Unfortunately, you’ll find a surprising lack of thought about this, or even willingness to think about it. When people find something new they like, they don’t like to admit it may come with some downsides too, however minor they may be. Don’t get me wrong! Some people are thinking about it,
like Dave^2. But in the blog world, it’s always a struggle to get past the “It’s awesome” to the “but…”.
The answer to our question is: It Depends. That’s the Computer Engineer’s motto. And it’s hugely important to remember that it always depends. It depends on your circumstances, on the complexity of your application or component, and on an innumerable list of other factors. It’s not an all-or-nothing answer either. It could be no problem here, and a total disaster there.
Is having unit tests worth the added complexity for you? As long as you recognize the complexity, you’re fully qualified to make that decision. Personally, I’ve found many circumstances in which it was worth it, and a few others where it was just too much overhead. But let me know what you decide, and how it works out for you.