Tuesday, December 22, 2009

People Problems

I'm sorry to have to tell you this, but your job is to solve problems for people.

I don't even have to know what your job is, and I can still say that with a pretty high level of confidence.  Are you an engineer building bridges?  You're solving a problem for people who need to get across that divide.  Are you a cashier at the grocery store?  You're solving a problem for people who need to pay for their groceries (and for people who need to take money from people for groceries, double whammy).  And of course, are you a computer programmer?  You're solving SOME problem, maybe not a very well specified one, for your users.

It really doesn't matter how far removed from people you are, you're still solving problems for people.  You could be completely devoted to the innermost workings of the Windows kernel.  You're actually worse off!  You're solving problems for users who want to check their email, and programmers who want to write MS Word, and designers who want to make Windows look pretty, and marketers who want to sell Windows as the most stable version ever, and finance people who want to show market growth, and the list just goes on and on and on...

Sometimes you are aware that you are solving problems for people, but maybe you don't really know what problem you're solving.  Like the grocery store clerk who is really nice and friendly and makes all the customers happy going through the line.  Maybe they think their job is to make you happy, but it's not.  The problem you want them to solve is to figure out how much money you owe, and take it from you.  And you want them to do it quickly.  If they can do that AND make you happy, then they are a very good grocery clerk.  If they only make you happy, then they just aren't very good.

A similar thing happens with programmers.  I frequently get confused and think that my job is to make high quality software and write high quality code and do good high quality design work.  But it's not.  My job is to solve some problem my client has.  As long as I fix their problem, I've done my job.  Maybe I could have fixed their problem better with beautiful code I'd be proud to hang on my wall at home.  That would be a bonus.  But all I'm supposed to do is solve their problem.

That is, unless I'm actually supposed to be developing a long-lived product, which will require all kinds of future enhancement and maintenance and re-configuring.  In that case my job is to both solve the client's problem AND develop a strong software product.  These are two competing goals.

And that's the thing about People Problems:
  • They are never clear cut
  • They tend to overlap
  • They always involve trade-offs
They're not clear cut because frequently you don't know what problem you're supposed to be solving.  Or there are a bunch of problems you're supposed to be solving and you don't know which is most important.  Or different people have different problems and you need to simultaneously solve them all.  Create an app for a user!  Create it in the time your manager wants!  Make the app a platform to build a product on for your boss!

And this is where the trade-offs start.  You simply can't HAVE your cake AND eat it.  See?  Once you EAT it, you don't HAVE it anymore.  Or if you HAVE it, then you haven't EATEN it.  See?  I just thought that up. Pretty good huh?

The "real world" is all about People Problems.  And People Problems inevitably lead to trade offs. And trade offs inevitably lead to disappointment.  For example, if you choose to make the code super high quality, you wont be disappointed, but your boss will be because of how much it cost.

The trick is to understand that above all you are solving people's problems.  Then understand that necessarily involves trade-offs.  Then try to take a big picture view when deciding on how to make those trade-offs.  Hopefully that will at least help you deal with the inevitable disappointment.  At least you'll know you made the best decision you could for the specific People Problem you were faced with.

Of course, you'll probably be disappointed when you find out later that you didn't have all the facts straight and your decision was based on faulty information or just information that has since changed.  But that's a whole different dimension of dealing with People Problems.

Tuesday, December 15, 2009

WPF UserControl IsEnabled in WinForms host

While debugging today, I ran into something rather odd.  Two things actually.  And since I'm falling behind in my posting I thought I'd share.

Here's what I was dealing with.  I have a WinForms UserControl which contains a WPF UserControl within an ElementHost.  The WPF control is being informed of what is selected on the form and is Enabling or Disabling itself accordingly.  However, there is a special case in which the form knows it wants the control to be disabled all the time.

Before I go any further: if you're working with WPF controls hosted in WinForms and you're dealing with enabling and disabling things, you should be aware of this bug and this workaround on Microsoft Support.  It will cause all kinds of havoc if you set things to Enabled=false before showing them.

The WPF user control is using MVVM, so its IsEnabled property was bound to an IsEnabled property on the View-Model.  This is where we run into the first interesting thing: this doesn't work.  The vm.IsEnabled property was changed, and the PropertyChanged event was fired with "IsEnabled" as the property name, but the WPF user control's IsEnabled property did not change.  I found someone else who had this same issue and posted about it on this blog.  His workaround cracks me up, I can't believe it works...  WPF is crazy.

My workaround was to just hook the PropertyChanged event myself like this:
void _vm_PropertyChanged( object sender, PropertyChangedEventArgs e )
{
  // Manually forward the View-Model's IsEnabled to the WPF control,
  // since the binding refused to do it
  if ( e.PropertyName == "IsEnabled" )
    this.IsEnabled = _vm.IsEnabled;
}
I was stunned when this code didn't work. It fired, it set the property, but after setting it the property value didn't change. And it didn't change after WPF got a chance to run through a layout pass either.

This is the second interesting thing.  Turns out the WinForms control that the WPF control was in was disabled. Why?  Because the parent form had disabled it due to that "special case" I mentioned.  So when I set the WPF user control's IsEnabled to true it didn't matter because its parent's Enabled property was false. So it just shrugged its shoulders and ignored me.

So before I can enable the WPF control, I need to enable its parent, the WinForms control.

In order to get this whole mess working what I ended up doing looks something like this:
  1. View-Model fires change event for IsEnabled
  2. WPF User Control fires custom change event for IsEnabled (it does NOT try to set IsEnabled)
  3. WinForms User Control sets Enabled = CustomEnabled from WPF User Control
  4. WinForms User Control's EnabledChanged fires
  5. WinForms User Control sets WPF UserControl's IsEnabled property = this.Enabled
  6. WPF User Control's IsEnabledChanged fires
  7. WPF User Control sets View-Model's IsEnabled = this.IsEnabled
This ensures that all Enabled properties on all the objects involved here will always be in sync, no matter which one you change.
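
In code, the wiring looks roughly like this (a simplified sketch: the CustomEnabled member names are made up for illustration, and the event subscriptions are omitted):

// In the WPF user control -- steps 2, 6, and 7:
public event EventHandler CustomEnabledChanged;

public bool CustomEnabled
{
  get { return _vm.IsEnabled; }
}

void _vm_PropertyChanged( object sender, PropertyChangedEventArgs e )
{
  // Step 2: do NOT set IsEnabled here, just tell the WinForms host
  if ( e.PropertyName == "IsEnabled" && CustomEnabledChanged != null )
    CustomEnabledChanged( this, EventArgs.Empty );
}

void OnIsEnabledChanged( object sender, DependencyPropertyChangedEventArgs e )
{
  // Step 7: push the final value back down to the View-Model
  _vm.IsEnabled = this.IsEnabled;
}

// In the WinForms user control -- steps 3, 4, and 5:
void wpfControl_CustomEnabledChanged( object sender, EventArgs e )
{
  // Step 3: this also fires our own EnabledChanged (step 4)
  this.Enabled = wpfControl.CustomEnabled;
}

void OnEnabledChanged( object sender, EventArgs e )
{
  // Step 5: now the WPF control's parent is enabled, so this sticks
  wpfControl.IsEnabled = this.Enabled;
}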

Part of the problem here I'm going to blame on bad design.  I don't think the control should be enabled and disabled from two completely different directions (one from the top (WinForms), and one from the bottom (View-Model)).  The rest of the problem I'm going to blame on Enabled properties being really confusing.  They have mystical relationships with their parents and their values can change at different times.  Most properties, when you give them a value, either have that value after the set or throw an exception.  But not Enabled!  It's mystical.  I find that I know this but that I still get bit by it when I'm not expecting it.

Wednesday, December 2, 2009

Usability: Locking Doors

Reading The Design of Everyday Things has caused me to start paying attention to usability issues I run into in day-to-day life. Some of them are interesting and I'm going to try to remember to share those here.

This one is about doors. DOET talks a lot about doors. The specific part I want to talk about is how they lock. A normal door works something like this:
  • Use the lock mechanism to lock the door
  • When locked the door cannot be opened
  • Use the lock mechanism to unlock the door
  • When unlocked the door can be opened
A while back I encountered a door that worked differently than this though. Unlike the normal door, which cannot be opened when locked, this door could still be opened from the inside while it was locked, without unlocking it! But it could not be opened from the outside when it was locked.

You can see where the designers were coming from here. The point of locking the door isn't to keep people inside locked in. The point is to keep people outside from getting in. Letting you open the door from inside even when it's locked aligns more closely with the purpose of locking the door. And you can imagine all kinds of situations where this would be nice: answering the door when someone knocks, opening the door to leave the house, etc.

So this change seems to be a great idea: it fits the purpose more closely, and it eliminates some small annoyances. Unfortunately it introduces a really big annoyance of its own: it's super easy to lock yourself out.

All you have to do to lock yourself out of the house is walk out and close the door behind you.

Preventing you from locking yourself out of the house isn't one of the stated purposes of a normal door, but because of how it works it manages it all the same.

But let's look at how a person uses the door the way a software engineer looks at a person using software, in terms of "clicks". Let's assume the door is already locked and a person wants to leave the house and leave the door locked behind them.

With a normal door:
  1. Unlock the door
  2. Open the door
  3. Close the door behind you
  4. Lock the door
With the special open-while-locked door:
  1. Open the door
  2. Close the door behind you
Which of these doors is better designed?

Monday, November 23, 2009

Don't Write Another Line Of Code Unless...

Don't write another line of code unless you know and have been influenced by the following things:

SOLID (from Uncle Bob)
SRP* - Single Responsibility Principle
Every class should have one clear and well defined responsibility
OCP* - Open Closed Principle (from MSDN)
Extend the behavior of a class without modifying that class
LSP - Liskov Substitution Principle
Derived classes must be substitutable for their base classes (this one is kind of obvious, but still important)
ISP - Interface Segregation Principle
Make fine grained interfaces, not fat interfaces (this goes along well with DIP)
DIP* - Dependency Inversion Principle
Classes define the dependencies they need, and other classes implement them (see the sketch below)

* The three I've starred I believe are the most important as they make the biggest difference in fighting spaghetti code.
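
To make the starred principles concrete, here's a minimal DIP sketch (hypothetical names): the consuming class defines the abstraction it depends on, and a lower-level class implements it.

// The abstraction the high-level class needs
public interface IMessageSender
{
  void Send( string address, string message );
}

// The high-level class receives an implementation from the outside
// (dependency injection) instead of newing one up itself.
public class OrderNotifier
{
  private readonly IMessageSender _sender;

  public OrderNotifier( IMessageSender sender )
  {
    _sender = sender;
  }

  public void NotifyShipped( string address )
  {
    _sender.Send( address, "Your order has shipped!" );
  }
}

// A low-level class implements the abstraction. OrderNotifier never
// references it directly, so it can be swapped out (or mocked in tests).
public class SmtpMessageSender : IMessageSender
{
  public void Send( string address, string message ) { /* SMTP details */ }
}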

YAGNI (from Wikipedia) - You Ain't Gonna Need It
This is an Extreme Programming concept which basically says you shouldn't add functionality until you actually need it

TDD (from Wikipedia) - Test Driven Development

Loose Coupling/High Cohesion (from MSDN)

Code Smells (from Fowler)
A code smell is a "surface indication" that there may be a deeper problem with your code.  It is very useful to know these, especially when practicing TDD.

Basic Design Patterns - Singleton, Observer, Mediator, Strategy, Decorator, etc
There are lots of books on the major design patterns. I personally haven't read this one, but it has been highly recommended to me: Head First Design Patterns

MVC/MVP/MVVM (from Fowler) - Model View {Controller|Presenter|View-Model}
The biggest piece to take away from this, in my opinion, is the responsibility of the Model and the separation of the UI (the view) from the Application layer (the Controller, Presenter, View-Model).

DI/IoC/Service Locator (from Fowler) - Dependency Injection/Inversion of Control/Service Locator

Law of Demeter (from Wikipedia)
This is a useful concept to be aware of, but one that really shouldn't be thought of as a law. See what Phil Haack has to say.

Database Isolation Levels (from MSDN)
Anyone doing anything with databases needs to know the Isolation Levels, what they do, and when to use them.

Concurrency Models - Optimistic, Pessimistic (from Fowler's Patterns of Enterprise Application Architecture)
You can't write any multi-user application that updates data without understanding concurrency and the various patterns for dealing with it.

Version Control (from Wikipedia)
You should be familiar with centralized version control (like Subversion). And I think you should also understand distributed version control (like Mercurial).

--
You don't have to be an expert in all these things. But any decent developer should have at least a basic understanding of these concepts and be able to understand what they are when someone else mentions one.

Our industry is REALLY weak on education. We go to school and learn about data structures, semaphores, virtual memory, file systems, etc. Then we graduate, get jobs in Software Engineering and promptly never use any of that. I'm not saying it's not useful stuff to know; it is useful to know. But it's not what you deal with day to day as a software engineer.

You can't be a software engineer without knowing the stuff on this list. I'm serious. If you don't know it, you're just a dude who's hacking out code.

That said, I'm pretty sure I'm just a dude who's hacking out code... What things do you think I should have on this list before I write another line of code?

PS. There are also lots of good books you should probably read if you're thinking about being, or already are, a software engineer.

UPDATES:
1/28/2010: added Code Smells and re-ordered list

Tuesday, November 10, 2009

DDD is Not About Perfection

The main practice of DDD (Domain Driven Design) is refactoring to deeper insight. The idea is very similar to much of what the Agile practices preach. Namely, when you find you have
  1. misunderstood something or made a mistake
  2. been given bad information
  3. learned something new that changes previous assumptions
You go back and you update your code to reflect your new understanding. In Agile, this is called embracing change; in DDD it's refactoring to deeper insight.

This is one of those principles that seems like it shouldn't need to be said. If someone knows that what they've done isn't right anymore who WOULDN'T go back and fix it?! But it turns out this is one place where real life and theory don't line up. The thinking usually goes something like this:
We've spent a lot of time working on this feature, we're out of budget, it works fine in 90% of the cases, and we can just add this little hack that will take care of the other 10%. Therefore it makes more sense for us to just do the little hack.
I'm sure you've seen this line of thinking before, so you're already anticipating that I'm going to say this is stupid. But hold on. It's not stupid. This is actually totally sensible thinking, but there are two problems:
  1. You made up that thing about "90% of the cases." You actually have no idea how often the "edge cases" that are a problem for you will crop up. And for all you know, those may be the more important cases, which your system can now only accommodate with a weird hack.
  2. You haven't considered changes that might arise in the future, or insights you may have in the future. If any do crop up that are related to the "10%" edge cases, you're now going to be forced to build hacks on top of hacks (on top of hacks, on top of hacks, on top of hacks...).
So obviously you should never write the hack, you should always embrace the change and refactor to deeper insight.

Nope, sorry, wrong again. Unfortunately we are here living in the real world. We have real world constraints: Time and Money. If we always took the time to fix everything we discovered we got slightly wrong we would never deliver a product, ever. And shipping is a feature.

So... sometimes we're going to have to hack, which might get us into a world of trouble? And sometimes we're going to have to blow our budget refactoring? How do we know when to do which?

In Strategic Design - Responsibility Traps Eric Evans (the founder of DDD) says "the whole system will not be well designed." This is an inescapable fact if you're working on a large complex system. If you're working on a small simple system, maybe you can pull it off, but even then I doubt it.

How can the guy whose whole development technique revolves around refactoring to deeper insight say the whole system will not be well designed? Well, he has an answer of sorts. Evans says that what you need to do is identify your application's Core Domain. What is it that your application does that sets it apart, makes it important, or provides the most value for the users? That's your Core Domain. Now if a change crops up in the Core Domain, you refactor to deeper insight. This is the most important part of your app! It's the part you'll be building on for everything else in your app. This is the part of your app that has to be perfectly designed.

But what about parts that are NOT the Core Domain? Or what if it's not so easy to define your app's Core Domain? Then what do you do? Well, you weigh the options, and take your best guess.
  • Is there time in the budget?
  • How hard will it be to refactor to account for this change?
  • How much better will the app be if you do change it (time saved? # people affected? usability? performance? correctness?)?
  • Can you convince yourself it is unlikely other changes will have to be made in the future that will be related to this change?
Once you have answers to these questions that are as accurate as you can manage, then you have to just guess. Because your answers to these questions are NOT scientific. And you have no way to predict the future. But you have to try anyway, so you guess.

If you think it's going to be very hard to accommodate the change and you think the change won't make your app all that much better, then hack it. If that's reversed, refactor it.

I'm a programmer, and I'm hungry for perfection, so I always lean toward wanting to refactor it. I think refactoring to make it correct anytime you have the time and ability is the right move. You'll end up with an app you're proud of, that works better for your users, and is easier to maintain and update. But we have to face facts! Sometimes the cost is simply too high. Sometimes we're forced to hack it now, and pay the consequences later.

The good news is that DDD helps tremendously with this. If you can define a Core Domain, you're that much better off because you have now decided what your application is all about. This will help you in every decision you need to make.

Furthermore, DDD is drastically easier to refactor and maintain than spaghetti code. So the cost of refactoring to deeper insight is lowered when you're writing DDD.

But in the end, we have to realize that DDD is not about writing perfect code. It's about writing good code that makes the complexity of your application manageable. But we know that despite all the benefits DDD can bring, it doesn't promise perfection. The simple truth is that software is very very hard to write, and designing enterprise applications is even harder. So the best we can do is write Good Enough software, like they say in The Pragmatic Programmer. But hopefully DDD will help us to raise the bar on how good that software is.

Monday, November 2, 2009

Knowledge in the Head and in the World

A looooooooong time ago I wrote a post called Theory of Software Usability.  This post was primarily about the tradeoff between “Ease of Learning” and “Ease of Use.”

I used my favorite example of Vim vs. Notepad.  Vim is an advanced modal editor that a n00b won't even be able to get text into if they don't know what they're doing.  Whereas Notepad is just about the simplest application you can imagine that anyone can figure out how to use.

My argument was that Vim is extremely usable but difficult to learn while Notepad is extremely learnable, but not really all that usable.  So there is an implicit tradeoff between Learnability and Usability. 

Recently I have been reading The Design of Everyday Things and I came across a concept that is an interesting corollary to the Usability vs. Learnability issue.  The book presents “The Tradeoff between Knowledge in the World and in the Head”.  Knowledge in the world is simply information that is readily available in the world, so you don’t have to learn it, or at least you don’t have to learn too much.  In The Design of Everyday Things an example of a typist is used. 

“Many typists have not memorized the keyboard.  Usually each letter is labeled, so nontypists can hunt and peck letter by letter, relying on knowledge in the world and minimizing the time required for learning.  The problem is that such typing is slow and difficult… But as long as the typist needs to watch the keyboard, the speed is limited.

If a person needs to type large amounts of material regularly, further investment is worthwhile: a course, a book, or an interactive computer program… It takes several hours to learn the system and several months to become expert.  But the payoff of all this effort is increased typing speed, increased accuracy, and decreased mental load and effort at the time of typing.”

At the end the book presents some tradeoffs between knowledge in the head and in the world in terms of five properties: retrievability, learning, efficiency of use, ease of use at first encounter, and aesthetics.  It breaks down like this: knowledge in the world is retrievable, requires little to no learning, is not efficient, is easy to use at first encounter, and can be unaesthetic and inelegant.  On the other hand, knowledge in the head is not retrievable, requires lots of learning, is efficient, is not easy at first encounter, and can lead to better aesthetics.  So basically, they are at odds with each other.

Bringing back my Vim vs Notepad example, we can see how this fits right in.  Notepad puts all the knowledge you need “in the world.” All the labeled keys on your keyboard do exactly what you’d expect and the other functions are clearly labeled in the menus.  In Vim on the other hand, you can’t even enter text until you learn the “i” command.  The knowledge must be in your head.  All the tradeoffs listed above apply perfectly to this example.

I think this is a very important concept to keep in mind when doing software design.  Who is your user?  What job are they doing?  How often will they be doing that job?  Will new people need to figure it out on the fly, or will the same people always do it over and over again?  The answers to these questions will help you decide if you should emphasize knowledge in the world or knowledge in the head.  If you are building a public facing website that many people will visit, you want to emphasize knowledge in the world.  If you are building an application for something like data entry you may want to emphasize knowledge in the head.

The important thing to take away from this is that there is a tradeoff and you have to make a decision one way or the other.  Knowledge in the world is not always better than knowledge in the head, and vice versa.  Pay attention to what you are building and who you are building it for and design accordingly.

Tuesday, October 20, 2009

Intangible Value

Rory Sutherland is a marketer who recently gave a TED talk. The talk is about 16 minutes long and is hilarious, you should totally watch it.


He opens the talk with this gem, "if you want to live in a world in the future where there are fewer material goods, you can either live in a world which is poorer (which people generally don't like), or you can live in a world where actually intangible value constitutes a greater part of overall value."

Seth Godin is another marketer who has a short and sweet blog post called Creating sustainable competitive advantage in which he argues that competitive advantage rarely comes from proprietary technology or technological barrier to entry. In other words, technology alone will not allow a business to succeed because its competitors will quickly be able to copy the technology.

He has a list of things you can do to gain competitive advantage, 3 of which apply here:
  • You can build a network (which can take many forms--natural monopolies are organizations where the market is better off when there's only one of you).
  • You can build a brand (shorthand for relationships, beliefs, trust, permission and word of mouth).
  • You can create a constantly innovating organization where extraordinary employees thrive.
Tying in with Sutherland, these are about adding intangible value. You can gain intangible value by building a network around your product, or by building trust and a name for yourself ("brand"), or by being a constantly innovating organization.

The last is half intangible, half tangible. The actual innovations produced are tangible, but being innovative adds its own intangible value, both to your customers as well as your own employees and even to job applicants! Emphasizing extraordinary employees is a relatively intangible thing which can produce tangible benefits across the board (better products, faster delivery, lower employee turnover rate, and better employees). Thus the competitive advantage.

I think this concept of Intangible Value can be extended into software itself. Possibly the best example is usability. Some software usability concerns are tangible: how long does it take someone to accomplish a task, how many clicks are required, etc. But other usability concerns are intangible: does the user enjoy using the software or does it make them want to shoot themselves in the face.

Seth Godin's post talks about things you can do today to gain competitive advantage, but Rory Sutherland's talk is about how we as a people need to learn to value intangible things more. This is much harder than it sounds.

When you're evaluating two products, you look at the feature lists. If one product has more features, you're likely to decide that it is the better product. But while it may have more features, it may also make people want to shoot themselves in the face. Can we include that on the list of features?

As an example, compare Microsoft Visio to Balsamiq Mockups. Visio is a very full featured product which is ridiculously flexible and powerful compared to Balsamiq. But everyone I know likes Balsamiq better. Why? It's the shoot myself in the face factor. Balsamiq is faster and easier to use. In fact, it's a joy to work with. That is a relatively intangible benefit, but it's real.

As another example, take 37signals. I have not personally used their products, but I know from what I've heard and from what they've said that their focus as a company is on building streamlined and usable software. There are big box alternatives to their products that have been around for much longer and are far more "configurable," but people love 37signals. Again, for mostly intangible reasons I think.

So intangible value is a real thing which is often overlooked by the "deciders" but always appreciated by the users. The challenge for those of us who design software is to figure out how to add that intangible value into our products, and how to make potential users aware of it. The challenge for people in general is to recognize intangible value when they come across it and not dismiss it as unimportant.

Wednesday, October 14, 2009

What a Programmer Wants in a Manager

Management is an oddly fascinating subject. It's kind of a dirty word these days, but when you distill out all the nonsense and get down to what it's really about, it's interesting. Managing programmers, or "knowledge workers," is, in many ways, a special case and requires special consideration. This is because you can't break a programmer's job into a series of reproducible steps. Programming is an inherently creative job, which is why it's often compared to craftsmanship.

In thinking about how to manage programmers, I tend to empathize with the programmers more than the managers. The way I see it, a programming shop's number one expense AND number one asset is its programmers. So it seems pretty clear to me that you ought to do everything you possibly can to keep those programmers working well. The challenge is of course figuring out how you actually do this.

The book First, Break All the Rules, suggests that a manager's job is to discover the talents of his people and direct those talents to the business's goals. I like looking at it this way because it indicates that the manager should recognize what his people are good at, and then let them go be good at it. The trick then is to make sure the things they're being good at are also the things the business needs to be good at.

Joel Spolsky goes even further and says that a manager's "most important job is to run around the room, moving the furniture out of the way, so people can concentrate on their work." At least, he says that's how managers at Microsoft behaved. Again, there is a large focus on getting out of the way so your people can work.

Recently Fog Creek announced a series of training videos they are selling. The video series is $2000 and a little conceited if you ask me... But in the promo video, one of the Fog Creek dudes says, "Developers are assumed to know the right answer. So you don't start from a position of negotiating about whether or not you could possibly have the right answer. You assume they have the right answer and it's their job to explain to you why it's the best solution, not why it's the wrong solution or the right solution." In this we're seeing some of that "get out of the way" mentality but also a certain amount of built in trust.

I have my own theory on the best way a manager should behave, which is strongly influenced by these references as well as just about every word in Peopleware. I think it breaks down like this:
  1. Trust your people
  2. Value your people's time over just about everything
When I say trust, I'm not talking about blind trust. I'm not a complete idiot! But I am talking about a change in tone. A manager needs to set the goals and objectives people are working toward, and a manager needs to ensure those goals and objectives are being met correctly and effectively. But a manager should trust their people to do the work right and in the best way possible. As a manager, you can ask for proof of why what people are doing is the BEST way. But you need to be careful that you don't demand proof they are not completely wrong. There is a subtle difference here which has a huge effect on morale.

I think this is very important. Programmers want to be treated like experts. They want their opinions to matter. If you stifle that, you will end up with frustrated programmers, and frustrated programmers don't work as well. Worse, since programming is a creative activity, if you're actually stifling creativity you're ruining your product. The best way to improve morale and encourage creativity is to offer up some trust and treat people like adults.

If your people know that they are trusted, and that their ideas will be seriously considered, they're more likely to bring ideas to you. They're more likely to think outside the box. These can only be good things. If instead they feel untrusted and are afraid their ideas will always be shot down, they won't bring anything to you.

Programming can be detailed tedious work. Nothing pisses a programmer off more than the impression that the time and effort they have spent was wasted or simply unappreciated. This is why programmers hate dropping a project in the middle to work on something "new" and "higher priority" that "just came up." New higher priority things DO come up at the last minute. But if a manager just tosses it on a programmer's desk and says, "get this done first," they're not valuing the programmer's time.

I firmly believe a manager's job is to push the furniture out of the way so the programmers can get their work done. But, we're not really talking about furniture. We're talking about all the complexity of a project. The inter-relations between different teams, the ever changing client demands, the relative priorities of different assignments. These are things the managers should be focused on working out so that programmers don't have to spend so much time worrying about them.

Again, much of this is just tone. When a manager is laying out the work that needs to be done and the order it needs to be done in, and who needs to do what work when... they can treat this as a power opportunity for themselves. To hand out assignments from on high with little regard for explaining the circumstances to the programmers and a simple expectation that the programmers should take it and get it done. Or they can treat this as an opportunity to indicate that the goal is to optimize the developers time so they can focus and do their best work instead of dealing with the sticky details of the real world that programmers simply don't like.

A manager has to do the same work either way. In the end, much of management really comes down to politics. And since I believe the programmers are the most important asset going for a business, I believe the politics should be oriented around keeping the programmer's morale high, avoiding frustration as much as possible, and engendering a corporate culture where the programmers feel their work is valued and important.

Thursday, October 8, 2009

Responsibility Traps

Over at InfoQ there is a presentation by Eric Evans called Strategic Design - Responsibility Traps. This presentation is full of all sorts of gems like "one irresponsible programmer can keep any 5 top notch programmers busy."

His presentation is all about how to take legacy systems and apply DDD (Domain Driven Design) to them when doing new development. In the process he addresses lots of really important issues like, "the whole system will not be well designed," which I may dedicate a whole post to in the future.

What I want to mention here is what he calls Responsibility Traps. These are traps that responsible programmers (read: good programmers) fall into. In the presentation he mentions two:
  1. Building a platform to make others productive
  2. Cleaning up other people's mess
The reason why these are traps is that "you are not the one who finally delivers the sexy new capability." Instead, what you've done is "made the irresponsible programmers look even better". Ultimately Evans believes this leads to this fact: "Because the best programmers are busy making the platform strong, the actual delivery of the Core Domain is being done by irresponsible programmers."

I had never really thought about this, but I think he's dead on. Your real star players, over time, gravitate to working on more abstract projects like frameworks and platforms and very technology oriented things. This migration makes sense on the surface, because these things are harder, so you want your best people working on them.

The problem with this is that these are not the projects that make the biggest difference to the product as a whole. These things are not part of the Core Domain. They may be important. In fact they may be absolutely essential, but it doesn't change the fact they are not the MOST important. They are not what your application is all about. So what happens is your irresponsible developers end up working on the MOST important parts of the application.

Evans consistently uses the word "irresponsible" instead of "bad" or "weak". I think this is more than a political move on his part. The reason why it's a problem that the irresponsible developers are writing the core domain is that they write it irresponsibly, not that they write it badly. What does that mean?
  • They hack through it
  • They leave it a mess
  • They don't question the design when it stops working well
  • They keep bolting new stuff on top instead of refactoring
  • They introduce performance and maintenance problems
These things are "bad", but for the most part they are not outwardly noticeable. Irresponsible developers may be fully capable of delivering a project with few to no bugs that looks just like what the business people asked for. But because they wrote it irresponsibly it will be a thorn in the side of the project from then on. Unfortunately, the business people wont know that. And if the responsible people tell them, the business people either a) wont believe them or b) wont understand the severity.

There is a certain amount of "the sky is falling!" here. The responsible people say there is a problem in something that has been written. The business people say, ok, go fix it. The responsible people toil away for a while and return with the issues resolved. The business people don't see a difference, usually there ISN'T an immediately noticeable difference. So it looks like these responsible people keep shouting about the falling sky and then spinning their wheels on nothing for weeks, while the irresponsible people are off getting things done (and creating more problems).

The point Evans is trying to make with this is that the responsible developers are actually being somewhat irresponsible by allowing this to happen. This isn't about being political and trying to make yourself look good. I mean, it's partly about that. But it's really about being truly responsible and embracing the fact that the whole system will not be perfect and focusing on making the most important parts be the parts that are the highest quality. Those are the parts the responsible people should be working on, those are the parts YOU should be working on!

Monday, October 5, 2009

Reading List

Most "technology" books aren't very good.  They tend to just be focused on a specific version of a specific technology.  A book like this can be useful when you're first starting out, but after you read it, you'll never pick it up again.  However, there is another class of programming book that doesn't fall into this specific technology category.  Books of this other class are timeless in nature because they deal with the actual art of programming instead of specific syntax or frameworks or tools.  These are very valuable because they have the ability to dramatically expand your programming horizons and make you a much better developer.

This reading list will only contain books that I feel have a certain timeless quality to them and are mostly independent of language or framework.  I've prioritized this list in order of how influential the book has been for me.

The Art of Programming

Clean Code - Bob Martin
This book is about code.  It's not about what that code does, or code patterns to accomplish things, or code architectures to organize things.  It is just about code and how to write it so it is clean, understandable, and maintainable.  It is likely the single most important book I've ever read about code because it applies to every line of code I write.

Practical Object-Oriented Design in Ruby - Sandi Metz
This book is in Ruby, and it's about Ruby, but it's also the best treatment of OO practices I've ever read.  And those practices are easily applicable to other OO languages, including static languages like C#.  This book has done more to develop the way I approach building OO code than any other resource.


If you've ever read the "Gang of Four" patterns book, or any books that repackaged those patterns you know what a Design Pattern is all about and you're probably bored with them. The patterns in this book are so much more influential and important than the GoF patterns, so don't let the word "patterns" scare you off. Think of this book as the text book for anyone developing multi-user "business" web apps or rich clients. It covers nearly ever major problem you are likely to be faced with if you're building from scratch.  And if you're using a framework, it will explain the patterns used by that framework and their trade-offs.


Growing Object-Oriented Software, Guided by Tests - Steve Freeman and Nat Pryce
This book's approach to building a large application is deeply important.  It covers outside-in development, the importance of TDD, and many useful OO and testing patterns as well.  Skip the part that goes step by step through code, just read the first and last parts.  (The RSpec Book actually does a better job of describing outside-in development, but it's much more tech specific.)

Domain-Driven Design: Tackling Complexity in the Heart of Software - Eric Evans
This is the most comprehensive book on enterprise application development I've ever encountered. For me it was a complete game changer. The book itself presents you with concepts and examples and patterns, but it doesn't get bogged down with implementation issues. The result is after reading it you know you HAVE to start writing code this way, but you really don't know how to write it just yet. I no longer actually practice the strict rules of DDD, but the language and patterns of this book still strongly influence my approach to developing complex domain code.

The Pragmatic Programmer - Andrew Hunt and David Thomas
This book won't give you dramatic new ways to write code. Instead it will give you dramatic new ways to think about code and your responsibilities as someone who writes code. It includes what, to my mind, are the beginnings of Agile Programming and many of the SOLID design principles. It is also packed with parables that seem obvious until you realize they've happened to you at work. It should be required reading for any developer.

Managing Programmers

Peopleware: Productive Projects and Teams (Second Edition) - Tom DeMarco and Timothy Lister

This book is some interesting cross between a book for managers and a book for programmers. It's a great read and is likely one of the most influential books in our industry. It has clearly defined much of the culture of companies like Fog Creek and Microsoft and even Google. You should read it, but if you don't work for one of those companies be warned you might get a little depressed.

First, Break All the Rules: What the World's Greatest Managers Do Differently

If you are in any kind of "Management" role you should absolutely read this book. If you're not, you should still read this book, because it will help you manage your manager. Even if management doesn't directly affect you at work, you still should read this book, simply because it's interesting and will give you a new outlook on all the places you spend money. There is nothing specific about programming in this book, just a really solid and entertaining book on the result of a giant Gallup poll on managers.

Software Development Related


Don't Make Me Think - Steve Krug
This book is short and a ridiculously fast read. The content is so common sense you might trick yourself into thinking you already knew it. And the truth is you probably DID, but you hadn't thought about it consciously. And for that reason alone, this book is worth reading. I think the most valuable thing about this book is it shows you that you can work on Usability without spending a fortune on a usability lab or outside consultants or long term studies with thousands of volunteers.


UPDATE 5/2/2013: added POODR, reordered list.

Thursday, October 1, 2009

Blogosphere: Duct Tape Programmer

Joel Spolsky is a famous blogger, partly because he knows how to get people so riled up that they'll talk about him... He's done that again with a recent post called The Duct Tape Programmer. As much as I hate to fall into his trap, the debate around this has been really interesting, so I'm gonna go ahead and talk about it!

The point of Joel's post seems to be in this quote from Jamie Zawinski:
"It’s great to rewrite your code and make it cleaner and by the third time it’ll actually be pretty. But that’s not the point—you’re not here to write code; you’re here to ship products."
Joel goes on to say that "Shipping is a feature. A really important feature. Your product must have it."

So, you have to ship a product to be worth a damn. Got it. Of course, the blogging world at large took his post as an attack on TDD, Agile development practices, and striving for Quality. This appears to me to be a classic case of Black Or White Disease. Either you are a "Duct Tape Programmer" who works fast and ships early but whose code is crap, or you are an "Architecture Astronaut" who over-designs everything and never ships anything.

But this is not a black and white distinction. Richard Dingwall makes that point in Duct-tape programmers ship... once.

Ayende responds with disbelief that anyone would seriously suggest people abandon good practices in preference of relentless hacking just to ship a product. I'm not sure that was the message Joel was trying to send, though he was certainly saying that Unit Testing isn't worth the added cost.

Most of the debate therefore boils down to Quality, which I've touched on before in The Paradox of Quality. Quality is tricky. Jack Charlton hits on this by saying that quality is really just one dimension of many that must be balanced when developing a product. If fast to market with lots of features is a requirement, the only thing left to give up is quality.

But when you sacrifice code quality you are sacrificing product quality as well, and you are certainly adding long term cost. What shipped project ships once and never has to be dealt with again? Furthermore, what shipped product stays as simple as it was when it was shipped? If you sacrificed quality you're going to end up in a world of hurt when it comes time to dive back into that code and enhance it in unpredictable ways. But maybe the benefits of being first to market or shipping fast will mean that NOW you have the time to go back and raise the quality bar. It's harder to do it that way, but it's not impossible.

This is what Domain Driven Design (DDD), Test Driven Development (TDD), and Behavior Driven Development (BDD) are all about. Attempting to raise the quality of your code to the point where your product doesn't become legacy the day it is shipped (or even before it's shipped...). But also attempting to do that without inordinate cost so that you can still ship your product.

This is about real world decisions with real world trade-offs. Developers don't like real world trade-offs; it's part of what makes them good developers. They are given to all-or-nothing, Black Or White type thinking. Trade-offs are unpleasant and unwanted. But trade-offs come with real world issues, so as usual the answer of when to sacrifice some quality to ship a product earlier is: It Depends.

PS. This is the first post in a series of "Blogosphere" posts I intend to start where I write about interesting things I've seen out in the Blogosphere. If you like it, let me know!

Tuesday, September 29, 2009

DDD: Performance Trade Offs

Domain Driven Design is a powerful technique composed of a lot of powerful patterns. Its focus is on designing rich object models that help control Domain complexity, like I talked about previously.

Before I stumbled on DDD I was trying to switch from stored procedure/record set based development to NHibernate/object model based development. One of the first stumbling blocks I encountered was what I refer to as "loading the object web." In a typical domain model your objects will have a lot of associations with each other. If you represent each of these with a property traversal (ex: Employee.Company.Branches.Sales...) things start to get messy.

The first problem you run into is in loading your Model from the database. To load any one Entity you have to load all of its associations, and all those associations' associations, etc. This is a no go. The first way to handle this is through lazy loading, so the associations aren't loaded until you try to use them. That works, but it's a bit sloppy and it is not expressed in the Domain Model.
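
Here's a toy sketch of the lazy loading idea (hand-rolled for illustration; a real ORM like NHibernate generates runtime proxies to do this transparently, which is why it wants mapped members to be virtual):

public class Company
{
  public string Name;
}

public class Employee
{
  private Company _company;

  public Company Company
  {
    get
    {
      // Don't hit the database until the association is actually used
      if ( _company == null )
        _company = LoadCompanyFromDatabase();
      return _company;
    }
  }

  private Company LoadCompanyFromDatabase()
  {
    return new Company { Name = "..." }; // stand-in for the real query
  }
}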

Another problem with this object web is determining how to persist changes and what records to lock. I have found database locking issues to be ridiculously complicated and amazingly ignored in the blog-o-sphere. For more on locking and its various solutions you should read Martin Fowler's Patterns of Enterprise Application Architecture. Fortunately DDD has a pattern we can apply to help address the web of objects problem: Aggregates.

The idea is simple, an Aggregate is a group of tightly related Entities. Evans says that all objects within the Aggregate should be loaded together and persisted together. He also says that objects outside the Aggregate can only have a reference to the Aggregate Root.

Roots are now explicitly indicated in our Model and they indicate where the boundaries between the objects in our object web are. This simplifies life considerably and it will also help with our database locking issues.
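
A minimal sketch of the idea, using the classic order/order-line example (not code from any real project):

using System.Collections.Generic;

// Order is the Aggregate Root. OrderLines live entirely inside the
// Aggregate; outside code never holds a reference to one.
public class Order
{
  private readonly List<OrderLine> _lines = new List<OrderLine>();

  public decimal Total { get; private set; }

  // Every change goes through the Root, so the invariant
  // (Total equals the sum of the lines) can't be violated from outside.
  public void AddLine( string product, decimal price )
  {
    _lines.Add( new OrderLine( product, price ) );
    Total += price;
  }
}

public class OrderLine
{
  public string Product { get; private set; }
  public decimal Price { get; private set; }

  internal OrderLine( string product, decimal price )
  {
    Product = product;
    Price = price;
  }
}

A Repository would then load and persist the whole Order, lines and all, as one unit.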

That's a lot of talk and I haven't even started talking about performance trade offs yet.  Well, fortunately that really won't take long.  If you are coming from the ad-hoc, off the cuff, procedural style, "Transaction Script" (as Martin Fowler would say) programming world like I am you may be alarmed by the Aggregate concept.  You are used to executing a diverse array of procedures (filled with business logic) that return a specific subset of data: only what you need right now.  But with DDD and the Aggregate, you're now going to return all the data for the Aggregate.  This may be more data than you think you NEED at this point in your code.

There's your performance trade off. Retrieving more data when you don't necessarily need it. Why is this worth it? One simple reason is it allows you to enforce all your invariants (read: consistency rules) in the model all the time. You load all that data so that you can make sure all your data is in a consistent state at all times.

This also allows you to avoid triggers. You don't need a trigger in the database because your Model will do whatever the trigger would have done. Avoiding triggers is a performance benefit of DDD. Ironically, I've even seen that loading the Aggregate at once would have REDUCED the number of trips to the database in my code. This is because usually you don't just load something and be done with it. You tend to have to work with it. And there was a lot of duplication and repeated calls in the way I used to write my code that goes away once I have a Model in memory.

So sometimes the DDD approach can actually be more performant. But to be sure, other times it WILL do more work and WILL retrieve more data than the "Smart UI" (as Evans would say). But it's worth it for the increased consistency and peace of mind that you gain, which is what allows you to deal with greater complexity.

Monday, September 21, 2009

DDD: Always Valid Model Objects

One of the rules of DDD (Domain Driven Design) is that your model objects should never be in an inconsistent state. For example, if you have a Contact object and Last Name and Social Security Number are required attributes, DDD says you can not create a Contact without specifying Last Name and SSN.

When I first heard this rule it caused me to dismiss DDD right out of hand. My thinking was, “How dumb! How am I going to bind the Contact to the UI if the Contact can’t be inconsistent? After all, the UI will be inconsistent all the time.”

It turns out the answer to my question is pretty simple, you don’t bind the Contact to the UI. If you’re following the Model View View-Model pattern, you bind a View-Model to the UI, then when the user clicks the save button you use the properties on the View-Model to build a Contact. If there is anything wrong an exception is thrown and you display a validation error through the UI.
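
Something like this bare-bones sketch (hypothetical names; the INotifyPropertyChanged plumbing is omitted):

using System;

// The Model object cannot be constructed in an invalid state.
public class Contact
{
  public string LastName { get; private set; }
  public string Ssn { get; private set; }

  public Contact( string lastName, string ssn )
  {
    if ( string.IsNullOrEmpty( lastName ) )
      throw new ArgumentException( "Last Name is required" );
    if ( string.IsNullOrEmpty( ssn ) )
      throw new ArgumentException( "SSN is required" );
    LastName = lastName;
    Ssn = ssn;
  }
}

// The View-Model bound to the UI is allowed to be inconsistent.
public class ContactViewModel
{
  public string LastName { get; set; }
  public string Ssn { get; set; }

  // Called when the user clicks save; if anything is missing the
  // constructor throws and the UI shows a validation error.
  public Contact BuildContact()
  {
    return new Contact( LastName, Ssn );
  }
}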

In a way this kind of sucks. You have more hoops to jump through now.

  1. You have to “duplicate” the properties of your Model in your View-Model
  2. You have to “map” between the View-Model and Model properties
  3. Constructing a Model object can become somewhat difficult

Eric Evans addresses #3 with the Factory Pattern. Unfortunately he doesn’t say anything about #1 and #2. Interestingly regarding #1 and #2 though, if you were developing for the web you would be forced into this situation regardless because your Model is on the server while your UI is on the client. There would be no way to “reuse” your Model in the UI.

This lack of “reuse” certainly will cause more code to have to be written, but I would like to take a paragraph to say I think this is really a good thing. For one thing, to data bind to your UI you may have to make certain accommodations like changing data types or the structure of your properties. You don’t want concerns like this affecting the design of your Model. Another downside to this “reuse” is you are tightly coupling your UI to the Model. If refactorings happen in the Model they will affect all your views. If you add the View-Model layer only the View-Models will have to be updated (depending on what the refactoring was of course…).

The real question is are these hoops worth it? What’s so good about making sure your Model is always consistent? Simply put, this is an effective way to manage complexity. If your rules for “consistency” are simple, this can seem overkill. But as those rules get more and more complicated, especially as they begin to apply to more than one object in your model, making sure you’re consistent becomes drastically more difficult. By requiring all objects to always be valid, you’re taking a huge load of uncertainty off the shoulders of the Application Developer and you’re making a clear and consistent rule that all Model Developers must follow.

Thus once again bringing that DOMAIN complexity into check, even if it does require more code in the Application layer.

Monday, September 14, 2009

DDD: How to tackle complexity

In DDD (Domain Driven Design) you create an object model that represents your application's domain. This model contains all the relationships and logic of that domain. The purpose of this is to make the complexity of the domain manageable. DDD involves lots of patterns and concepts, but when distilled there are a couple big picture things that really serve to tackle the complexity:
  1. Explicit representation of domain concepts
  2. Continuous Refactoring to "Deeper Insight"
The thing about complexity is it's complex.  Complex things are hard to understand.  If they are hard to understand, they are difficult to get right the first time.  And as hard as they are the first time, they're exponentially harder the second, and the third, and you get the idea.

This is the real issue: complex software is hard to understand.  That's what leads to a system which everyone is so afraid to update, they would rather start from scratch.  Maybe at first you're willing to just go in and add a hack or two.  But each hack raises the complexity and increases the ugliness of the code until finally it's just not worth trying anymore.  That's another way of saying FAIL.

So we must come up with a way to deal with this complexity. The first way DDD does this is by taking advantage of the power of Object Orientation, models, and abstraction. But that's a bit too broad. We need to figure out how to structure those objects and models. That's where DDD applies the idea of explicitly representing domain concepts.

The idea is simple.  If there is an important concept in your domain, you should be able to see it in the Model.  You shouldn't have to dive into the code to extract important concepts, they should be represented by an object in the Model.  Say there is some action in your domain that can only be taken when certain conditions are met.  If these conditions are minor, you can just add if statement guards to the method that performs the action.  But if these conditions are an important part of the domain, hiding them away in the code isn't good enough.  Instead, introduce a Policy Object that represents the conditions that must be met for that action to be performed.  Now the conditions are explicitly represented in your domain.
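
For example (a hypothetical sketch):

public class Order
{
  public bool IsPaid;
  public bool HasShippingAddress;
}

// Minor conditions can hide in guard clauses inside a method...
public class ShippingService
{
  public void Ship( Order order )
  {
    if ( !order.IsPaid || !order.HasShippingAddress )
      return;
    // ... ship it
  }
}

// ...but if "when may an order ship?" is an important domain concept,
// give it an explicit home in the Model:
public class ShippingPolicy
{
  public bool CanShip( Order order )
  {
    return order.IsPaid && order.HasShippingAddress;
  }
}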

You'll see this same idea expressed in Factories, Repositories, Services, Knowledge Levels, etc.  It's a huge part of what it takes to make your system understandable.

The second idea that makes DDD work is Continuously Refactoring to "Deeper Insight." The deeper insight bit means if you discover something new about your domain after you already have a model, you don't just bolt it on. You figure out if that new thing indicates something fundamentally important about your domain. If it does, you refactor the model to explicitly represent that new understanding. Sometimes these refactorings will be small. Other times they will be big and cost a lot. It doesn't matter, you do them anyway.

If you don't, your model will lose its expressive character and become more and more brittle. More and more complex. Harder and harder to understand. So you have to fight to always keep your model as simple and expressive and accurate as you possibly can. If you're lucky, these refactorings can lead to what Eric Evans calls a Breakthrough, where new possibilities or insights suddenly appear that would have been impossible before. That may take some real luck. If you're not that lucky, these refactorings will at least lead to a model that is flexible in the places where the domain requires flexibility. That means it will be easier to handle future insights and refactorings.

The really really cool thing about this is these two concepts form a cycle and feed each other.
The more you Refactor the more Explicit your model becomes. The more Explicit your model, the easier it is to Refactor!

Tuesday, September 8, 2009

Models

Dictionary.com defines Epiphany as "a sudden, intuitive perception of or insight into the reality or essential meaning of something, usually initiated by some simple, homely, or commonplace occurrence or experience." This definition does not include a feeling of "DUH!" or "DOH!" or "Damn, I'm an idiot, how did I not see that sooner?!" though all of those certainly apply to the Epiphany I had a few weeks ago.

In this case, that moment of insight revolves around the concept of a Model. MVC, MVP, MVVM: each pattern starts with Model but what is a Model? For me, it had been nothing more than a data house, a class with get/set properties and MAYBE a couple of recurring little utility methods (format date or whatever). In other words, it was a class representation of the database. It was my LINQ-to-sql or Entity Framework data classes. Or my model classes in Ruby on Rails, which represent and map to the database.

That is a model, it's a model of the database, but is that really a useful thing? Is that what your Model in MVC, MVP, MVVM is supposed to be? The Epiphany came in the form of a question from one of my co-workers. He asked, "Where's the OO?"

Indeed, where is the object orientation in my code? And that's when it hit me, there is none.

I mean, I write in C# so technically everything is an object. But I don't have any classes that exist to model a real world thing or concept. They just aren't there. I have utility/framework classes that exist to support the application infrastructure itself. Classes like StoredProcedure and Parameter, etc. These weren't a big leap, I'm basically just copying the .NET Framework. But I'm not writing a Framework, that's not my job. My job is to write a business application for my client, and I don't have a single object dedicated to modeling the client's business.

So effectively, when it comes to the domain, I'm writing procedural code. I might as well be writing in C.

How could this have happened? When I look back at the projects I did in college, I was modeling all over the place. I had graphs and nodes and simulations and swarm agents, and they were all objects, all models. But when I entered the business programming world and was confronted with GUIs and Databases, I didn't transfer any of that. The main challenges then appeared to be 1. dealing with the UI and 2. dealing with the database. So all my effort went into those areas, and the actual domain logic the application was really there for was just scattered around.

Enter Domain Driven Design: The key point that I simply didn't understand is that Object Oriented Programming is about modeling the world. Yes, it's about abstraction and encapsulation, but those are really tools used to create a Model.

This is obvious, and if you had said it to me I would have thought I'd always known it and wouldn't have grasped the significance. The significance is that it's a drastic change in perception. If you're writing software, your software is doing something for someone. I would have asked the question, "what is it doing?" But a more appropriate question is "what does it model?" The "doing" is then included in the answer, along with the relationships and the data elements and the rules etc...

And now that you're modeling, you're taking advantage of all the benefits of object oriented programming. There is a very good reason why nearly every mainstream modern language is object oriented.
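To illustrate the difference (with a domain and names invented for this example), compare the "data house" version of a class to one that actually models the concept:

// The old way: a class-shaped database row, all data and no behavior.
public class LoanData
{
    public decimal Balance { get; set; }
    public decimal Rate { get; set; }
}

// The modeling way: the domain concept owns its data AND its behavior.
public class Loan
{
    public decimal Balance { get; private set; }
    public decimal Rate { get; private set; }

    public Loan( decimal balance, decimal rate )
    {
        Balance = balance;
        Rate = rate;
    }

    // A business rule lives on the object that owns it.
    public decimal AccrueMonthlyInterest()
    {
        var interest = Balance * ( Rate / 12m );
        Balance += interest;
        return interest;
    }
}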

In Domain-Driven Design: Tackling Complexity in the Heart of Software Eric Evans talks about all the reasons why modeling the domain is so important, and also introduces all kinds of software patterns that help make modeling the domain possible. There are two quotes that I think really explain the importance of this concept of modeling and indicate why it is such a shift of perception.
Model-Driven Design has limited applicability using languages such as C, because there is no modeling paradigm that corresponds to a purely procedural language. Those languages are procedural in the sense that the programmer tells the computer a series of steps to follow. Although the programmer may be thinking about the concepts of the domain, the program itself is a series of technical manipulations of data. The result may be useful, but the program doesn't capture much of the meaning. Procedural languages often support complex data types that begin to correspond to more natural conceptions of the domain, but these complex types are only organized data, and they don't capture the active aspects of the domain. The result is that software written in procedural languages has complicated functions linked together based on anticipated paths of execution, rather than by conceptual connections in the domain model.
The last sentence is the biggie for me. I've never heard a better description than "complicated functions linked together based on anticipated paths of execution." That perfectly describes what my code has looked like.

Evans also has this to say:
In an object-oriented program, UI, database, and other support code often gets written directly into the business objects. Additional business logic is embedded in the behavior of UI widgets and database scripts. This happens because it is the easiest way to make things work, in the short run.

When the domain-related code is diffused through such a large amount of other code, it becomes extremely difficult to see and to reason about. Superficial changes to the UI can actually change business logic. To change a business rule may require meticulous tracing of UI code, database code, or other program elements. Implementing coherent, model-driven objects becomes impractical. Automated testing is awkward. With all the technologies and logic involved in each activity, a program must be kept very simple or it becomes impossible to understand.
Not only is it the easiest way, it's the way the Microsoft tools encourage you to work. I now understand why the ALT.NET community was so pissed about Entity Framework. But that's the standard Microsoft approach: catering to the lowest common denominator. And as long as what you're doing is simple, you'll be fine. In fact, that's probably the right way to go because you'll have less code, less overhead, and still be able to understand and change it.

But I haven't worked on an app that was that simple in, well, ever. And I've been focusing so heavily on the technology, database access and the UI framework, that I forgot the real complexity was in the domain.

Of course now that I understand this I have to figure out how to write applications with a rich Domain Model layer. How do you persist changes from the domain? How do you do change tracking in the domain? How do different elements of the domain communicate with each other? How does a rich domain oriented application work with a service layer? And most important of all, how do you start adding a rich domain to an application that was written without one? Hopefully the DDD book will have the answer to most of these questions.
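For the persistence question in particular, one of the patterns Evans introduces is the Repository. As a rough sketch of the idea (the interface and names here are mine, not the book's), the domain gets a collection-like boundary to reach persisted objects, and the database plumbing stays behind it:

// Document stands in for whatever domain class is being persisted.
public interface IDocumentRepository
{
    Document GetById( int id );
    void Add( Document document );
    void Remove( Document document );
}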

Monday, August 31, 2009

Engaged Employees

First, Break All the Rules is a fantastic book about management and companies and I highly recommend it for managers and non-managers alike. Toward the end it makes the case that good managers lead to engaged employees.

I was totally enthralled by that term engaged employees. It's a perfect description of one of the characteristics that can make one person so much better at their job than another. I'm sure you've seen examples of this too. One person can be ridiculously smart, and yet do bad work. Or they might be amazingly talented, and still they don't do good work. You see examples of this every day in the grocery store checkout clerk, or the person at the front desk of your hotel, or the cleaning staff at your office, or the police officer directing traffic.

Sometimes when a person isn't performing well in their work, the failure is attributed to them being lazy, or immature, or even that they're just not challenged enough and so are bored. That may all be true, but the problem is really that they are simply not engaged in their work, for one reason or another.

It turns out this concept of employee engagement has a rich history dating back to at least 1993 when it was described as "an employee's involvement with, commitment to, and satisfaction with work."

This really goes to the heart of what Peopleware was all about. Every manager wants their employees to be involved and committed to their work, but they all too often forget that they also need to be satisfied with it. Peopleware talks about this all over the place. And probably the two best real world examples are Google and Fog Creek.

Companies with disengaged employees are simply bleeding money. It's like the difference between people who drive the speed limit and coast into all their stops and people who are constantly speeding up to the bumper of the car in front of them and hitting the brakes... The second dude is just throwing gas away and killing his mileage. Every time he pulls around someone and floors it he feels good, but then he has to slam on the brakes again. And when he fills up at the pump and calculates his mileage, he'll blame his car or traffic conditions for the low mileage, anything but himself. This same shortsightedness is the reason why companies want to skimp on amenities for their staff or the quality of their product. But it's costing them engaged employees.

So engaged employees are important, the challenge is in getting them. Some people need someone around to keep them actively engaged, but others don't. The people that don't need help are your "self-motivated" people. The kind who need to be engaged to be happy. I would argue that most people in the world fall into this category, they just might be engaged in things other than their work. Look at sports. I'm not into sports. At all. I enjoy watching a game, but I just can't get into the stats and the history and memorizing years and events and people and on and on and on... I'm simply amazed by people who can learn all that stuff and talk and argue about it endlessly. The amount of energy and dedication required for that is phenomenal and it takes some serious smarts. Imagine if you could harness just a little bit of that energy and apply it towards something slightly more productive.

And therein lies the rub. How do you get people to be engaged, and more importantly how do you get them engaged in the right things?

I've had the fortune of knowing and working with a lot of smart people. Most have been naturally engaged. But it's interesting to see what that results in. If there is a project at work that excites them, they're all in. But when it doesn't, or when some other factor is causing them to dislike it, they'll find other things to get into: side projects at home, or frameworks or "minor" bugs or related "enhancements" at work. Expending energy on these things is extremely valuable experience to the individual but maybe not so much to the company.

But on the other hand, the greatest breakthroughs can frequently come from people messing with unrelated stuff. Just look at Google's 20% time and the making of AdSense. AdSense practically single-handedly funds Google, and it was invented by someone in their 20% time. So having some leeway is equally important.

There is a line to walk here. If you have a lot of people who are very engaged, but always in things that don't help you achieve your business goals, you're screwed. And if most of your people are not engaged at all, you're screwed.

Walking this line is going to be difficult, but I would guess that most companies aren't even aware of it. Maybe their employees are engaged anyway, because of good managers or just because of luck. But that doesn't seem too likely.

So how do you get people to be engaged? Peopleware, and First, Break All the Rules, and Joel Spolsky, and 37 Signals can answer that question better than I can, so I'll refer you there.

In the meantime I'm curious, does your company engage you?

Monday, August 24, 2009

Optimize for Success

Let's pretend you are writing some code. This code does something. Occasionally it may fail for some reason, but we intend it to succeed, and it will succeed more often than it fails.

As an example, let's say we're going to update a record in a database, but before we do, we need to do a get so we can see if any of the values changed. If some values have changed, we'll do some stuff after the update. Maybe we'll send an email or something. We're also going to do an optimistic concurrency check (see if the record changed in any way, using a last-update timestamp column) before we do our save.

Now, this is just an example so I can make a larger point, so bear with me here. Let's go ahead and pretend we're using LINQ-to-sql to do the update, so LINQ will also do the concurrency check for us.

Our pseudo code looks kind of like this:
Get record
Update record
if record.prop changed:
    send email

Now, if the update fails due to the concurrency check, this will just bomb out.
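In actual C# against LINQ to SQL, "bomb out" means SubmitChanges throws a ChangeConflictException when the timestamp check fails. Here's a hedged sketch (the DataContext, table, and property names are all invented for this example):

using System;
using System.Data.Linq;
using System.Linq;

public class RecordUpdater
{
    // MyDataContext is a hypothetical LINQ-to-sql DataContext with a Records
    // table whose timestamp column is used for optimistic concurrency.
    public void UpdateRecord( MyDataContext db, int id, string newValue )
    {
        // The Get: load the record so we can see if the property changed.
        var record = db.Records.Single( r => r.Id == id );
        bool propChanged = record.Prop != newValue;

        record.Prop = newValue;

        // The Update: LINQ to SQL checks the timestamp for us and throws
        // ChangeConflictException if someone else changed the row.
        db.SubmitChanges();

        if ( propChanged )
            SendEmail( record );
    }

    private void SendEmail( Record record ) { /* ... */ }
}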

But notice that when the concurrency check fails, we still did the Get operation. This kind of sucks, because we didn't need to do it. And we can't do the Update before we do the Get; that would defeat the purpose. So to avoid the wasted Get, we'd have to do the concurrency check manually.

Wait. What? Why am I getting all upset about this? WHO CARES if I do a Get I don't need in a failure scenario? It would only matter if there were something unusual about this failure, like it happening all the time, or it having record-locking implications, and none of that applies here. I'm optimizing for the wrong thing. I should be optimizing for success, not for failure (while avoiding premature optimization, of course).

If I could remove the Get completely, so the method could succeed without it, that might be something worth talking about. But it is totally not worth adding any code complexity just to optimize this method for a failure case.

Thus: Optimize for Success, and don't get too worked up over failure.

Monday, August 17, 2009

Another C# Fluke

Here's some code:
var l = new List<int?>();
l.Add( null );
( (System.Collections.IList)l ).Add( null );

You would expect those two Adds to be equivalent and that after executing them l.Count would be equal to 2.

Instead you get an ArgumentException on the second Add. Turns out List<T> implements both its own Add(T) and an explicit System.Collections.IList.Add(object), and the interface-specific one does some input validation that the generic Add doesn't. That validation doesn't understand Nullable types, so you get an exception.
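Conceptually, the explicit implementation is doing something like this (a sketch of the idea only, not the actual framework source), and the problem is that null is int? evaluates to false:

using System;
using System.Collections.Generic;

// Sketch only: roughly the validation the explicit IList.Add performs.
class SketchList<T> : List<T>, System.Collections.IList
{
    int System.Collections.IList.Add( object value )
    {
        // A boxed int? that is null arrives here as a plain null reference.
        // ( null is int? ) is false, and typeof(int?).IsValueType is true,
        // so the guard throws, even though the generic Add accepts null.
        if ( !( value is T ) && ( value != null || typeof( T ).IsValueType ) )
            throw new ArgumentException( "value is the wrong type" );

        Add( (T)value );
        return Count - 1;
    }
}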

I guess it's probably a bug, due to the fact that Nullable<> behaves kind of strangely through reflection.

Wednesday, July 29, 2009

A C# Language Quiz

I was writing some code today when I suddenly realized that I didn't know how a certain very fundamental part of the C# language would behave.

Here's an example:
using System;

namespace TestNullCastToObject
{
    class Program
    {
        static void Main( string[] args )
        {
            Test t = null;
            object o = t;
            if ( o is Test )
                Console.WriteLine( "a Null Test is a Test" );
            else
                Console.WriteLine( "a Null Test is _NOT_ a Test" );

            Console.ReadLine();
        }
    }

    class Test
    {
        public int Id { get; set; }
    }
}


If you compile and run that sample, what do you think the output will be?

No really, think about it.

Ok, I'll tell you what I thought the output would be. I thought the output would be "a Null Test is a Test".

Ok, now I'm going to tell you what the output is.

"a Null Test is _NOT_ a Test"


Does that surprise you as much as it did me? I think I'm actually happy that it behaves this way, but I'm still surprised. In hindsight it makes sense: the is operator asks about the runtime type of the object a reference points to, and a null reference doesn't point to any object, so null is never an instance of anything.
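For what it's worth, this lines up with how the as operator behaves, which is part of why I think it's the right call. Reusing the Test class from the sample above:

Test t = null;
object o = t;

Console.WriteLine( o is Test );             // False: null is never an instance of anything
Console.WriteLine( ( o as Test ) == null ); // True: as yields null for the same reason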

Tuesday, June 30, 2009

SQL Deadlocks: More with child data

In yesterday's post, SQL Performance: Child data, I wandered through an issue involving caching data about a parent table's child data. In that post I talked about writing SQL that would save a foreign key on the parent table to the most recent child record. This is very simple and would look like this:

begin tran TxExample
BEGIN TRY

    declare @NewChildId int

    insert into Child ( blah, blah, blah ) values ( @blah, @blah, @blah )

    select @NewChildId = SCOPE_IDENTITY()

    update Parent set CurrentChildId = @NewChildId where ParentId = @ParentId

    if @@TRANCOUNT > 0 commit tran TxExample

END TRY
BEGIN CATCH
    if @@TRANCOUNT > 0 rollback tran
END CATCH

This is relatively straightforward. It inserts the child, then updates the parent's cached data. Those two operations are wrapped in a transaction and a try/catch to ensure that if anything should fail for any reason, both statements will be rolled back. This ensures data integrity.

And now it's time to talk about deadlocks. This code is susceptible to deadlocks. As a relatively contrived but nonetheless possible example, suppose the following SQL could also be run:

begin tran TxExample2

update Parent set blah = @blah where ParentId = @ParentId

...

select * from Child where ChildParentId = @ParentId

commit tran
If these two queries were to run at the same time, operating on the same ParentId, and SQL Server were to context switch them at just the right moment, they would deadlock. Specifically, if the first query completed its insert into the Child table and then SQL switched to the second query, we would deadlock.

This is because when the second query tries to select from the Child table, it will wait, because the first query has inserted a new row and SQL Server's default isolation level is read committed, which means dirty data will not be read; instead, the query waits for the data to be committed. So it's going to sit there, waiting for the first query to commit.

This isn't a deadlock yet. The deadlock happens when SQL switches back to the first query and attempts to execute the update on the parent. When it does this, it will try to obtain an exclusive lock on that parent row, but it won't be able to, because the second query already has an exclusive lock from its update. So it will wait for the second query to commit.

The first query is now waiting for the second query which is waiting for the first query and you have yourself a deadlock.

Before we fix it, we should ask ourselves "is this a big deal?" The answer is, it depends, but in general yes. If all your SQL is small and all your transactions complete quickly and you don't have very many users banging on the system concurrently, then you probably won't see any deadlocks. But unless you can guarantee that all those conditions will remain the same, you have to be at least a little worried. And if those conditions don't apply to you, you definitely have to be worried.

So how do we fix it? The first thing we could do is commit the transaction in the second query before executing the select. If this is possible, then it's a good idea. You want your transaction to commit as quickly as possible, and you want to touch as few objects as you can while in the transaction. That said, there are plenty of reasons why you might not be able to commit the transaction after the update. For example, maybe you're reading the child data because you need it to perform another update, and those two updates have to be in the same transaction. In that case, there is nothing you can do to fix query #2.

But even if you could fix query #2, someone could some day come along and write query #3 which would introduce the same problem again. So what we really need to do is fix query #1. The way we do that is by having query #1 obtain a shared lock on all the resources we know it will need to touch, immediately at the top of the query.

Add this code after the BEGIN TRY:
set transaction isolation level repeatable read
select ParentId from Parent where ParentId = @ParentId
set transaction isolation level read committed

With this code in place, query #2 will not be able to execute its update until query #1 completes. Thus preventing the deadlock and saving the day!

This example was simple, but the deadlock was still subtle and hard to see. This problem just gets more complicated the more complicated your SQL gets. And your SQL will get more complicated in direct relation to how complicated your data schema is. So you really have to be on the lookout for this issue.

Before I wrap this up, I should mention that if you need to lock more than just one row in one table at the top of your query (like we did in query #1), life can get interesting. If the tables you are locking are all related, you can lock them by inner joining to them. But if they are unrelated, you can't join from one to the next, so you need to execute separate select statements. And if two queries need to lock the same records in two unrelated tables, but they lock them in different orders (A, B vs. B, A), you can end up with a deadlock! For these cases you have to resort to what you learned in your operating systems class: always lock all your resources in the same order. Good luck with that.

I'll leave you with some rules of thumb, which apply to most cases but, of course, not all:
  1. Keep your transactions as small as possible by touching as few objects as possible
  2. Keep your transactions as fast as possible: if you have a query that can execute on n records in a single transaction where n is unbounded you are likely to find yourself in a world of hurt
  3. Obtain shared locks on everything your transaction will eventually require exclusive locks on before you acquire any other locks
  4. If you need to do any reads that don't need to be repeatable, do them before you obtain any shared or exclusive locks (this is really just in keeping with #2)
  5. If you set the transaction isolation level to repeatable read, make sure you're setting it back to read committed (even if it's the last line of your query; this will make sure triggers don't execute in repeatable read)
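One last mitigation worth mentioning: even with careful lock ordering, some deadlock will eventually slip through, and SQL Server will pick a victim and roll its transaction back with error 1205. The application side can treat that error as retryable. Here's a hedged ADO.NET sketch (the connection string and command are placeholders):

using System;
using System.Data.SqlClient;

public static class DeadlockRetry
{
    private const int DeadlockVictimErrorNumber = 1205;

    // Runs a command, retrying if we're chosen as the deadlock victim.
    public static void ExecuteWithRetry( string connectionString, string sql )
    {
        const int maxAttempts = 3;
        for ( int attempt = 1; attempt <= maxAttempts; attempt++ )
        {
            try
            {
                using ( var conn = new SqlConnection( connectionString ) )
                using ( var cmd = new SqlCommand( sql, conn ) )
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                    return;
                }
            }
            catch ( SqlException ex )
            {
                // 1205 means our transaction was already rolled back by the
                // server, so it's safe to simply run it again.
                if ( ex.Number != DeadlockVictimErrorNumber || attempt == maxAttempts )
                    throw;
            }
        }
    }
}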
Now, all of this has been learned by trial and error, experimentation, and a lot of reading. If you know of other ways around these issues, or if you have a different take, I definitely want to hear about it.

Monday, June 29, 2009

SQL Performance: Child data

In this post I’d like to talk about a specific issue in SQL and all the various ways you could approach it. Specifically, I’d like to talk about dealing with a parent entity’s child data. As an example, let’s use a very simple document database that stores documents and their versions. Obviously, a document will have many versions, but each version will have only one document, as in this diagram:

Document Db Diagram

This is not a complete database, clearly, but it indicates that the DocumentVersion table is a child of the Document table.

Now we get to the part where this gets at least partly interesting. Let’s say we’re going to write a search that returns documents. In the results we want to display information about the current version of each document as well as the total number of versions for each document.

This is a surprisingly non-trivial query…

select d.Id as DocumentId, d.Name, v.Id as VersionId, v.FileName, v.Extension, v.FileSize,
    ( select count(Id) from DocumentVersion where DocumentId = d.Id ) as numVersions
from Document d
inner join DocumentVersion v on d.Id = v.DocumentId
where v.Id = ( select max(Id) from DocumentVersion where DocumentId = d.Id )

Now, there are a bunch of ways to write this, but this is a perfectly good example. Notice we have an inner select in the select clause to get the number of versions, and we have another select in the where clause to get the “latest” version. Here I’m depending on SQL Server’s Identity Specification to give me the latest row, because it simplifies the query. If we didn’t want to do that, I’d have to either “select top 1” while ordering by the inserted date (which isn’t on the table in our example) or use a row number function and get the row where the row number = 1, again ordered by the inserted date. Both of these queries are correlated, meaning they run once for each document in our results.

This query is ugly, but it works. We could optimize it and tweak the way it’s written to try to get the best possible performance out of it. But is this really the best way to do this? If we think about it, we’re going to be looking at all the versions for every document returned in our search. The more documents we return, the worse this is. But worse, we’re going to do WAY more reads than updates in this case. New versions simply are not going to be added that often. So it seems silly to be constantly looking up information about the versions over and over and over and over again when we know it’s unlikely to have changed from the last time we looked at it.

Wouldn’t it be better to cache this information on the Document table so we don’t have to keep calculating it repeatedly, thereby simplifying the query and improving its performance?

To do this, we simply add “NumVersions” and “CurrentDocumentVersionId” columns to the Document table. But now we have to keep these columns up to date. There are a few ways to do this:

  1. Trigger on DocumentVersion updates Document’s cached columns on Insert/Delete
  2. Code that does inserts or deletes to DocumentVersion must update cached columns
  3. Cached columns are calculated columns that use a function to lookup values
We'll take these in turn. #1, using triggers, has these benefits:
  • Ensures the columns will always be up to date, no matter how the version records are changed
  • Code will be in just one place and we won't have to worry about it again
However, triggers have these drawbacks:
  • Slow (like, REALLY slow. Batch operations like inserting or deleting many versions will slow to a crawl)
  • Potential for subtle and hard to track down deadlock problems
  • Increased code complexity because the trigger must be written to handle ALL cases, even if you only use some (ex: inserting many versions at once)
On #2, updating cached columns when updating Versions, we have these benefits:
  • Simplest possible code
  • Performant
  • Deadlock issues are easier to see and handle
But it comes with its own downsides:
  • Same code may end up in many places (ex: Insert and Delete stored procedures, if using sps)
  • Potential for error if someone inserts/deletes versions from a new location and forgets to update the cached columns
#3, using calculated columns, is the same as putting the lookup logic in the query (since you can't persist the value) but has the overhead of a function.

So, between #1, #2, and #3, which is the right option?

I used to use triggers, out of fear that someone would forget to update the columns if I went with #2. But the performance and deadlocking issues with triggers have now caused me to go with the "API layer" approach of #2.

I think the answer, as always, is it depends. If the tables you're using are likely to be always accessed through an API layer, then you should go with #2. But if many people will be manipulating those tables from many different areas and there is no central API layer, you're pretty much forced to go with #1.

And the question remains, is it really worth caching the data this way, or should you just keep the lookups in the queries. Once again, my favorite theme for this blog: it depends. The big question is really performance, and that depends on how the queries will be used. Are you going to be returning thousands of results, or just hundreds? Are you going to be running this query often?

In SQL there is no one-size-fits-all rule. And worse, SQL is complex enough that it has to be treated as a black box, meaning you can't fully reason about what it will do. Therefore, your only hope is to test and test and test. You pretty much have to write the query every way you can imagine and then performance test each one... And that takes a lot of time.

As Scott Hanselman would say, "Dear Reader," what do you think? Have you been faced with this issue? What did you do?