Friday, December 21, 2012

Slicing Concerns: Implementations

In Slicing Concerns And Naming Them I posed a question about how to go about separating different concerns while still maintaining a clean and relatable code base.  Some interesting conversation resulted, and I wanted to follow up by investigating some of the different approaches to this problem that I'm aware of.

Inheritance
public class Task : ActiveRecord
  public string Name { get; set; }
  public int AssignedTo_UserId { get; set; }
  public DateTime DueOn { get; set; }

public class NotificationTask : Task
  public override void Save()
    bool isNew = IsNewRecord;
    base.Save();
    if (isNew)
      Email.Send(AssignedTo_UserId, "New Task", "You have been assigned a new task");

public class TasksController : Controller
  public ActionResult Create(...)
    new NotificationTask {...}.Save();

  public ActionResult CreateWithNoEmail(...)
    new Task {...}.Save();
This works, and the names are reasonable. But of course, inheritance can cause problems... I won't go into the composition-over-inheritance argument, as I assume this isn't the first time you've heard it!

Decorator
public class Task : ActiveRecord
  public string Name { get; set; }
  public int AssignedTo_UserId { get; set; }
  public DateTime DueOn { get; set; }

public class NotificationTask
  Task task;

  public NotificationTask(Task t)
    this.task = t;

  public void Save()
    bool isNew = task.IsNewRecord;
    task.Save();
    if (isNew)
      Email.Send(task.AssignedTo_UserId, "New Task", "You have been assigned a new task");

public class TasksController : Controller
  public ActionResult CreateTask()
    new NotificationTask(new Task {...}).Save();
This is not really the decorator pattern... At least not as defined by the GoF, but I have seen it used this way often enough that I don't feel too terrible calling it that. Really this is just a wrapper class. It's similar to the inheritance approach, except that because it doesn't use inheritance, it leaves us free to use inheritance on the Task for other reasons, and to apply the email behavior to any kind of task.

The naming is a bit suspect, because NotificationTask is not really a task, it just has a task. It implements only one of the task's methods. If we extracted an ITask interface we could make NotificationTask implement it and just forward all the calls. This would make it a task (and a decorator), but would also be crazy tedious.

Service
public class Task : ActiveRecord
  public string Name { get; set; }
  public int AssignedTo_UserId { get; set; }
  public DateTime DueOn { get; set; }

public class CreatesTask
  Task task;

  public CreatesTask(Task t)
    this.task = t;

  public void Create()
    task.Save();
    Email.Send(task.AssignedTo_UserId, "New Task", "You have been assigned a new task");
This service represents the standard domain behavior for creating a task. In an edge case where you needed a task but didn't want the email, you would just not use the service.

The naming is pretty nice here, hard to be confused about what CreatesTask does... However, this path leads to a proliferation of <verb><noun> classes. In the small it's manageable, but as they accumulate, or as they start to call each other, things get confusing. For example, if you know nothing about Task and you have to start working on it, would you know you should call the CreatesTask service? Would you know it exists? And would you be sure it was the correct service for you to be calling?

Dependency Injection
public class Task : ActiveRecord
  public string Name { get; set; }
  public int AssignedTo_UserId { get; set; }
  public DateTime DueOn { get; set; }

  INotifier notifier;

  public Task(INotifier notifier)
    this.notifier = notifier;

  public override void Save()
    bool isNew = IsNewRecord;
    base.Save();
    if (isNew)
      notifier.Notify(this);

public class TasksController : Controller
  public ActionResult Create(...)
    new Task(new EmailNotifier()) { ... }.Save();

  public ActionResult CreateWithNoEmail(...)
    new Task(new NullNotifier()) { ... }.Save();
I'm going to ignore all the complexity around the fact that this is an ActiveRecord object which the ActiveRecord framework will usually be responsible for new-ing up, which makes providing DI dependencies difficult if not impossible...

The idea here is to pass in an INotifier, and then when you find yourself dealing with a task you'll build it with the notifier you want it to use.  If you want no notification, you use the Null Object pattern and pass in an INotifier that doesn't do anything (called NullNotifier in the code example).

But this has the ORM-framework drawback I mentioned above.  Plus it requires the code that is constructing the task to know what behavior the code that is going to save the task will require.  Most of the time that's probably the same code, but if they aren't, you're out of luck.

Operational vs Data Classes
public class TaskInfo
  public string Name { get; set; }
  public int AssignedTo_UserId { get; set; }
  public DateTime DueOn { get; set; }

public class TaskList
  public TaskInfo Create(TaskInfo t)
    return t;
Here I've separated the data class from the operational class. I talked about this in the Stratified Design series of posts.  This separation hides ActiveRecord, giving us the control to define all of our operations independently of the database operations they may require.  If we needed to save a task without sending an email we could just call TaskInfo.Save() directly from whatever mythical operation had that requirement.  Or we could do some extract method refactorings on the TaskList.Create method to expose methods with just the behavior we need.  Or we might extract another class.  Naming is going to be hard for these refactorings, but at least we have options.

If I missed anything, or if you see an important variation I didn't think of, please tell me about it!  As always you can talk to me on twitter, and you can still fork the original gist.

Monday, December 17, 2012

Slicing Concerns, And Naming Them

Naming is hard.  Especially in OO.  To name something, you have to understand it at its deepest level.  You must capture its true essence.  This is hard when you're giving a name to a thing that already exists, but it's orders of magnitude harder when you're simultaneously creating the thing out of thin air and trying to decide what to call it.  Which is, after all, what we do when we're designing code.

The "essence of things" correlates closely with concepts like Separation of Concerns and the Single Responsibility Principle.  You can slice any object into ever smaller concerns or responsibilities.  You can slice it right down to its constituent atoms!  Many design problems, like tight coupling and loss of flexibility, are in large part due to having concerns and responsibilities defined at too high a level.  Could this be so common simply because it's so hard to find names for the smaller concepts?  It's frequently easy to see what those separate concepts may be, but terribly hard to think what to name them!

Let's have an example:
public class Task : ActiveRecord
  public string Name { get; set; }
  public int AssignedTo_UserId { get; set; }
  public DateTime DueOn { get; set; }

  public Task()
    DueOn = DateTime.Now.AddDays(1);

  protected override void AfterInsert()
    Email.Send(AssignedTo_UserId, "New Task", "You have been assigned a new task");
This is entirely fictional code, but it's not so different from a lot of real code I've seen in the wild. And it illustrates this problem of slicing concerns very well.

At first glance, it seems very simple. The domain has a Task concept which has a default due date (set in the constructor), and which sends a notification email after it's inserted (using an ActiveRecord hook).  This very nicely and completely describes what a task is and how it behaves in our system.  And the names make it very intuitive.

Or do they?  Is it really the case that every single time we insert a task in the database it should send an email?  Unlikely.  We should slice that behavior out and put it somewhere else:
public class _WhatShouldThisBeCalled_
  public void _WhatShouldThisBeCalled_(Task t)
    t.Save();
    Email.Send(t.AssignedTo_UserId, "New Task", "You have been assigned a new task");
This is an incredibly simple refactoring, but I have no idea what this class should be called. The method is a bit easier, it could be InsertAndNotify(Task t) or something similar. But what is this class? What concern does it represent?

No, really, I'm actually asking you.  What would you call it?

Or how else would you write it?  Maybe you'd do something like firing an event and having someone hook it?  How would they hook it?  Maybe we need an EventAggregator?  This is getting awfully complex for such a simple requirement!

And we're not done, because it's not really so great that it defaults the DueOn date in the constructor.  Is every single task really due tomorrow?  Or is it just a certain kind of task, or tasks created in a certain way?  And where will we put that code, and what will it be called?

I sincerely believe this is both a significant design problem, and a significant naming problem.  I want to know how you'd tackle it.  Please do leave a comment or tell me on twitter or even better, fork the gist on github!

These concerns need to be separate!  But what a cost we pay for it!  The simple OO domain model of a Task has turned into something much less relatable.  Either it's event driven spaghetti code with strange infrastructure objects like EventAggregators.  Or it's a hodge-podge of service or command classes, none of which actually model a relatable thing...  They only model functions, features, behaviors, use cases.  Or maybe we try applying inheritance, and then we end up in a whole different world of confusing names and surprising behaviors.

Can't we do better?  Is there some way we can do the slicing of concerns we need but still maintain the modeling of real relatable things?  Even if that may require a different way of thinking or not using the design patterns that led us here (Active Record, in this case).

Friday, December 14, 2012

The Fundamental Software Design Problem

The most fundamental software design problem, that is, the most important problem which underlies all design decisions, is:

Choosing the right amount of abstraction

Say you're starting a brand new project that you don't have any previous experience with.  What sort of architecture should you apply?  You have a lot of choices, some listed here, ordered in increasing complexity:
  • SmartUI
  • MVC w/ Active Record
  • Ports and Adapters
  • SOA
  • CQRS
For some problems just a glance is enough to know they need a more abstract and complex solution.  Conversely, some problems quite clearly should be as simple as possible.  But most problems lie somewhere in between.  And generally there's really no way to know up front exactly where on the complexity scale a problem will lie.

Worse still, in a large enough application different portions of the application might be more or less complex.  Some areas could be simple crud with no logic, while other areas involve heavy data processing and complex workflow and queries.

And even worser, this is a moving target.  If I had a dollar for every time something I thought was pretty straightforward became much more complicated either because of changing requirements, scope creep, or just misunderstanding...  Well, I'd have quite a few dollars!

As I see it, there are basically two strategies for dealing with this problem:
  1. Start as simple as you possibly can, and evolve to more complicated designs as things change
  2. Start slightly more complex than may be strictly necessary so that it's easier to make changes later
I would expect people from the Agile and Lean communities to balk at the very mention of this question.  They'd probably bring up stuff like YAGNI and evolutionary design.  And I agree with this stuff, I agree with it completely!

But I also think boiling frog syndrome is a real thing.  Even a great team with the best intentions can easily find themselves stuck in the middle of a big ball of mud.  That's just life.  Little things change, one little thing at a time, and you do "the simplest thing that could possibly work" because hey, ya ain't gonna need to do a big overhaul now, this will probably be the last tweak.  And next thing you know, everything is a tangled mess and all your flexibility is gone!

To add insult to injury, when you find yourself wanting to do a significant refactoring to a more abstract design, it's frequently your unit tests that are the primary problem spot holding you back.  Those same tests that were so useful when you were building the code in the first place are suddenly locking you into your ball of mud.

I can hear you now.  You're looking down your nose at me.  Huffing and puffing that if I'd had more experience it never would have come to this!  If I'd just listened to my tests, the ball of mud wouldn't have happened.  If I'd just understood the right way to build software!  blah blah blah.  Sorry, I don't care.  I build real software for real people with a real team, I'm not interested in idealism and fairy tales.  I'm interested in practical results!  I'm interested in making the correct compromises to yield the best results while constantly striving to do better!

And that's ultimately my point!  No matter what design I start out with, I want it to allow me to strive to do better.  If the simplest thing that could possibly work is going to be hard to evolve into something more flexible, that's a problem.  Accounting for change doesn't necessarily mean doing the simplest thing, in some cases it means doing something a little more complicated, a little more abstract, a little more decoupled, or a little more communicative.

If this ticks you off, please come argue with me on twitter!

Thursday, December 13, 2012

The Dizziness of Freedom

"A man stands on the edge of a cliff and looks down at all the possibilities of his life.  He reflects on all the things he could become.  He knows he has to jump (i.e. make a choice).  But he also knows that if he jumps, he'll have to live within the boundaries of that one choice.  So the man feels exhilaration but also an intense dread."  - Jad Abumrad quoting Kierkegaard

Wednesday, December 12, 2012

Neat F#: Custom Operators

F# has support for custom operators.  The best use of this I've seen so far is in the canopy web testing library.  Canopy allows you to write code like:
"#firstName" << "Kevin"
"#firstName" == "Kevin"
That code is the same as code you'd write with the Coypu web testing library, which spells out the find-the-field-and-fill-it steps explicitly.  As you've likely deduced, the "<<" operator has been overridden to look up the field and set its value, while the "==" operator has been overridden to look up the field and assert on its value.

In this case, both of these operators do exist already in F#, but they obviously aren't usually used to drive a web browser.  So this is a powerful use of operator overloading.  But F# allows you to define custom operators that have no definition in F# as well.  They can be any combination of a certain set of characters.

For example, there is no "=~" operator in F#, but you could define one to do a regex match as follows:
open System.Text.RegularExpressions
let (=~) input pattern = Regex.IsMatch(input, pattern)
And you'd use it like:
"some input" =~ ".*input"
And you could also define one that is case insensitive:
let (=~*) input pattern = Regex.IsMatch(input, pattern, RegexOptions.IgnoreCase)
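Putting the two together, here's a quick sanity check (self-contained, so the definitions are repeated):

```fsharp
open System.Text.RegularExpressions

let (=~) input pattern = Regex.IsMatch(input, pattern)
let (=~*) input pattern = Regex.IsMatch(input, pattern, RegexOptions.IgnoreCase)

// "input" only appears in uppercase, so case sensitivity decides the result
printfn "%b" ("Some INPUT here" =~ "input")    // false, case matters
printfn "%b" ("Some INPUT here" =~* "input")   // true, case ignored
```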
These operators are not overloaded, they're just custom defined.

There is clearly a tradeoff here between explicit code and concise code.  Look back at the first example from Canopy again.  If you knew that was web testing code, and you recognized "#firstName" as a css selector, you would probably figure out what it was doing.  And this conciseness is going to be really nice in a situation where you're executing the same type of operations over and over and over again (say, like, in a Selenium web test!).  So while there's no mistaking what the Coypu code is doing, I'd rather write the Canopy code!

However, in the regular expression example, since =~ and =~* are not part of the language, how would you know what they do?  Certainly there's a similarity to Ruby, but I've never seen a =~* operator.  So introducing stuff like this to your code base runs the risk of making your code harder to understand.

In the end, I think it's an awesome feature to have at your disposal.  And I think a good rule of thumb is to be willing to try some custom operators when you have a high and dense repetition of operations.  That is, it's not a one off operation, or it's not used always by itself in far flung sections of code.

In any case, this is another powerful, and very neat, feature of F#.

Tuesday, December 11, 2012

Neat F#: Inferred Return Types

What is the return type of this F# function?
let hello name = sprintf "Hello %s" name
If you guessed string, you're correct! I know this syntax can be confusing at first glance, so here it is one element at a time:
  • let hello name =
    let: the "let binding's" job is to associate a variable name with a value or function.  It BINDS things to names
    hello: hello is the name of the function
    name: hello takes one parameter; it's called name, and its type will be inferred
  • sprintf "Hello %s" name
    sprintf: basically F#'s version of .NET's String.Format, it's a function that takes a format string with placeholders and values as arguments and returns a string.  In fact, this function is so neat it deserves its own post.
    "Hello %s": the format string, %s tells the compiler a string parameter is required
    name: argument to sprintf
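As a tiny aside, sprintf's format string isn't just a convention; the compiler type-checks the arguments against the placeholders, so passing an int where %s expects a string is a compile error:

```fsharp
// %s demands a string, %d demands an int; swap them and it won't compile
let greeting = sprintf "Hello %s, you are %d" "kwb" 33
printfn "%s" greeting   // Hello kwb, you are 33
```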
sprintf returns a string, therefore the hello function returns a string.  Notice there's no explicit return statement; the function returns whatever its last expression evaluates to.  However, this is made more interesting by the fact that just about everything in F# is an expression that returns a value, including if expressions:
let hello name =
    if name = "kwb" then
        "'sup KBizzle!"
    else
        sprintf "hello %s" name
This function still returns a string, because the if expression returns a string. Note this means in F# the if and else branches must return the same type!

Also note, there's nothing wrong with that last code sample, but it's my impression that if statements are generally frowned upon in F# in favor of pattern matching. A true F# dev would probably have written that last using the pattern matching function syntax like this:
let hello =
    function
    | "kwb" -> "'sup KBizzle!"
    | _ as name -> sprintf "hello %s" name
That's really outside the scope of this post, but it makes me happy!

So this brings us to the _really_ neat part: functions with changing return types.  All the functions we've seen so far have had a single static return type.  But what about this function?
let crazy f =
    f 4
What is its return type? Maybe this will help?
let somestring x = sprintf "the number %d" x
let someint x = x

crazy somestring
crazy someint
Crazy's return type is different depending on what function we pass to it!
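What's happening is that F# infers the generic type (int -> 'a) -> 'a for crazy, and 'a gets pinned down separately at each call site. A self-contained check (the functions are redefined here, with someint pinned to int for clarity):

```fsharp
let crazy f = f 4

let somestring x = sprintf "the number %d" x
let someint (x: int) = x

// 'a becomes string at one call site and int at the other
let s = crazy somestring   // s : string = "the number 4"
let i = crazy someint      // i : int = 4
printfn "%s / %d" s i
```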
mind blown

Monday, December 10, 2012

Neat F#: Pipe Forward

Functional programming dates back to the 1950s, but from my perspective it seems to have been garnering more attention in the software engineering community recently.  I first got really interested in it last year at CodeMash when Ben Lee introduced me to a little bit of Erlang.  It was so fascinating that I decided I had to dive in deeper.  F# being a .NET language, it was the obvious choice.  So I bought an ebook of Programming F# and started writing a few little programs.

Along the way there have been a few things that completely blew my mind, or that I thought were just flat out neat.  You could (and should) learn about these things from much better sources, but I love to share!

So to kick it off, this first post will cover the first thing in F# that really truly blew my mind: the Pipe Forward operator.  You see it used in F# all the time; what it allows is specifying the last parameter's value first, thus writing your statements in a more logical order.

So for example, this:
let testInts = [1;2;3;4;5;]
let evenInts = List.filter (fun i -> i % 2 = 0) testInts
Can be re-written as this:
let testInts = [1;2;3;4;5;]
let evenInts = testInts |> List.filter (fun i -> i % 2 = 0)
This example is trivial of course, but where this really starts to shine is when you can effectively describe an entire program in one chain of function calls:
let oddPrimes =
    allNumbers
    |> filterOdds
    |> filterPrimes
Basically what's happening here is that the value on the left of the pipe forward operator is being passed as the last parameter to the function on the right. In the first example, List.filter takes two parameters. The first is a function, and the second is a list. You'll find that all the functions in F# modules are structured so that the parameter most likely to be passed down a chain like this is defined as the last parameter.
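Here's a small runnable version of that kind of chain. The filterOdds and filterPrimes above were stand-ins, so I've substituted a naive trial-division isPrime of my own:

```fsharp
// naive primality check, fine for tiny inputs
let isPrime n =
    n > 1 && seq { 2 .. n - 1 } |> Seq.forall (fun d -> n % d <> 0)

// each |> feeds the list on the left in as List.filter's last argument
let oddPrimes =
    [1 .. 30]
    |> List.filter (fun i -> i % 2 = 1)
    |> List.filter isPrime

printfn "%A" oddPrimes   // [3; 5; 7; 11; 13; 17; 19; 23; 29]
```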

At first I didn't think this was mind blowing at all. It just looked like a simple compiler trick. But then I learned this isn't in the compiler at all. In fact, |> is just a single line F# function. Its definition is nothing more than:
let (|>) x f = f x

F# also has a pipe backward operator:
let evenInts = List.filter (fun i -> i % 2 = 0) <| testInts
And it is defined as:
let (<|) f x = f x
If you've followed along this far, I hope you are boggling your mind as to what possible purpose this could serve! I know I was. The answer comes from precedence: function application binds more tightly than operators in F#, so an expression passed as an argument has to be wrapped in parentheses. For example:
let output = sprintf "the result is: %d" (2 + 2)
Those parentheses are such a drag. But <| has lower precedence than +, so we can rewrite this without them as:
let output = sprintf "the result is: %d" <| 2 + 2

Monday, December 3, 2012

Stratified Design

The last post ended by presenting a style of OO in which the objects only exposed operations which communicated via data classes.  We arrived at this design by thinking more deeply about encapsulation.  I asserted that there were a number of benefits to an object structured this way, but promised to also talk about the architectural benefits of applying this practice throughout your code.

A Stratified Design means writing all of our objects in this behavior-only style, and passing data classes between them.  There are lots of detailed decisions to be made around what exactly those data classes should contain, but for now I'm going to stay at a higher level.

You may be thinking to yourself, "Hey!  Those are just layers!  I've been duped, there's nothing novel or interesting in this except a fancier name!"   While what I'm advocating here is very similar to the traditional layered architecture there are some very critical differences.
  1. Some definitions of layers include a restriction that lower layers may not reference higher layers.  When applied to domain modeling this tends to lead to ridiculous restrictions where your data layer is not allowed to return domain objects because the domain is a higher level concern than the database.  A stratified design agrees that lower strata should not use behaviors of higher strata, but that doesn't mean they can't share the same data classes!
  2. Layered architectures usually prescribe an exact number of layers for specific purposes.  In a stratified architecture, there aren't any consistently named layers.  Instead, there's just a series of classes calling into each other as needed.  Each of those classes is defined at some level of abstraction, and calls into its dependent layers as needed.  And that's as prescriptive as it gets.
There are a lot of awesome things that follow from having objects calling other objects in this way:
  • Decoupled: clean interfaces communicating with data is about as decoupled as you can get (and therefore insanely easy to unit test!)
  • Simple, not complected: each object knows only about the interface of its dependencies.  And it accepts small data classes, and outputs small data classes.  There's no static or global knowledge, no god objects.  And each object represents one concept defined at a consistent level of abstraction.
  • Behavior only where needed: the simplest example is you will only be passing small data classes into your view, not ActiveRecord objects (which expose query and data persistence behavior).
  • Somewhat side-effect free: not entirely side-effect free, but because there is little to no shared state, it's difficult to be surprised by a side effect.
  • Intuitive: If you do a good job separating your levels of abstraction, you will find that when you are looking for something, it's always right where you expect it.  Or if it's not right there, it's one explicit function call away.  Contrast this with OO designs that are riddled with inheritance and misapplied strategy and state patterns...  Or compare it to "light weight" Active Record based designs where logic might be in an AR hook, or might be in a service class, or might be in a controller...
The inspiration for this Stratified Design came from a number of different sources.  But the primary ones were:
  1. Rich Hickey's Simple Made Easy and The Value of Values
  2. Bob Martin's Architecture the Lost Years
  3. David West's book Object Thinking (which I reviewed before)
These don't spell this out exactly, but they contributed certain concepts.  And as always, remember that "architecture" is dangerous.  Lots of people might be excited about CQRS, but that doesn't mean it should be applied to a mostly read only content management site.  And Rails might be an efficient platform for quickly building a web app, but that doesn't mean it's right for building a space shuttle control panel.  And the same goes for Stratified Design.  The architecture of your application should reflect the nature of your application.

But that said, if you give Stratified Design (or something similar) a try, I'd love to hear about your experiences with it!

Wednesday, November 14, 2012

Encapsulation: You're doing it wrong

In the last post, I investigated just what the devil encapsulation actually is.  I may not have answered that question, but I did decide that whatever it means, there's a subtle but important distinction to be made around encapsulating "data".  The example that launched that distinction was a Queue which stores data from the caller in some encapsulated implementing data structure.  Notice the distinction between the caller's data, and the Queue's implementation.

One way of approaching a new OO design in a "business" environment is to ask, what data do I have?  Then create a "model", and add a property for each data element.  C#'s { get; set; } properties highly encourage this, and ORM and ActiveRecord tools require it.  So now we have little data classes, structures basically.  But we know that we're not doing OO unless we're doing encapsulation, and that means we need some methods!  So we add some methods to our little data classes that usually either modify that data in some way, or perform some calculations with it.

But what is this class encapsulating?  All the data is fully exposed, and the methods are restricted to simple operations on the same data.  Clearly it's trying to represent something, but we started from some data which more than likely corresponds directly to a database table.  So what is it representing?  At best, one data thing.  And what is it encapsulating?  Some logic about that data.

But looking at this again from the perspective of encapsulation as bundling implementation details instead of data, we could go a different route.  When thinking about a Queue, I don't think about its internal implementing data structure.  I think about the operations I want it to perform for me.  So instead of asking "what data do I have?", "what operations do I need to perform?" could be a better starting point.

What if all the properties were moved off the object onto their own little class, or structure, or, in F#, a record?  The original object would then be left with operations only.  And one of those operations would have to be getting the data, and that would just be a simple method that returned the little data class/structure/record.  This class is encapsulating the implementation details of those operations you decided you needed to perform!  And just like the Queue, there is now a clear distinction between the caller's data and the class's implementation.
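Sketching that split in F# (all names invented for illustration): the data is a plain record, and the operations-only object becomes a module of functions that take the record in and hand it back out:

```fsharp
open System

// The caller's data: a dumb record, nothing hidden here
type TaskInfo = { Name: string; AssignedToUserId: int; DueOn: DateTime }

// The operations: how these are carried out is the encapsulated part
module TaskOperations =
    let create name userId =
        { Name = name; AssignedToUserId = userId; DueOn = DateTime.Today.AddDays(1.0) }

    let reassign userId task =
        { task with AssignedToUserId = userId }

let t = TaskOperations.create "write post" 42 |> TaskOperations.reassign 7
printfn "%d" t.AssignedToUserId   // 7
```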

A number of interesting benefits follow from this:
  • Enables a coarser grained interface, which is especially useful for data access.  You gain the control to define operations to retrieve as little or as much data as you need.
  • Designing around encapsulating implementation details leads to objects that are well defined with intuitive behaviors and clear purpose.  Ultimately that means it's easier to find the behavior you want, and extend behavior when needed.
  • The resulting clean behavioral interface, passing and returning data, immediately results in simple and flexible decoupling, which is great for unit testing.
And these are just the benefits realized at the level of just the one class we modified.  In the next post I want to look at what happens when this architecture is applied throughout, in what I call a stratified design.

Tuesday, November 13, 2012

Encapsulation: What the devil is it?

I love the word 'Encapsulation.'  It's a big fancy word and I feel smart when I use it.  Unfortunately, I'm not really sure what it means, and neither is Wikipedia.  "Encapsulation is to hide the variables or something inside a class."  I lol'd when I read "or something," what a specific definition!  So, what the devil is it?

The most naive OO definition might be:
A language feature that bundles data and methods together.
You might extend that to say that it hides the data from public consumption, but that part muddies up the water, as the Wikipedia article demonstrates.  My favorite example of Encapsulation is a Queue class.  You get push, pop, and peek operations to call, but you don't know what data structure the Queue uses to implement those operations.  It could be an array, it could be a linked list, whatever.  In this we can easily see the beauty and the power of encapsulation: "data" and "methods" together.
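To make that concrete, here's a minimal sketch of such a Queue in F# (the LinkedList backing is just one arbitrary implementation choice):

```fsharp
open System.Collections.Generic

// Callers see only Push/Pop/Peek; the backing LinkedList is the
// encapsulated implementation detail, swappable for an array etc.
type StringQueue() =
    let items = LinkedList<string>()
    member this.Push(s: string) = items.AddLast(s) |> ignore
    member this.Pop() =
        let head = items.First.Value
        items.RemoveFirst()
        head
    member this.Peek() = items.First.Value

let q = StringQueue()
q.Push "encapsulation"
q.Push "is"
q.Push "about"
q.Push "data"
printfn "%s" (q.Pop())   // encapsulation
```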

But wait, what did I actually encapsulate in that queue?  Was it the "data"?  I pass my data into and out of the Queue, and the Queue hides its implementing data structure from me.  Maybe it's a Queue<string>, and I'm all: q.Push("encapsulation"); q.Push("is"); q.Push("about"); q.Push("data"); Assert.AreEqual("encapsulation", q.Pop());  For me the data are those strings, but those strings are clearly not what the Queue is encapsulating!

That word "data" in our definition is a tricky one.  It can be applied to too many things to be really useful.  But does replacing "data" with "data structure" in the definition fix the problem?
A language feature that bundles data structures and methods together.
It clears it up, but it introduces another problem.  For example, what if my object is a database gateway?  Certainly there's a data structure somewhere in that database, but my object isn't directly encapsulating that!  No, it's probably "encapsulating" ADO.NET procedural calls, or some other data access library.  The procedural calls are neither data nor data structure...  So could it be that thinking about data is completely misleading?
A language feature that bundles implementation details and methods together.
This is a rather large step though!  Instead of just talking about data, or data structures, this now includes just about anything in the definition of things that can be bundled with methods to achieve encapsulation!  Maybe there is some value in restricting what the word "encapsulation" applies to, but if there is, I doubt it's something that is going to prove useful for Software Engineers.  So while I admit this definition could be a perversion of the word "encapsulation," I find it more useful.

The other definition Wikipedia gives for encapsulation, which I've neglected to tell you about until now, is "A language mechanism for restricting access to some of the object's components."  This is more similar to the definition I just ended on.  I take some issue with the word "restricting" and the word "components" is ambiguous enough to be a problem.  But I don't think it's a stretch to think "components" could include both data and dependencies.

So, perhaps we've arrived at a better understanding of encapsulation.  One that recognizes that data is not the all important concept.  The next step I'd like to take is to extend this slightly deeper understanding outside the realm of data structures and into more typical "business" scenarios.  That will be the next post.

Thursday, September 13, 2012

I'm Not Trendy

In October of 2011 we had a Burning River Developers meetup that I unfortunately missed (because I was on a plane returning from Europe!).  I learned from people who attended that during his presentation Dan Shultz said that I am "not trendy".  I guess he was saying something about how Knockout is not as trendy as Backbone, and that I'd appreciate that.

Here's the thing, he's right!
When I bought my mac, it was probably one of the few times I allowed the popularity and trendiness of something to sway my decision.  And I've given it lots of time and lots of patience, but last night I finally admitted that OSX is rubbish.

  • Everything about Finder is broken.  EVERYTHING.
  • Window management is garbage.  How do you minimize and restore w/ out the mouse? (I know about command-H, it's no good either)
  • The only app I like is chrome.  Guess what, it runs on every OS!
  • is sluggish and ugly
  • iCal can't sync correctly with google
  • iPhoto doesn't scale and doesn't organize in a way I find useful
  • iTunes music file management is totally inflexible, to the point of unusable
  • The dock is dramatically inferior to Windows 7's task bar
  • Lion is slow to boot, and has worse battery life
  • I see no compelling reason to upgrade to Mountain Lion at all (except to remove the stupid skeuomorphic stitch graphic from iCal, which as mentioned above, I don't use anyway)

The only thing I like is the swipe motion to switch spaces.  But the only time I use that is when I'm running windows in a VM, so if I was in windows I WOULDN'T NEED IT.  The only other rare time I use it is to maximize a window, which just goes back to how crappy window management is on OSX.

I thought I might like that it was a unix, but the truth is it's a crappy incompatible unix (and it doesn't even have a decent package manager that doesn't break across upgrades).  And any time I've wanted to do something unix-y, it's just been a headache.  

The hardware is pretty though!  And the trackpad is the best I've ever used, so it's certainly got that.  Also the keyboard is pretty great.  So bully to the hardware.  But, I'm being honest here, that's all I can give it.

So why is the mac so popular right now?  Just because it's popular.

Even as I write this, I'm aware of how unpopular this opinion is.  But it's just my opinion, and you should be skeptical of it.  Especially because like Dan said, I'm not trendy.  

Everyone likes Dynamic Languages?  I prefer static and functional.
Everyone's excited about Node?  Looks like an immature waste of time to me.
Backbone is all the rage?  I'll stick with Knockout, thanks very much.
IPhone?  "Meh" at best, I'm happier with Android.
Pair programming?  Code reviews.
Startups want to cash out quick?  Nonsense, build a sustainable business that tackles hard problems and makes an actual difference!
Pop music? Techno? Dubstep?  It's jazz for me.  And even within jazz, I don't like bee-bop (arguably the most popular right now), so I'm totally fringe there too.

This is totally a rant, and if I have a point at all, it's this:  1) I'm not trendy 2) OSX is garbage.  And I guess my other point is just that it's OK to not get caught up in the trends.  You don't have to join the Lemmings, you're allowed to have your own taste and opinions!

I reserve the right to change my mind about all opinions contained in this blog post at any time without notice.

Thursday, August 30, 2012

Opinionated
Conceitedly assertive and dogmatic in one's opinions.

Excessively proud of oneself; vain.*

I'm not sure who first used the word "opinionated" to describe a software framework in a positive light, but I know the place I encounter it most often is in reference to Rails.  The good news is, I bet DHH fully understands the meaning of these words, and is still more than happy to identify by them.  At least he knows what he's doing.  But I still think this is a stupid thing to be proud of.

Unfortunately, writing "opinionated" code has since just become the thing to do.  And I don't think most people who have jumped on that bandwagon bothered to look it up in the dictionary first.  As a result, I now have the impression that when someone says their framework is opinionated what it really means is they're claiming it's general purpose, but it really only works in a very specific scenario, and they don't even understand that other scenarios exist.

Imagine you're walking down the aisles at Home Depot looking for a tool to help you complete a job.  Maybe you need a drill bit extension to reach into a tight corner.  You have a certain kind of drill that accepts standard shaped bits, and your corner has certain dimensions, and you need to set a certain diameter screw.  A helpful sales associate comes up and asks if he can help.  You give him a quick high level summary, and he smiles knowingly.

"What you need is the XYZ fixed flexible dongle attachment!  It's the only choice.  All the other options are total bullshit, I can't believe we even stock them."  He's kind of starting to foam at the mouth now...
"Believe me, this is the one."

OK, you think.  It costs 3x as much as the other items on the shelf, but this guy clearly knows what he's talking about.  You get it home and come to find, it doesn't fit your drill because it has a non-standard bit shape, it doesn't fit your bit because it's built for larger bits, and it's too short to reach into the corner anyway.  And what does the guy say when you take it back to return it?

"Oh, well, this is an opinionated drill bit extension, it's not meant for your job."

Code that does one well understood, well defined thing, is exactly what I want.  But misunderstanding that well defined thing and advertising the code as the solution for everything is stupid.  And being dogmatic and conceited about it isn't helping anyone.

* Definitions from Google

Sunday, August 19, 2012

Blogs are Little Islands

You are not blogging enough. You are pouring your words into increasingly closed and often walled gardens. You are giving control - and sometimes ownership - of your content to social media companies that will SURELY fail.  - Scott Hanselman, Your words are wasted
I enjoy blogging.  I've been doing it since April of 2007 (that's 5 years at the time of this writing!).  For me, it's a great way to work through problems and ideas.  It's kind of a "learning out loud" thing.  And lately, it's been just a way to have some fun with writing.  That's why I'm still doing it, but it's not why I originally started doing it.

I first got into blogging because I wanted to be a part of the community of tech people who were on the interwebs learning from each other and arguing with each other.  That didn't happen, because it turns out a BLOG is not a community.

Blogs are little islands, owned by little dictators.  They've all got large towers built right in the center with megaphones mounted on top, and they're shouting out to sea.

There's this weird aggregator of shouts out there somewhere, we call it Google.  It archives your shouts, so people searching for a solution to a problem can have a chance of finding the echo of something you yelled long ago.  Of course, that echo has been bouncing around for awhile, and it's probably not terribly accurate any more.  Because of that, we don't shout solutions to problems any more, we do that on StackOverflow now.

But we're all still shouting, so it must be because we want someone to shout back.  But even if you hear my shout, and even if you bother to shout back, the chances I'll hear it are slim.

So instead, maybe you'll fly by my island and drop a leaflet on the beach.  I might pay attention to that, and if I do, I'll leave a leaflet for you on my beach in response.  But you'll never see it, it's my beach and you're not there.

If we're really going to talk, you'll have to drive your boat over to my island and stay awhile.  But what a big decision that is for you!  Why would you spend your tourist dollars on my island when there are so many other islands to choose from?  And some of them are much larger, and have many more tourists!

Every island starts out abandoned, with just a lone dictator.  If that dictator is willing to shill for tourists through aggressive marketing, he might attract a bit of a crowd.  But the dictator will still be the dictator and the tourists just tourists.  That's not a great format for interesting conversation...

The little islands model just isn't conducive to building community and having great conversations.  Twitter isn't either, but for different reasons.  And Facebook?  Well it's Facebook.  G+?  *cricket cricket*.  If there's a solution to this problem I don't know what it is.  But I'm pretty sure the solution isn't blogs.

Steve Jobs on Experience

"A lot of people in our industry haven’t had very diverse experiences. They don’t have enough dots to connect, and they end up with very linear solutions, without a broad perspective on the problem. The broader one’s understanding of the human experience, the better designs we will have.” - Steve Jobs, Wired, February, 1996

Monday, August 6, 2012


I was thinking about chores today.  Maybe a strange thing to spend much time thinking about, sometimes your mind wanders to weird stuff when you're not paying attention.  Anyway, I was thinking about chores; specifically household chores.  You know, stuff like:
  1. Vacuuming
  2. Doing the dishes
  3. Unpacking the groceries
  4. Taking out the trash
  5. Putting stuff away
These chores kind of fall into a few different categories:
  1. Regularly recurring (trash)
  2. Sporadically recurring (groceries, dishes)
  3. Uncompelled recurring (vacuuming, putting stuff away)
With regularly recurring you have to take out the trash on trash day, every trash day, on the same day every week.  Sporadically recurring is, well, sporadic.  Some nights there aren't any dishes to do.  But when there are dishes to do, those dishes have to be cleaned.

But uncompelled recurring is different.  There's no fixed external requirement that forces you to vacuum the floors, or clean all the stuff off the coffee table.  You could do these things on a regular schedule, but that would simply be your option, it's not an innate requirement.  And unlike the sporadically recurring chores, the line at which the chore must be done (the dishes are dirty) is not as clear (the floor is dirty?).  How dirty does the floor have to be before I *have to* vacuum it?  How much stuff must be laying on the coffee table before I clear it off?

I'd like to illustrate another interesting thing about the uncompelled recurring category with a story.  When I was growing up, my dad used to harp on my brother and me about putting stuff away after we'd used it, especially tools.  This was one of those classic dad things: he NEVER put his tools away, but he'd be on our case to clean up our stuff.  One day we'd just finished some project around the house, and he was going into the whole "let's get this cleaned up" routine, but then he did something different than usual and explained why he was on our case about it.  He said that he personally had the bad habit of leaving stuff out, which not only meant stuff was cluttered but also meant he could never find anything when he needed it, and he hoped he could instill in us a better habit, which he wished he had himself, of keeping everything in its right place, so we wouldn't have the same trouble.  And to this day, I'm pretty fastidious about putting stuff back where it belongs, especially tools.

I take a couple of things away from that.  One is, explaining your motivation can be a more persuasive and effective method than just telling people to do something.  But the one that's relevant to this discussion is that with uncompelled recurring chores, you don't have to wait for the chore to pile up and do it all at once, you can proactively do a little bit of the chore a lot.

Can you believe I just wrote a whole blog post about chores?  Ridiculous!  But my point is really simple, keeping code clean is a lot like keeping a house clean.  It's a chore, and different parts of it may fall into the same categories.  But I think it's clear that most code related chores are of the uncompelled recurring category.  That means there is no clear event at which the chore must be done (like trash day).  And there is no obvious state which forces your hand (like dirty dishes).  Which means it's all discipline.  But also means you can do a little bit all the time and stay fairly well on top of it.

To be honest, I think this is a better metaphor than "technical debt."  Which is really too bad, because I hate doing chores.

Monday, July 2, 2012

The Illusion Of Simplicity

If there is one thing I've learned in my 11 years (holy crap 11 years?!) of developing software professionally it's that nothing is simple, not even the simple things.

I've come to understand that a big part of the reason for this has to do with the way our brains work.  We're capable of holding different and inconsistent mental models of the world in our heads at the same time, switching back and forth between them, and we're not even aware of it.  This is why when you ask a client, "how do you do <thing>?", they say "we do x, y, z", but later you find out that what they actually do is more like "if a then x, y, z; if b then y, z; if c then x, z; if d then w".

Sometimes we think our job is to discover and implement this complexity.  But our job is actually more than that.  Yes, we need to discover and model all that complexity, but our most important job is to then hide it away behind a simple facade, maintaining our user's illusion of simplicity.

And lest you blame this on stupid users, we suffer from the same problem when implementing algorithms!  At least I do.  This is one of the things I struggled with in solving the Word Ladder problem.  I kept oversimplifying the problem, which presented itself when my attempted solutions ran into some condition they didn't account for.  Actually, I was lucky those solutions didn't work.  Sometimes a solution that doesn't really understand the problem does work, but as the problem changes over time, that solution rapidly degrades.  It's the same issue: I allowed my illusion of simplicity to cloud the true depths of the problem.

Embrace the user's illusion of simplicity!  Fight your own!

Monday, June 25, 2012

Language Envy

At work, I build software in C#.  At home I play with languages like Ruby and F#.  I still believe C# is an amazing language.  It has seen enormous advances in .NET 3.5, 4, and the upcoming 4.5.  And I suspect a lot of the people who claim they don't like it probably haven't used it since 2.0.

But for how wonderful it is, there are still lots of great things about some of the other language paradigms out there, and I definitely have a bit of language envy for some of their features.  I'll list some of these features, in context with why I think they're useful.  And though I'm not a language designer, I'll also mention how I could see C# accomplishing some of this.  Eric Lippert eat your heart out!

Dynamic Envy:
Constructor Stubbing
I'm a big believer in TDD, which requires object mocking.  But mocking in a static language can be annoying because it requires:
  1. Defining an interface for every class that must be mocked
  2. Injecting an instance of the interface into the objects that use them
This is why IoC is so popular.  And while this is annoying (I've written enough about this in the past), it works.  One of the most painful things about this, at least for me, is that you can't call a constructor, which precludes simple code like: var thing = new Thing(otherThing);  Instead, a factory class has to be introduced: IThingFactory.New(otherThing) : IThing.  And you inject an instance of IThingFactory into the class that wanted to call Thing's constructor.  UGH!

But in a dynamic language, none of this is necessary.  Anything can be mocked, and the most paradigm shifting example of this is mocking the constructor of a class so it returns a stub!
MyClass.Stub(:new) {...}
Something like this can be accomplished in .NET through IL re-writing, as seen in the new Fakes (previously Moles) from MS in what they call Shims.  But I haven't used this yet, because when it was Moles, it required the code to be run in an "instrumented" process, which couldn't be accomplished with the NUnit GUI.  I think this is still true in Fakes.  But I hope this Fakes framework keeps getting some attention, because this is just what I've always wanted:  To be able to tell my runtime, "Instead of the class name "MyClass" actually meaning "MyClass", I want it to mean "MyStubClass" for the duration of this test."
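To show the idea without any mocking library at all, here's a hand-rolled Ruby sketch (EmailGateway and FakeGateway are hypothetical classes invented for the example): redefining new on the class itself means every caller transparently receives the stub, with no interface or factory in sight.

```ruby
class EmailGateway
  def deliver(message)
    raise "would send real email!"   # the behavior we want to avoid in tests
  end
end

class FakeGateway
  attr_reader :sent

  def initialize
    @sent = []
  end

  def deliver(message)
    @sent << message                 # record instead of sending
  end
end

# Stub the constructor itself: code under test that calls EmailGateway.new
# now receives the fake, and it never knows the difference.
fake = FakeGateway.new
EmailGateway.define_singleton_method(:new) { fake }

gateway = EmailGateway.new
gateway.deliver("hello")
puts gateway.sent.inspect            # prints ["hello"]
```

A real test framework would also restore the original constructor afterwards; the point here is just that the language lets you intercept new at all.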

Partial "Compilation"
Sometimes when doing TDD or refactoring you'll want to change the public API of a class.  Maybe by changing a method name, or the parameters a method takes, etc.  If you have many calls to that method, this will cause lots of compiler errors in a static language.  And this makes it impossible to TDD out the API change without fixing all the calls first.

Dynamic languages don't suffer from this problem because they don't have a compilation step.  Only the files you load at execution time are interpreted, and only the lines that are actually executed will fail due to API signature issues.  This can be a blessing or a curse, but in the example I gave above it's a blessing.

I would love to be able to tell my compiler, "I'm only going to be executing the tests in this one file, so just compile that file and it's dependencies and leave everything else alone, K? Thx!"

Not only would that allow me to update my APIs calls one at a time, running tests along the way, but it might also speed up the code -> compile -> test loop!

Sentinel Values
Gary Bernhardt used this technique in the Sucks/Rocks series in Destroy All Software.  It's a technique for dealing with null, but it's not the same as the Null Object pattern.

As an example, lets say you were implementing a solution to the Word Ladder problem.  What should the "Ladder FindLadder(string startword, string endword)" method return when it doesn't find a ladder?  In C#, it would return null.  The only allowable values for an object of type "Ladder" are a Ladder instance or null.

Since a dynamic language resolves types at runtime, the method doesn't have a declared return type, so you can return anything you want.  The Sentinel Value technique takes advantage of this: instead of returning nil, it returns an instance of a NoLadder class.  NoLadder is an empty class with no methods, fields, or properties.  How is this different than returning nil?  It's different in the exception you'll get.  Instead of "NoMethodError: undefined method `first' for nil:NilClass" you'll get "NoMethodError: undefined method `first' for #<NoLadder:0x000001010652f0>".

That's awesome!  It says right there that your problem is you're holding a NoLadder.  And NoLadder only comes from one place in your code, so you know exactly and immediately what the problem is.  Contrast that with a null reference exception, which could come from anywhere.
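A minimal Ruby sketch of the technique (find_ladder here is a hypothetical stand-in that always fails to find a ladder): the sentinel is nothing but an empty class, yet its name shows up in the error.

```ruby
class NoLadder; end   # empty sentinel class: no methods, no data, just a name

# Hypothetical search that comes up empty; a C# version would return null here.
def find_ladder(start_word, end_word)
  NoLadder.new
end

ladder = find_ladder("nice", "mile")
begin
  ladder.first        # blows up, but the message names NoLadder, not nil
rescue NoMethodError => e
  puts e.message
end
```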

In a static language we can approximate this with the Null Object pattern by creating a Ladder singleton instance called NoLadder.  But this is not the same thing.  The Null Object pattern usually returns an instance which wouldn't cause an exception, but instead would do nothing.  Personally, I've always found this a bit confusing and scary, especially if the object returned would normally have behavior.  The other major difference, is there may be times where the sentinel is very specific to a given function,  and defining a null object on your class for just one little function isn't very cohesive.

In C#, null is not an object like it is in Ruby.  But if it WAS, maybe we could do something like:
public class NoLadder : Null { }
Then we could return "new NoLadder()" in place of null.  Crazy I know, but the ability to put a name on null would be huge!

Metaprogramming
Just look at ActiveRecord and you'll see the amazing power of Metaprogramming in Ruby.  C# has reflection, but it can't come close to Ruby's ability to generate types at runtime.  Ruby style metaprogramming can't exist in a static language, so I think what I really want instead is Macros.
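A tiny taste of what that looks like in Ruby (Record and its columns macro are made up for the example, not real ActiveRecord): methods are generated at runtime from a list of names.

```ruby
class Record
  # A class-level "macro": for each column name, generate a reader and
  # a writer method on the fly with define_method.
  def self.columns(*names)
    names.each do |name|
      define_method(name) { @attrs[name] }
      define_method("#{name}=") { |value| @attrs[name] = value }
    end
  end

  def initialize
    @attrs = {}
  end
end

class Task < Record
  columns :name, :due_on   # accessors appear without being written by hand
end

t = Task.new
t.name = "write blog post"
puts t.name                # prints "write blog post"
```

ActiveRecord goes further and reads the names from the database schema, but the mechanism is the same.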

Or if not full fledged Macros, then F#'s type providers.  If you haven't seen these yet, you should totally take a look.  They're amazing!  They generate types at compile time.  That may not seem too exciting, until you see the IDE integration...  They generate types AS YOU TYPE!  It's very cool, and you never need to re-generate a generated code file or anything, it's totally seamless.  Like ActiveRecord, but with compile time types!

Functional Envy:
No Nulls
What's the most common exception you encounter in a static language like C# or Java?  NullReferenceException.

F# does have null, but only for interoperating with .NET.  F# code itself doesn't allow null; bindings must have a value at all times.  This is accomplished with option types, which are similar in spirit to .NET's Nullable<T>, though options work for any type, not just value types.  So with enough discipline, you could approximate this in C#.  But I think it would be awesome if you could turn on a compiler switch in C# to disallow null values.

I should point out that this is a much more strict form of the Sentinel Values mentioned above.  Sentinel Values still blow up when you hit them, they just make it easy to understand why.  No Nulls prevents the blowup entirely.  Between the two, I think I'd go for No Nulls because it completely eliminates an entire class of programmer error.  However, the cost is some more syntax noise.  Nullables aren't pretty:
var ladder = FindLadder("nice", "mile");
if (ladder.HasValue)
Sentinels would be "nicer" and "more elegant" and more "aesthetically pleasing" in cases where the null is a rare degenerate case.
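Option types can be approximated in any object-oriented language.  Here's a minimal Ruby sketch (Some, None, and this find_ladder are all hypothetical) where the caller has to spell out the empty case up front, so there's no nil to trip over later:

```ruby
class Some
  def initialize(value)
    @value = value
  end

  def value_or(default)
    @value              # a real value: the default is ignored
  end
end

class None
  def value_or(default)
    default             # no value: the caller's fallback is used
  end
end

# Hypothetical search that fails; it returns None, never nil.
def find_ladder(start_word, end_word)
  None.new
end

puts find_ladder("nice", "mile").value_or("no ladder found")
```

The price, as with Nullable, is extra ceremony at every call site; the payoff is that forgetting the empty case becomes a visible omission instead of a latent crash.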

Also worth noting is that while the compiler doesn't exactly have that "no nulls" switch I mentioned, you can approximate this with Code Contracts and static checking.  I haven't tried this yet, but it looks pretty useful and it's on my list.

Immutability
After NullReferenceException, I'd wager the next most common programmer error stems from side effects: "How did the value of this variable change?!"  As I've been studying and practicing functional programming I've been stunned by how ingrained mutable side-effect programming is in my head.  Simply put, I think that way, and thinking any other way is very difficult.

I'm not convinced yet that immutability is better for all problems in all places, but I am convinced that it's a less bug prone way to develop.  So I wish C# had widely used immutable data structures.

Of course, what this really means is you're writing recursion instead of loops.  I find recursion to be more declarative, especially when combined with pattern matching, which I like.  But I also find it requires more thinking to understand, which makes it "harder".
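A small sketch of that trade in Ruby: summing a frozen list recursively, threading the running state through the calls instead of mutating an accumulator.

```ruby
# The list is frozen: no code can mutate it, so the sum has to be built
# by recursion rather than by updating a loop variable in place.
NUMBERS = [1, 2, 3, 4].freeze

def sum(list)
  return 0 if list.empty?
  head, *tail = list
  head + sum(tail)       # each call returns a new value; nothing is reassigned
end

puts sum(NUMBERS)        # prints 10
```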

Pattern Matching
I don't think I really need to say much here.  Pattern Matching is just completely and totally awesome.  It requires dramatically less code, is much easier to understand and read, and just generally rules.  However, it does require more advanced language-integrated data structures.
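Ruby 3's case/in gives a flavor of it (a sketch with made-up result shapes): one construct both tests the shape of the data and destructures it.

```ruby
def describe(result)
  case result
  in [:ok, []]                  # matches an exact empty payload
    "ok, but empty"
  in [:ok, [first, *rest]]      # destructures the payload in the pattern itself
    "ok, starts with #{first} (#{rest.length} more)"
  in [:error, message]
    "failed: #{message}"
  end
end

puts describe([:ok, ["cat", "cot", "cog", "dog"]])  # ok, starts with cat (3 more)
puts describe([:error, "no ladder found"])          # failed: no ladder found
```

The if/else-and-indexing version of describe would be twice as long and say half as much.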

Advanced Integrated Data Structures
Can we get a freaking dictionary syntax?!

Admittedly, F# doesn't have one either, but F# DOES have a beautiful list and tuple syntax, which can be used to easily get you a map:
let d = [1, 'a'; 2, 'b'; 3, 'c'; 4, 'd']
let m = Map.ofSeq d
I desperately wish I had nice syntax for basic data types like this.  This is one of the things I love about powershell too!  And when you have pattern matching that also understands this syntax, that's an amazingly expressive and powerful mix!
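Ruby happens to have both pieces: a literal map syntax and pattern matching that understands it (a small sketch with a made-up config hash).

```ruby
config = { host: "localhost", port: 8080 }

# The hash pattern both checks the value types and binds the values.
message = case config
          in { host: String => h, port: Integer => p }
            "connecting to #{h}:#{p}"
          else
            "bad config"
          end

puts message   # prints "connecting to localhost:8080"
```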

So there's a sampling of some of my language envy.  I'm sure there are others I forgot to include, and I bet you have yours too, so leave 'em in the comments, or on your blog, or tweet at me!

Wednesday, June 20, 2012

Don't trust your instincts

Ours is a young discipline with no rules or laws except the ones we choose for ourselves.  We are still living in the wild wild west of software development.  There is no exam to pass to become a software developer.  There are no standard evaluation procedures, or checks and balances.  If you can make your software execute, you can install it, host it, and sell it.  This is one of the things I find very attractive about our industry, but also occasionally frustrating.

There have been many attempts to standardize engineering in the form of processes and methodologies.  But process is only a small -- and very uninteresting -- portion of building software.  Code itself is far more challenging, interesting, and diverse.  But it is very lacking in recognized rules or techniques or principles or even just ideas.  Certainly there are some code principles and practices, but how successful or accepted are they?

It's hard to get a true sense of the opinions and practices of our industry, but there is clearly a very vocal minority that eschews "software engineering" practices in favor of a loosely defined aesthetic.  I'll use "software engineering" as a label for structured principles, patterns, and practices.  For example, consider the Gang of Four's design patterns, or Bob Martin's SOLID principles.  But the vocal minority, which seems to me at least to be getting increasingly vocal these days, would argue these concepts (patterns and principles) are more harmful than helpful.  That a better approach is to simply take the time to feel the pain in your code, and adjust, rewrite, and refactor as needed.

A really solid example of this argument being made can be heard in this Ruby Rogues podcast interview of DHH.  If you stick with it, the conversation covers a lot of really interesting topics including how DHH applies this thinking to rails and basecamp, YAGNI, thoughts on education and the necessity of stubbing your toe to learn, and more (Thanks to Lee Muro for referring me to that podcast).

I agree that stubbing your toe is a good teacher, but I don't think it's the only way to learn.  I agree that abstract concepts are easy to over use and misapply, especially after first learning about them, but I don't think that's inevitable.  While I find the refactoring and continuous learning part of this attitude very pragmatic, there is one element I do disagree with: the idea that we don't need abstract rules and principles and guidance and science.  That all we need is our sense of aesthetic.  The idea that by simply looking at some code, maybe comparing it to a different version, you can derive an intuitive understanding of which code is better.

I don't buy this, because I don't think that's how humans work, as outlined by Malcolm Gladwell's book Blink and this article by Jonah Lehrer.  I recommend them both, but if you're short on time, just read the Jonah Lehrer article as it's short and the most directly relevant.

Blink is all about the influence our subconscious mind has on us.  We like to think that we are rational and in full conscious control of what we do and what we think.  But Blink has plenty of research to prove that this simply is not the case.  We depend on our subconscious to make snap decisions and influence our general mood and thoughts much more than we realize.  And Blink goes to great lengths to present the fact that this can be both very powerful and harmful.  Your mind is capable of "thin slicing" a situation, pulling out many relevant factors from all the thousands of details, and coming to a conclusion based on those details.  But, not surprisingly, you need both extensive practice AND exposure to all the needed factors for this to work.  And it's worth mentioning that even when it does work, your conscious mind may never understand what it was your unconscious did to come to its conclusion!

You might read that and think, "Experts can use their unconscious to recognize good and bad code, the vocal minority is right!"  I believe that is true, but only on a local level.  When you look at code, you are always drilled into the lowest level.  I think you could intuit a fair amount at this level, but it's the higher concepts that have the larger influence, and I'm not sure you can effectively thin slice that stuff.  Many of the concepts of good architecture are about higher level structure: low coupling, high cohesion, SRP, ISP, DRY.  But if I showed you one code file and asked you to tell me if the application suffered from coupling issues, you wouldn't be able to say.  And that's because you haven't been provided with enough information.  And without that information, how can you possibly thin slice your way to an intuitive understanding of good code?  I worry that a focus on "aesthetic" and "elegance" leans too heavily on this intuitive feel for code, and carries a serious risk of leading you down a path that feels smooth and easy, but ultimately leads straight into the woods.

But I would take this argument even further.  Jonah Lehrer's article tells a story of a psychology experiment that went something like this:  Study participants were shown two videos, each showing two different sized balls, one larger than the other, falling toward the ground.  In one video the balls hit the ground at the same time, and in the other the larger ball hit the ground first.  The participants were asked which video was a more accurate representation of gravity.

And the answer is: the video where they hit the ground at the same time is the correct one.  This is not intuitive; most of us would expect the larger ball to hit first.  So the way the world actually works comes as quite a surprise.  But where this gets interesting is in the second part of the study.  This time, the participants were all physics majors, who had studied this and learned the correct answer.  The participants' brains were being monitored with an fMRI machine, and what the researchers discovered is that in the non-physics majors a certain part of the brain was lighting up which is associated with detecting errors, the "Oh-shit! circuit" as Jonah calls it.  When they saw the video of the balls hitting the ground at the same time, their brains raised the bullshit flag.  So what was different with the physics majors that allowed them to get the right answer?
But it turned out that something interesting was happening inside their brains that allowed them to hold this belief. When they saw the scientifically correct video, blood flow increased to a part of the brain called the dorsolateral prefrontal cortex, or D.L.P.F.C. The D.L.P.F.C. is located just behind the forehead and is one of the last brain areas to develop in young adults. It plays a crucial role in suppressing so-called unwanted representations, getting rid of those thoughts that aren’t helpful or useful.
This other section of the brain allows us to override our intuitive primal expectations, the Oh-shit! circuit, and replace them with learned ones.  But in order for this circuit to work, you must have studied and learned the material!  Which requires that there be something to learn!

The connection to the aesthetic instinctive approach to software should be pretty clear.  If you shun what "science" our industry has to offer, however admittedly weak and young it may be, you're not training your brain to suppress the intuitive but worse-for-you-in-the-end code!

So I think it's important to be cautious when relying on your intuition and sense of aesthetic, especially in an industry as young as ours with so little widely accepted guidance.  We need to follow that pragmatic approach of continuing to learn, but at the same time we have to continue to question our intuition.  And just as important, we should take the science/engineering of our industry seriously, even while recognizing its limitations.

Software is hard, be careful how much you trust your instincts!

Monday, June 11, 2012

Word Ladder in F#

Anthony Coble sent me his solution to this in Haskell before his Haskell talk at Burning River Developers.  The problem was to find a Word Ladder between two words.  A word ladder is a chain of words, from a start word to an end word, in which each word in the chain is one letter different from the word before it and is a real word.  The goal is to find the shortest ladder between the start and end word.  It's a neat problem, made even neater by the fact that it was invented by Lewis Carroll!  So I couldn't resist giving it a try in F#.  And fortunately, I don't know Haskell so I couldn't understand Anthony's solution, so I had to figure it out all on my own.

Speaking of which, this is a fun problem.  So if you want to give it a try, you should stop reading here and come back after you've solved it!

I found this problem to be surprisingly difficult.  I kept coming up with ideas, but inevitably they were oversimplified and wouldn't work.  I was visualizing it as a tree.

It's an interesting problem because it has three different cases to consider.  It's normal for a problem to have two cases, like the first item in the list, and every other item.  But at least with the way I approached it, this problem had three cases: the first node (which has no parent node), the nodes in the first level of the tree (which all have the same parent), and every level after that (where there are many different parents).  This kept causing me to come up with ideas that only worked for the first level, or ideas that worked for one of those levels and not the others...

I knew that I needed a breadth-first search, but I was really struggling with how to implement it while also keeping track of the path to each node.  Usually a breadth-first search effectively just loops over all the nodes in each level, and then calls the next level recursively.  But I need to know what the parent of each node is, and what that parent's parent is, and so on up to the root.  My solution was to represent each node as a tuple containing the word of that node and the list forming its full path from the root to that node.  This is what I passed through the recursive calls, so I always knew the full path to each node.  This was simple, but has the downside of wasting memory, since it duplicates a significant portion of the parent's list on each node.

Another interesting element of this solution is that I prune words out of the tree that I've already seen, like the two red nodes in the picture above.  This means I don't have to worry about "infinite loops" (they're not really loops, but you get what I mean) in the graph, so I can search until I run out of nodes.  And it's safe, because I need to find the shortest ladder, so if I've already encountered a word earlier in the graph, that same word can't be part of the solution later in the graph.
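The approach translates to a rough Python sketch like this (the function names and the tiny word list here are just illustrative, not from my F# solution): each queue entry carries its full path from the root, and already-seen words are pruned as they're enqueued.

```python
from collections import deque

def neighbors(word, words):
    """All dictionary words exactly one letter different from `word`."""
    result = []
    for candidate in words:
        if len(candidate) == len(word):
            diffs = sum(1 for a, b in zip(word, candidate) if a != b)
            if diffs == 1:
                result.append(candidate)
    return result

def find_ladder(start, end, words):
    """Breadth-first search; each queue entry is the full path so far."""
    seen = {start}                 # prune words already placed in the tree
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == end:
            return path            # BFS guarantees this is a shortest ladder
        for child in neighbors(path[-1], words):
            if child not in seen:
                seen.add(child)
                queue.append(path + [child])
    return None                    # no ladder exists

words = {"cold", "cord", "card", "ward", "warm", "worm"}
print(find_ladder("cold", "warm", words))
# → ['cold', 'cord', 'card', 'ward', 'warm']
```

Because every queue entry is a complete path, there's no need to track parent pointers separately, at the cost of the duplicated-list memory mentioned above.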

The git repo with my solution is here:
And here's the code:

Here are some notes on some of the things I found interesting about this code:

  • The array slicing code in the findChildren function was fun.
  • And using the range operator in the generateCandidates ['a'..'z'] was fun too.
  • The findValidUniqueChildren function is an excellent example of something I've been struggling with in F#.  What parameters does this function take?  What does it return?  It's NOT easy to figure this out, is it?  This is also the first time I've used function composition for real!
  • Notice in the queuechildren method how I'm using the concatenate list operator: "let newSearchNodes = searchNodes @ childnodes"?  The Programming F# book says if you find yourself using this it probably means you're doing something in a non-idiomatic way...  I suppose I could have written buildNodes to append the nodes to a provided list, but that seemed awkward.
  • The match of findLadderWorker is pretty typical, but demonstrates another little pattern I keep finding.  For the second match, I created a function to call since it's longer than 1 line, so I had to make up a name.  I went with "testnode" which I don't really like, but I had to name it something!
Here's Anthony Coble's Haskell solution.  His is very different and shows a great example of a different way to approach the problem.

If you solve it, in any language, send me the link to your solution and I'll add it to the post!

Wednesday, June 6, 2012

Book: Object Thinking

Object Thinking by David West
My rating: 2 of 5 stars

There were two things I really enjoyed about this book.  The first was the discussion of different schools of thought in philosophy and how those ideas appear in software.  The second was the history sidebars that introduced different computer scientists and explained their contributions to the field.

The basic thrust of the book was simply that you should write your applications as a bunch of objects whose intercommunication results in the emergent behavior of your application.  And further, that your models should attempt to model the real-world objects and concepts of your domain.

That's great and all, but the book provides no concrete examples.  None.  And it makes a huge number of assertions about how much better this approach is and how everything else is inferior, but with nothing to back those statements up.  Nothing.

So in the end, I'm left feeling like there are probably some good ideas in there, but I'm totally unconvinced that the author has ever written a real business application.  And further, I think he might be just a grumpy old dude who's sad that Smalltalk lost out to more mature and practical languages like C++ and Java.

View all my reviews

The primary things I found interesting and took away from this book are:

"According to the hermeneutic position, the meaning of a document—say a Unified Modeling Language (UML) class diagram—has semantic meaning only to those involved in its creation".  The author argues that XP methods are influenced by Hermeneutics and are therefore better suited to software creation than traditional software engineering formal methods.  "One of the most important implications was the denial of “intrinsic truth or meaning” in any artifact—whether it was a computer, a piece of software, or a simple statement in a natural language. This claim is also central to the school of thought that has been labeled postmodern. It is also one of the core claims of all the hermeneutic philosophers."

Traits of Object Culture
  • A commitment to disciplined informality rather than defined formality
  • Advocacy of a local rather than global focus
  • Production of minimum rather than maximum levels of design and process documentation
  • Collaborative rather than imperial management style
  • Commitment to design based on coordination and cooperation rather than control
  • Practitioners of rapid prototyping instead of structured development
  • Valuing the creative over the systematic
  • Driven by internal capabilities instead of conforming to external procedures
Given how little of the rest of the book I was able to buy into, I was surprised by how closely this list of culture traits aligns with my own ideals.

Emergent Behavior
"Traffic management is a purely emergent phenomenon arising from the independent and autonomous actions of a collectivity of simple objects—no controller needed."  This is really the core of what the entire book is arguing for.  That the behavior of the system should emerge from the communications between simple objects.  It's a very interesting concept.  But I'm not 100% sure it's one I'm ready to totally buy into.  He uses a model for an intersection with cars and a traffic light as an example.  The traffic light doesn't know anything about the cars.  It's just a glorified timer that notifies any subscribers by lighting different colored lights.  Cars, in turn, don't know anything about the other cars, or the other streets.  They just monitor the traffic light.  There are two huge benefits I see to this.  First, the loosely coupled nature of the design allows you to introduce new kinds of cars (trucks, motorcycles, even pedestrians!) without changing any of the other participating objects.  And second, it allows arbitrarily complicated intersections to be modeled without requiring any complex code.
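Here's a tiny sketch of that intersection model (all the names are hypothetical, my own): the light is just a timer-driven publisher, and cars subscribe to it without knowing anything about each other.

```python
class TrafficLight:
    """A glorified timer: cycles colors and notifies subscribers."""
    def __init__(self):
        self.subscribers = []
        self.color = "red"

    def subscribe(self, observer):
        self.subscribers.append(observer)

    def change(self, color):
        self.color = color
        for observer in self.subscribers:
            observer.on_light(color)

class Car:
    """Knows nothing about other cars or streets; only watches the light."""
    def __init__(self, name):
        self.name = name
        self.moving = False

    def on_light(self, color):
        self.moving = (color == "green")

light = TrafficLight()
cars = [Car("sedan"), Car("truck")]
for car in cars:
    light.subscribe(car)

light.change("green")
print([car.moving for car in cars])   # → [True, True]
light.change("red")
print([car.moving for car in cars])   # → [False, False]
```

Adding a motorcycle or a pedestrian is just another subscriber type; neither the light nor the existing cars change at all.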

But in the back of my head, I'm always a little bit nervous about this...  The fact that the behavior is emergent is a benefit, but also a drawback, because there is no one code file you can read that will describe the behavior.  You must figure it out by running simulations in your head of how all the participants interact.  There are certain problems where this clearly would not be acceptable, and Object Thinking does make this point: "Specific modules in business applications are appropriately designed with more formalism than most. One example is the module that calculates the balance of my bank account. A neural network, on the other hand, might be more hermeneutic and object like, in part because precision and accuracy are not expected of that kind of system."  So the bigger question for me, not addressed in the book, is what types of problems this emergent approach would be acceptable for.  Ultimately I suspect it's a razor's edge issue: at some point the complexity of the solution may make switching to an emergent design result in simpler and more understandable code.

Monday, June 4, 2012

Finding Connected Graphs in F#

I've been learning F# and functional programming.  I first got interested in functional languages when Ben Lee introduced me to Erlang at CodeMash by doing the greed kata with me.  Then I bought Programming F# and started to learn the F# language.  As I was going through the book, I was playing with a little Team City/FogBugz integration script in F#.  Then Anthony Coble told me he wanted to do a talk at Burning River Developers on Falling In Love With Haskell.  His talk was great, and Haskell looks like a mind-blowing language.

So that is how I got interested in playing with this stuff.  In order to actually learn it, and not just read about it, I've been finding and inventing little exercise problems and solving them in F#.  F# is a multiparadigm language, but I've been focusing on the purely functional portions of the language for now.  I thought I'd share these problems and my solutions on the blog.  The best thing that could come from this is if you, dear reader, would also solve these problems in the language of your choice and share back your solution.  Or if you know F# but don't want to fully implement a solution, I'd love feedback on how you might have solved the problems differently.

The first problem I want to share is a Graph parsing problem:
Given a graph G, return a list of all the connected graphs in G: [G1, G2, ...].
Here's an example input graph and the expected output:
 The git repo with my solutions is here:

And here's the code:

Some of the things I really enjoy about this code:

  • I love nesting simple little functions inside other functions to give names to boilerplate code (ex: isSeen and notSeenOnly)
  • The recursive walk method is stunningly elegant
  • "Map.ofList g" is a cool line of code.  g is a Graph record, but it's defined as a list, and so can be treated like a list.
  • List.collect combines all the returned lists into one list
  • It's also cool that the map variable is in scope inside of the walk function because it's closed over from the findConnectedGraph method.
  • The recursive list comprehension of findAllConnectedGraphsWorker totally blows my mind.  Using yield as expected, but then calling itself with yield! is crazy!
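For comparison, the same walk-and-collect idea looks roughly like this in Python (this is a sketch with my own names, not a direct port of the F# code):

```python
def connected_graphs(graph):
    """Split an adjacency map into its connected components.

    `graph` maps each node to a list of its neighbors.
    """
    seen = set()

    def walk(node, component):
        # Recursive walk, collecting every node reachable from `node`.
        if node in seen:
            return
        seen.add(node)
        component.append(node)
        for neighbor in graph[node]:
            walk(neighbor, component)

    components = []
    for node in graph:
        if node not in seen:          # skip nodes already claimed by a component
            component = []
            walk(node, component)
            components.append(sorted(component))
    return components

g = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4], 6: []}
print(connected_graphs(g))   # → [[1, 2, 3], [4, 5], [6]]
```

The shared `seen` set plays the same role as the closed-over map in my F# version: the inner walk can use it without it being threaded through every call.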
I'm sure there is a lot about this code that could be improved.  There are probably better algorithms too. I'd love to hear your ideas and read your implementations of this problem!

Wednesday, May 30, 2012

Book: Windows Powershell in Action

Windows PowerShell in Action by Bruce Payette
My rating: 5 of 5 stars

One of the most enjoyable technology-focused books I've ever read.  Usually books that teach you a language or a framework are pretty dry and uninspiring, but this one was great.  The examples used are good at illustrating the points without going overboard.  But by far my favorite parts were the little asides where the author explains difficult design decisions the PowerShell team had to make.

View all my reviews

Tuesday, May 29, 2012

Minor Issues: Query Results vs. Models

I want to take a look at a minor issue that crops up in a very common application structure where you have a list of data, possibly from a search, that the user selects from to view details.

There are some minor issues that must be addressed, and they all have to do with queries, especially when we're dealing with SQL.  There will be a query that returns the list by gathering all the data, maybe doing some formatting, and joining to all the relevant tables.  For example, if it's a list of books it will return the title, publish date, author (join to author table; format name), and genre (join to genre table).

Apart from listing the books, the app also needs to be able to add new books.  This will work as follows:
  1. A dialog pops up with all the fields to fill-in
  2. On save, if everything validates, the book is saved in the database
  3. The new book is added to the list with AJAX (did I mention it's a web app?)
Since I don't want to leave you hanging, here are the "minor issues" I'm going to look at:
  • Query performance (N+1 Select)/Query complexity
  • Formatting logic
  • Type conversion
To illustrate my points, I'll use the Active Record pattern.  Using the book example, a naive implementation of the query might look like this:
var books = Books.All();
foreach(var book in books) {
  // display the data by accessing it this way:
  // book.Title, book.PublishDate.ToString("d"),
  // book.Author.FormattedName, book.Genre.Name
}
Some things to note about this code:
  • It suffers from the N+1 Select problem because for each book it does a query to lazy load the Author and another query to lazy load the Genre (technically that's 2N+1 queries).
  • It formats the date with a .NET format string.
  • It formats the author name using the formatting logic built into the Author class's FormattedName property.
The first is a serious issue that we *must* correct, but there isn't anything inherently wrong with the other two.  
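To make the N+1 behavior concrete, here's a small Python/sqlite sketch (the schema and data are invented for illustration) that counts the queries issued by the naive loop:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE genres  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT,
                          author_id INTEGER, genre_id INTEGER);
    INSERT INTO authors VALUES (1, 'Lewis Carroll');
    INSERT INTO genres  VALUES (1, 'Fantasy');
    INSERT INTO books   VALUES (1, 'Alice', 1, 1), (2, 'Snark', 1, 1);
""")

query_count = 0
def query(sql, args=()):
    global query_count
    query_count += 1
    return conn.execute(sql, args).fetchall()

# Naive version: one query for the list...
books = query("SELECT id, title, author_id, genre_id FROM books")
for (_, title, author_id, genre_id) in books:
    # ...then a lazy load of the author and of the genre per book.
    author = query("SELECT name FROM authors WHERE id = ?", (author_id,))
    genre  = query("SELECT name FROM genres  WHERE id = ?", (genre_id,))

print(query_count)   # → 5: one list query + 2 books * 2 lazy loads
```

With two books the damage is trivial, but the query count grows linearly with the list, which is exactly what makes this pattern so dangerous on real data.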

Query performance/complexity
To fix the N+1 Select problem, eager loading could be applied.  Eager loading is an ORM feature that includes joins in your query and expands the results into referenced objects without a separate database call.  Entity Framework, for example, has a nice method called Include, so you could write .Include("Author").Include("Genre").  NHibernate allows you to define this as part of the mapping.

This solves the N+1 Select problem, and is generally good enough for a simple example.  But when the query is more complicated, using the ORM to generate the SQL can be troublesome.  And it's worth pointing out that, written this way, the SQL will return all the fields from all the rows it joined to and selected from, even if only a small subset is needed.  This may or may not affect performance, but it will impact the way indexes are defined.

The N+1 Select problem can also be solved by not using Books.All(), and instead writing a SQL query to do the necessary joins and come back with only the required data.  There are two clear benefits to this:
  1. Using SQL directly means there are no limits on what features of the database can be used.  Plus, the query can be optimized however needed.
  2. Only the required data fields need to be selected, instead of all the fields.  And data from more than one table can be returned in one select without fancy eager loading features.
To represent the results, a Query Result class can be defined.  This class will be very similar to the AR models, but only contain properties for the returned fields.  
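Here's a rough sketch of that approach in Python (the table, column, and class names are invented for illustration): one hand-written query does the joins and selects only the needed columns, and a small query result class holds exactly those fields.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class BookListResult:
    """Holds only the fields the list view needs, not full AR models."""
    title: str
    author_name: str
    genre_name: str

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE genres  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books   (id INTEGER PRIMARY KEY, title TEXT,
                          author_id INTEGER, genre_id INTEGER);
    INSERT INTO authors VALUES (1, 'Lewis Carroll');
    INSERT INTO genres  VALUES (1, 'Fantasy');
    INSERT INTO books   VALUES (1, 'Alice', 1, 1);
""")

# One hand-written query: all the joins, only the required columns.
rows = conn.execute("""
    SELECT b.title, a.name, g.name
    FROM books b
    JOIN authors a ON a.id = b.author_id
    JOIN genres  g ON g.id = b.genre_id
""").fetchall()

results = [BookListResult(*row) for row in rows]
print(results[0])
# → BookListResult(title='Alice', author_name='Lewis Carroll', genre_name='Fantasy')
```

One round trip, no eager-loading machinery, and full freedom to use whatever SQL features the database offers.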

Formatting Logic
But this is where those two other bullet points from earlier come into play.  Remember how the date was formatted with a .NET format string?  In a custom query, this can easily be moved into the query result object.  It's the formatting of the author name that is going to cause some trouble.

Pretend there are three columns that represent name: FirstName, MiddleName, LastName.  There are three choices for how to format this into a single name display:
  1. Put the formatting logic in the select statement of the SQL query (duplicates the logic on Author)
  2. Put the formatting logic in a property of the query result object (duplicates the logic on Author)
  3. Refactor Author and call its method to format the name (awkward)
To explain, here's what Author might have looked like:
public class Author {
  public string FormattedName { get { return FirstName + " " + MiddleName + " " + LastName; } }
}
This formatting logic is coupled to the fields of the Author class, and so it can't be reused. To make it reusable, it could be refactored into a function that takes the fields as parameters. One way might look like:
public class Author {
  public string FormattedName { get { return FormatName(FirstName, MiddleName, LastName); } }
  public static string FormatName(string first, string middle, string last) {
    return first + " " + middle + " " + last;
  }
}
This is now in a format that could be used from within our query result object:
public class BookListResult {
  public string FormattedName { get { return Author.FormatName(FirstName, MiddleName, LastName); } }
}
Part of me loves this, and part of me hates it.

Type Conversion
The other issue that must be dealt with when using the Query Result approach involves the AJAX part of our scenario.  Remember how we wanted to add the book to the top of the list after the add?  Well, our view that renders the list item is going to be typed to expect a BookListResult, which is what the query returns.  However, after the Add, the code will have a Book instance, not a BookListResult.  So this requires a way to convert a Book into a BookListResult.  I usually do this by adding a constructor to BookListResult that accepts a Book, and that constructor then "dots through" the book, collecting all the data it needs.
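That conversion looks roughly like this (class and property names are illustrative): an alternate constructor on BookListResult that dots through a freshly saved Book to collect the same fields the SQL query would have returned.

```python
class Author:
    def __init__(self, first, last):
        self.first, self.last = first, last

class Genre:
    def __init__(self, name):
        self.name = name

class Book:
    """Stand-in for the Active Record model, with loaded associations."""
    def __init__(self, title, author, genre):
        self.title, self.author, self.genre = title, author, genre

class BookListResult:
    def __init__(self, title, author_name, genre_name):
        self.title = title
        self.author_name = author_name
        self.genre_name = genre_name

    @classmethod
    def from_book(cls, book):
        # "Dots through" the model, duplicating the field knowledge
        # that also lives in the SQL query.
        return cls(book.title,
                   book.author.first + " " + book.author.last,
                   book.genre.name)

book = Book("Alice", Author("Lewis", "Carroll"), Genre("Fantasy"))
result = BookListResult.from_book(book)
print(result.author_name)   # → Lewis Carroll
```

This is the duplication discussed below: the mapping from model fields to result fields now exists both here and in the SQL select list.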

From a certain perspective, this can be viewed as duplicating the query logic because knowledge of what fields the QueryResult's data comes from appears in two places: once in terms of the physical SQL tables in the SQL query, and again in terms of the Active Record objects.

Yet somehow I still prefer the Custom Query approach to the eager loading approach...  I just like to have that absolute control over the SQL query.  The cost of the boilerplate code here is worth it to me if it means I can directly leverage the query features of my database (like row number, and full text, and CTEs and pivots, etc etc).

As in the last "Minor Issues" post (constructors and MVC controllers), I'd love to hear your thoughts or experiences with these patterns.