Sunday, December 18, 2011

Stories of Productivity

The first time I tried pomodoro, it was exhausting.  Staying completely focused and working for 20 minutes straight tired me out!  I couldn't believe it!  I thought I was very focused, all the time.  I thought my productivity was good.  I couldn't even work for 20 minutes!

--

I used the demo of TimeSnapper for a while once.  It's a neat program.  It monitors the applications you use throughout the day.  It can even play back a video of what you did all day, greatly sped up of course.  You tell it which applications are productive, and which aren't, and it has this neat timeline graph that shows green for productive time, and red for unproductive time.  In using it I quickly discovered something that I was not consciously aware of, but was very interesting.  As I was working, if I hit a point where I had to either wait for the computer, or I didn't know exactly what to do next, I would switch to an unproductive application.

For example, if I was coding an algorithm, and I hit a particularly difficult part of it, I'd pull up Twitter.  Or hit up Google Reader.  Or check my email.  It was like some kind of weird nervous twitch.  Any time I had to ACTUALLY think, I'd go do something that didn't require thought.  And I was totally unaware that I was doing it.

--

Recently I was taking a screencast of myself doing some work at the prompt.  It was just a proof of concept, so I hadn't planned it out, and I was sitting in front of the TV at home.  I knew what commands I wanted to record, but I hadn't really thought through the examples.  You could see I was typing fast and quickly moving from command to command.  But then I'd hit a part where I had to make up some nonsense content to put in a file, or think up a commit message, and there would be this really long pause.  The pause was way longer than it actually took me to come up with the content.  What was happening was, as soon as I needed to do some creative thinking, I'd glance up at the TV and get lost for a few seconds.  And again, I was totally unaware this was happening until I watched the video.

--

One of the things I've been struck by when I watch Gary Bernhardt's Destroy All Software screencasts, or the Katacast he did, is how fast he is.  Now, he practiced this stuff, it's not like you're watching it come off the top of his head.  But even still, he's FAST.  But I realized, the thing I'm most impressed by is really not how fast he can type.  I mean, he can type fast, and very accurately.  What's most impressive is how he is always prepared for the next step.  He always has the next thing he needs to do queued up in his brain.

Once I noticed this, I started trying to figure out how to get closer to that during day to day development.  In a surprising twist, what I've found so far is the best way to go fast is to go slow.  That's kind of a cliche, but it's overwhelmingly true.  If I give myself the time to think things through, I waste a lot less time in starts and stops and blind alleys.  And if I take just 1 second longer to fully visualize all the steps of what I'm about to do, I'm able to execute it faster, smoother, and with a lot less stress.

--

We recently re-did our office arrangement.  We tore the walls down and made sure everyone was sitting with their teams.  There have been some nice benefits.  For one thing, it's way more fun.  There are many times when spontaneous design and organization decisions are made just because everyone can hear you.  And I think we've built a better sense of team in the process.

Of course there are downsides.  It can get noisy and be distracting.  Especially when random conversations and jokes break out.  I think it's just human nature to have this desire to not be left out of conversation.  You can put in headphones, but I find sometimes even music is enough of a distraction that I can't get my thoughts straight.  And because I don't want to be left out, I usually keep the volume just low enough so I can track what's going on around me.

So there is a trade-off with this open-space, everyone-together layout.  You gain some productivity in instantaneous meeting-less decisions.  You gain some camaraderie and some fun.  But you can't close the door and shut out the world so you can fully focus when you need to.  I'm still not sure how I feel on this one.  The focus- and productivity-obsessed part of me likes Peopleware's advice of everyone in their own office with a door.  But the social part of me likes the team room model.

Thursday, December 1, 2011

Powershell: Extracting strings from strings

I was playing with NCrunch.  It didn't work for our solution due to some bug w/ named parameters in attributes.  So I removed it.  But it left behind all kinds of little .xml files.  I could see these files in hg st as "?"'s and I wanted to remove them.

So I used this simple powershell command:
hg st | %{ [regex]::match($_, ". (.*)") } | %{ $_.Groups[1].Value } | %{ rm $_ }
The regex captures the name of the file, skipping the "? " at the beginning of the line.  The Groups[1].Value extracts that file name.  And rm removes it.

That version is using the .NET regex class directly and piping the Match objects that [regex]::match outputs.  You can make this shorter, though slightly more confusing in some ways, using powershell's -match operator:
hg st | %{ $_ -match ". (.*)" } | %{ rm $matches[1] }
This is using the magic $matches variable which is set to the results of the last executed -match operator.  The reason I say this is slightly more confusing is that it depends on the order of execution of the pipeline.  This wouldn't work if the pipeline ran all the -match's and then ran all the $matches.  But because the pipeline is executing each %{} block once for each object, it does work.

If hg output objects instead of text, this would have been much easier.  But this shows how you can lean on regexes when you have to deal with strings.

Friday, September 23, 2011

Powershell and Hg Magic

I moved a bunch of files that were in an hg repo and did an hg addremove -s 100.  They all should have been recorded as renames, but hg summary showed me that 1 of them wasn't.  But which one?

Powershell to the rescue!
$s = (hg st -a -C) -join "`n"
[regex]::matches($s, '^A.*$\n^A.*', "Multiline")
Let's break this down:
  • hg st -a -C: lists all added files including what file they were copied from.  Hg st considers a rename to be a copy and a remove.  For each renamed file this will output two lines:
    A <file\path\here>
      <copied\from\path\here>
  • $s = (...) -join "`n": takes the array of strings resulting from the hg st command and joins it into one big string in the $s variable.
  • [regex]::matches($s, '...', 'Multiline'): Runs a multiline regex on the string
  • '^A.*$\n^A.*': Regex matches a line that starts with an A, followed by anything to the end of the line, followed by a line break, followed by another line that starts with A, followed by anything.  In other words, this will match if two consecutive lines of output both start with A.  In this case, that means the first line is the line that was not recorded as a rename!

Tuesday, August 23, 2011

.NET is Stale?

Here's dhh on twitter: "Wish someone would study the cultural inhibitions in Denmark that binds it to stale, conservative platforms like .NET"

.NET is stale?  Fuck you!

Not to mention the language features of C#.

Is C# the most elegant language ever invented?  No, but it is one of the most elegant I have used, especially for a statically typed language.  And the language itself is clearly one of the most advanced available.  This is stale?

Did all of these ideas originate in .NET?  No, but what the hell difference does that make?!  The .NET community finds and adopts the best ideas, whether they started in Java, Ruby, or Python.  This is stale?

Are there companies still using .NET 2.0 and little to no open source software?  Yea, there are also companies on the bleeding edge, using all the tools listed above.  From organizations with strict upgrade guidelines, to organizations that wait for the first service pack, to organizations that go to production on beta releases.  You'll find it all in the .NET community.  This is stale?

Ruby is a joy to program in.  Dynamic languages are more fun to do TDD with.  Percentage wise, I'm sure more Ruby programmers participate in the open source community.  There are a wide array of really great things about Ruby (and Python, etc etc).  There are also plenty of shitty things (poor backwards compatibility, poor documentation, poor tutorials, elitist attitude, etc etc).

But this bullshit attitude that .NET is stale, outdated, joyless, or somehow dramatically inferior is nothing but shortsighted and stupid.  Get over your buyer's remorse and go build some software that contributes to something larger than yourself.

* Did I leave off your favorite fresh .NET tool or feature?  Leave it in the comments.

Friday, August 5, 2011

Windows Console Colors

I just got bit hard by this, so I'm documenting for the future.

I want my cmd, powershell, and console2 colors to all be the same.  Sounds simple, but it's a bit confusing how this all works in Windows.

Here's what you need to know:

  1. Your background and foreground color settings specify a color index: 0-15
  2. The actual color code associated with that index can be defined in 3 places: "defaults", "properties", and console2's settings.
  3. "Properties" overrides "defaults," and console2's settings override everything.
I wanted all my shells to have the powershell default colors.  Here's how you do that:
  1. Launch powershell
  2. Right click the window header and select "Properties"
  3. Click the 'Screen Text' radio button
  4. Copy down the index of the selected color box (should be 6, that is the 7th box)
  5. Copy down the color values of that box (should be 238,237,240)
  6. Do the same for 'Screen Background' (index of 5, color 1,36,86)
Now we'll make CMD use these colors:
  1. Launch cmd
  2. Right click the window header and select "Defaults"
  3. Click the 'Screen Text' radio button
  4. Select the index color box you copied down above (should be 6)
  5. Change the selected color values to the values you copied down above (should be 238,237,240)
  6. Do the same for 'Screen Background' (index of 5, color 1,36,86)
Now we'll make Console2 use these colors as well.  Console2 seems to automatically use the color indexes you defined in the "Defaults" settings but it won't use the color values you defined...  So you have to redefine them in Console2's settings:
  1. Launch Console2
  2. Edit > Settings
  3. Change the 5th index color box mapping to the value you copied above (the dark pink one to 1,36,86)
  4. Change the 6th index color box mapping to the value you copied above (the bronze one to 238,237,240)
Now all your shells will have the same colors, including custom shells you host in Console2 like VS or Ruby or git-bash.

Enjoy!

Monday, June 20, 2011

How Powerful is Your Language?

Most mainstream programming languages today are basically the same.  I mean, they are all Turing complete, they can all access databases, the web, and so on.  What else is there?

Well, my comparative languages class in college said there were four factors for evaluating programming languages:
  1. Readability
  2. Writability
  3. Reliability
  4. Cost
I would add a 5th, which is maintainability.  The argument I'm going to attempt to make today is that Dynamic Languages, and Ruby in particular, score better in these factors than static languages, and therefore are more powerful languages.

The first thing you might think of is syntax, and there is certainly something to be said for that.  Clean syntax makes for enhanced readability and writability.  And the ability to create DSLs in your language is another major plus to readability and writability.  So right off the bat Ruby is off to a pretty good start.

But I think the argument actually goes deeper than that.  Consider the SOLID design principles:
  • SRP: Single Responsibility Principle
  • OCP: Open Closed Principle
  • LSP: Liskov Substitution Principle
  • ISP: Interface Segregation Principle
  • DIP: Dependency Inversion Principle
These are principles we use to describe good code.  That is, more readable code, more reliable code, more maintainable code, and less costly code.  And I believe that Ruby, as a language, has many of these principles built right in.  And that means that it scores higher on the comparative ladder, and therefore is a better, more powerful, language!

Let's start with SRP.  So Ruby has Modules, which are pretty great.  And I would argue that they are a helpful tool for SRP.  I've had some awesome conversations on this point, but I land on the side of saying that Modules and MixIns are totally useful for SRP.  But that's about as far as we go for language support for SRP, so let's move on!

OCP is a bit more interesting, though still straightforward.  Dynamic languages like Ruby allow you to open up any class from a distance and add new methods to it, or change the definitions of existing methods.  So you can change anything in the class without having to open up the actual class definition.  That's some pretty serious built in OCP support.  Of course, to be fair, this isn't a feature you're likely to depend on when you're implementing a class you want to follow OCP...  It's more of a last resort really, but we'll find it's very useful for mocking, which we'll talk about later.
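Here's a minimal sketch of what reopening a class looks like.  The class and method names are invented for illustration:

```ruby
# A class we'll pretend we don't control -- maybe it came from a library.
class Greeter
  def greet
    "Hello"
  end
end

# Reopen the class "from a distance": extend it with new behavior
# without ever touching the original class definition.
class Greeter
  def greet_loudly
    greet.upcase + "!"
  end
end

puts Greeter.new.greet_loudly  # => "HELLO!"
```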

When talking about LSP Wikipedia has this to say: "Behavioral subtyping is a stronger notion than typical subtyping of functions defined in type theory."  This is a super-wonderful sentence for my purposes here!  LSP is all about the behavior of subclasses matching their parent classes, and the beauty of this is that dynamic languages are ALL about behavior; you tend not to get too hung up with "types".  So, you still have to be careful that your derived classes' behavior is consistent with that of their parents, but since you're already thinking about behavior instead of types, you are off to a much better start.

Interface Segregation seems simple at first glance: dynamic languages don't have interfaces, end of story, right?  Not quite, because while ISP is about interfaces, what it's saying is that your interfaces should be highly cohesive, so that you cannot divide the methods of an interface into discrete sets based on who calls them.  This is because we don't want to be coupled to methods we don't care about.  But in a dynamic language, the caller just sends whatever "messages" it wants to the object it is calling, so by default our "interfaces" are as segregated as they could ever possibly be!  We have perfect, automatic ISP in dynamic languages.
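A quick sketch of that "the interface is just the messages you send" idea, with made-up class names:

```ruby
# Two unrelated classes; neither declares or implements any interface.
class FileLogger
  def log(msg)
    "file: #{msg}"
  end
end

class NullLogger
  def log(msg)
    ""
  end
end

# The caller is coupled only to the one message it actually sends: #log.
# That single message IS the "interface" -- maximally segregated.
def run_job(logger)
  logger.log("job done")
end

run_job(FileLogger.new)  # => "file: job done"
run_job(NullLogger.new)  # => ""
```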

Dependency Inversion is sort of the crowning jewel of built in principles in dynamic languages.  Most of the time in static languages our main reason for caring about Dependency Inversion is for unit testing.  We invert our dependencies so that we can mock them out in our tests.  This is outrageously annoying because it means we can never ever ever call a constructor.  So we are forced to either a) make all our dependencies stateless or b) wrap all our constructors with factories.  It also means we have to introduce interfaces to describe a huge number of our classes. In Ruby, we don't have to think about this.  At all.  We can stub out the calls to constructors and return our own mock versions.  It's so easy it's totally stunning the first time you do it.
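To show what stubbing a constructor can look like without pulling in a test framework, here's a bare-Ruby sketch (all names are hypothetical; RSpec and friends wrap this pattern in much nicer syntax):

```ruby
# Production code calls the constructor directly -- no injected
# factory, no interface, no inverted dependency.
class Mailer
  def send_mail(to)
    raise "would talk to a real SMTP server"
  end
end

class Signup
  def complete(email)
    Mailer.new.send_mail(email)
  end
end

# In a test, stub out the constructor itself and return a fake.
fake = Object.new
def fake.send_mail(to)
  "stubbed mail to #{to}"
end

Mailer.define_singleton_method(:new) { fake }  # the block captures fake

result = Signup.new.complete("a@example.com")
# result == "stubbed mail to a@example.com"
```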

So there you have it!  Dynamic languages are awesome, in part, because they have the SOLID design principles built in, which makes them score better on the comparative language scale.

Monday, June 13, 2011

Code As Practical Art

One of the bands I play with, The Prime Time Big Band, had the lucky opportunity to have Sean Jones run one of our rehearsals.  It was awesome.  He was full of wonderful metaphors, and phrases, and energy.  One of his phrases was "the humanity of the music," referring to emphasizing the emotion of the lines and phrases of the music.  Another was "playing a ballad is like holding a baby."  You don't hold a baby tentatively or nervously.  You have to cradle it lovingly and softly.  But at the same time, you must HOLD it, firmly, and not drop it!

Cool stuff, but why am I writing about it on my tech blog?  Because a lot of what he said, and the passion in the way he said it, made me relate it to my job and my code.  For example, he said the difference between a great band and a good band was small, but it was in the attention to detail.

Sean Jones didn't speak to it directly, but I started to think about how much of what he was describing was very specific to artistic disciplines.  And yet it had a similar ring to programming.  And ultimately it has to do with the freedom of creation.  Art, music, and programming all deal with the creation of something via a controlled but very flexible medium.  Paint and canvas, 12-tone scales and instruments, or code and CPUs.

Tons of people have equated programming to art because of this creative aspect.  But the interesting difference is programming must serve a practical purpose.  Art and music don't have a practical purpose, past being pleasing, or challenging, or making money.  The practical aspect of programming is what can cause us to forget about, or trivialize, the creative joy of programming.

It is possible to deliver practical but crappy software.  That is, the software works, but the code or the UI or the architecture or the performance is for shit.  No one wants to spend their precious time building crappy software, but when the clock is ticking and the boss is getting impatient you can get swept away.

I got into programming because I loved the creative side of it.  Solving problems in clean, elegant, organized, understandable, and dare I say clever ways is what I love doing.  And this requires immense creativity!  It is no coincidence that these things are also what leads to better software applications.

So don't forget what you loved about programming!  Don't shun the creative aspect of your craft.  Apply the attention to detail it deserves and create awesome software!

Monday, June 6, 2011

Should You TDD Controllers?

If you use Selenium to test your website should you TDD your controllers?

My default stance is to TDD everything, so why would I even ask this question?  Because controller tests are just a mess of mocks and therefore tend to be time consuming to write, brittle, and mostly just verify that you called things in the right order...  This is especially true if you write thin controllers by pushing logic into "service" classes and your models.

Controller is one of the Object Stereotypes which is defined as "Controls and directs the actions of other objects.  Decides what other objects should do."  So this is an object that just deals with other objects.  That's why it's so mock heavy to unit test.

If you have Selenium (or whatever) tests that drive your website, then you have integration tests on your controllers, and, indeed, your full stack.  So at least the "happy paths" of your controller are covered.  And arguably covered in a more useful way than unit tests.

So, for argument's sake, let's say you agree with me that controller unit tests are mock heavy, expensive to write, and of limited value.  So what value do they provide?  One thing stands out above all others: error handling.  Especially errors from "services" which may depend on infrastructure concerns like file systems and web services or whatever.  It's difficult to test failures in these types of things without mocks, and it's difficult to mock in a Selenium test.
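As a framework-agnostic sketch of the kind of test I mean (every name here is invented; your controller would actually live in whatever web framework you use):

```ruby
class ServiceError < StandardError; end

# A thin controller whose interesting branch is what happens
# when the service blows up.
class ReportsController
  def initialize(service)
    @service = service
  end

  def show(id)
    { status: 200, body: @service.fetch(id) }
  rescue ServiceError
    { status: 503, body: "report unavailable" }
  end
end

# A stub service simulating an infrastructure failure (file system
# down, web service timeout) -- trivial here, very hard to trigger
# from a Selenium test against the full stack.
failing_service = Object.new
def failing_service.fetch(id)
  raise ServiceError
end

response = ReportsController.new(failing_service).show(42)
# response[:status] == 503
```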

So, should you TDD controllers?  I think, only when it provides you enough benefit to outweigh the costs (imagine that).  For me, that means I don't test controllers that just do standard CRUD operations and handle 404 errors.  Those are trivial, fully covered by Selenium tests, and take too long to mock.  But anything more complicated than that, I test.

Monday, April 4, 2011

Mercurial: Record

Here's another post about some of the more advanced features of Mercurial.  The last two posts were about the Mq extension, which allows you to maintain a queue of patches with distinct changes in them.  This allows you to keep separate changes separate, and to test those changes separately.

The trick with Mq though is that you have to know ahead of time that you are going to make separate changes and you have to go create a new patch for those changes.  Before I go any further, I want to stress that this is actually a good thing!  You should proactively keep your changes separate.  This helps ensure you don't bite off too much at once, or fail to test your changes.

But sometimes you forget to be proactive about separating your changes and then you want to untangle them after the fact.  You can do this using the Record extension.  This is another extension that ships with Mercurial, but you have to enable it to use it.
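Enabling it is a one-line entry in your hgrc (shown here for the typical per-user file; on Windows the same section goes in Mercurial.ini):

```ini
# In ~/.hgrc
[extensions]
record =
```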

The Record extension is invoked with "hg record" and will go through the files you've changed (or just the ones you tell it to look at) and will iterate over each change "hunk" in the file.  A change hunk is a set of sequentially changed lines.  For each hunk it asks you if you want to record the change or not.  You simply say "y" if you want it, and "n" if you don't.

The result is a new commit that contains the changes you recorded.  It's a surprisingly easy process.

However, there's a flaw with this approach.  The resulting commit might be broken: it might not compile, or the tests might not pass.  And since it turns directly into a commit, you don't really get the chance to test it...  Now you could qnew the other changes in your working directory, then pop that patch to get back to the last commit, and test.  And if you notice any issues you can qimport the change that record created in order to fix the problems.  Then you can qfinish it, and qpush to get your pending changes back.  Follow that?

Or instead of using hg record, which results in a new commit, you can use hg qrecord, which results in a new mq patch.  You'll still need to qnew the working directory changes.  But now you won't have to qimport if you need to make any changes to the original patch.  I prefer this method because it seems saner to end up with mutable patches after prying changes apart, rather than finished commits.

Monday, March 28, 2011

Mercurial Mq: Modify a changeset

Mercurial is an awesome distributed version control system.  If you work on a project that cares about clean changesets, you may run into the need to modify a changeset after you have committed it.  This generally isn't allowed, and it's a potentially dangerous thing to do.

It's dangerous because changing history modifies the identity of the repository.  So if you pushed the changeset, then modified it, your repository will no longer be compatible with the remote one.  And that would be bad.

But if you haven't shared the changeset yet, you CAN modify it.  All you need is to enable the mq extension.  You can use this extension to turn an existing changeset into a patch, and then you can modify that patch.  When you're done, you can finish the patch, turning it back into a finalized changeset.  You can also edit more than one changeset this way by importing them, then popping them off the queue one at a time.

Here's an example:
hg qimport -r 123 -r 124
hg qpop

# make changes to files

hg qrefresh
hg qpush

# make changes to files

hg qrefresh
hg qfinish -a

If you want to update the changeset comment, you can do that by editing the respective .diff file in the .hg\patches directory.  For example, .hg\patches\123.diff.  Just open that sucker up, modify the comments on top and save it when you're done.  Couldn't be simpler!

Monday, March 21, 2011

Mercurial Mq

Mercurial is an awesome distributed version control system, abbreviated "hg."  Hg's command approach emphasizes small well named commands to make it easier to learn and understand.  It also comes "safe" and "easy" out of the box, but allows you to simply turn on built-in extensions to gain all kinds of shoot-yourself-in-the-foot power.

One of those extensions is called Mq.  This allows you to manage a set of patches in a queue.  You may be asking yourself, "What the hell does that mean?!"  Instead of describing how it works, I'll describe when and how you'd use it.

Suppose you are developing a new feature.  You're changing code here, changing code there, adding tests here and there, and so on.  But suddenly you notice some code that could use improving.  But that code doesn't have anything to do w/ the feature you are working on.  What do you do?!

You could just change it.  But if you do, you will have two unrelated changes in the same changeset.  This isn't the end of the world...  But it's not good either.  For one thing, you might introduce a bug.  And now when someone is trying to track down that bug, it will be really non-intuitive that your changeset may have introduced that bug.  Basically, tangling changes is just bad.  It means you're moving too fast, doing too many things at once.  So let's not just change it.

You could hope you'll remember about it for later.  Or you could write it down and hope you'll see your note and come back to it.  But, come on, that ain't gonna happen.

OR you could use the mq extension.  mq will allow you to take all the changes you've made on your feature and put them in a patch in a queue.  Then create a new patch in the queue, and improve the unrelated code.  Now go back to the first patch and keep working.  When you're all done, "finish" all the patches in the queue and they turn into permanent changesets!  ta da!

Here's what that little story would look like, assuming it's the first time you've ever used mq in this repo:
# work on your feature, notice code that needs to be improved...

hg init --mq #only needed first time you use mq
hg qnew -m "my new feature" new-feature #automatically includes all working dir changes in the patch
hg qnew -m "improved some code" improve-code

# improve the code...

hg qrefresh #adds working dir changes to current patch
hg qpop #takes improve-code changes out of working dir, drops you back to just the new-feature changes

# finish your feature...

hg qrefresh #adds working dir changes to new-feature patch
hg qfinish -a #converts all patches into permanent changesets
This might look like a lot to you at first glance, but it's not.  And in practice it's surprisingly simple.

You move between patches with hg qpop and qpush.  qpop takes a patch out of your working directory and drops to the previous patch in the queue.  qpush does the opposite, adding the changes of the next patch in the queue back into your working directory.  You can have any number of patches, one after the other, in your queue.  And, if you need to for some reason, you can reorder patches.  Check out the MqExtension page for more.  And after you enable it, "hg help mq" makes it very easy to learn all the various commands.

Very useful feature, and addictive once you get used to it!

Monday, March 14, 2011

Mercurial Branches

One of the best things about a distributed version control system, like Mercurial, is how easy it is for many people to collaborate and share code and changes with each other.

In hg there are many ways of sharing changes. You can export changesets w/ the export command and send them to people. You can bundle up a bunch of changesets together with the bundle command. Or you can share your repository and other developers can pull directly from you.

That last method where people can pull from you is the easiest, fastest, and generally the best. But it can be a real pain to make sure everyone you want can see your repo. This is where named branches can be useful.

In Mercurial you can "branch" by creating clones in "path space." This is the generally recommended way to go. But you can also create branches within the same repository without cloning a new repository. The benefit of this is you can push and pull those branches through a central repository or any other shared repository. This makes it much easier to share changes with people! It's also more efficient, both from a network traffic and disk space perspective.

For example, you could start work on a new feature in a named branch and make a fair amount of progress and decide you want some feedback from the team. You share your changes, and the team can then commit new changes and share them back to you.

If we were using clones instead of named branches, your feature changes would be in a cloned repository. If your team has a central repo, you wouldn't be able to push your clone there because you're not ready to "finalize" your work yet. So instead, you would have to host your clone so your team could pull from it. Then each team member would have to host THEIR cloned version of your clone so you could pull back their feedback.

With named branches you can leverage the shared repos you already have setup, in this case a central repo. You just create a branch, work on it, and push it to the central repo. Your team pulls from the central repo, makes updates to the branch, and pushes them back.

Here's what this would look like:
hg branch new-feature

# do some work and commit some changes...

hg push -b new-feature --new-branch #push to central repo
Note the --new-branch switch. If you don't include this switch hg will abort with a warning that the push will add new branches to the remote repo. This warning prevents you from accidentally sharing branches that haven't been shared yet. Also note that you don't have to include "-b new-feature". Hg will push all changesets in the repository by default.

Your teammates would then do this:
hg pull
hg up new-feature

# review and add feedback commits...

hg push
If the default branch has had commits added since new-feature was branched, the pull command here will print a message telling you that 1 head has been added to the repository. This head is on the new branch. If you run hg merge at this point it will abort telling you that branch 'default' only has one head. If you want to actually merge branches, you have to explicitly give it the branch name, as we'll see next.

When you're all done with the feature you can merge it back to the default branch, and close your feature branch. That looks like this:
hg up default
hg merge new-feature
hg ci -m "merge"
hg up new-feature
hg ci --close-branch -m "close" #closes the branch
hg up default

If you don't close the branch it will remain listed in the hg branches command.

And that's all there is to named branches! Just hg branch to create them, hg update to move between them, hg merge to merge them back together again, and hg commit to work on them. Easy, fast, and efficient!

Monday, March 7, 2011

Craft over Art

I'm slowly working through Apprenticeship Patterns.  Kind of a dry book, but it does have a few interesting concepts.  One I particularly enjoyed was the "Craft over Art" pattern.  The chapter opens with a quote from Richard Stallman: "I would describe programming as a craft, which is a kind of art, but not a fine art.  Craft means making useful objects with perhaps decorative touches.  Fine art means making things purely for their beauty."

It goes on with "As a craftsman you are primarily building something that serves the needs of others, not indulging in artistic expression.  After all, there's no such thing as a starving craftsman.  As our friend Laurent Bossavit put it: 'For a craftsman to starve is a failure; he's supposed to earn a living at his craft.'...  If your desire to do beautiful work forces you out of professional software development and away from building useful things for real people, then you have left the craft."

"Part of the process of maturation encompassed by this pattern is developing the ability to sacrifice beauty in favor of utility if and when it becomes necessary."

I found this to be an incredibly accurate and valuable discussion.  One of the most difficult balancing acts in programming is the one between building what your customer needs and building what you wish they needed, or even just what you want to build.  Sometimes it's hard to tell the difference between the two.  Other times, you know the difference, but it pisses you off!

There is another balancing act that comes up a lot: that between the quality you WANT to build and the quality your user wants.  This goes both ways, but generally we tend to want to build at a higher quality than our users think they want.  "When using this pattern you will have to balance your customer's desire for the immediate resolution of their problem with the internal standards that make you a craftsman."  This is especially important for people influenced by the craftsmanship movement.  Sometimes craftsmanship comes across as a pursuit of perfection rather than a pursuit of utility.

For me, this Craft over Art pattern was a good reminder to stay focused on delivering high quality utility for my users.

Monday, February 28, 2011

A Bit About Pointe Blank

I'm excited to announce I recently got a promotion at work!  I'm now Pointe Blank Solutions' Software Engineering Manager, which basically means I'm in charge of our development team.  The line is: responsible for setting technical objectives, development team processes and practices, and fostering a productive and engaging environment. What makes this extremely exciting for me is both the products we are building, and what we are trying to grow into.

We are a fairly small company with 11 developers and 20 people all told, and we're growing.  Our focus is on building software that changes the way industries get their work done.  We typically partner with a client and build a product much like a consulting firm would.  But then we go on to productize that software and sell it to a broader market.  We don't do short term contracts and we're not a body shop.  If the software isn't going to make a big impact on our clients and our community, we won't build it.  Our two main projects at the moment are in justice and health care.  Just about the most complicated industries you can imagine, so we have no shortage of interesting challenges to work through, both business and technical.

Our entire organization believes that good code is a competitive advantage.  As a result, our development team is dedicated to Acceptance Test Driven Development in Ruby with Cucumber at the feature level, BDD with NUnit at the unit level, and Clean Code practices throughout.  And we try to use the latest technologies including ASP.NET MVC 3, C# 4, CSS 3, HTML 5, jQuery, and Ruby.  We use code reviews to maintain our high standard of quality and to share knowledge between team members.

We are building infrastructure around maintaining a rapid feedback loop and enabling the Boy Scout Rule.  Part of this is a framework of supporting code and tools we call Nails.  This includes everything from Continuous Integration, to one click automated deployment tools, to custom code generation tools (and most of that tooling is written in Ruby).  For example, a single push to the central code repository automatically builds the source, runs the unit tests, runs the Cucumber tests against a built-from-scratch test database, migrates the beta database, and deploys the beta website.  And that's just the build server.  We're also building tooling to support the TDD feedback loop.  We want developers to be able to focus on building value, not doing tedious configuration work or boilerplate coding.

We believe in passion, craftsmanship, and continuous learning.  To further that, we're currently in the process of putting together a meetup.  Basically, we're going to take our internal meeting, and open it up to the public.  Hopefully I'll have more news about that soon.

We are about a year into most of this, so it is constantly evolving as we learn and keep improving.  So, this is a very exciting time at Pointe Blank!  I'm writing all this because 1) I'm excited about it and 2) I think we are a very unusual company, especially in this part of the country, and I want to get the word out.  Plus, I kinda just wanted to brag...

Tuesday, January 18, 2011

Strings in Ruby vs. C#

In Ruby:
s1 = "string"
s2 = s1
s2 << "a"
s1.should == s2

In C#:
string s1 = "string";
string s2 = s1;
s2 = s2.Insert( 0, "a" );
Assert.NotEqual( s1, s2 );

Strings in C# are immutable, so any attempt to change a string actually produces a new string.  (That's why Insert's result has to be assigned to a variable; it never modifies the string it's called on.)  Updating s2 does not update s1.

Strings in Ruby are mutable, and variables hold references to objects.  Since s1 and s2 refer to the same String object, mutating it through s2 changes what s1 sees too.

UPDATE 1/19/2011:
To be more clear, here are some more examples of how Ruby behaves:

s1 = "string"
s2 = s1
s2 += "a"
s2.should_not == s1

s1 = "string"
s2 = s1
s2.gsub!('s','z')
s2.should == s1
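One way to see what's going on under the hood is to compare object ids.  This is just a sketch restating the examples above: << and gsub! mutate the string in place, while += binds s2 to a brand new object:

```ruby
s1 = "string"
s2 = s1

# s1 and s2 are two references to the same String object
puts s1.object_id == s2.object_id   # => true

s2 << "a"             # mutates the shared object in place
puts s1               # => "stringa" -- s1 sees the change

s2 += "b"             # + builds a new String; += rebinds s2 to it
puts s1.object_id == s2.object_id   # => false
puts s1               # => "stringa" -- s1 is no longer affected
```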

Monday, January 10, 2011

Withholding Information

I read this blog post titled Team Trap #5: Withholding Information the other day.  It tells the story of a team brainstorming meeting in which the team is eliminating ideas.  When "Harry"'s idea is eliminated, Harry takes it as a personal attack and detaches from the meeting.  The author's take is that by withdrawing from the meeting and not saying anything about his emotional state to the team Harry is withholding information that the team needs to function well.
This sort of thing happens all the time. One member of the team feels like he’s not being heard, or isn’t valued and withdraws. The rest of the group goes on, discusses, makes decisions, starts to act. The team is missing out on the intelligence, creativity and participation of that member. They won’t have his buy-in for decisions, and won’t have his full-hearted support for action. When situations like this aren’t handled, relationships fracture and trust drains away. When you’re part of a team, you need to be willing to say what’s going on for you, so that the team stays healthy and connected.
Now, if everyone took every opportunity to treat things as personal attacks and started telling the team how their emotions had been hurt we'd never get any work done.  But it is true that this kind of thing happens.  And it happens to everyone at one time or another.

That said, I think it's especially important for programmers to keep this in mind because we have a tendency to expect people to be rational, and we don't react well when they aren't.  People aren't machines, and if you're going to build a strong team it's important to remember that.

Also worth noting, "It's just business" is bullshit.  Work can't be done well without emotion.  But you do have to manage those emotions.  Yours, and everyone else's.

Tuesday, January 4, 2011

Learning to Focus

You can't program well, efficiently, and successfully unless you can focus.

The enemy of focus is distraction: your boss walking in, your phone ringing, your co-workers talking to you, emails, IMs, tweets, text messages.  These are distractions that actively steal your focus.  There are also distractions that you create yourself: reading Google Reader, Facebook, Twitter, and Reddit, talking to your neighbors, working on too many things at once, and so on.

These kinds of distractions need to be managed:

  • Turn off email notifications and check them less frequently.
  • If you are on IM, mark yourself busy when you're working.
  • If someone walks in or calls, tell them you need 20 minutes to wrap up.
  • Limit the number of things you are actively working on at one time.
  • Schedule breaks.
  • Expect to be interrupted.

The last two are especially important.  Scheduling regular breaks, as in the Pomodoro Technique, helps you stay focused during your work periods.  When you know you have a break coming up, it's easier to put off answering messages and reading crap on the internet.

Expecting to be interrupted by phone calls keeps you from getting frustrated when it happens.  It also means you have to work in small increments and keep note of where you're at so when you do get interrupted, you can get back into it more easily.

Hard Work

This is all fine and good, but at the end of the day the hardest part of focusing is that it is hard work.  When I first attempted the Pomodoro Technique I couldn't believe how hard it was to work for 20 minutes straight.  I had no idea how often I was allowing myself to be interrupted, by active interruptions like phone calls and people talking to me, but mostly by interruptions I created myself like reading crap online and talking to other people.

Actionable

Another important element of staying focused is knowing what you're working on in enough detail to actually DO SOMETHING.  This shows up in Getting Things Done: when it talks about managing your tasks, it recommends you write down both the task and the first actionable step toward completing it.  There is a somewhat subtle but important distinction there.

Focus vs. Interruption Roles

Focus gets very difficult when you have different job responsibilities too.  For example, if you are expected to program and manage a project.  Programming is a role that requires you to be "in the zone", "in flow", "plugged in".  In other words, focused.  But managing a project is an interruption-driven role: answering people's questions, meetings, reviews.  Interruption-driven roles don't work well with focus-driven roles.  To make this work, you have to find ways to set aside time to focus without skimping on your interruption-driven responsibilities.

Value Your Focus

But no matter what your role is, it's absolutely crucial that you understand the importance of focus and take your own ability to focus seriously.