Thursday, November 20, 2008

Make CMD Useful

Cmd is the command line processor in Windows. This post describes a few essential things you need to do to make cmd a useful tool.

Step #1: Stop using cmd and use Powershell.

I'm only half joking. Powershell is much more powerful and useful, so if you can use it, you really really should.

But if you can't or can't always use Powershell, here's the short list on how to make cmd useful:
  1. Use auto-completion (the Tab key)
  2. Use macros
  3. Set up colors
Auto-completion
Auto-completion is on by default. For example, go to C: and type "cd Win<Tab>". Cmd will automatically complete your thought and replace Win with WINDOWS. If that's not what you wanted, hit Tab again and it will go to the next likely match.

Macros
Macros are like abbreviations. They allow you to create a shorter name for something. For example, suppose you are looking for a certain file but you can't remember exactly where you put it on your file system. You'll be moving from one directory to another. Every time you arrive in a new directory you'll want to see if the file you're looking for is there. The command dir shows you that. But dir gives you a list with file names in one column and all kinds of information you don't care about in the other columns. That's not a very efficient use of space when all you care about is the file names.

What you really want to see is just the names, laid out in a many columned format. dir /w will do that. But you want it to pause after each page to give you a chance to read. dir /w /p does that. Finally, you want it to sort alphabetically with folders on top, then files. dir /w /p /O:GN does that.

Clearly, dir /w /p /O:GN is way too much to type over and over again as you navigate around. What we need is a macro so we can shorten that up. I use the name dw for my macro.

So how do we create macros? We use a program in cmd called doskey. First, create a text file and call it "doskeyMacros.cmd." In this file define all your macros, one per line. For example:
dw=dir /O:GN /w /p $*
Now, in cmd type:
doskey /macrofile="doskeyMacros.cmd"
This will load in all your macros. So you can now type dw instead of dir /w /p /O:GN.
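To give the flavor, here are a few more macros of the same shape (these particular ones are just examples I'm inventing for illustration, not anything special):
ls=dir /w /p $*
np=notepad $*
..=cd ..
Each line is just name=expansion, and $* passes along any arguments you give the macro.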

There is still a problem. Who wants to execute that doskey command every time they open cmd? We need that to happen automatically. To do this we will have to edit the registry. But first, create a text file called "cmdAutoRun.cmd" and type this in it:
@echo off
doskey /macrofile="C:\full\path\to\your\doskeyMacros.cmd"

Now we edit the registry. Start->Run->regedit.exe. Navigate to HKEY_CURRENT_USER\Software\Microsoft\Command Processor and in the AutoRun value (a string value; create it if it's missing) enter the full path to your cmdAutoRun.cmd file enclosed in quotes.
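If you'd rather not click through regedit, the reg command can make the same change from cmd itself (same placeholder path as above; the escaped inner quotes are what put the quotes into the value):
reg add "HKCU\Software\Microsoft\Command Processor" /v AutoRun /t REG_SZ /d "\"C:\full\path\to\your\cmdAutoRun.cmd\"" /f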

Close and reopen cmd and your macros will be loaded.

Set Up Colors
If you like the white on black, you're done! I like the PowerShell look, so I change my colors in cmd. Right click on the title bar and click "Defaults", then switch to the "Colors" tab.

I set Screen Text to r:238, g:237, b:240 and Screen Background to r:1, g:36, b:86

And there you go! Cmd is now a bit more useful than it was before. Enjoy.

Thursday, November 13, 2008

Why Write Good Code?

If you give two programmers the same task, and they both complete it in the same amount of time, and the results both pass the same unit tests, and no bugs are discovered in testing, does it make any difference whose code you choose?

Or, to put it another way, does it matter how the code is written, or does it only matter that it works?

I'm going to make the case that it does matter. Good code matters.

But why? If the code works, who really cares? After all, the purpose of code is not to be aesthetically pleasing. The purpose of code is to tell the computer how to do whatever it is that your user wants it to do. If it meets that end, why would you ask for anything more?

The answer is quite simple: because it doesn't end there.

You Will Have To Go Back
If you could write your code once, get it working, and never touch it ever again, that would be one thing. But that is not how it works in real life. You will have to go back to it.

You might go back to add something new, like a new feature.
You might go back to modify its behavior, or its look and feel.
You might go back to resolve a bug.
You might go back because you have to do something similar, and you want to see how you did it before.
You might go back so that you can refactor and use part of it for another purpose.

There are a huge number of reasons why you *might* have to go back. So extrapolate that to its logical conclusion and: You will have to go back!

Someone Else Will Have To Go Back
Worse still, it might not even be you who has to go back to that code you wrote. It could be someone else. Maybe someone who was around when you first wrote the code, or maybe a new hire, straight out of school. Or an intern, still in high school!

So not only is it very likely that you will have to go back, it's even more likely that someone else will have to go back!

They Don't Teach That In School
This is something you don't learn in school. In school, you're given an assignment. You hack on the code until it seems to work. You turn it in. The teacher black-box tests it (I've only heard of a few professors who actually read the code and graded on style, design, etc.) and you get a grade. You leave the code on your hard drive and never open it again.

In school, working code is the only thing that matters because you almost never have to go back. You certainly never have to go back a year later, when you've forgotten everything about it.

Because of this, the real hard-core nerds like to get into code golf matches. "My program was only 200 lines!" "Mine was 150 lines!" Sadly, exceedingly terse code doesn't translate into code that is easy to come back to. Quite the opposite, usually.

Good Code Matters
Good code matters precisely because you (or someone else) is going to have to go back to code you've written in the past. If it's good code, you stand a chance of being able to add features, fix bugs, update behavior, whatever. If it's really good code, you stand a chance of being able to do those things and still have good code when you're finished. If it's bad code... maybe not so much.

Though the bad code might work, it's a veritable house of cards. It may be nearly impossible to understand, and therefore nearly impossible to work with. Or it may be so fragile that the smallest change causes seemingly unrelated things to break. Or, in the least severe case, you may just have to do a ton of refactoring to make what at first glance should have been a fairly inconsequential update. Of course, in that case, the chances you do the refactoring are kind of low. It's more likely you just hack it in. And now the code is even worse than before.

There's another form of bad code. It could be riddled with errors. When you read the code, you see potential for errors all over the place. You know that no errors have been found in testing or production, but the code clearly has errors. And you have no idea how it's possible no one has run into those errors yet. What do you do if you've come back to code that looks like that?

Definitions
So that's why bad code is bad. It's very difficult to work with when you have to go back to it. You may have noticed that I've managed to get this far without ever really defining what "good code" is, or what "bad code" is.

Bad Code:
  1. Code which doesn't work or has major bugs
  2. OR Code which is difficult to work with when you have to go back to it
Good Code:
  1. Code which works and has, at most, a few minor bugs
  2. AND Code which is easy to work with when you have to go back to it
This definition has an interesting implication. You could go back to a piece of code to add a new feature and feel that it falls into the Good Code category. This is because the way the code is written has made the change you're making easy to make. But you could go back to that code again to add a different feature and this time decide it falls into the Bad Code category. This is because the way the code is written has made the change you're making hard to make.

Same code, one time it was good, one time it was bad. Agile methods and TDD talk about this in terms of "guarding against change." The simple and unfortunate truth is that you can't possibly write your code to guard against every change someone might want to make to it. You just have to take your best guess and hope it works out.

Degrees of Bad
Of course, there are degrees of good and bad code. Code that has been written in a loosely coupled fashion, that does one thing well, that applies sane design patterns, and is as simple as possible will always be easier to go back to than code that has none of those properties.

Someone may still look at it, and for their purposes, think it is bad. But clearly, it could have been worse. Much worse. Much much much much much worse! Maybe even so bad that it seems the only path forward is to completely rewrite it.

The Netscape Fallacy
A company called Netscape used to make a web browser called Navigator. There are probably people alive today who never even heard of it. It used to be the best and it had the most market share. I know, unbelievable.

After Navigator 4.x they decided that the code was simply too hard to deal with, and if they were going to make any progress moving forward they would have to rewrite it. Three years later they came out with Netscape 6. But by then, Internet Explorer had stolen all but 10% of the web browser market.

And then Netscape was bought by AOL (read: went out of business). And sometime later, Firefox was born! Which was able to climb back to having something like 12% of the browser market!

Joel Spolsky writes about this in Things You Should Never Do.
And again in Netscape Goes Bonkers (I think it really pissed him off).

The lesson to take away is that receding into a hole for three years so you can rewrite your code is not a good idea.

Why? Because everyone else passes you by in the meantime. Because you end up with "Good Code," but it has more bugs than the "Bad Code" had.

What's the alternative? Write good code to start with. Failing that, take baby steps. That is, refactor the code, don't throw it all away. Failing that, refactor in parallel with ongoing maintenance and development. And last but not least, go out of business.

Why Write Good Code?
So why write good code?

Good code saves you time in the long run. You will have to go back to that code. If it's good, you'll be able to work effectively with it. If it's bad, you're screwed.

Good code helps keep you from falling prey to the Netscape Fallacy. You'll have a better chance of updating and refactoring your code to stay modern if it's good. If it's bad, you're stuck having to rewrite.

Good code gives developers a fighting chance when they have to work on code they didn't write. They'll probably still think it's bad code. But they'll be more likely to survive the experience.

But Doesn't Good Code Cost More Up Front?
In our hypothetical scenario that started this whole post, two developers took the same amount of time to write code to do the same thing, and both ended up with code that worked and had no bugs.

You may have scoffed at that and asked, "In reality, doesn't it take longer to write good code than bad code?" And further, "Are the benefits of good code worth the up front time investment?"

First off, it won't always take longer to write good code. An experienced developer can write good code in the same time it takes an inexperienced developer to write bad code. So it doesn't necessarily take longer to write good code.

But it clearly takes time for an inexperienced developer to become experienced. And it will take time for them to experiment as they try to learn how to write good code. After all, it's one thing to know the definition. It's something else entirely to actually fulfill the definition. I'm certainly still working on that one.

So the second question should actually be, "Are the benefits of good code worth the up front cost of developers learning how to write good code?"

Given all the downsides to bad code, and all the up sides to good code, I think the answer to that is a clear "Yes!"

Thursday, October 16, 2008

Remap Ctrl and Caps Lock

I used to think that I was the only person who did this, but as my horizons have widened slightly I've realized I'm just one in the crowd. But it's still a minority group to be sure, so I'm doing my part to evangelize.

The standard keyboard layout puts the Caps Lock key to the left of the A key and it puts the left Ctrl key in the bottom left corner. This means Caps Lock is basically on the home row while Ctrl is behind and almost under your hand. This seems backwards to me, as I never use Caps Lock but I use Ctrl about 10,000 times a day (Ctrl+S, Ctrl+click in Firefox, Ctrl+C, Ctrl+V, Ctrl+Z, etc).

So, as I briefly mentioned in a post about Keyboards almost a full year ago, it would seem to make sense to swap the Caps Lock and Control keys so that the key you use frequently is on the home row.

When I first did this a very long time ago I muddled around deep in the Windows registry and came up with a .reg file you could use. But I recently stumbled on a much easier way: just use SharpKeys. If you use an operating system other than Windows, hit up Google and you'll find tutorials on how to do it for your OS.
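For the curious, the registry route goes through the "Scancode Map" value. A .reg file along these lines does the swap (0x3A is the Caps Lock scancode, 0x1D is left Ctrl; this is a sketch of the idea, and you'll need to reboot for it to take effect):
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,03,00,00,00,1d,00,3a,00,3a,00,1d,00,00,00,00,00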

Tuesday, October 14, 2008

Vim Learning

This is the ninth post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

At this point we have a very flexible, fast, and relatively easy to use environment for editing, all within Vim. As you can see, Vim is not the type of editor that does everything you want out of the box. The Vim philosophy seems to be: be capable of doing everything, but make no decisions on behalf of the user. In other words, where other software would have picked a bunch of defaults that they thought were best, Vim does nothing and lets you pick what you want. The downside is it takes time, and it can be hard to even know features exist. The upside is you can configure it just how you want it, and I get 9 blog posts out of it.

This post is about how you find out what Vim can do, and how to set it up. Basically, this is how I figured out all the things in the last 8 posts.

#1 Vim Help
The best way to learn about Vim, what it has to offer, and how to configure it is to browse around the help. Just do :help and start navigating around. Make sure you read the first section of help.txt as it teaches you how to navigate the help. The most important thing to know is that CTRL-] follows a link ("tag") and CTRL-T takes you back.

Also, read :help help_context as it teaches you how to find help on the commands in a certain Vim mode (normal, visual, insert, etc).

If you're looking for something specific but you don't know what Vim calls it, my best advice is to just not get discouraged. Keep fishing with :help, trying different possible words. Don't be in a hurry. Stay in the mindset of browsing around. Read what you land on; if it doesn't look right, keep looking for clues before giving up. Frequently all you need to know is Vim terminology.

#2 Google
When fishing through Vim Help fails you, try Google. This may teach you the terminology you needed. If so, dive back into the Vim help.

#3 Vim Tips
The Vim Tips Wiki is the next best place to go. Here you'll find all kinds of stuff people have come up with that might be helpful for you. Most of the scripts I've written are simply adaptations of scripts I found on the Vim Tips Wiki. These can be somewhat scattered. You'll frequently have to read through all the "duplicates" because each will have slight variations. Generally you can get what you want by picking between the variations.

#4 Vim Scripts
The Vim Scripts page is similar to the Vim tips but contains more robust scripts rather than simple tweaks and tips. This is where you'll find things like the Snippets script.

#5 Real People
Sometimes you just can't get an answer on the web or from the help. In those cases, it's best to ask someone who may have more experience. I've had a lot of help from the guys on the Vim IRC channel (server: Freenode, channel: #vim). If you don't have an IRC chat program I recommend ChatZilla, which is an add-on for Firefox.

Armed with this, you should be able to get Vim to do anything it's capable of. So if you come up with (or come across) any cool or useful scripts/tips/etc, please let me know!

Thursday, October 9, 2008

Vim File Editing

This is the eighth post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

In Visual Studio you have a solution which contains all of your files. You simply locate the file by clicking around through the tree structure, then double click to open it. When editing more than one file, Visual Studio puts each file in a tab.

In Vim you open files to be edited with
:e path
You can use the tab key to auto-complete directory and file names as you type them. If you don't know the name of the file and you need to click around a file browser to find it, do
:browse :e
This will open a standard file browser dialog and then open the file you select.

Vim uses the concept of a buffer when editing multiple files. Buffers aren't visible like tabs, but otherwise they're basically the same. To see what buffers are open do
:ls
You'll see a numbered list. To go to one of those do
:b2
where 2 is the number from the list of the buffer you want to switch to. Or if you'd rather use the name of the file do
:b file
where file is the name. Use the tab key to auto-complete from all the open buffers. This auto-complete will even work if the file doesn't begin with what you typed. For example, type "gram" and it will find "program.cs". This is tremendously helpful!

If you're looking for alt-tab like functionality so you can keep switching back and forth between the same two buffers do
:b#
This is another really useful command to remember.

Vim can also split the window and show you multiple buffers simultaneously, or the same buffer at two different locations. There are tons of ways to do this (see :help windows.txt). I'll show you the way I like to do it because I find it to be the easiest to remember.

To open a new horizontal split do Ctrl+w s. For a vertical split do Ctrl+w v. Then just use :e or :b to work on whatever file you want in that split (by default it will open to whatever was in the window when you opened the split).

To close a split do Ctrl+w q. To move between splits do Ctrl+w {h,j,k,l}. h,j,k,l move the cursor around when you're in command mode: left, down, up, right respectively. When you use them with Ctrl+w they move between splits instead of moving the cursor, but the direction stays the same so it's easy to remember. For example, if you have two open splits and you're working in the bottom one, Ctrl+w k will move you to the top one. Then Ctrl+w j will move you to the bottom one.

Vim also has support for normal visible tabs, but I find its buffer support to be easier to use in the long run. Especially since you can only have so many tabs open before they become useless, and since file names (and paths) can be too long to display reasonably on a tab (just look at MS SQL Management Studio...). If you're interested, :tabe opens a new tab; see :help tabpage.txt for more details.

The last major feature is one that Resharper adds to Visual Studio (and is also available in TextMate, I hear) which allows you to search for a file you want to open when you only know part of its name and you don't know exactly where it is. I have found this to be unbelievably useful, so I wanted to have the same ability in Vim. To do this I adapted a script from Vim Tips to Windows (and removed the Perl dependency). Add the following to your vimrc:
function! Find(name)
    let l:_name = substitute( a:name, "\\s", "*", "g" )

    let l:files = system( "dir *".l:_name."* /B /S" )
    let l:list = split( l:files, '\n' )
    let l:len = len( l:list )

    if l:len < 1
        echo "'".a:name."' not found"
        return
    elseif l:len != 1
        " more than one match: show a numbered list and ask which to open
        let l:i = 1
        let l:cwd = substitute( getcwd(), '\\', '\\\\', "g" )
        for line in l:list
            echo l:i . ": " . substitute( l:line, l:cwd, "", "g" )
            let l:i += 1
        endfor
        let l:input = input( "Which ? (<enter>=nothing)\n" )

        if strlen( l:input ) == 0
            return
        elseif strlen( substitute( l:input, "[0-9]", "", "g" ) ) > 0
            echo "Not a number"
            return
        elseif l:input < 1 || l:input > l:len
            echo "Out of range"
            return
        endif

        let l:line = l:list[l:input-1]
    else
        let l:line = l:list[0]
    endif

    let l:line = substitute( l:line, "^[^\t]*\t./", "", "" )
    execute ":e " . l:line
endfunction

command! -nargs=1 Find :call Find("<args>")

Note: this script depends on the DOS dir command. It would have to be modified to work on a different system.

To use it simply type
:Find part of file
This will search recursively from your current directory (:cd) for files whose names contain *part*of*file*. It will then display a numbered list of matches and ask you which you'd like to open. Simply type in the number and hit enter, and the file opens.

With these techniques at your disposal you can now edit many files in a way which I believe to be much superior to what Visual Studio has to offer.

Update 5/4/2010:
Added :b# for alt-tab like functionality

Wednesday, October 8, 2008

Vim to and from Visual Studio

This is the seventh post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

Now that we've got Vim set up to a point where it's quite useful for doing C# development, we need to start actually using it. As I've discussed in the past, there will still be lots of times when you'll want to use Visual Studio. Whether it be for the Designer, or for exploring APIs you're unfamiliar with through Intellisense, or for manipulating resources, settings, project, or solution files, or for debugging.

VS -> Vim
The key is to be able to get back and forth between them easily. To get from Visual Studio to Vim is the easiest. Simply add an "external tool" which launches Vim and opens the current file. In VS go to Tools -> External Tools. Click Add and enter the following:
Title: Vim
Command: C:\Program Files\Vim\vim70\gvim.exe
Arguments: +$(CurLine) "$(ItemPath)"
Initial directory: $(SolutionDir)

There will now be a "Vim" item in your Tools menu which will open Vim to the current file AND the current line.

You can assign a shortcut key to this command as follows.
  1. Note exactly where the "Vim" command appears in the Tools menu (How many items from the top or bottom is it?)
  2. Go to Tools -> Customize -> Toolbars
  3. Click on the Tools menu so it opens and find where the Vim tool was, it will now say External Command X where X is some number. Remember X.
  4. Back on the Customize window, click "Keyboard..."
  5. Type "ExternalCommandX" into the "Show commands containing" box, where X is the number you just found
  6. Choose your shortcut key and assign it. I use Ctrl+Shift+V, Ctrl+Shift+V
Now you can open Vim from Visual Studio with a simple shortcut key sequence.

Vim -> VS
To get from Vim to Visual Studio is pretty easy as well. Assuming that your current path (:cd) in Vim is the path containing the .sln file, you can simply type the following:
:! *.sln

When you hit the tab key Vim will automatically expand to the full name of the sln file. Hit enter and the solution will open in Visual Studio.

Tuesday, October 7, 2008

Vim Code Folding

This is the sixth post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

Visual Studio has a feature called "Outlining" which automatically allows you to collapse and expand regions, comments, methods, classes, namespaces, using statement blocks, etc. Ctrl+M, Ctrl+M will expand and collapse the block that your cursor is in. Ctrl+M, Ctrl+O will collapse all methods and summary comments (but not classes and namespaces).

Vim has this same ability built in but calls it Folding. The difference is simply that it has to be turned on if you want it, and that it doesn't understand your code as well as Visual Studio does (though it can be taught!).

In Vim, there are a number of different ways that folding can be done: syntax based, indent based, marker based, and manual. Syntax based looks for folding to be defined in the syntax file for the language being edited. Indent based folds lines that are at the same indent level. Marker based looks for a given character sequence and folds everything between those characters. Manual allows you to define where the folds should be.
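Each of those corresponds to a value of the 'foldmethod' option, so turning one on is a single set command:
" pick whichever suits the file type
set foldmethod=syntax   " folds defined by the language's syntax file
set foldmethod=indent   " fold lines at the same indent level
set foldmethod=marker   " fold between markers (see 'foldmarker')
set foldmethod=manual   " define folds yourself, e.g. with zf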

I like to use the Syntax folding that's defined in the default Vim C# syntax file. This will fold #region ... #endregion blocks.

To make sure this is always on when I'm editing C# files I added the following to my vimrc:
if !exists("autocommands_loaded")
let autocommands_loaded = 1

" setup folding
autocmd BufNewFile,BufRead *.cs set foldmethod=syntax
endif

Note: if you've setup an autocmd for C# building as in my earlier post make sure you use the same if !exists()... block.

The following commands are used to open and close folds:
zo - open fold under cursor
zc - close fold under cursor
zR - open all folds
zM - close all folds

You can remember the "z" by thinking of those old accordion reams of paper where each sheet was connected to the next and imagining it from the side as you lift and lower the top sheet. From that profile the paper will appear to form a "z" as you fold and unfold it.

See :help fold.txt for more details.

Monday, October 6, 2008

Vim Snippets

This is the fifth post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

Snippets are code templates that you can have inserted into your code. It's kind of like copying and pasting from a file with a bunch of common code blocks in it, but faster. Visual Studio has support for snippets, but they require bouncing through menus to find what you want, so they're not that convenient. As such, I don't know anyone who uses them. The only snippet Visual Studio does have that I use is the one for summary comments. When you type '///' it expands to a full summary comment block. It even includes your parameter names and everything, which is pretty sweet.

The TextMate editor on the Mac recently brought a lot of attention to the concept of snippets, and so now everyone loves them. Mostly because the way TextMate implemented them was easy to use, and because it came with a lot of good predefined templates for Ruby on Rails.

Not surprisingly, someone wrote a script for Vim that mimics TextMate's snippet support. Follow that link, then follow the instructions and you'll have it all installed.

I then created a cs_snippets.vim file in ...\vimfiles\after\ftplugin to define a few C# snippets. This is what my file looks like now:
if !exists('loaded_snippet') || &cp
finish
endif

" summary comment
Snippet /// ///<summary><CR><{summary}><CR></summary>

" object declaration
Snippet dec <{Type}> <{VarName}> = new <{Type}>();

"foreach
Snippet foreach foreach ( <{Type}> <{Var}> in <{Coll}> )<CR>{<CR><CR>}

"try/catch
Snippet try try<CR>{<CR><{}><CR>}<CR>catch<CR>{<CR>}

With these snippets you can type: ///<Tab> and it will expand to:
///<summary>
///<{summary}>
///</summary>

The summary tag in the middle will be highlighted and you can simply type whatever you want to write over it. Then hit tab to commit the value you typed and move to the next tag, if there is one. If you have the same tag name in more than one place in your Snippet all occurrences will be replaced.

Check out :help snippet for more details.

Let me know of any useful snippets you use too.

UPDATE 12/23/2009:
I recently switched from using the SnippetsEmu plugin I wrote about in this post to using SnipMate.  The main reason I switched is that SnipMate allows you to define your snippets on more than one line, which makes them much easier to get right and keep up to date.  SnipMate also seems to work a little better when you're filling in your snippets.  I highly recommend you check it out.

Friday, October 3, 2008

Vim Help Integration

This is the fourth post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

The last post on Vim Intellisense talked about using ctags to navigate to the definition of a class/method and read the documentation there. This covers any code you, or your company, has written, but it leaves out all of the .NET framework's objects. That's a pretty big oversight.

The problem is we need to know what methods exist on a .NET object, or what parameters a .NET object accepts, or just general help on a certain .NET object. This information is all easily obtained from the MSDN library, a largely underutilized resource, as most developers I know depend primarily on Intellisense instead.

To solve this, we'll setup Vim so that you can put the cursor on any word and hit F1 and Vim will automatically open your browser of choice and do a search of the MSDN library for that word. To do this, add the following to your vimrc:
" setup integrated help
function! OnlineDoc()
let s:wordUnderCursor = expand("<cword>")

if &ft =~ "cs"
let s:url = "http://social.msdn.microsoft.com/Search/en-US/?Refinement=26&Query=" . s:wordUnderCursor
else
execute "help " . s:wordUnderCursor
return
endif

let s:browser = "\"C:\\Program Files\\Mozilla Firefox\\firefox.exe\""
let s:cmd = "silent !start " . s:browser . " " . s:url

execute s:cmd
endfunction

map <silent> <F1> :call OnlineDoc()<CR>
imap <silent> <F1> <ESC>:call OnlineDoc()<CR>
Notice that I use Firefox, and that the help site to use is determined based on the file extension of the current file. This allows you to set this up for any language you work with. Finally, if the extension isn't defined it'll open up Vim's help.

Thursday, October 2, 2008

Vim Intellisense

This is the third post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

Before you get excited, I can't tell you how to get Intellisense setup in Vim. I can just tell you how to get close. I would say close enough, but you'll be the judge.

Intellisense in Visual Studio can do the following things:
  1. Complete variable, class, and method names as you type them
  2. Show you what methods/properties/etc are available on a given class after you type "class."
  3. Show you summary documentation on methods/properties/classes/parameters
Visual Studio can also take you to the code that defines a class/method/property with "Go To Definition".

In Vim, I can do #1 (sort of) and "Go To Definition." I claim this is "good enough" in most circumstances because Go To Definition takes you to the code where the method documentation resides AND it makes it easy to get right back to where you came from. This is obviously much more work than it is in Visual Studio where all that information is simply at your fingertips, but I found that the majority of the time the information is in my memory. And when it isn't, going to the definition and then coming back isn't so costly that it is a deal breaker, at least for me.

But in the interest of being very plain and transparent about this, Vim can't even come close to touching Intellisense, and there are many times when Intellisense makes life a lot easier. In these times, I work in Visual Studio. With that out of the way, on to how to do this in Vim!

The first part, Word Completion, is built in to Vim and you don't have to do anything at all to turn it on. Simply hit <ctrl+n> and Vim will autocomplete the word you're typing. If there is more than one match, it will show them in a popup menu. Use <ctrl+n> and <ctrl+p> to move forward and backward through that popup. Type anything to select the word. Vim uses every word in every buffer you've opened (in the current session) to match against, so it will always work on variable and method names in the same file and will work on those in other files if you've opened that file.

The second part, Go To Definition, requires more work. First you have to get a program called ctags. When you run ctags (from the shell) it builds a "tags file" which is basically a dictionary of every method and class name in every file you told it to parse. Vim knows how to parse a tags file, so you can then position your cursor on a method name in Vim and type <ctrl+]>, and Vim will take you to the file and line where that method is defined. After you've found what you need and are ready to return to where you came from simply type <ctrl+t>.

To run ctags just go to the top directory of your source code and run:
ctags --recurse
This will parse all the files in that directory and all its subdirectories and then create a "tags" file in that directory. By default Vim will look for tags files in Vim's current directory and in the directory where the file you are editing is located. The only trick here is that you have to keep the tags file up to date by manually re-running ctags if you add or remove methods.
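If re-running ctags by hand gets tedious, one naive way to automate it (my own hack, with the obvious cost that saving gets slower on big code bases) is an autocmd that rebuilds the tags file every time you save a C# file:
" assumes Vim's current directory is the top of your source tree
autocmd BufWritePost *.cs silent! !ctags --recurse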

Wednesday, October 1, 2008

Vim TFS Integration

This is the second post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

Where I work, we use Microsoft's Team Foundation Server Source Control. It's not very good at merging, but other than that it's a good tool.

Unlike some other source control tools, in TFS a "get" downloads the latest version of a file, and a "checkout" lets you edit a file. Basically all a checkout does is make the file not read-only, but you have to do it or else TFS won't let you check the file in when you're done.

Because of this, if you're using Vim to do your work, when you first open a file and try to go into edit mode Vim will warn you that the file is readonly. This reminds you that you need to check it out. It would be very annoying if you had to switch over to the command line (or VS!) to check it out just so you could start editing. So don't do that! Just add this to your vimrc:
" setup TFS integration
function! Tfcheckout()
exe '!tf checkout "' expand('%:p') '"'
endfunction
command! Tfcheckout :call Tfcheckout()

function! Tfcheckin()
exe '!tf checkin "' expand('%:p') '"'
endfunction
command! Tfcheckin :call Tfcheckin()

The "expand" part will expand to the current file with it's full path included (Note: you have to use the execute command instead of running the ! command directly or else you'll have problems with parenthesis not being properly escaped in your file path).

Now you can simply type :Tfcheckout, and the file will be checked out. When you're all done type :Tfcheckin and the TFS checkin dialog will open, allowing you to enter a comment, checkin any other files, associate the checkin with a TFS work item, etc.

Monday, September 29, 2008

Vim C# Compiling

This is the first post in my series on using Vim to do C# development. You can read the introduction into why someone might want to do that here.

The most important thing to be able to do comes in two parts:
  1. Compile
  2. Step through errors
In college we did development in Java without any kind of IDE. So you typed in your text editor. Then switched to the DOS prompt and "javac"ed. When there were errors, which there always were, you opened the file with the error and navigated to the line where it occurred. This was slow and tedious, but not the end of the world. Until you get used to Visual Studio and being able to F5, and then simply double click on an error.

How do we do the same thing in Vim? First, we have to compile. To start with, I'm going to compile using the following command:
devenv slnname.sln /Build Debug

This will perform the build exactly like Visual Studio would, just without opening Visual Studio. I put this command in a .bat file in the same directory as the .sln called build.bat.

To get Vim to use this batch file to compile:
set makeprg=build.bat

Now from within Vim just type :make when your current directory (:cd) is the one containing the build.bat file, and your solution will build.

What about parsing out the errors? Easy:
set errorformat=\ %#%f(%l\\\,%c):\ %m

This tells Vim how to parse the error output.

Now if there are errors, hitting Enter will take you directly to the file and line where the first error occurred. :cn will take you to the next and :cp will take you to the previous. :cl will list all errors. Read :help quickfix.txt for more.

You could also use msbuild to compile your solution. The command for this is simply msbuild /nologo /v:q. It will find the .sln in the current directory and build it for you. However, with this command you have to modify your .csproj so that the error output includes full paths (otherwise the errorformat won't be able to parse it). To do that, just add the following line to the common PropertyGroup element of your .csproj:
<GenerateFullPaths>True</GenerateFullPaths>

The error format is the same as for devenv. But you don't need the build.bat (if you will always have only one solution in the directory at a time), instead you can use this:
set makeprg=msbuild\ /nologo\ /v:q

Now you can happily compile C# solutions and step through errors from right inside Vim.

If you don't like having to add the GenerateFullPaths node to your project file, you can add it to the msbuild command line instead as follows:
:set makeprg=msbuild\ /nologo\ /v:q\ /property:GenerateFullPaths=true

There is one more enhancement we can make. Vim has the ability to easily switch between compilers and set all the settings that the compiler needs as part of the switch. You can enable this by creating a "devenv.vim" file in vimfiles\compiler (on Windows) containing the following code:
" Vim compiler file
" Compiler: DevEnv

if exists("current_compiler")
finish
endif
let current_compiler = "devenv"

if exists(":CompilerSet") != 2 " older Vim always used :setlocal
command -nargs=* CompilerSet setlocal <args>
endif

" default errorformat
CompilerSet errorformat=\ %#%f(%l\\\,%c):\ %m

" default make
CompilerSet makeprg=build.bat

To use this, just type :compiler devenv. Or, if you want Vim to always use this anytime you're editing a C# file (a file with a .cs extension) add the following to your _vimrc:
" setup C# building
if !exists("autocommands_loaded")
let autocommands_loaded = 1
autocmd BufNewFile,BufRead *.cs compiler devenv
endif

If you have any improvements to add, please let me know in the comments!

Sunday, September 28, 2008

Visual Studio Development

When I work on C# programs (which is what I do for a living, so I do it a lot), I work in Visual Studio. Currently, that means VS2008.

VS has lots of wonderful features, including:
  1. Intellisense
  2. "Go to definition"
  3. Integrated building and error output parsing
  4. Integrated debugging
  5. Solution/Project management
  6. Windows Forms/WPF designer
  7. Basic Refactoring
You can add Resharper, and then you get more, including:
  1. Compile as you type
  2. Code form suggestions
  3. Find file by name search
  4. Find method/class by name search
This is all wonderful, but it comes at a price. Namely, speed. Because VS can't possibly know when you're going to want what features, it has to prepare them all continuously. This means opening a solution takes forever. Opening the designer takes forever. Building takes forever. Add Resharper and your problems just get worse.

It's a trade-off. Lots of people are willing to sacrifice speed for the convenience of these features. Until recently, I was one of these people.

A lot depends on the kind of work you do. For a while now, my kind of work has had me opening and closing different solutions many times throughout the day, as well as having multiple instances of VS open at the same time.

First I got rid of Resharper. I had to. There were times when I could type faster than the letters could appear on the screen because it was busy compiling my changes. Plus, I realized that I didn't like its implied style of development. With Resharper, it's assumed that the code you're writing should be perfect the first time. So it's always formatting it for you, and suggesting shorter ways you could write it, or pointing out that you could move this variable from here to there, etc.

I discovered that I don't write code line by line, I write it in "logical units", working through the details. Then I go back and make everything perfect when I'm satisfied with it. My mind is on more than just the one line I'm currently typing. I'm thinking about the whole thing. So when Resharper wants me to stop and think about some minor code form improvement, or when it suddenly moves my code around "formatting" it, it only serves to distract me.

So Resharper had to go. I did miss knowing that I'd typed an error without having to wait for a build. And I also missed being able to open a file by just typing its name in the search box. But I found those to be very minor niceties overall.

There's this article that Steve Yegge wrote called Effective Emacs. He talks about all kinds of minor little Emacs details. But his larger point is that because he knows how to use Emacs so well, and because it does only what he asks it to, he is very efficient with it.

This got me thinking about Visual Studio, and all the time I spend waiting for it, or clicking around looking for things (files in Solution Explorer, methods/classes in Intellisense). I realized that the power of Visual Studio is that it tries to do so much for you, and hide so much (mostly) irrelevant detail from you. This makes it easier to learn how to develop things. And it can also save you time by generating code, and build scripts, etc for you.

Intellisense is the best example of this. It's always guessing what you're probably trying to type. This is great because it helps remind you what the full method names are, or what parameters they accept.

Visual Studio's whole goal in life is to require the programmer to do as little work and put in as little effort as possible. On the surface of it this seems great. But I suspect that less work and less effort doesn't always result in greater productivity.

For example, it's easier to randomly click through Intellisense, looking for something that might do what you want than it is to look up documentation. But the documentation is more likely to show you the right way to do what you're looking for, and explain it to you. You just have to put in more effort reading through it.

So, wouldn't it be better in the long run to use a tool that is very powerful, but stupid? Allowing you to do exactly what you want, but not trying to guess what you want, and not trying to do it all for you?

It seems quite clear to me that if you're willing to put in the effort, you could get a better development experience using something other than Visual Studio. You'll spend less time waiting for VS to do things for you, and more time thinking about just how you want to do it. You'll get to state what you're looking for, rather than clicking all over trying to find it. Basically, your mind will be more actively engaged, instead of passively waiting for VS to solve problems for you.

I'm not suggesting that Visual Studio is worthless. I'm just suggesting that there may be times when using a different tool would be better, and other times when VS would be better. Why not figure out what those times are, and use the right tool at the right time?

I've been working on that. My tool of choice is gVim, because I know it the best, and I like the modal editing concept better than shortcut key acrobatics. I'm going to write a series of posts documenting everything I've done to try and use Vim to develop C# in an environment where everyone else is using only VS.

In the meantime, let me know what you think about the idea of VS = least possible effort != greater productivity.

Links to Posts in Series:
Vim C# Compiling
Vim TFS Integration
Vim Intellisense
Vim Help Integration
Vim Snippets
Vim Code Folding
Vim to and from Visual Studio
Vim File Editing
Vim Learning
Vim File Navigation

Monday, September 22, 2008

MVC and MVP

Do a Google search for MVC MVP and you will find no shortage of posts on the topic.

If you want pictures and diagrams and code samples I'll let you do the Google search. But, in brief, MVC stands for Model View Controller, MVP stands for Model View Presenter. What's the difference?

There is no difference! Well, if you look at it in enough detail, you'll be able to come up with all kinds of stuff, but if you look at the overall picture, there's really no practical difference.

Ok, I knew you wouldn't be happy with that answer... In MVC the View is stateless. When you interact with it (by clicking a button, for example) that action is forwarded directly to the "Controller" which then renders a new View.

In MVP, the View may or may not have state. In either case, when you interact with it the View handles that action (aka, the View has hooked into an event) and forwards the action into a call to the "Presenter." Unlike in MVC where a new View is rendered, in MVP the Presenter has to cause the View to update somehow. This can be done with data binding, or by the method on the Presenter returning data that the View parses, or by the Presenter raising an event that the View receives, or even by the Presenter calling a method on the View...
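To make that concrete, here is a bare-bones C# sketch of the MVP shape (all of the names are invented for illustration):
// The View's contract; the Presenter never sees a concrete Form or Window
public interface IOrderView
{
    string OrderId { get; }
    void DisplayTotal( decimal total );
}

public class OrderPresenter
{
    private IOrderView _view;

    public OrderPresenter( IOrderView view )
    {
        _view = view;
    }

    // The View's click event handler forwards here, and the
    // Presenter pushes the result back into the View
    public void CalculateTotalRequested()
    {
        decimal total = LookUpTotal( _view.OrderId ); // stand-in for a Model query
        _view.DisplayTotal( total );
    }

    private decimal LookUpTotal( string orderId )
    {
        return 42m;
    }
}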

There is one other element. In MVC, there is a Controller for every View. In MVP, you don't strictly have to have one Presenter for every View. You could have many Presenters per View. You could have a Presenter for every "Control" in your View. Or you could have a Presenter for logical regions of your View.

In the end, the View is the same in both cases: it displays stuff. The Model is the same in both cases: it represents the data. And the Controller/Presenter is the same: it does stuff that the user asked for.

MVC is very applicable to the web, where actions on the rendered html page "post back" to the Controller on the web server.

MVP is very applicable to Windows Forms and WPF where actions on the Form/Window raise events which delegate the majority of the work to the Presenter.

Clearly, these patterns are pretty simple. But they get a lot of attention, and there is a lot of confusion surrounding them. Partly this is because Computer Scientists all have a bit of the Martin Fowler syndrome in them, so we feel like we need to classify everything. We like to have everything all lined up and tagged and placed in the right pigeon hole. Sometimes this is a good thing, other times it can lead to a "can't see the forest for the trees" type of problem.

But mainly, I think these patterns confuse people because they're named horribly. In MVC, the Controller doesn't control the View. The word "controller" invokes an image of a puppet master pulling the strings. The Controller is much more hands off than that. It's really more like an Interpreter or a Router which says, "Ah, I see you clicked the Send button. Allow me to route that to the proper sending authorities, and then I'll send you a new view." So while it certainly does control the application and how the view responds to action, "Controller" is just too loaded of a word.

In MVP, I really can't understand where the word "Presenter" came from. Maybe it's supposed to make you think of a presentation where the Powerpoint slides are the View, and the person talking and moving the slides ahead is the Presenter. But in reality it sounds more like there are two views... I think of the Presenter more like a Calculator. You're working out some equation on paper (the View) and using a calculator to execute certain functions for you, the results of which you record on the paper.

In the end the titles get in the way. What we want is a way to separate as much of our logic as we can from the display. This allows us to do a few things:
  1. Test our logic with TDD
  2. Change the display without requiring huge changes to our logic
  3. Change our logic without requiring huge changes to our display
At least, this is how I'm looking at it right now. I reserve the right to learn something new tomorrow and start from scratch.

Thursday, August 21, 2008

DIP is not for reuse

The Dependency Inversion Principle is this wonderfully clever concept for decoupling layers. I've written a bit about it before here.

To summarize, the idea is that your high layers define interfaces that your low layers implement. Thus insulating the high layer from the low layer. This fits in very nicely with TDD as you can write/test the high layers first and mock out the low layers.

So DIP is good stuff. But it doesn't play well with reusable component architectures. That is, if you're designing software components which you intend to use in many different places (some of which may be as yet unknown), you can't use DIP between them.

Maybe it's time for an example. Suppose we have a Component A and a Component B. Now we're writing a third Component C which will use A and B. Finally, we have an Application App which will use A, B, and C.

A, B, and C may all use DIP to structure the layers within the components. But you wouldn't want to use DIP between the components. If we tried to do that, we would have to define two interfaces in C that described how it used A and B. And three more interfaces in App that described how it used C, B, and A. Then A and B would both have to implement interfaces defined in C and App. Plus C would have to implement interfaces defined in App. And now you can't reuse A, B, or C in another App (without including App).


So what do we do if we want our COMPONENTS to be independent, just like our layers? Just change it slightly. Create an assembly for each component that contains interfaces used to interact with the component (ex: A.Contracts). Make A implement these interfaces, and let anyone who wants to use the component reference the Contracts assembly.
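In C# terms the layout looks something like this (assembly and type names are invented for the example):
// In A.Contracts -- interfaces only, no implementation
public interface IMessageSender
{
    void Send( string message );
}

// In component A -- references A.Contracts
public class SmtpMessageSender : IMessageSender
{
    public void Send( string message )
    {
        // actually send the message
    }
}

// In C or in App -- compiles against A.Contracts only, so A can be
// swapped or reused without anyone referencing App
public class Notifier
{
    private IMessageSender _sender;

    public Notifier( IMessageSender sender )
    {
        _sender = sender;
    }

    public void NotifyAdmins()
    {
        _sender.Send( "something happened" );
    }
}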


If you have another approach, or a similar issue, or you know, stuff to talk about... Let me hear it.

Monday, August 11, 2008

Users don't want help

This may be one of those obvious, everyone already knows it, stop repeating it like you're the first person to think of it, topics. But it's still true. And it's still important to remember.

Users don't want help.

This blanket statement applies to both types of users: software and drugs. I'm going to focus on the software type.

Think of three programs you use frequently. Let's pretend you're thinking: Internet Explorer, Microsoft Outlook, and Microsoft Office (you're clearly very into Microsoft software, aren't you?). Have you ever, in the entire time you've been using them, used the help system?

If you're like me, the answer is no. While writing this I realized I didn't even know if Internet Explorer HAD help (I checked, it does).

Now have you ever had a problem in any of those programs? Of course you have. But you didn't go to the help. Why not?

Some people will answer by saying that the help is useless. Mostly this is true. But I know there was a time when that wasn't true. For example, I've been told that the help in the old DOS Word Perfect was fantastic. I think this is a chicken and the egg problem. What came first, bad help, or people not reading the help?

My guess is that software vendors actually realized no one was reading the help and so started putting less effort into it. Then people in desperate situations tried using the help and discovered it wasn't very good, so they stopped referring to it even in the most dire circumstances. The vendors still have to provide some kind of help of course. It's expected of them. But they certainly don't have to waste their time making it good!

It all comes back to the fact that users don't want help. When I'm trying to do something, I don't want help doing it. I just want to be able to figure out how to do it. Right there. On the spot. And if I fail at that, I want someone else to do it for me. (That's why Linux forums are always full of "RTFM!")

If you can't give people help to get them to understand your application, how do you do it? In Steve Krug's awesome book, "Don't Make Me Think" he has this advice: If you can't make it self evident, make it self describing.

It's much harder than it sounds. But it's also more important than it seems. After all, if your users can't figure it out and they're not going to ask for help, they have only one option left: To make like a tree and get the heck out of there. I mean, leave.

Friday, July 25, 2008

Static Languages Don’t Trust You

Some time ago I was thinking about Interfaces and class inheritance in C# and how hard it can sometimes be to get what you want.

For example, in WPF there are many controls which inherit from Selector: ComboBox, ListBox, ListView, and TabControl. Selector includes SelectedItem and SelectedValue properties as well as a SelectionChanged event.

One notable control missing from the list of Selectors is the TreeView. This control is an ItemsControl but is not a Selector. However, it still has SelectedItem and SelectedValue properties.

The reason why TreeView is not a Selector is that its items don’t have indexes (since they’re hierarchical). Selector includes a SelectedIndex property which wouldn’t make any sense on a TreeView. TreeView also provides a SelectedItemChanged event instead of a SelectionChanged event which includes more information about the change.

Clearly, it makes sense for TreeView not to be a Selector, even though it does have many similar properties. The problem is that there is no common class or interface that TreeView and ListBox (for example) both implement. This means you can’t write common code to accept either a TreeView or a ListBox and get the SelectedValue out of it.

You might need to do this if you were dynamically displaying controls, or creating any kind of framework to work with controls. To accomplish this, you now have to create your own interfaces that describe what you need (SelectedValue, ValueChanged) and then create wrappers for each of the built in controls that implement your interface. Or, you create one big class that knows how to talk to all the different types of controls. I think the first approach is the better one though as it’s easier to maintain and enhance.
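Here's a sketch of what that first approach might look like (the interface and wrapper are my own invented names; ListBox's SelectedValue and SelectionChanged are the real WPF members):
using System;
using System.Windows.Controls;

public interface ISelectionSource
{
    object SelectedValue { get; }
    event EventHandler ValueChanged;
}

public class ListBoxSelectionSource : ISelectionSource
{
    private ListBox _listBox;

    public ListBoxSelectionSource( ListBox listBox )
    {
        _listBox = listBox;
        _listBox.SelectionChanged += OnSelectionChanged;
    }

    public object SelectedValue
    {
        get { return _listBox.SelectedValue; }
    }

    public event EventHandler ValueChanged;

    private void OnSelectionChanged( object sender, SelectionChangedEventArgs e )
    {
        EventHandler handler = ValueChanged;
        if ( handler != null )
            handler( this, EventArgs.Empty );
    }
}

// A TreeView wrapper would look the same, but hook SelectedItemChanged instead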

This all applies if you’re writing in a static language. If you're writing in a dynamic language, you could just call the SelectedValue property because you know it’s there, and everything would work at run time.

This is just one example of the kinds of problems that are so easy to run into in a statically typed language. To solve these issues, you have to create a lot of boilerplate redundant code that serves no useful purpose. It only increases the size and mental complexity of your code base.

This led me to what I thought was an interesting realization about the difference between statically typed languages and dynamically typed languages.

Static languages don’t trust you. But Dynamic languages do.

In a static language you have to prove to the compiler that the calls you’re making make sense. In a dynamic language, everyone just trusts you to do it right.

But anyone who has ever written any serious code knows that they really can’t be trusted. I mean, I’m going to make mistakes sometimes. And that’s where unit testing comes in.

In a dynamic language the LANGUAGE trusts you completely. Your unit tests on the other hand require you to be correct. The exciting thing here is that your unit tests not only test that the coding details are correct, they also test that your meaning is correct. That makes them much more useful than the compiler alone.

And if we’re going to be writing unit tests in static languages anyway, why be forced to jump through so many hoops convincing the compiler we know what we’re doing? Do we really need to work in a environment that limits our expressive ability because it doesn’t trust us?

I think the idea of using tools (and languages) that trust me (at the same time as they're helping me) and then building my OWN validation/verification specifically for my software is pretty interesting.

Friday, July 11, 2008

Coding Style

We were having a fun debate today about coding style.

My style is:
if_(_condition_)
{
body;
}

However, I was informed that there was some mutiny going on and people were trying to overthrow my spaces inside parentheses in preference of:
if_(condition)
{
    body;
}

I asked around a bit and found out another senior developer here uses this coding style:
if(_condition_)
{
    body;
}

Oddly enough, when it's an if statement I like that space between the if and the opening paren. But when it's a method, I don't include it:
private void MethodA(_formal params_)

Without question I'm a huge fan of the spaces inside the parentheses. I find this makes code WAY more readable. After all, the parentheses are entirely syntax. The stuff you care about is between them. When the stuff you care about is directly touching the syntax noise, things get a bit more cluttered.

However, a quick google search for C# coding standards indicates that this is a rare stance. From what I found, most standards seem to call for:
if_(condition)

My current style is influenced by style guidelines we have at work. However, if I had my way, I would go back to The Elements of Java Style guidelines and do my curly braces differently:
if_(_condition_)_{
    body;
}

I had a professor in college who recommended that book's style guidelines. I hated them at first. But after about 2 days of coding with them I learned to really like them.

Sure, it's a religious debate... But I'm curious, what's your style? And what would your style be if you had your way?

Tuesday, July 1, 2008

PowerShell: Get File Contents in Hex

I've done this a couple times and thought I'd share.

Occasionally I need to get the hexadecimal representation of a file so I can specify it as the value of a VarBinary variable in SQL Server for testing.

You can do this in a few lines with powershell (clear $out first, or rerunning it in the same session will double the output):
> $out = ""
> get-content -Encoding byte yourfile.txt | %{ "{0:x}" -f $_ } | %{ if ( $_.Length -eq 1 ) { $out = $out + "0" + $_ } else { $out = $out + $_ } }
> $out


The first part:
get-content -Encoding byte yourfile.txt
Outputs the file's contents as bytes.

The second part:
| %{ "{0:x}" -f $_ }
Converts the bytes to hexadecimal representation.

The last part:
| %{ if ( $_.Length -eq 1 ) { $out = $out + "0" + $_ } else { $out = $out + $_ } }
Pads any single hex character values with a leading 0 and concatenates everything into a string so that you can output it all on a single line.

Anyone know of any ways to shorten this script up?

Monday, June 30, 2008

WPF ListBoxItem Double Click

The WPF ListBox does not have an event which fires when an item in the list is double clicked. As far as I can tell, there is no simple mechanism to accomplish this.

The best solution I've found is using the ListBox.MouseDoubleClick event. This event fires every time the mouse is double clicked anywhere in the listbox. This includes the background (if your items don't completely fill your list) and the scroll bar.

What you have to do is use the ListBox.InputHitTest method to get the element in the ListBox's visual tree which was clicked. Then you walk up the visual tree until you find either a ListBoxItem (which means an item was double clicked), or the ListBox itself (which means an item was not double clicked).

Here's the code where ctlList is the ListBox:
void ctlList_MouseDoubleClick( object sender, MouseButtonEventArgs e )
{
    UIElement elem = (UIElement)ctlList.InputHitTest( e.GetPosition( ctlList ) );
    while ( elem != ctlList )
    {
        if ( elem is ListBoxItem )
        {
            object selectedItem = ( (ListBoxItem)elem ).Content;
            // Handle the double click here
            return;
        }
        elem = (UIElement)VisualTreeHelper.GetParent( elem );
    }
}

I haven't been able to find another way to do this yet. There may be one flaw with this: if you use a DataTemplate in your list box, and a control in that template handles the MouseDoubleClick, I think it's possible the ListBox's event won't fire. Again, I haven't verified this, so it might work just fine.

If you know of a better way to get the double click on a list box item in WPF, please let me know!

UPDATE: Some commenters found a possible problem with the approach demonstrated in this post and offered an alternative method.

The problem is that it's possible the InputHitTest method could return something that is not a UIElement. This might happen if you had a Span element in your list box item template, for example.

The alternative method is to define a style for your list box's item container that includes an event hook for MouseDoubleClick.

<ListBox.ItemContainerStyle>
    <Style TargetType="{x:Type ListBoxItem}">
        <EventSetter Event="MouseDoubleClick" Handler="listBoxItem_DoubleClick" />
    </Style>
</ListBox.ItemContainerStyle>
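
The code-behind handler is then trivial; here's a minimal sketch. Since the event is hooked on each item, the sender is the ListBoxItem itself and no hit testing is needed:

void listBoxItem_DoubleClick( object sender, MouseButtonEventArgs e )
{
    object selectedItem = ( (ListBoxItem)sender ).Content;
    // Handle the double click here
}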

Thanks to Steve and Mark for pointing this out!


Friday, June 27, 2008

IoC and DI complexity

Inversion of Control containers (IoC), Dependency Injection (DI), and the Dependency Inversion Principle (DIP) are huge blogosphere topics these days.

Quickly, Dependency Injection (DI) is a pattern of “injecting” a class's “dependencies” into it at run time. This is done by defining each dependency as an interface, then passing a concrete class implementing that interface in through the constructor. This allows you to swap in different implementations without having to modify the main class. As a side effect, it also causes you to follow the Single Responsibility Principle (SRP), since your dependencies are individual objects which perform discrete specialized tasks.
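
For example (all of these names are made up, just to show the shape of the pattern):

public interface IMessageSender
{
    void Send( string message );
}

public class EmailSender : IMessageSender
{
    public void Send( string message ) { /* send an email */ }
}

public class OrderProcessor
{
    private readonly IMessageSender _sender;

    // The dependency is injected; OrderProcessor never news-up a concrete sender.
    public OrderProcessor( IMessageSender sender )
    {
        _sender = sender;
    }

    public void Process()
    {
        // ... do the real work, then ...
        _sender.Send( "Order processed" );
    }
}

// Swapping in a different implementation (or a mock in a unit test) is just:
OrderProcessor processor = new OrderProcessor( new EmailSender() );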

Dependency Inversion (DIP) is a design principle which is in some ways related to the Dependency Injection pattern. The idea here is that “high” layers of your application should not directly depend on “low” layers. Instead, the high layers should define interfaces for the behavior they expect (dependencies), and the low layers will come along and implement those interfaces. The benefit of following this principle is that the high layers become somewhat isolated from the low layers. This means if some arbitrary change is made in the low layer it is less likely to have to be propagated up through all the layers. Dependency Inversion does not imply Dependency Injection. This principle doesn't say anything about how high layers know what low layer to use. This could be done by simply using the low layer directly in the code of the high layer, or through Dependency Injection.

The Inversion of Control container (IoC) is a pattern that supports Dependency Injection. In this pattern you create a central container which defines what concrete classes should be used for what dependencies throughout your application. Now, your DI classes will determine their dependencies by looking in the IoC container. This removes any specification of a default dependency from the classes themselves, and it makes it much easier to change what dependencies are used on the fly.
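
A toy version of a container, reusing the made-up types from the sketch above, might look like this (real containers like Castle Windsor or StructureMap do far more, but this is the core idea):

using System;
using System.Collections.Generic;

public static class Container
{
    private static readonly Dictionary<Type, Func<object>> map =
        new Dictionary<Type, Func<object>>();

    public static void Register<T>( Func<object> factory )
    {
        map[typeof( T )] = factory;
    }

    public static T Resolve<T>()
    {
        return (T)map[typeof( T )]();
    }
}

// At application startup, define what concrete class fills each dependency:
Container.Register<IMessageSender>( () => new EmailSender() );

// Then the code constructing your classes resolves dependencies from the container:
OrderProcessor processor = new OrderProcessor( Container.Resolve<IMessageSender>() );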

Clearly, these are some very powerful patterns and principles. Basically, DI and IoC remove the compile time definition of the relationships between classes and instead define those relationships at runtime. This is incredibly useful if you think you may need to modify the way your application behaves in different scenarios.

However, if you pay attention to why these patterns are primarily used by the various people talking about them in the blogosphere, you'll see that it's for unit testing. The reason people are bothering is that they want to create mocks and stubs of their objects so that they can write unit tests. The Dependency Inversion and Single Responsibility principles that arise from this are certainly an added bonus, but not the primary goal. And the ability to swap in different REAL dependencies is not one that anyone planned to use.

Let’s be realistic. How many applications really need to be able to do the same thing in two different ways at the same time? That’s what DI is for, but I don’t think many people really need that capability. It’s much more likely that your application will evolve from doing THIS to doing THAT. DI will make this migration simpler, but only because it forced you into following SRP and DIP. You could have followed those principles without using DI.

The question is, “If your application doesn’t require DI (except for unit tests), should you use DI?”

The question that leads to is, “What's the harm in using DI for unit testing?”

The answer to that is: complexity. Using DI adds complexity to your application. IoC adds even more complexity.

Where does the complexity come from?
  • There are more pieces and components to keep track of
  • It's harder for a person to understand how everything fits together into a functioning whole
  • There are more restrictions on the things you can do in your code: you can't new-up a dependency; you can't require fields through a dependency's constructor; etc.
  • Interfaces can't strictly define everything (will it throw an exception, will it return null, will it display its own error dialogs, etc.)
  • With some IoC tools, I have to maintain an xml configuration file…
  • There are simply more lines of code
  • It is harder to browse and debug the code (because there are more layers and more indirection)
When I brought this up in comments on the YTechie blog everyone told me the problem wasn’t with the patterns, it was with my IDE, or it was because I wasn’t commenting my interfaces well enough… This was mostly because the examples I was using to try to indicate the complexity were trivial.

The point I’m actually trying to make is just that there is more complexity! I need a better IDE because of it! I have to write more detailed implementation specific comments because of it! It doesn't really matter if doing those things are a good idea anyway. The point is that now it's complicated enough that I have to do them, I can't get by without like I could before.

To put it simply, I have to do more work. That’s the harm in DI and IoC (and to a lesser extent DIP): complexity -> more work -> more confusion -> more potential for error -> more chaos over time

The next question is, “Is this added complexity enough of a downside to make DI/IoC not worth it?”

This is the real question that everyone should ask themselves before they dive head first into the newest latest thing. Unfortunately, you'll find a surprising lack of thought about this, or even willingness to think about it. When people find something new they like, they don't like to admit it may come with some downsides too, however minor they may be. Don't get me wrong! Some people are thinking about it, like Dave^2. But in the blog world, it's always a struggle to get past the "It's awesome" to the "but...".

The answer to our question is: It Depends. That’s the Computer Engineer’s motto. And it’s hugely important to remember that it always depends. It depends on your circumstances, and the complexity of your application or component, and an innumerable list of other factors. It's not an all or nothing answer either. It could be no problem here, and total disaster there.

Is having unit tests worth the added complexity for you? As long as you recognize the complexity, you're fully qualified to make that decision. Personally, I've found many circumstances in which it was worth it, and a few others where it was just too much overhead. But let me know what you decide, and how it works out for you.

Monday, June 23, 2008

Custom Drop Downs

For at least the last year I have had an unhealthy obsession with drop downs. Custom drop downs really.

That is, combo boxes. But when you click on the down arrow, you get a list with check boxes. Or a tree control. Or a tree control with check boxes. Or a custom edit surface with a text box and an "Apply" button. Or anything you can possibly imagine, really.

All of these things are immensely useful. And if the story ended there I would never have become so obsessed. Sadly, the story continues. It turns out, such things are not exactly easy to build.

My first attempt involved subclassing the .NET combo box. I got a working drop down in the end, but it had some odd behavior, and was overall just a huge hack.

Next I learned that Infragistics had a component that you could use to create custom drop downs. Put any control in it you want and you're off to the races. For a time, my obsession was sated. Until we learned that this component could not be resized after it was opened. To resize it, you had to close it and reopen it. This results in noticeable, seizure-inducing flickering. And that = bad.

Later, I learned about how easy this was to do with WPF. In fact, it was the very first thing I ever tried in WPF. But that doesn't help me much at work where our software is all WinForms.

Fortunately, this story has a happy ending! It turns out that .NET 2.0 introduced a component much like the Infragistics one but without the added suck!

It is called the ToolStripDropDown. Basically it's just a Form with special properties that make it behave like a popup. It doesn't steal focus when it opens. It closes as soon as the user clicks off of it. It can have a shadow border (if you'd like) like a context menu. You can override its default close behavior by handling the Closing event, checking the e.CloseReason enumeration value, and setting e.Cancel = true. And best of all, it can host any .NET control you want to put in it, as long as you host that control in a ToolStripControlHost.
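
Here's a rough sketch of the basic setup (myComboControl is a stand-in for whatever control you're dropping down from):

// Host the control (a TreeView here) in a ToolStripControlHost,
// and put that host inside a ToolStripDropDown.
TreeView treeView = new TreeView();
treeView.Size = new Size( 250, 300 );

ToolStripControlHost host = new ToolStripControlHost( treeView );
host.AutoSize = false;
host.Size = treeView.Size;

ToolStripDropDown dropDown = new ToolStripDropDown();
dropDown.AutoSize = false;
dropDown.Size = treeView.Size;
dropDown.Items.Add( host );

// Show it just below the control it "drops down" from.
dropDown.Show( myComboControl, new Point( 0, myComboControl.Height ) );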

I stumbled on it at jfo's blog here and immediately put it into use. In the process of trying to do some wacky custom stuff I found the same material from jfo's blog, but on msdn here which you might prefer. Both of those show example code for putting a treeview in a custom drop down.

I must have spent months working on this problem. I probably executed thousands of Google searches looking for solutions. All I ever came across was people using borderless forms and the deactivate event, which doesn't quite cut it for all cases. So when I finally discovered this, my obsession compelled me to post about it.

The only thing that I haven't tried yet is making a "suggest" combo box using this technique. This is a control in which when the user types, you execute a stored procedure and fill the drop down with matching items. The potential problem spot here is that you need focus on the text box so you can type, but you need the drop down to be open. I'm guessing you could pull this off with the ToolStripDropDown though. If anyone knows for sure, please leave a comment!

Monday, June 16, 2008

Fun with Casting

This is a .NET specific post. I ran into this interesting behavior the other day. It makes perfect sense, but I'd never thought about it before so I thought I'd share.

object o = 5.0;
int i = (int)o;

That code will bomb out at run time with an InvalidCastException.

object o = 5.0;
int i = (int)(double)o;

That code will run just fine.

What's happening is the meaning of the cast is different. In the first case I'm unboxing: I'm saying that I expect there to be an int in the object. There isn't, it's a double, and unboxing only succeeds when the types match exactly, so it blows up. In the second case I'm saying that I expect there to be a double in the object (a valid unbox) and that I then want it converted, truncating the double into an int. It's slightly unexpected because the same syntax means two different things depending on the types it's acting on.
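
As an aside: if you don't know which numeric type is actually in the box, Convert.ToInt32 sidesteps the issue. Just note that it rounds rather than truncates:

object o = 5.7;
int i = (int)(double)o;        // i == 5: the cast truncates
int j = Convert.ToInt32( o );  // j == 6: Convert rounds instead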

Monday, June 9, 2008

Powershell Grep

If you're into Powershell, you may wish you could issue a grep command like you can in Linux. Guess what? You can!  Grep is a command that can search the contents of files for a regular expression.

In Powershell, we can do the same like so:
dir | select-string "my regex"

This command returns any matches (line by line) in any files in the current directory.  And if you want it to search through subdirectories as well:
dir -recurse | select-string "my regex"

If you have anything other than text files, this will dump a lot of nonsense and do a lot of really annoying beeping. To fix that you can tell it to only search files of a certain extension:
dir -recurse -include *.txt,*.cs | select-string "my regex"

This works great, but it's a lot to have to type, so I define a function in my $profile which I named grep.  You can check out my posh-prefs profile on bitbucket.

If all you wanted was to look at the names of the files, instead of the contents, you could:
dir | where { $_ -match "prefix*" }

Okay, I was just messing with you. You'd ACTUALLY do it like:
dir prefix*

Or recursively:
dir -recurse -include prefix*

Not only is this way shorter, it's actually more efficient because "the provider applies [it] when retrieving the objects, rather than having Windows PowerShell filter the objects after they are retrieved" (according to help dir -detailed).

Monday, June 2, 2008

State of the Blog

I've been writing this blog for 1 year and 1 month. In that time I have put up 64 posts, that's 4.9 posts per month on average. I've had 76 comments (including my own), that's 1.2 comments per post on average. Feedburner believes there are between 14 and 19 people who subscribe to this blog day to day. Google Analytics indicates that this blog gets anywhere from 8 (on weekends) to 52 Visits a day. The vast majority of those visits are brought in from Google searches.

I find these numbers interesting, but mostly irrelevant. I don't really care what those numbers look like. I'm primarily motivated by these three objectives:
  1. To challenge myself to keep tackling new problems and learning new things and evaluating the things I have done
  2. To get in touch with people outside my little company who have different viewpoints and different experiences
  3. To keep an archive so I can lookup things I've figured out or thought about in the past or so I can refer other people to them
That said, it's good to have goals and it's good to evaluate what you're doing (which is the point of this blog after all). Even though this is cheesy and goes against what I usually try to do here, I thought it might be worth a shot: so I want to solicit some feedback and ask a few questions.


I have one person who posts here regularly (thanks Josh!). Frequently he just says "duh" or "I've seen it slightly different" but I love that. It tells me other people have been through it. I'm not looking for expert comments; after all, I'm no expert. But I would really love to hear more of what people think, even if it's just a gut reaction.
1. What can I do to draw more comments?


I write about things I'm interested in or working on. That's why the topics range so widely and randomly. I also try to write when I'm confident that I've come to a resolution. That's why what I write is less like a blog and more like little essays or articles. I also write as if what I'm saying is true, even when I know I could be wrong or when I haven't had a chance to exercise the ideas much. I find second guessing and apologizing in writing to be really irritating and I assume you'll do the second guessing on your own as you read.
2. Should I change anything about the topics or style?


I write when I have something to say. I shoot for 4 posts a month (but not necessarily one a week). Other than that I don't keep a schedule. I do this because I don't want to publish half formed thoughts or minor mostly content-less topics. But I've read lots of people who say you should post on a regular schedule.
3. Should I adopt a schedule?


I write this stuff for fun mainly. But it would be more fun if it was less one sided.
4. Do you have any feedback, or is it all perfect just the way it is?

Thursday, May 29, 2008

Unit Testing Pitfalls

Unit Testing and more importantly TDD is all the rage these days. If you go simply by the noise in the blogosphere, it looks like everyone is doing it. Of course, the reality is probably that almost no one is doing it, but many people think it's a good idea, and some of them wish they could do it.

For example, I have written quite a few posts about Unit Testing, TDD, and related topics like Dependency Injection. I have even done some real TDD on the software I write at work. However, most of what I write I don't TDD, and don't Unit Test.

Yes, I'm big enough to admit it. I think it's a good idea, and I wish I could do it. But the truth is I don't.

Fortunately, I have reasons. And those reasons are that for all the promise of TDD and Unit Tests, there are a number of Pitfalls.

Almost everything has dependencies, and those need to be mocked/stubbed, and mocking sucks.
Martin Fowler has a great article about Mocks and Stubs if you want to read up. I think that mocking sucks because:
  1. It is a lot of work
  2. It requires intricate knowledge of the internal coding of the thing you're testing
Stubs are slightly better in some ways in that you don't have to go in with as much intricate knowledge. But they're more work than mocks, they can be super complicated, and they still require a good deal of internal coding knowledge.

If mocking sucks, and almost everything needs to be mocked, then almost everything sucks to unit test.

If an interface doesn't exist for your dependency, you have to wrap it.
Say you're testing something that writes to the windows event log. The .NET framework doesn't have an IEventLog interface defined, it just has an EventLog class. So if you want to mock out that dependency with dependency injection, you have to create your own IEventLog. Then you have to create a concrete class that implements IEventLog. Finally, you have to forward every method and property call in the concrete class to the EventLog framework class.
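
A minimal sketch of what that wrapping looks like (IEventLog and EventLogWrapper are names you'd invent; EventLog and WriteEntry are the real framework members):

public interface IEventLog
{
    void WriteEntry( string message );
    // ... plus every other EventLog member you actually use ...
}

public class EventLogWrapper : IEventLog
{
    private readonly System.Diagnostics.EventLog log;

    public EventLogWrapper( System.Diagnostics.EventLog log )
    {
        this.log = log;
    }

    // Nothing but forwarding.
    public void WriteEntry( string message )
    {
        log.WriteEntry( message );
    }
}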

This is no fun to write, and it adds complexity and overhead to your code. Just because you want to test.

Note: Using a dynamic language would remove the need for the interface and therefore make this problem go away.

You can't use constructors on your dependencies
Suppose your dependency needs some required information to function and the class you're testing has that information and wants to provide it. Typically you would simply create a constructor on your dependency that took in the required info. Then the class you're testing would new it up and pass in the info. Simple.

You can't do this if you're using Dependency Injection because the dependency must be an interface and interfaces can't have constructors, plus an instance of the concrete class must be passed in to your class's constructor.

To get around this you have to pass in a concrete class that implements the interface. Then you have to send the required info in through properties. Now you have to write the dependency class so that it checks that the required info has been provided before it does anything that requires it. This check will have to go in every public method and possibly some of the properties. Thus the class is more complicated and has more overhead. Just because you want to test.
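
In code, that workaround looks something like this (all of these names are made up for illustration):

public interface IReportWriter
{
    string OutputPath { get; set; }
    void Write();
}

public class ReportWriter : IReportWriter
{
    // The required info arrives through a property instead of a constructor...
    public string OutputPath { get; set; }

    public void Write()
    {
        // ...so every public member has to guard against it being missing.
        if ( OutputPath == null )
            throw new InvalidOperationException( "OutputPath must be set before calling Write." );

        // ... write the report ...
    }
}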

You can't new-up your dependencies.
Sometimes you may need to new-up a dependency. Then when something changes you may want to new-up a new object to replace the old one. You can't do this if you're using Dependency Injection since you have to pass the dependency in as a fully formed concrete class.

To get around this you're going to have to write the dependency so that it is reusable. Unless you need two instances at the same time. In that case, you'd have to make your dependency into a factory that provided you with instances of your actual dependency. Just because you want to test.
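
A sketch of the factory workaround (again, all made-up names):

public interface IConnection { /* ... */ }

public interface IConnectionFactory
{
    IConnection Create();
}

public class Importer
{
    private readonly IConnectionFactory factory;

    // We can't new-up connections directly, so we take a factory instead.
    public Importer( IConnectionFactory factory )
    {
        this.factory = factory;
    }

    public void Import()
    {
        // A fresh instance whenever one is needed, without newing-up the concrete class.
        IConnection connection = factory.Create();
        // ...
    }
}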

So far, these pitfalls have all been due to Dependency Injection, which, like I talked about in an earlier post, is powerful but also kind of scary. We might be able to avoid all this injection of dependencies by using a framework like Typemock, but that's not free, and if I recall right, it's not cheap either.

GUIs can't be tested.
It depends on what kind of applications you're writing. For the kinds of apps we build where I work, the GUIs can be pretty complicated. In fact, usually the GUI is just about all there is to it (aside from retrieving and storing data). We're still writing lots of complicated code which it would be awesome to test, but it's all operating on GUI state.

When people ask me what they can/should Unit Test I always say "Find the algorithm." But when the algorithm is "set this value on this field when the user clicks on this but only when this condition is met, otherwise change the controls which are displayed to this and disable that" you're pretty much out of luck.

Some things aren't worth testing
If all your class does is order calls to other classes and react to errors, your tests are going to be of limited value. Mainly because you're not testing much. It may be 100 lines of code, but it's really not doing much of anything. No algorithms. And any regressions are likely to be because of changes to the dependencies, not because of changes to that class. So is it worth testing this?


I would love to see a book or article on unit testing address these issues. I mean, who is writing code to transfer funds from one account to another and is dealing with simple objects? Who is writing a Queue? Who is writing a web service to serve a music catalog? These examples may teach the concepts and principles of TDD and Unit Testing, but they don't help me to actually practice it. Am I the only one?

Wednesday, May 21, 2008

How Should I's

There are ten types of programmers: Those that understand binary, and those that don't.

Oh, no, that's not what I meant.

There are two types of programmers: Those that ask, "How do I make this go?" and those that ask, "How should this go?"

I like to call the first group High school programmers. These are the kind of people who when presented with a task take their first idea and start trying to make it work. When it doesn't, they just keep tweaking it until it does work. Then it's "done."

I call them High school programmers because this is what High school students do when presented with an error message. "Oh, that's weird. Well, I'll just change this over here and try it again. That didn't do it? Ok, what about if I do this? Still no? Well what about..." They just care that it works in the end, they don't really care how it works or why it works or why it didn't work in the first place.

The second group are the ones that not only want their code to work, but want it to work the best it can. These people are probably going to consider alternative implementation approaches, designs, and architectures. They're probably going to refactor their code to make sure it's as clean and efficient as possible. They may even go so far as trying different things before making up their mind (prototyping, if you will).

This distinction actually is important. You obviously want the How Should I's working with you and not the How Do I's, simply because their work will be better: cleaner, more maintainable, less buggy, more extensible, and believe it or not, finished faster.

It's important because this is the quality you're actually looking for. I've read many blogs where people claim that you can't be a good programmer and leave work at 5pm. This is simply ridiculous. Certainly your How Should I's are likely to be more obsessive, and therefore more likely to get caught up and work longer. But there is absolutely no reason why a How Should I can't have a life outside of work, leave at 5pm, and still be a great developer.

I also think that being a How Should I is a necessary condition to qualifying as Smart and Gets Things Done.

And on top of that, being a How Should I is very likely to also make you a top 20%-er.

You can also see this quality in The Pragmatic Programmer's definition of a Pragmatic Programmer,
Tip 1: Care About Your Craft
Tip 2: Think! About Your Work

So if you're interviewing, or reviewing other people's work, or simply working with other developers, this is a quality you should look for and appreciate.