Thursday, March 27, 2008

PowerShell Rename with Regex

I had a little more fun with PowerShell today and I thought I'd share.

I used Visual Studio to generate a bunch of SQL Scripts into a Database Project. (In Server Explorer, right click -> Generate Create Script To Project).

This is very slick and works wonderfully. The only thing I didn't like is that it puts the database name at the front of the file name.

I'm creating this project because I know I'm going to be executing these scripts against many databases, so I don't want the database in the name. I couldn't find a way to configure VS so it would generate the scripts with different names, so I'm stuck with them. That being the case, how do I easily rename all the files?

PowerShell to the rescue!
dir | rename-item -newName { $_.Name -replace '^Database\.', '' }

This command will rename every file in the current directory, removing the "Database." prefix from the name (if it has one).

You can use the shorthand version of the command and parameters as well:
dir | ren -new { $_ -replace '^Database\.', '' }
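Because `-replace` takes a regular expression, the dot needs escaping (`\.`) to mean a literal dot; otherwise it matches any character. The same batch rename can be sketched outside PowerShell too. Here is a rough Python equivalent (the function name and the `Database.` prefix are just illustrations of the scenario above):

```python
import os
import re

def strip_prefix(directory, pattern=r"^Database\."):
    """Rename every file in directory, removing a regex prefix if present."""
    for name in os.listdir(directory):
        new_name = re.sub(pattern, "", name)
        if new_name != name:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new_name))
```

Files that don't match the pattern are simply left alone, just like the PowerShell version.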

I'm just so happy to finally have a powerful command line in Windows.

Tuesday, March 25, 2008


In two previous posts I discussed the Model View Controller pattern and the various flavors of it: MVC is What Exactly? and MVC in Windows Applications.

In those posts I presented my understanding of MVC that I had gained from various books, articles, blogs, and thought experiments. Shortly after, I encountered the term MVP: Model View Presenter. Then I found some articles that described passive MVC differently than I had described it. Then I found some articles about PAC: Presentation Abstraction Control.

In short, I discovered that there doesn't seem to be one definitive definition of MVC, nor of MVP.

From what I can tell, MVP is just MVC where the Controller is allowed to do more work. In that regard, it is more similar to what I described in my posts on MVC. I have also seen arguments that the way I described Passive MVC is actually MVP.

In the end all the issues I discussed then still apply. I just wanted to clarify that this area of design is full of lots of slightly different ways of accomplishing the same goal, and everyone seems to use a different word to describe it.

Update 9/11/2008: Follow-up post on MVC and MVP with more details

Friday, March 21, 2008

Varbinary and Service Broker, a story

Today I was working on a very simple SQL 2005 Service Broker service. It was to work like this: someone executes a stored procedure, passing in some metadata and a varbinary(max) which contains a document. The stored procedure inserts a record into the database with the metadata, then builds a Service Broker message containing the varbinary and some other required information. A Windows service monitors the Service Broker queue for messages. When a message arrives, the service parses it out and creates a file out of the bytes.

This took me about an hour to get through this morning (I had already written a nice Service Broker library, which is why it was so easy). However, I spent the rest of the day fighting with setup, data types, and encodings.

Problem #1: Windows service can't start. Error 193: 0xc1

A very helpful error message which popped up immediately after I tried to start my service. It was clear the OnStart method was never invoked. Apparently this error can be caused by a stray file named "program" sitting in C:\ on occasion. In my case, that wasn't the problem.

Nope, my case was much dumber than that. C# services, like most programs, require a Main method, usually in a Program.cs file, to execute. I didn't have one because I started with the Visual Studio class library template instead of the Windows service template... Once I realized what was missing, this was easy to fix. But realizing it took longer than you might think.

Problem #2: Invalid XML Document (1, 256)

It turns out that when you define a SqlCommand, set up a bunch of Parameters on it, and tell a parameter that its length is 256, it's actually going to truncate anything you pass into it to 256 characters... In this case, my XML message was longer than that. Duh.
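The failure mode is easy to reproduce outside SQL: cut a serialized XML document off at a fixed size (256 here, mirroring the parameter length above) and the parser dies partway through. A small Python sketch of the same effect:

```python
import xml.etree.ElementTree as ET

message = "<doc>" + "x" * 300 + "</doc>"  # longer than the declared size
truncated = message[:256]                 # what a 256-length parameter keeps

try:
    ET.fromstring(truncated)
except ET.ParseError as e:
    # The document is cut off mid-element, so the parser reports a
    # position error much like "Invalid XML Document (1, 256)".
    print("Invalid XML:", e)
```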

Problem #3: Invalid XML Document (1, 1)

This one is weird and I cannot explain it. Say you create an XML variable in SQL and use it as your Service Broker message like so:
declare @msgX XML
Then in C# you define the message_body parameter as an nvarchar(max). The types are mismatched, but we know XML is just a string, so everything should be just fine, right?

Wrong. I don't know why. But the string you get out will have some random byte as the first byte. If you print it to the Windows Event Log it will look like a little box. This byte causes C#'s XML parser to blow up. If you change the SqlCommand parameter to XML (so the data types match), the byte goes away...
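My guess is that the stray byte is a byte-order mark, which would render as exactly that kind of little box, but that is an assumption on my part. Either way, the defensive fix is the same: strip the leading junk before handing the string to the parser. A Python sketch (treating the junk as the `\ufeff` BOM character):

```python
import xml.etree.ElementTree as ET

raw = "\ufeff<message><id>42</id></message>"  # stray first character, assumed BOM

# Parsing raw directly can blow up the same way the .NET parser did,
# so strip the offending character before parsing.
clean = raw.lstrip("\ufeff")
root = ET.fromstring(clean)
print(root.find("id").text)
```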

Problem #4: Encoding the varbinary data into XML from SQL

You can't build your xml message just by concatenating strings together if you're dealing with varbinary. You can't cast the varbinary variable to a string either. What you need to do is encode the varbinary. In SQL, the easiest way to do this is:
declare @encoded xml
set @encoded = ( SELECT @varbinary AS [data] FOR XML PATH(''), TYPE )
This will automatically encode your binary as base64 text.

Problem #5: Decoding the varbinary out of the XML in .NET

It took me a while to find the right way to do this. At first I was trying to use the static Encoding class, like:
Encoding.Unicode.GetBytes( encodedData );
But this was just giving me the bytes of the base64 string itself, not the original binary.

Finally I found the right way to do it:
byte[] b = Convert.FromBase64String( encodedData );
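The full round trip is easier to see in one place. Here is a Python sketch of the same encode-into-XML, decode-from-XML flow (the `data` element name is hypothetical):

```python
import base64
import xml.etree.ElementTree as ET

document = b"\x00\x01binary file contents\xff"  # the varbinary payload

# Sender side: base64-encode the bytes so they can live inside XML text.
encoded = base64.b64encode(document).decode("ascii")
message = "<data>" + encoded + "</data>"

# Receiver side: pull the text back out and decode it, the equivalent of
# Convert.FromBase64String in .NET.
restored = base64.b64decode(ET.fromstring(message).text)
assert restored == document
```

Decoding with a text Encoding class (the wrong turn above) just re-interprets the base64 characters as bytes; only a base64 decode recovers the original binary.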

Moral of the Story or What I learned:
Debugging Windows services is hard. You have to print messages to the Event Log to figure out what is going on. And every time you make a change you have to: uninstall the service, rebuild the solution, rebuild the setup file, reinstall the service, go to Computer Management -> Services, set the Run As permissions on the service, start the service, and check the Event Log to be sure it started correctly. THEN you can do whatever you need to test it.

This takes time. And it sucks.

So what I learned is: take the time to build a test, or refactor the code, or copy and paste the code if you have to. Even though you think, "oh! I just have to do this and it will work," take the time. Because it's not going to work after you do that. And then you're going to have to do it all again. So just create a nice little testing environment for yourself. It's a one-time expense, and then everything will go much faster from there. You'll be much happier, and if you're lucky, you won't have to spend all day debugging something that had basically been done at 11am.

Wednesday, March 19, 2008

IoC and DI

Inversion of Control Containers and Dependency Injection.

A coworker of mine gave me an MSDN article called Loosen Up which goes over Dependency Inversion, Dependency Injection, Inversion of Control containers, and 3rd party IoC tools. It's an excellent article, I highly recommend it.

In an earlier post I talked about Dependency Injection and referred to it as a slippery slope. I still feel this way to some extent, but it's the only way to do good unit testing, so we're stuck with it.

First I want to point out something obvious I got from that article that I hadn't thought of somehow. In Dependency Injection I mentioned how my classes defaulted their dependencies in the constructor and used Properties to allow different instances to be passed in.

This sucks for two reasons:
  1. If someone does actually pass in a different instance of a dependency, the default will have been newed up for nothing
  2. There are a ton of properties on the object that it is likely no one will ever use
What's the obvious way to fix this? Create a constructor that takes in the dependencies. Create a default constructor that calls the other constructor:
public LessStupidClass( IDepA a, IDepB b, IDepC c ) {...}
public LessStupidClass() : this( new DepA(), new DepB(), new DepC() ) {...}
Solves both the problems of the stupid way I was doing it.
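The same two-constructor idiom translates to other languages as well. Here is a minimal Python sketch of the pattern (all class names are invented for illustration): the default dependency is only constructed when nothing was injected, and there are no throwaway properties.

```python
class SqlRepository:
    def load(self):
        return "data from sql"

class FakeRepository:
    def load(self):
        return "fake"

class ReportService:
    def __init__(self, repo=None):
        # Only new up the default when nothing was injected, so a test
        # that passes a fake pays no construction cost for the real thing.
        self.repo = repo if repo is not None else SqlRepository()

    def run(self):
        return self.repo.load().upper()

assert ReportService().run() == "DATA FROM SQL"          # production default
assert ReportService(FakeRepository()).run() == "FAKE"   # injected for testing
```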

On to IoC. The idea behind Inversion of Control Containers is basically that you don't like the fact that the class has a way of specifying its default dependencies. You'd rather have the class lookup what its defaults should be in a "Container of Default Dependencies" that has been pre-configured. This can easily be done by performing the lookup based on the interface type of the dependency. The third party tools (like Spring.NET and Castle Windsor) take this even further and allow you to setup the default dependencies in configuration files.
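A toy container makes the idea concrete: map a dependency (keyed here by a plain string, though real containers key by interface type) to a concrete factory once, then resolve it everywhere. This is only a sketch of the core lookup, with made-up names, not how Spring.NET or Castle Windsor are actually configured:

```python
class Container:
    """Minimal IoC container: register a factory per key, resolve on demand."""
    def __init__(self):
        self._factories = {}

    def register(self, key, factory):
        self._factories[key] = factory

    def resolve(self, key):
        # Build a fresh instance from the pre-configured factory.
        return self._factories[key]()

class SmtpMailer:
    def send(self, msg):
        return "smtp: " + msg

container = Container()
container.register("mailer", SmtpMailer)  # configured once, up front

# Anywhere else in the code base, look up the configured default:
mailer = container.resolve("mailer")
assert mailer.send("hi") == "smtp: hi"
```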

That's the idea at least. Personally, I'm not a fan. I can see how this might be useful if you actually anticipated swapping in different dependencies frequently, or even at all. However, I'm just doing it so I can unit test. So introducing this container layer just makes it harder for me to figure out what the hell is going on in my code. It's bad enough depending on an interface, because I can't "right click -> go to definition" any more. And it only gets worse when you move it into a configuration file...

Maybe I'm missing something here. If so, please let me know. But as far as I can tell, for the ways in which I've been using dependency injection, in which there is only 1 concrete dependency class and there is likely only going to be one, EVER, IoC just adds unneeded complexity. It moves the stuff I need to know further away from the places where I need to know it.

And it "centralizes" my dependencies... In my experience, "centralizing" things is only good when it makes sense that everyone share from the same source. It doesn't make sense to centralize when you're either A) forcing things to match that otherwise wouldn't have to match or B) moving things away from where they are used just so they can be "centered." Centralizing also has the unwanted side effect of causing things to get all tangled together. Now that everyone is sharing the same thing, defined in the same place, I have to be really careful about changing that thing... Because if I change it, it will change everywhere, even if I didn't know everywhere it was used.

Aside: Sometimes you want things to be centralized. But make sure you REALLY want it before doing it. Like consistency, it's not a good idea in and of itself.

UPDATE 6/27/08: for more on complexity, read this followup post

Monday, March 17, 2008


When you're in the business of creating software, demoing your software is very important. If you're in a consulting type of business, then it's even more important.

In college, I had the opportunity to give a number of presentations on some research I was doing at conferences. The research involved Mobile Agents, Evolutionary Computation, and Swarm Programming. Obviously, the audience was mostly professors and graduate students. I absolutely loved giving those presentations, and apparently it showed. After I gave a talk on Swarm the chair of the session came up to me and said he really wished his students could have been there to see it, not only because they were playing with swarm too, but because he'd been trying to tell them that research presentations could be fun.

I also had to give a couple of really stupid presentations in my Marketing and Management classes. Both of these were completely devoid of any kind of meaningful content. They were just an exercise in... well, who knows. Fortunately, I got lucky and did pretty well. In fact, in both courses the teachers actually told the class that what I had done was what they were looking for from everyone. I actually found this horribly embarrassing. I still tastefully rubbed it in my friends' faces of course, but still.

Now, I didn't put any kind of special effort into those presentations. My slides didn't have animations and sound or funny jokes. In fact, I put about the minimum amount of effort into those slides as I possibly could. No one would call them pretty. So that certainly wasn't why it was "fun." I also didn't practice what I would say, I didn't work on any clever phrases. All I did was create a basic outline of what I would talk about, and then turn that into slides. Then I went and gave the talk.

Turns out the reason why I was getting such good responses was simply because of enthusiasm. I learned this from my research presentations, where the enthusiasm was genuine, and applied it to my business presentations, where it was totally contrived. All I did was:
  1. Behave a little less formally than everyone else
  2. Allow myself to get excited about the exciting parts
  3. Explain things from the bottom up (so people wouldn't get confused)
And that was it. All this positive feedback had me thinking I was really good at this stuff. And then I started working full time and I had to start giving software demos to relatively non-technical users.

Suddenly, my 3 steps to success didn't work anymore. Behaving less formally (which for me translates into acting somewhat silly) still got people's attention, but it didn't translate into them understanding or enjoying what I was demoing. Getting excited actually seemed to just cause them to get confused (and it wasn't because I started to talk fast, or about things that were only exciting to me either!). And I couldn't explain from the bottom up, because the audience didn't care about the software bits, and I wasn't involved in the business analysis so I couldn't talk about the why very well.

Finally, I realized there were a couple other factors that had contributed to why my college presentations went over well, and none of them had anything to do with what I was doing:
  • I had an interested and invested audience
  • The audience was knowledgeable about my topic
In the research presentations, everyone was smarter than me to start with, and what I was working on wasn't really complicated, it was more just fun. So they paid attention and played along.

In the business presentations, the other students didn't give a damn what I was talking about, but the professors did. The professors paid attention and played along.

Now, in these demos, the users really just don't care. You'd think they would... After all, it's their software. But they don't. So they get distracted by any little thing, and they can't follow the thread of what you're trying to demo to them. You'll say a word, which causes them to think of something, and they'll ask about that thing, even though it's completely unrelated to the day's demo. But since they're the client you'll try to answer it, and during your answer they'll think of something else, so you'll have to try to answer that...

It's like a program with an infinite number of nested function calls, you never manage to return all the way up the call stack. And even if you did, no one would remember what was going on before the call.

Also, my demo audience members don't really relate well to computers. It's not that they get confused easily, and it's certainly not that they're stupid. It's more that they're just nervous they're going to get confused, I think. So as soon as they start to slip, even a little bit, they seem to give up and stop following.

So the difference is the audience. I still believe that my enthusiasm mixed with the slightly silly relaxed style will work. What needs to change is the way the demo is presented. So here are the guidelines I've worked out that have had good results so far:
  1. Start every demo with a picture or flow chart of the overall process you're about to perform in the software
    1. This will help avoid questions like, "When will we do this?" because the overview already answers, "I'm going to show you that next."
    2. This will help avoid some of the side tracks as well, because everyone knows what we're trying to show
  2. When you first open a screen, start slowly describing what's on it
    1. This will give people an opportunity to read it and try to figure it out for themselves, which they will do even if you try to dive right into explaining it
  3. Before you do anything (click anything), say what you're about to do
    1. "Now I'm going to add a new Employee"
  4. When you're doing it, repeat what you're doing
    1. "I'm going to click 'New' *click* to do that"
  5. After you've done it, describe what happened
    1. "I can now type in the information about my new Employee"
  6. If you don't feel like a broken record you're doing it wrong
    1. Your audience doesn't know the software like you do, so it won't feel as repetitive to them
  7. Make sure you don't sound like you're insulting their intelligence, this is mostly a tone of voice issue
  8. If one person gets lost or confused, they could start seriously side tracking everyone else, so don't let anyone get lost
Has anyone else out there had these kinds of experiences?

Friday, March 7, 2008

To Empower or Restrict

Software Developers frequently work on things that will affect other people. It could be as simple as the behavior of your application, or an API other developers will use, or a standard on how to do something either in code or in practice.

There are two ways to approach these kinds of activities:
  1. Attempt to empower the people your work will affect
  2. Attempt to restrict the people your work will affect
Obviously, given the terminology I'm using, I think empowering is better. By empower I mean give flexibility, simplicity, and in general, leave room for people to use their brains and do what is right for them.

By restrict I mean force everyone to do everything exactly one way. Restricting has its benefits. It will have a much smaller learning curve. It will keep everyone consistent. It will frequently work very well for the one thing it was designed for.

"Obviously a small learning curve is good!" exclaims Joe Q Designer. Everyone recognizes this, especially Joe. But in this case, Joe is getting a small learning curve because he's eliminated (restricted) many options and abilities. Clearly, you'll learn faster if you have less to learn. But Mr. Designer has traded flexibility for ease of learning. Unless J.D. has been both very careful and very lucky, this will come back to bite him in the end when someone needs to do something slightly different. Even though it's only slightly different, Joe won't be able to accommodate it.

"I'll handle the slightly different cases as they arise. In the meantime, at least everyone is consistent!" Yes, Joe. But is this just consistency for the sake of consistency? Where Joe sees consistency, I usually see lost flexibility. Unless Joe can show a real benefit of the consistency, it's just a cop out.

"Well, I still think consistency is good. Plus, look at how little work has to be done with my solution to accomplish <a>!" Joe protests. I must say, I like how simply your solution handles <a> Joe. Who doesn't like simple solutions? I'm just concerned that <b>, <c>, and <d> are ridiculously hard now! If we'd made your solution less restrictive, <b>, <c>, and <d> would be easier. <a> would be a bit harder, it's true, but everything would be easier on average! Plus, when <e> comes along, we might actually be able to accommodate it.

I think it's usually better to err on the side of empowering people rather than restricting them. And I definitely think it's better to have a mindset of empowering people when you're designing things that will affect other people.

Don't get me wrong. You have to draw the line somewhere. You'll always be restricting something. Otherwise we'd all just give our clients C compilers and say, "off you go!" It's really a question of degrees and of what's important.

For example, if you know the users who will be using your system have no experience with computers, you're going to want to make things really simple. Like, iPod simple. However, your product is going to be better if your mindset is one of empowering, not restricting. It's not that I'm removing features because my user is stupid. I'm designing the set of features my users will most benefit from in a way that they will understand and enjoy. The guy who is thinking the second way is going to make a much better iPod than the guy thinking the first way.

So, Joe Q Designer, please stop trying to restrict everyone and start empowering us. We'll all love you for it.

Monday, March 3, 2008


C# allows you to use various Preprocessor Directives (the things with the # in front of them).

One of these is #if, and there is a DEBUG symbol you can use to determine whether the program was compiled in debug mode or release mode. In general, using this is an awful, awful thing to do. Why would you want to modify the behavior of your application when it was in debug mode vs. release mode?

Well, you might want to do this if you needed to add some code in debug mode to make debugging easier for yourself. But I can't imagine a case where you would want to use it to actually change the behavior of your application.

Why? Do you really need me to tell you why? Okay, fine. It's because now you can't trust debug mode. Any testing you perform in debug mode no longer applies to release mode. You might as well never run the app in debug mode; it's a complete waste of your time.

"Surely you're overstating the issue!" protests John Q Programmer. "Like, what if I wanted to throw an exception if something was misconfigured or misused, but I only wanted it to blow up in Debug mode. In release mode, I don't want the application to blow up!"

Not wanting the application to blow up in release is very noble of you, John. I commend you. You are truly wise beyond your years. But I'm assuming you, Mr. Programmer, are throwing the exception because something has gone wrong. NOT throwing the exception in release mode is not going to change the fact that something has gone wrong. In fact, it is likely to make it worse. J.Q.'s application may now be in an inconsistent state! There's no telling what might go wrong. John hasn't avoided a problem in his application. Far from it! He's just potentially made the problem 100 times worse.

So J.Q., please don't do that.

Now, if you really must use preprocessor directives for some reason, at least make sure you're VERY careful with them. It may look like an if statement, but it's not. It is literally changing what code the compiler gets to see. If the #if doesn't pass, it will be as though the lines of code within it never existed.

That means you can get some really nasty behavior if you're not careful:
if ( iNeedToDoSomething )
    DoSomething();
else
#if DEBUG
    DoTheDebugThing();
#endif
DoTheMainThing();

See what's wrong with this?

If we are in debug mode, this code looks like:
if ( iNeedToDoSomething )
    DoSomething();
else
    DoTheDebugThing();
DoTheMainThing();

If we are NOT in debug mode, this code looks like:
if ( iNeedToDoSomething )
    DoSomething();
else
    DoTheMainThing();

Holy crap! DoTheMainThing just got moved into the else! But only when we're not in debug mode.

To fix this you could simply add curly braces. But I think this example demonstrates how easy it is to get preprocessor directives messed up, and also why you should try just about anything to avoid using them.