My senior year of High School I took AP English. I hated that class, but I have to admit I did learn a little about writing. One thing I learned was that you got a higher grade if you included your own insights and opinions in your "book review" type papers.
My junior year of College I took History of Aviation to fulfill the last of my History credit requirements. It was a horrible class. The material was interesting, and the books we used were decent, but our professor was… well, boring. And the tests and assignments were simply a waste of time. It was quite literally like going back to freshman year of High School, with one-page, five-paragraph "chapter summary" papers due every week.
I can’t recall exactly what grades I got on the first few papers, though B- or C comes to mind. The professor had underlined roughly half of my essay (not to indicate anything; I think she was just one of those people who underline as they read so they don’t lose their place...). Then she’d scrawled "more examples," or something to that effect, at the top. I was kind of peeved. I’d read the material, and I thought that was obvious given the content of my "essay." And like I’d learned in AP English in High School, I’d added in a lot of my own personal take on what the reading had taught me and how it applied to whatever idiotic prompt we’d been given that week. That was clearly not what the professor was looking for.
It took me a few papers to finally realize that this was a class where it didn’t matter what I was learning, or how well I was writing. This was a class where I had to figure out what the professor wanted to see in those papers. It didn’t matter if what she wanted to see made sense, or was good or bad, or furthered my education... It only mattered that she wanted to see it.
So I developed a method for writing those papers. I would look at the prompt we’d been given. Then I’d skim through the material we were supposed to read. When I found a sentence or paragraph that applied to the prompt somehow, I’d underline it and add a mark at the top of the page so I could find it again. When I’d finished skimming the material, I’d go back through and find everything I’d underlined. I’d summarize the sentence and put the page number. Then I would number each quotation 1-n in an order that would sensibly answer the prompt. Finally I’d write my paper. The introduction would restate the prompt. The rest of the paper would be direct quotes (or sometimes paraphrased quotes, always referenced) with transitions between them. The conclusion would restate the prompt. Each of my little 1 page essays would include about 8 to 12 quotes.
Let me drive that home. I literally wrote NOTHING except to transition between quotations. I’m not overstating this, that’s all I did. And while I think I did a decent job masking it with my transitions, it was still quite obvious.
I got As on every paper for the rest of the semester.
My dad once taught me a term which summarizes what I was doing here wonderfully. When I would tell him about some stupid paper I had to write he would say, "Polish your golden shovel!" This refers to all the bullshit you’re about to shovel into your paper and attempt to pass off as golden writing.
Of course, usually when you’re putting your golden shovel to work it’s not as blatant as my History of Aviation story. In fact, in normal use it’s a very valuable skill to have. Sometimes you have to sharpen the saw, and other times you have to polish the golden shovel.
Monday, December 31, 2007
Friday, December 28, 2007
SQL Server 2005 Service Broker
Service Broker is a very nice feature of SQL Server 2005. Like most things, it’s quite simple after you’ve figured it all out, but when you’re first learning, it can be quite daunting. When it comes to learning about SQL Server the main problem is that all the words Microsoft uses to describe things are so overloaded it’s easy to get confused. Everything is an "application" and everything is a "service"... This rapidly becomes confusing... "Which application are they talking about now? Theirs? Mine? What service? My service? The Service Broker service? That stored procedure?"
It also becomes confusing because SQL Server has so many different "technologies" built into it. A lot of them seem to do the same thing, but in fact they’re different, some more than others. For example, there is Notification Services, Service Broker, Event Notification, Internal Activation, External Activation, etc. And then it gets even more confusing because some of these "technologies" are just combinations of other "technologies." External Activation, for example, is really just Event Notification applied to Service Broker and it uses Service Broker Queues to deliver its messages.
In this post I’m going to briefly go over some of the things I’ve learned about Service Broker. If you’re completely new to Service Broker and you want a quick detailed overview with code samples you may want to go read this tutorial or browse through MSDN’s service broker articles.
So then, what is Service Broker? It is simply a queued message delivery system. A message sent from one person to another is guaranteed not only to arrive, but to arrive in order, and, if something goes wrong, not to be lost. Of course there are a lot of details to describe exactly how it behaves, but that’s really all there is to understanding it.
What would you use it for? Lots of things! The uses I’m most concerned with are "Asynchronous triggers" and "Large-scale batch processing." In other words, you can drop a message into a queue, forget about it, and get on with your own work, knowing that the message will eventually be received and handled in the background. In this way you can offload work from stored procedures or triggers to be performed asynchronously.
So here is the big question, if we’re sending messages from person A to person B, who or what are these people? They are anything which is capable of executing SQL commands. Thus it can be a stored procedure, a C# application using dynamic SQL, a Windows Service, anything. Notice that any of these things can be either the sender or the receiver!
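To make that concrete, here's a minimal sketch of the sending side from C# using dynamic SQL. Every Service Broker object name in it (services, contract, message type) is invented; you'd substitute the ones you created:

using System.Data.SqlClient;

// Sketch only: sends one message on a brand new conversation. Assumes the
// queues, services, contract, and message type already exist with these names.
static void SendMessage(string connectionString, string messageBody)
{
    const string sendSql = @"
        DECLARE @handle UNIQUEIDENTIFIER;
        BEGIN DIALOG CONVERSATION @handle
            FROM SERVICE [SenderService]
            TO SERVICE 'ReceiverService'
            ON CONTRACT [MyContract]
            WITH ENCRYPTION = OFF;
        SEND ON CONVERSATION @handle
            MESSAGE TYPE [MyMessageType] (@body);";

    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand(sendSql, connection))
    {
        command.Parameters.AddWithValue("@body", messageBody);
        connection.Open();
        command.ExecuteNonQuery();
    }
}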
Here’s a simple example. The sender can be a stored procedure which you invoke. This stored procedure simply executes the commands needed to send a message from itself to the receiver. The receiver can also be a stored procedure, but a slightly unusual one: it’s configured to start when SQL Server starts, and it runs in an infinite loop, periodically checking for new messages. If this is beginning to sound a lot like socket programming to you, then you already understand Service Broker.
Where is the queue in this example? Well, actually, there are two of them. The sender has a queue and the receiver has another. Why? So far I’ve been describing this as a dialog. Person A always sends messages to B, who always receives them. B never sends anything to A. But, in fact, that’s not the way Service Broker works. Service Broker always assumes a dialog in which A and B are both sending and receiving messages to and from each other. That doesn’t mean you have to carry on a dialog; in fact, when I’ve used Service Broker I’ve only ever had a monolog. But that’s simply the way I’ve written the applications that use Service Broker. I didn’t do anything different in the way I set up Service Broker to make it work that way.
Back to our example, since we have two stored procedures, could we put them in two different databases? Indeed we can! In fact, that is one of the main uses of Service Broker. You could put them in two different catalogs of the same server, or you could put them in two completely different servers. Now you can effectively have two different databases talk to each other.
Having a stored procedure that is always running and monitoring the queue in a loop works just fine and there is really nothing wrong with that approach. However, Service Broker comes with another option for receiving from a queue called "Internal Activation." Basically, SQL does the looping in the background for you and when a message arrives on the queue it invokes a stored procedure. This stored procedure pulls the message off the queue and does whatever it wants to do with it. Usually the stored procedure is written to then enter a loop where it pulls messages off the queue until the queue is empty, at which point the stored procedure ends. Internal Activation will also invoke multiple instances of your stored procedure if the currently running procedures aren’t able to keep up with the number of messages arriving at the queue. In this way Internal Activation can help your message handling scale based on demand.
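For reference, activation is configured on the queue itself. Here's a hedged sketch of the DDL (the queue and procedure names are invented), which you could run from Management Studio or execute from C# like any other command:

// Internal Activation: when messages arrive, SQL Server runs the named proc,
// starting up to MAX_QUEUE_READERS copies of it if messages keep piling up.
const string enableActivationSql = @"
    ALTER QUEUE dbo.ReceiverQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.HandleReceiverQueue,
        MAX_QUEUE_READERS = 5,
        EXECUTE AS OWNER );";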
When I first heard about this a little red flag went up in my brain. I thought to myself, "Self! I thought messages were guaranteed to be delivered in order with Service Broker. If Service Broker is going to launch multiple "handler" stored procedures in parallel, how is it that the messages will be handled in order?" It turns out the answer lies in another Service Broker concept I haven’t mentioned yet called a conversation. Messages have to be sent on a conversation. Messages within the same conversation will arrive AND be handled in the order they were sent, guaranteed. But messages between conversations can be handled in parallel. This behavior is performed magically in the background without you having to do anything. It happens because whoever receives a message on a new conversation automatically obtains a lock on that conversation and they will only be able to receive messages from that conversation, until the conversation ends. But again, if no one had told you, you’d scarcely be aware of it at all.
If there is Internal Activation, there must be External Activation as well, right? Right. Internal Activation launches a stored procedure inside SQL Server; External Activation is for launching arbitrary programs outside SQL Server: .NET programs, C++ programs, Ruby programs, whatever. At least, that’s what you’d expect. In fact, the names are kind of misnomers. The Internal Activator is what launches the stored procedures; Internal Activation refers more to when the activator should do its work. The same is true for External Activation. The difference is that SQL Server 2005 doesn’t come with an External Activator, so you have to write your own. You can do this because SQL Server 2005 does have External Activation, which tells you when a queue has been activated.
External Activation is performed with SQL Server Event Notification. In the parlance, you’d write an external application which receives the external activation event (called the Queue Activation event). Okay... So how does the external app "receive an event?" Through Service Broker! That’s right, you create another Service Broker Queue and the Queue Activation event is placed in that queue as a message. Then you write an external application that monitors this new queue and creates new application processes which behave just like the activated stored procedures from Internal Activation. I’m not going to go into more details of how this works. It is a bit more complicated of course. However, Microsoft does have a sample.
How does this external application monitor the Service Broker queue? Exactly the same way as our first example stored procedure did. It just sits there in an infinite loop waiting for messages to arrive on the Queue. In other words, it just executes a SQL statement inside a loop.
So, what’s the difference between using an External Activator and just writing an application that monitors the actual queue in question (not the "queue activation" queue)? The only difference is that the External Activator will spawn off multiple handler applications and therefore scale better with demand. Just writing a monitoring application will only handle one message at a time (unless you get fancy, which could get you into trouble in some circumstances due to the conversation locking mentioned earlier). However, there are circumstances where this is all you need. In fact, all the ways I’ve used Service Broker to date have worked this way.
This is partly because all my Service Broker "applications" have been monologs and the processing they have performed wouldn’t really go any faster if it were done in parallel. In fact, I have never used more than one conversation. So far, I always create a single conversation. I cache the conversation handle in a database table so I can reuse it whenever I send a message. This is because most of the performance impact when using Service Broker comes from creating a conversation. That makes sense since the conversation is actually the unit that has guaranteed in order message delivery. By caching the conversation and never ending it or starting a new one, you improve the performance. This is discussed in an article called Reusing Conversations. You have to consider some scalability concerns before deciding to use exactly one conversation. For me however, I felt that was good enough (not to mention simple).
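Under conversation reuse, the send SQL from the earlier sketch changes to something like this (the dbo.CachedConversation table is my invention; it just holds the one handle):

// Look up the cached conversation handle; only begin a dialog the first time.
// BEGIN DIALOG is the expensive part, so skipping it is where the win comes from.
const string sendReusingConversationSql = @"
    DECLARE @handle UNIQUEIDENTIFIER;
    SELECT TOP (1) @handle = Handle FROM dbo.CachedConversation;
    IF @handle IS NULL
    BEGIN
        BEGIN DIALOG CONVERSATION @handle
            FROM SERVICE [SenderService]
            TO SERVICE 'ReceiverService'
            ON CONTRACT [MyContract]
            WITH ENCRYPTION = OFF;
        INSERT INTO dbo.CachedConversation (Handle) VALUES (@handle);
    END
    SEND ON CONVERSATION @handle MESSAGE TYPE [MyMessageType] (@body);";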
For me, the receiving application is a Windows Service written in C#. It creates a background thread that loops indefinitely issuing a WAITFOR( RECEIVE ... ) command to SQL Server with a half second or so timeout. This setup allows me to send messages serially and receive and handle them serially as well. If I needed to send messages in parallel I’d have to use more than one conversation. I believe I could still receive the messages serially, but I’d have to be very careful with when the conversation lock was released, as discussed in this msdn article, Conversation Group Locks.
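Here's roughly what that background thread runs, as a sketch (the queue name is invented, and a production version would wrap the RECEIVE in a transaction so a crashed handler doesn't lose the message):

using System.Data.SqlClient;

// Blocks up to 500 milliseconds per iteration waiting for a message; when the
// timeout expires with no message, RECEIVE returns an empty result set.
static void ReceiveLoop(string connectionString)
{
    const string receiveSql = @"
        WAITFOR (
            RECEIVE TOP (1)
                message_type_name,
                CAST(message_body AS NVARCHAR(MAX)) AS body
            FROM dbo.ReceiverQueue
        ), TIMEOUT 500;";

    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        while (true)  // the Windows Service runs this on a background thread
        {
            using (SqlCommand command = new SqlCommand(receiveSql, connection))
            using (SqlDataReader reader = command.ExecuteReader())
            {
                if (reader.Read())
                {
                    string messageType = reader.GetString(0);
                    string body = reader.IsDBNull(1) ? null : reader.GetString(1);
                    // ... dispatch the message to a handler here ...
                }
            }
        }
    }
}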
Hopefully this summary has shown how flexible and powerful Service Broker is. And also how easy it is, despite the fact that the learning curve is pretty high and the setup cost is pretty high too (you have to create at least 6 new database objects at the bare minimum to get a Service Broker conversation going). With luck SQL Server 2008 won’t deprecate all this information.
Thursday, December 27, 2007
Unit Testing and TDD
Unit Testing and Test Driven Development have been around for quite a while. However, they're still just beginning to become a common practice in industry programming, partly because of the influence of Agile Programming and Dynamic Languages.
A unit test is very simple (and it has nothing to do with any manner of human body part). From a code perspective a unit is a public class with public properties and methods. From a design perspective a unit is a "high level" function or behavior or task or object. A unit test is a method that tests a certain behavior of the unit. Thus one unit will have many unit tests, each one testing a different part of its behavior.
Sometimes unit tests are written by "QA" people after the developers have finished writing the code. Other times, unit tests are written by the developers themselves after the code has been written. Both of these approaches are terribly boring, inefficient, and ineffective.
Test Driven Development is the practice of writing unit tests BEFORE production code. It goes like this:
- Write a list of things to test
- Pick a test and write it before you've written the production code
- Write the minimum amount of code needed to pass the test
- Refactor the production code AND test code, if needed
- Repeat
This is typically referred to as the Red/Green/Refactor pattern. It’s very festive and Christmas appropriate. First, you write up a list of tests. Then you decide what test you want to code first. Being a lazy computer geek who would rather IM the guy in the other room than actually get up out of your chair and (*gasp*) walk, you pick the easiest test first. Since you haven't written the code that the test is supposed to test, your test probably won’t compile. So, you write the minimum amount of code needed to get it to compile by defining the class you invented out of thin air in the test and any properties or methods you may have whimsically pretended it had. Now you run the test and it fails since none of those methods or properties actually do anything yet (red). Now that you see red, you can actually write some code. You write the minimum amount of code needed to get the test to pass (green). Then you survey your code and your tests and refactor as needed, re-running the tests when you're done. At this point you enhance your list of things to test, in case you thought of something new, and pick another test.
Every line of code you write is directly driven by a unit test (or a refactoring): Test Driven Development.
If you think I’m overemphasizing the minimum concept above, wait until you read your first TDD code sample. The first one I saw tested a Queue’s IsEmpty method. The test was that when the Queue is first created, IsEmpty returns true. The code they wrote to pass this test was:
public bool IsEmpty()
{
    return true;
}
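For comparison, the test that drives that implementation might look something like this NUnit sketch (the Queue class is the one being designed, so its name and the test names here are assumed):

using NUnit.Framework;

[TestFixture]
public class QueueTests
{
    [Test]
    public void IsEmptyReturnsTrueForNewQueue()
    {
        // Red: this won't even compile until Queue and IsEmpty exist.
        Queue queue = new Queue();
        Assert.IsTrue(queue.IsEmpty());
    }
}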
So when TDD people say do the minimum, they really mean it.
It sounds quite simple and for the most part it is. However, there are some guidelines which will help make the unit tests more useful.
- Tests must be fully automated requiring no external input or setup that is not performed by the tests themselves.
- Don't write code unless you have a failing test. Your tests define the code's requirements.
- Each test should test as little as possible. That is, be as specific as possible and test exactly one feature of the unit. Then, when a test fails, you know exactly what is wrong.
- Tests should exercise only the unit being tested and not any of that unit's dependencies (ex: database, other units, etc). This is accomplished through dependency injection and mock objects; see the sketch just after this list.
- Test code is not a secondary citizen; it is just as important as production code and should be treated as such.
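To illustrate the dependency guideline (every name below is invented): the unit under test talks to an interface, and the test supplies a stand-in. Here the stand-in is hand-rolled; a library like nmock or Rhino.Mocks can generate it for you:

public interface IOrderStore
{
    int CountOrders(string customer);
}

// The unit under test depends only on the interface, never on a real database.
public class OrderReport
{
    private readonly IOrderStore _store;

    public OrderReport(IOrderStore store)
    {
        _store = store;
    }

    public bool HasOrders(string customer)
    {
        return _store.CountOrders(customer) > 0;
    }
}

// In the test project: a canned fake lets the test run with no database at all.
public class FakeOrderStore : IOrderStore
{
    public int CountOrders(string customer)
    {
        return 3;
    }
}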
The main question to ask about Unit Testing and TDD is: does it help? Well, help with what? Bugs? Time to develop? Maintenance? Code design?
Here are some of the pros I’ve seen while doing TDD:
- Fewer bugs: Thinking through what tests to write, and actually writing them, frequently reveals other test scenarios I would have missed. This is partly because I’m not doing "gunslinger" development and partly because I’m using my classes before I’ve written them.
- Regression testing: Worried that code you wrote three months ago still works? Run the tests. Worried you forgot something during your code refactoring? Run the tests. Worried you broke your public interface by accident? Run the tests.
- Easier debugging: The test tells you exactly what is wrong and you can frequently tell exactly what LINE of code is wrong.
- Improved code design: Because you need to test one unit at a time, you naturally design your code into smaller, “DRY”-er, more orthogonal, and more composable units. You also tend to design around algorithms instead of UIs.
- Increased confidence: Because everything is tested and tests are fast and easy to run, you KNOW your code works. When your boss asks, "Does this work?" you don’t have to say, "Last time I checked..." The last time you checked was immediately after your last build. "It works."
- One development style: Whether you’re writing new code, or prototyping, or debugging, or adding features, or working on someone else’s code, it’s the same process: Red/Green/Refactor.
- Feeling of "making progress": You’re constantly finishing small units of work in the form of tests and getting instant feedback that the work is correct. This means you don’t have to go days hacking on some deeply embedded code, hoping you’re getting it right.
What are the cons?
- At least twice as much code: Tests require lots of code. You have to write it; there is no avoiding it.
- Tests take time: Writing all those tests takes time.
- Test Maintenance: You have to keep that test code up to date if you want it to be worth anything.
- Dependency Injection can get ugly and it can be hard to know where to draw the line, as I discussed earlier.
- Some things can’t be tested: You can’t test GUIs. There are also things that are just too hard to test. Asynchronous timing issues for example.
- Code has to be written to be tested: You can’t test any arbitrary code. You can test parts of it in some cases, if you’re willing to deal with dependencies it may have. But to test it "properly" it has to be written with testing in mind.
So is it worth it? Do the pros outweigh the cons? So far, I think they do. I even think that in the long run, writing unit tests may actually save time because they make it so much easier to update code and verify that code is working correctly months and years later. They also make it easier for multiple developers to work on the same code at different times. Also, if you’re doing proper TDD, then maintaining the tests isn’t an issue. And you can address old code that isn’t tested by refactoring it and testing the refactored portions little by little as you fix bugs and add features. And while you may not be able to click a button on a form, you can call the method that the button’s click event handler would have called.
All that being said, this is still computer programming. There is no silver bullet. But TDD seems to get between 70% and 80% of the way there, and that’s certainly better than randomly clicking around your Forms...
Finally, if you're completely new to this stuff you're going to want to check out nunit for unit testing and nmock or Rhino.Mocks for creating mock objects, all of which are free.
And now we’ve reached the part of the post where I ask what you think about Unit Testing and TDD. Do you agree with my take on it? Do you have stuff to add?
Thursday, December 20, 2007
Dependency Injection
Dependency Injection is an "object oriented design pattern" for creating loosely coupled objects. It has nothing to do with needles, and if it makes you think of the Breakfast Club that's not my fault.
Let's say you're writing a class whose job is to find a certain configuration file. It's going to look in a predetermined expected location, and if it doesn't find it there it will ask the user to select it. This object is dependent on the user. And in code, it is dependent on some kind of UI for asking the user for the file.
In C#, you could write this object so that it creates an OpenFileDialog (which doesn't have the same memory-leak-causing behavior as a regular form, by the way) and displays it to the user. However, this would not be a loosely coupled design. What if you wanted to change this to use the Vista open file dialog? Or what if you wanted to make it force the user to select a valid file by continually asking them until they select one? You'd have to edit the main class. And you'd have to be careful that the user prompting logic didn't become intertwined with the rest of the class's logic.
What can we do? Let's inject our class. Let's inject it with dependencies. Or more accurately, classes that take care of its dependencies for it.
Create an interface like IPromptUserForConfigFile which contains a method that returns the path the user selected as a string. Now modify the constructor of your main class to accept an instance of an IPromptUserForConfigFile class. The main class simply calls the method on this interface, all the details of how the user is prompted are abstracted away. Plus you can change how the user is prompted at any time by simply passing in a different class that implements IPromptUserForConfigFile.
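A minimal sketch of that design, with invented member names and an invented expected file location:

using System.IO;
using System.Windows.Forms;

public interface IPromptUserForConfigFile
{
    string PromptUserForConfigFile();
}

// One implementation; a Vista dialog or a "keep asking until the file is
// valid" version would just be another class implementing the interface.
public class OpenFileDialogPrompt : IPromptUserForConfigFile
{
    public string PromptUserForConfigFile()
    {
        using (OpenFileDialog dialog = new OpenFileDialog())
        {
            return dialog.ShowDialog() == DialogResult.OK ? dialog.FileName : null;
        }
    }
}

public class ConfigFileLocator
{
    private readonly IPromptUserForConfigFile _prompt;

    public ConfigFileLocator(IPromptUserForConfigFile prompt)
    {
        _prompt = prompt;  // the dependency is handed in, not constructed here
    }

    public string FindConfigFile()
    {
        const string expectedPath = @"C:\MyApp\app.config";
        return File.Exists(expectedPath) ? expectedPath : _prompt.PromptUserForConfigFile();
    }
}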
Dependency injection seems pretty simple. It gives you loosely coupled objects, which are very orthogonal, and possibly best of all it makes it easy to unit test with the help of a library like nmock.
What are the downsides? You now have an interface and a class that you otherwise probably wouldn't have created (since you haven't actually had any practical DEMAND for loose coupling yet). And you also have to construct a class that implements the interface and pass it into the main object's constructor. There are libraries to help with that part, like Spring.NET. I've never personally used them, but they exist. Actually, when I've used this pattern I've built in a default implementation of the interface to the main class and instead of passing in the interface object through the constructor I allow you to override my default with a property. This clearly violates some of the loose coupling, but to be honest I'm really not using this pattern for the loose coupling, I'm using it so I can Unit Test.
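Here's a sketch of that variation: the default is baked in, and a property lets a test (or anyone else) substitute a different implementation. The names are again invented:

// Default implementation built in; override through the property if needed.
public class ConfigFileLocatorWithDefault
{
    private IPromptUserForConfigFile _prompt = new OpenFileDialogPrompt();

    public IPromptUserForConfigFile Prompt
    {
        get { return _prompt; }
        set { _prompt = value; }  // a unit test injects its mock here
    }
}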
The biggest downside here would seem to be that the size of your code base has increased. And this has happened to enable loose coupling, but mostly to allow for Unit Testing. Is this a good thing?
When I first started learning about Unit Testing my biggest worry, apart from how much more code I'd be writing and maintaining, was that I would start writing strangely designed code just so I could Unit Test it and not because it was the right design for the situation. You can imagine all kinds of terrible things. Classes with all public methods. Classes with only a single static method that is only used by one other class. Classes with tentacles and suction cups and slimy oddly named and mostly irrelevant properties.
However, I was surprised at how often my design changed for the better due to thinking about testing. My first few unit testing attempts didn't result in any tentacles or other horrible things. That is, until I encountered dependency injection. "Look! I'm making my code loosely coupled!" I thought to myself, "That's a good thing! This Unit Testing really works!"
But dependency injection is a very slippery slope. After all, the majority of classes you write are riddled with dependencies. Things like databases, framework code, web services, user interfaces, other code libraries you've written, etc. Should you seriously create a class to wrap each one of these and then pass those classes into other classes that apply the "business logic" and "glue" between how those dependencies are used? Imagine how huge your code base would be then! And you certainly can't create static classes if you're planning on passing them into another class through a constructor. And of course now that you've created all these objects that wrap various dependencies, you're going to want to reuse them. In .NET that means putting them in their own assemblies. Are we going to end up with 500 assemblies with complicated dependencies? Where do we draw the line?
An object mocking framework like TypeMock (not free!) would help alleviate the need for all the "injection" going on here just for the sake of testing, though you'd still need to create many objects. I need to look into that product more to decide if it's worth the cost (it's quite expensive). If it is, then the line can be drawn with a question: "Do I need this to be loosely coupled?" If it isn't, then the line needs to be drawn with a different question: "Do I need to mock this for testing or to have it be loosely coupled?" The first question is pretty easy to answer. The second one... maybe not so much.
Let's end with a string of questions. Do you unit test? Do you mock objects? What mocking library do you use? How does dependency injection make you feel?
Monday, December 17, 2007
Connect4
In college I took an AI class. For the most part I was pretty disappointed in it. I wanted to learn about techniques to make computers learn (Neural Networks and Evolutionary Computation and so forth), but intro to AI is mostly just A* and Alpha Beta search. Granted, those are clever algorithms, but at their core they're just brute force, so I didn't find it all that interesting.
However, our final project in the class was to develop a program to play Connect 4. The class broke into teams of 4-6 people. Then we had a big tournament between all the teams to see whose Connect 4 program would win. Our professor gave us a heuristic to start us off which she told us wasn't very good. We were supposed to develop our own heuristic and then any kind of UI (or not) that we wanted.
Heuristics are pretty boring. Basically you just spend a lot of time trying to figure out how to decide, given a certain board state, if the board is good or bad. So you do all the work of figuring out the game, code it into a heuristic, and the computer just brute forces its way through all possible game moves trusting that your heuristic is smart enough to tell what is good and bad.
My team didn't really want to work on the Heuristic. Instead the majority of them went off and created a pretty sweet OpenGL UI where the Connect 4 board would spin around and the pieces would fly off the stack and into the board...
We just used the Heuristic our professor had given us, but a friend and I optimized the crap (memory usage) out of our Alpha Beta search, board state representation, and heuristic evaluation. Because our alpha beta routine was so much faster than the other teams' we were able to search about 2 levels deeper into the game tree.
We tied for first place.
Even though our Heuristic wasn't supposed to be very good, the fact that our search algorithm was so much more efficient meant we looked further ahead than everyone else and that was all it took.
At the time, we also had access to a room full of computers. We considered creating a client/server program to use all those computers to evaluate the game tree in parallel, but after we did the math we realized that adding 12 more computers would only get us about one level further down the tree, because each level has roughly seven times as many nodes as the one before it (one child per column you can drop a piece into). We didn't think one more level was worth the effort, so we didn't bother.
Wednesday, December 12, 2007
PowerShell Example
Here's a real quick Windows PowerShell example I just used.
I've been experimenting with .NET's Windows EventLog support. There's a class in System.Diagnostics called EventLog which lets you query the event log and add entries and so forth.
I had a C# program which created an event source on the Application Log like this:
if (!System.Diagnostics.EventLog.SourceExists("TestSource"))
    System.Diagnostics.EventLog.CreateEventSource("TestSource", "Application");
and I needed to change it so that the event source was on a new custom log like this:
if (!System.Diagnostics.EventLog.SourceExists("TestSource"))
    System.Diagnostics.EventLog.CreateEventSource("TestSource", "CustomLog");
To do this, I needed to delete the old event source; otherwise I would get an error because I was trying to use the event source with the wrong log.
I was about to write a tiny little C# program to say:
System.Diagnostics.EventLog.DeleteEventSource("TestSource");
And then I thought, "PowerShell!"
PS C:\> [System.Diagnostics.EventLog]::DeleteEventSource("TestSource")
Tuesday, December 11, 2007
Copping Out With Consistency
Computer Software is so virtual that it is very hard to determine why one thing is "better" than another. Is it better to have buttons run along the bottom of a grid horizontally, or along the right of a grid vertically? What exactly does "better" even mean? Easier to read, easier to understand, better looking, more extensible, easier to code, ...? You can apply all the Theories of Software Usability you want to and it's still going to be hard to determine.
The same issues come up in object oriented design. Should this be one object or two? Should it contain state or not? Should it use inheritance, or delegates, or dependency injection?
It's very difficult to prove that one way is better than another in cases like this. To combat this problem, programmers and designers use some common rules of thumb. One of these is that consistency is a good thing.
If you choose to lay out your UI one way somewhere else, it's a good idea to do it the same way here. If you've designed your objects to behave a certain way somewhere else, it's a good idea to make them work similarly here. It makes sense. If everything is "consistent," users don't have to constantly learn new things. Learn it once and it applies all over. This was one of the things that impressed me about Windows PowerShell. Its structure makes all its commands very consistent.
However, I've found that people all too often use Consistency as a convenient cop out. "We have to do it this way! It's more consistent!" Okay sure, rule of thumb, consistency is good, got it. Is it always true? Of course not! Sometimes the benefits of doing something a different way outweigh the benefits of being consistent. Other times, being consistent can be downright bad. If you're forcing A, which isn't in any way similar to B, to behave similarly to B, you're probably doing something wrong. And of course, if the consistent way has problems, or if a better way has indeed been found, then forcing yourself to be consistent is just keeping you from making any forward progress.
I know, this seems obvious. But it's such an easy trap to fall into and it happens all the time. It's just easier to say, "let's make it consistent," than it is to think about whether that might actually be the best choice. After all, software is so virtual, it's hard to determine if one thing is "better" than another.
Thursday, December 6, 2007
Check in Comments
My first approach to check in comments was not to use them. "You're just going to have to diff the code to see what really changed anyways!" I would have said.
My approach has since changed a bit. I still think you're going to have to diff the code to see what really changed. Like I talked about in Code Documentation, metadata is a bad substitute for reading code. However, check in comments can help answer the following questions:
- What did that dude change?
- Which files do I need to diff?
- Could this change be of interest to me?
With that in mind, here are my rules of thumb for writing check in comments:
- If you didn't really change anything, don't leave a comment. If you're checking in a file and all you've done is change the spelling of a variable, or the capitalization of a method, or the wording of a comment, don't bother with a check in comment. No one is ever going to need to diff or read your changes for that version, so save me the time of reading your comment just to figure out that you didn't change anything. If you're required to leave a comment, just put "none" or something equally simple.
- Title what you changed. Pretend you're trying to give the diff between your version and the previous version a Chapter Title. It should summarize the changes you've made at a high level.
- Be as brief as possible. Don't write a book. Don't even use full sentences. How few words can you use to describe the changes you made?
- Group check ins by change. If you're checking in 10 files which have had 5 mutually exclusive changes made to them, don't check them in all at once. Do 5 check ins of two files each. Each check in gets its own comment describing only the changes in those files (see the example after this list).
- Check in one change at a time. If you're frequently writing check in comments that say, "Did this and that," could you have done "this" and checked in, then done "that" and checked in?
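For example, two unrelated changes become two check ins, each with its own title. I'm assuming Subversion here, and the file names are made up, but any version control system works the same way:

svn commit -m "Add cancel button to scan dialog" ScanDialog.cs ScanController.cs
svn commit -m "Retry failed uploads in ScanService" ScanService.cs RetryPolicy.cs

Either comment tells you exactly which versions to diff. "Did this and that and some other stuff" across all four files tells you nothing.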
A check in comment should convey what changed at a high, abstract level. To figure out how those changes were accomplished: diff. To figure out if those changes might be causing a bug you just found: diff. To figure out how to use the changes: diff. You're better off looking at code to answer those kinds of questions anyway. The check in comment is just a guide to help you figure out where to look.
Tuesday, December 4, 2007
Managing Projects
This is a follow-up to a previous post, To Estimate or Not To Estimate.
I've been doing a lot of reading and learning and experimenting since that post and I now have a much clearer idea of what I'd like to see in a project management application.
I'll start with what kinds of questions I want to be able to answer:
- Who has too much work, who needs more work? (Developer Workload)
- How much work have we done, and how much work is left? (Iteration Status)
- Is our scope slipping, are we getting things done, are we converging on a completion date? (Iteration Progress)
- Are certain high level tasks slipping or eating more time than others, or more than expected? (Project Item Progress)
- What are people working on/what should they be working on?
Next, here's how I'd structure the work to answer those questions (there's a rough code sketch after this list):
- Project - top level
- Project Items - high level "tasks", or "features" that make up a project. These are assigned to one or more individuals. Estimates are added at this level in terms of complexity or actual hours.
- Features, Tasks, and Bugs make up Project Items.
- Tasks and Bugs are assigned to developers and estimated in hours (<=16 hours)
- If actual time spent is going to be tracked, enter it at the Project Item level.
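Here's that sketch, just to pin the structure down. This is plain Java of my own invention, not any particular tool's API:

import java.util.ArrayList;
import java.util.List;

class Project {                 // top level
    String name;
    List<ProjectItem> items = new ArrayList<ProjectItem>();
}

class ProjectItem {             // high level "task" or "feature"
    String name;
    List<String> assignees = new ArrayList<String>();  // one or more people
    double blindEstimate;       // complexity or hours, added up front
    double actualHoursSpent;    // if time is tracked, it's tracked here
    List<Task> tasks = new ArrayList<Task>();
}

class Task {                    // Features and Bugs look the same
    String description;
    String assignee;            // one developer
    double estimateHours;       // <= 16
    boolean done;
}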
Now you have some nice and simple burn down charts. But are they accurate? No. You know your developers' estimates are not correct. You also know that some devs are better estimators than others. So you can't accurately predict a ship date from this data. You can get a rough idea, and you can see that the project is in good shape and is progressing nicely, but you can't determine a date.
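(The burn down data itself is nothing fancy, by the way. Using the Task sketch above, each day you just plot the sum of the unfinished estimates:)

import java.util.List;

class BurnDown {
    // One point on the chart: total remaining estimated hours today.
    static double remainingHours(List<Task> tasks) {
        double remaining = 0;
        for (Task t : tasks) {
            if (!t.done) remaining += t.estimateHours;
        }
        return remaining;
    }
}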
If you want to be able to predict an actual ship date, then you need to track time, and you need Evidence Based Scheduling and you need FogBugz. Can EBS account for the fact that working with a third party scan library will take 4 times as long as what a developer usually works on? Kind of, but not really. Can it predict that a ton of redesign and new features may need to be added to the application after the client sees a demo? Kind of, but not really. So even though EBS is way more accurate with the data it has, it's still just giving you a rough estimate because it doesn't know what data may arrive tomorrow.
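For what it's worth, the core idea behind EBS, as Joel Spolsky has described it, is Monte Carlo simulation against each developer's track record. This toy sketch is mine, not FogBugz's actual code: a velocity is a past estimate divided by its actual time, and one simulation divides each remaining estimate by a randomly drawn velocity:

import java.util.List;
import java.util.Random;

class Ebs {
    // One simulated total for the remaining work. Run this a few hundred
    // times and read your ship date confidence off the distribution.
    static double simulateTotalHours(List<Double> remainingEstimates,
                                     List<Double> pastVelocities,
                                     Random rng) {
        double total = 0;
        for (double estimate : remainingEstimates) {
            // velocity = past estimate / past actual for this developer
            double velocity = pastVelocities.get(rng.nextInt(pastVelocities.size()));
            total += estimate / velocity;
        }
        return total;
    }
}

But the history still can't contain tomorrow's redesign, which is exactly the point.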
So the question becomes, how accurate do we need to be with this stuff? Are we satisfied knowing that our work is trending toward completion? Or do we need to know WHEN completion will actually happen?
From my current thoughts above you can tell that, right now at least, I think it's enough for me to know we're trending toward completion. Unfortunately, I'm not the only person in the equation. My boss needs to do two things I don't. First, he needs to guess before we even start a project how long the project will take. I call this a "blind estimate" and it's clearly impossible, but he still needs to do it because the client isn't going to hire you otherwise. Second, he needs to keep the client updated on when we're going to ship, because that is all a client cares about: when we'll ship, and how the budget looks (only some of them care if the app is any good).
Clearly, actual time matters a lot to him. And not just time spent working on tasks, but all "billable time." Time spent in design meetings, time spent with clients, time spent writing documentation, time spent researching tools, etc. Thus any time entry system which is going to be at all useful for integrating with billing has to allow me to enter time on things other than tasks or project items. It should allow me to just enter time and not require it to be "on something."
Furthermore, now that we're entering time, that time can be used to help with future "blind estimates." How long did it take to do x last time? It's likely to take a similar amount of time this time. That's why I think it makes sense to track time at the Project Item level. It's high enough to be "reusable" but low enough to be semi-accurate. Also, since we're estimating at the task level, and tasks are linked to Project Items, we can see how the remaining estimates and actual time spent compare to the original blind estimate and the budget.
I suppose the final question must be, is there a tool that will do all this? Not all of it, at least not that I'm aware of. I know that a lot of this is very similar to Scrum. But the Scrum tools I've seen don't capture all of it (Scrum for Team System, eScrum), especially not the time entry portions. Also, FogBugz comes pretty close, but it lacks the Project Item concept and doesn't have any burn down style reports. So if you were hoping I'd end this by recommending a tool, I'm sorry to disappoint. If I find one, or can make one work like I want, I'll let you know. And if you know of one, please let me know.