As an example, let's say we're going to update a record in a database, but before we do, we need to do a Get so we can see whether any of the values changed. If some values have changed, we'll do some extra work after the update, maybe send an email or something. We're also going to use optimistic concurrency, checking whether the record changed in any way via a last-update timestamp column, before we do our save.
Now, this is just an example so I can make a larger point, so bear with me here. Let's go ahead and pretend we're using LINQ-to-SQL to do the update, so LINQ will also do the concurrency check for us.
Our pseudocode looks kind of like this:
Get record
Update record
if record.prop changed:
    send email
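To make that concrete, here's roughly what the flow might look like in LINQ-to-SQL. This is just a sketch: the Customer table, its columns, the connection string, and SendEmail are all made up for illustration. The important bit is the timestamp column marked IsVersion = true, which is what LINQ-to-SQL compares on SubmitChanges() to do the optimistic concurrency check for us.

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string Email;
    [Column] public string Status;
    // rowversion/timestamp column; LINQ-to-SQL checks it on SubmitChanges()
    [Column(IsVersion = true)] public Binary LastUpdated;
}

public class AppDataContext : DataContext
{
    public AppDataContext(string connection) : base(connection) { }
    public Table<Customer> Customers => GetTable<Customer>();
}

public static class Example
{
    public static void UpdateStatus(int id, string newStatus)
    {
        using (var db = new AppDataContext("connection string here"))
        {
            // Get record
            var customer = db.Customers.Single(c => c.Id == id);
            bool statusChanged = customer.Status != newStatus;

            // Update record (LINQ-to-SQL does the optimistic concurrency
            // check against LastUpdated when we submit)
            customer.Status = newStatus;
            db.SubmitChanges();

            // if record.prop changed: send email
            if (statusChanged)
                SendEmail(customer.Email, "Your status changed to " + newStatus);
        }
    }

    static void SendEmail(string to, string body) { /* hypothetical */ }
}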
Now, if the update fails due to the concurrency check, this will just bomb out.
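Concretely, "bomb out" here means SubmitChanges() throws a ChangeConflictException. A minimal sketch, using the hypothetical UpdateStatus method from above:

using System;
using System.Data.Linq;

public static class FailureDemo
{
    public static void Run()
    {
        try
        {
            Example.UpdateStatus(42, "Active");
        }
        catch (ChangeConflictException)
        {
            // The LastUpdated value no longer matched, so the update was
            // rejected. Notice the Get we already did was wasted work.
            Console.WriteLine("Concurrency conflict: the record changed underneath us.");
        }
    }
}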
But notice that when the concurrency check fails, we've still done the Get operation. That kind of sucks, because we didn't need it. And we can't do the Update before the Get; that would defeat the purpose. So if we want to avoid that wasted Get, we'll have to do the concurrency check manually.
Wait. What? Why am I getting all upset about this? WHO CARES if I do a Get I don't need in a failure scenario? It would only matter if there were something unusual about this failure, like it happening all the time, or it having record-locking implications, and none of that applies here. I'm optimizing for the wrong thing. I should be optimizing for success, not for failure (while avoiding premature optimization, of course).
If I could remove the Get completely, so the method could succeed without it, that might be something worth talking about. But it's totally not worth adding code complexity just to optimize this method for a failure case.
Thus: Optimize for Success, and don't get too worked up over failure.