Sunday, March 30, 2008

Tech Night - Getting Started with MVC

First of all, thanks to all those who attended TechNight this past week at QSI HQ. That was by far the largest crowd I'd seen gather for TechNight, much bigger than the 10 people that came in September when I presented on ASP.Net AJAX. With Quick having the much larger training center now, I hope this trend continues.

The Snag

During post-presentation discussions, I finally arrived at why the test failed. Short version: I'm an idiot.

Long version: I went with a TDD approach to the presentation to show how MVC was more testable. It's a great way to show off some of MVC's advantages. At least, it is if you keep running those tests as you refactor your code.

What I did was stop at red, green, refactor. I never went back for more green after an additional refactoring. I had written and passed my product controller tests (where the big test failure happened in the presentation) before I added my model code to the product controller. That's a pretty big change, and the tests clearly caught it in front of an audience, rather than in the comfort of my own home.

So, my protests of, "These passed at home!" were technically true, because I had only run them once. Like I said, idiocy.

In the end we got it working for the purposes of the demo, and I owe Mel, Steve, Steve, Kris, and many others who shouted advice a big thanks for helping me over the hurdle. However, the demo ended up testing something I didn't need or want to test: that LINQ was doing what it said. I had wanted to add one more test before the demo that hit an in-memory collection of products; that would have solved my problem with the connection string and been a much better test of the controller code.
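For what it's worth, the test I wish I'd written has roughly this shape. The demo was C#, so this Java sketch is only an illustration, and the names (ProductRepository, ProductController) are invented: the point is that a controller depending on an interface can be tested against an in-memory list, with no connection string to break mid-presentation.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: the controller depends on an abstraction,
// so a test can hand it an in-memory collection instead of a
// LINQ-to-SQL context that needs a live database.
interface ProductRepository {
    List<String> allProducts();
}

class ProductController {
    private final ProductRepository repository;

    ProductController(ProductRepository repository) {
        this.repository = repository;
    }

    // Returns the "model" the view would render.
    List<String> index() {
        return repository.allProducts();
    }
}

public class ProductControllerTest {
    public static void main(String[] args) {
        // In-memory fake: no database, nothing to fail on stage.
        ProductRepository fake = () -> Arrays.asList("Widget", "Gadget");
        ProductController controller = new ProductController(fake);

        List<String> model = controller.index();
        System.out.println(model); // prints [Widget, Gadget]
    }
}
```

A test like this exercises the controller's behavior, not LINQ's, which was the whole point I'd been trying to make.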

Sunday, March 30, 2008

TDD Growing Pains

I've understood and attempted to follow TDD practices for a while. The initial understanding of it was pretty tough, and very, VERY sporadically put into practice. After all, it's a pretty big shift in thinking when you first encounter it. "Write tests? For code that's not there?? Wow, that's fantastic!!"

So, over time I've gotten much better at the whole TDD thing. Or so I had thought.

Wednesday, mid-presentation, my lack of patience with the system outed me. I had written my tests, passed my tests, changed my code, changed it a little more, and then went to present it. Wow, did I get smacked in the head for not running the tests after each change. (Continuous integration for a small demo app didn't really enter my mind. I have a hard time firing up CI at home since there's not much to integrate, but maybe I should join Mr. Donaldson in doing so.) So, big-time lesson learned there.

The flip side to that stumble was the amount of time TDD saved me on my current project at work. I had written a number of unit tests to cover some business logic, and a few integration tests. A couple of defects came in showing that I needed to change a number of my dates to nullable objects. Not a difficult change, but not totally trivial, either. I made the changes, ran the tests. Oops... more changes, ran the tests again. The whole changeover was done, tested, and passing within 15 minutes.

A few pitfalls I've fallen into while traveling down the path towards TDD enlightenment:

  1. I tend to wrestle a lot with when to lay out some of the framework and when to just break down and write the tests first. The main reason I wrestle with this one is that the good ol' crutch of IntelliSense sure comes in handy for test writing if you've framed your code out some. (When writing Ruby code in e, this doesn't seem to be as much of an issue for me.)

  2. A recent project found our team working with a pile of generated tests. With the ever-present deadline looming larger and larger, we decided to go with the generated tests since they appeared to be "good enough." They were good at the start, but as the code base grew, and our tests with it, our CI build crept up to an hour. There's a big red flag. Digging into the code, we found that very few unit tests had been generated, but a pile of integration tests had. It takes a while for nHibernate to do its thing 4,316 times. (We've since started cleaning that up; the build is back to a more reasonable, but still outrageous, half hour now...)

  3. When to mock? I'm hip to the whole mocking thing, but identifying the right time to do so is still giving me troubles. I'm sure I'll get there with practice, just need some more practice.

  4. Is this test trivial, or needed? In a previous post I mentioned viewing code coverage the other way around, seeing what's uncovered rather than covered. I don't see 100% coverage, er I mean, 0% uncovered as attainable in the web projects I usually work on, but where do you draw the line? If you end up falling over on a piece of untested code, I guess it wasn't trivial.

  5. I still have large holes in what I test. Just about all of my JavaScript code is completely untested. I know there are frameworks available to help me with that, but I haven't sat down and dug into them.

So, short term I'm going to keep my ears pinned back and keep moving forward with testing first. May as well get a home build server set up, and keep digging into mocking as much as I can. Gonna have to investigate Moq as well. Also, time to get that JavaScript tested. With as much AJAX as we're cranking out, this is quickly becoming a priority.

Wednesday, March 12, 2008

Is Code Coverage Worth Bragging About?

I'm a hockey fan, and have been for a while. In being a hockey fan, I've had a number of statistical discussions around the sport. Second only to baseball fans in statsville, hockey fans love their numbers. One major discussion took place around shots on goal, that semi-subjective stat that tells a team how many times the goalie got between them and a goal. I've seen a team put 40 shots on net and lose to a team that got 18, because 3 of the 18 went in and only 1 of the 40. After the game, you lost, but you put 40 shots on that guy! You certainly didn't lose for lack of trying.

So, over in the software world, I'm now wondering if Code Coverage is our shots on goal stat. You get over beers and start talking with peers and hear...

Bob: "I've got 40% code coverage!"

Terry: "Bah, that's nothing, we hit 55% this afternoon!"

James: "You both suck, we've got 70% coverage and are almost through UAT!"

Here's what's missing in that discussion...Bob is 60% uncovered, Terry is 45% uncovered, and James has 30% of his code waiting to spring something on him in the next few days of UAT. In essence, they've each got a number of shots on goal, but how many are in the net? Are we focused on the wrong side of the equation?

The goal of 100% code coverage is a tough one, unless you're Joe O'Brien. (See: Testing Mandatory, CodeMash 08) In my life in ASP.Net, getting 1% coverage on a code behind file would be worth a round of beers.

But, are we focused on the wrong goal? Clearly, increasing your code coverage is decreasing uncovered code. But, if your goal is 75% coverage and you reach that goal, do you stop? Maybe shifting our focus to code uncoverage will close that gap.

Don't be happy with those 40 shots on goal...

Tuesday, March 11, 2008

Mix08 Recap

My recap of Mix08 has to start with a big thank you to Jeff Blankenburg, and the small company he works for based in Redmond, WA. Jeff got me a ticket to Mix (QSI got me the plane ticket and hotel stay), then once I got there he got Colin Neller, who runs the Memphis user group, and myself tickets to the Blue Man Group. Then when the White Death hit Columbus and our flight home got cancelled, Jeff put us up in his room for the night because he was staying one more evening. (Oh, and he loaned me $40 at the craps table…but he told me the juice is running on that.) If a DE's job is to drive more interest, JB is doing a good job of getting me around the Microsoft community.

Second big thank you is going to go to Steve Harman. Steve and I both work for Quick, and roomed together while at Mix. (That’s the rest of the “us” that Jeff put up courtesy of the Blizzard of ’08.) Steve has a number of contacts in the community, and he introduced me to everybody he knew, and some he didn’t know. (And after our interesting trip home, Steve and I didn't kill each other, and we're still speaking.)

So, with Jeff and Steve leading the way, easily the biggest part of Mix for me was the people I met. There were a number of sessions I didn’t go to as I stayed in the Open Spaces/Sandbox area to just talk with people. So, I may have missed a Silverlight session, but I got to spend about 35 minutes talking to Phil Haack about the ASP.Net MVC framework. The guy writing that framework has to be a pretty solid source of information. Later I watched the Steve Ballmer keynote with Rob Conery. Rob shared a lot of what his general development practices are, and what tools he uses. (I didn’t luck out and have beers with ScottGu like Steve did, though.)

I did get to a few sessions, though. The highlight for me was Scott Hanselman’s talk on the ASP.Net MVC framework. Since I’ll be doing one of those myself on March 26th, I figured I’d better watch Scott to get some materials to steal, I mean, to get ideas. I also got to see Nikhil Kothari talk about ASP.Net AJAX apps moving forward, which was pretty cool for me since I got a lot of information from Nikhil back when I was picking up on what was then ATLAS. One of the better sessions was a panel discussion on Open Source and where it’s headed. The panel consisted of Mike Schroepfer (Mozilla), Rob Conery (Microsoft), Andi Gutmans (Zend), Miguel de Icaza (Novell, Moonlight), and Sam Ramji (Microsoft). They took questions which started the discussions, which got heated at times, but raised some great points. In hindsight, I wish I’d found another panel discussion to go to.

All the sessions, including the keynotes (the Guy Kawasaki and Steve Ballmer keynote should not be missed), are available online now in a number of formats. All that information is great, so in the end I really won’t miss any sessions I wanted to see, but the opportunities to meet the people I met aren’t available online. Here’s looking forward to Mix09 already.

Saturday, February 23, 2008

Don't Hide Behind a Catch Block

"Pragmatic Programmer tip #32: Crash Early."

I've been recently tasked with some refactoring items. Like any project that's longer than 8 weeks, you'll dig into some code and find yourself going, "What was I thinking with THIS?" Or, if the project's a bit longer, you wonder if you came up with this gem (and no, not a Ruby gem) or if somebody else on the team left this odor in the code.

The code smell of this particular blog post is the abuse of the try/catch block. I've stumbled across a few uses of try/catch as program flow control, or worse, an empty catch block used to hide an issue you're not sure how to deal with.

Right off the top we'll go with the empty catch block. I encountered a few of these while refactoring, and had to wonder why we were hiding behind them. Crash early, but when you crash, sweep it under the rug and move on? That's not good. If you need a catch block, then isn't there something you should actually be catching? Take the following as an example:

[Image lost to time and the Internet...]

That's a pretty simple little conversion function, with the dreaded empty catch block in there. It's there not because of the TryParse, which is going to give you a false if it fails its own try, but because applying ToString to a null object is going to throw a null reference exception. Rather than the empty catch, why not a null check on o? Then you know what you're dealing with, rather than not caring and returning 0 for anything you're not expecting. To me, this is a classic garbage-in, garbage-out scenario: you pass in a null or an object that can't be converted to an int, and you get a 0.
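Since the screenshot is gone, here's a rough reconstruction of the idea in Java (the original was C#, and Java has no TryParse, so parseInt with a narrow catch stands in; the function names are invented). The "before" version swallows everything; the "after" version makes the null check explicit:

```java
// Hedged sketch of the lost example, not the original code.
public class SafeConvert {

    // Before: the empty catch hides why we got a 0 back.
    static int toIntWithEmptyCatch(Object o) {
        try {
            return Integer.parseInt(o.toString()); // NPE if o is null
        } catch (Exception e) {
            // swallowed -- null, "abc", and "42x" all silently become 0
        }
        return 0;
    }

    // After: the null check says what we mean, and only the parse
    // failure itself is handled.
    static int toInt(Object o) {
        if (o == null) {
            return 0; // deliberate garbage-in, garbage-out
        }
        try {
            return Integer.parseInt(o.toString().trim());
        } catch (NumberFormatException e) {
            return 0; // genuinely not a number
        }
    }

    public static void main(String[] args) {
        System.out.println(toInt(null));  // prints 0
        System.out.println(toInt("42"));  // prints 42
        System.out.println(toInt("abc")); // prints 0
    }
}
```

Both versions return the same values, but in the second one the reader can see that null is handled on purpose rather than by accident.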

Next up was a bit of sheer genius, born of throwing my hands up when dealing with nHibernate. Rather than find out what was actually going on, I resorted to using try/catch procedurally to handle the issue. Here's a bit of code that shows my brilliant solution:

[Image lost to time and the Internet...]

The issue there turned out to be that we were using nHibernate's Load method rather than Get, so I didn't have a null object to check. I had an actual proxy object full of null properties. My solution: duck behind the catch block and make it bend to my will. If the Load found something, the id check would pass and I'd do an update. If the property check threw an exception, I was dealing with a new object, time for an insert.

In this case my ignorance of how nHibernate deals with objects was the problem, but ignorance is no excuse. Using try/catch in this manner was a hack with Paul Bunyan's ax. It got me moving forward, but it was a bad solution.

The final empty catch falls along the same lines, where what was available to us wasn't discovered until much later. This empty catch block was saving us from a null reference error on a nullable data type in a data-bind method.

[Image lost to time and the Internet...]

Getting rid of this empty catch may have been the easiest of the three, because nullable data types carry a "HasValue" property that tells you, well, whether they have a value. The refactored code looked like:

[Image lost to time and the Internet...]

Using the HasValue property, we can refactor this one down to the always-elegant ternary operator. We know whether or not we have a value and can cleanly bind our data to what is, in this case, a grid column.
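With the images gone, a rough Java transliteration of the before and after might look like this (C#'s nullable DateTime becomes a possibly-null LocalDate, the HasValue check becomes a null check, and the method name and date format are invented):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Hedged sketch of the refactor described above, not the original code.
public class GridBinding {

    static final DateTimeFormatter SHORT = DateTimeFormatter.ofPattern("M/d/yyyy");

    // Before (in spirit):
    //   try { cell = shipped.format(SHORT); } catch (Exception e) { /* empty */ }
    // After: one ternary, no exceptions used for control flow.
    static String bindShippedDate(LocalDate shipped) {
        return shipped != null ? shipped.format(SHORT) : "";
    }

    public static void main(String[] args) {
        System.out.println(bindShippedDate(LocalDate.of(2008, 2, 23))); // prints 2/23/2008
        System.out.println("[" + bindShippedDate(null) + "]");          // prints []
    }
}
```

The null case now binds an empty string on purpose, instead of whatever the grid happened to do when an exception was quietly eaten.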

If you're going to fail early, don't hide the failure. Figure out what it is and write the correct code for it. The try/catch is there to help you handle exceptions, not to hide them and move on.