Thursday, July 30, 2009

Demo Showcase Suite – A History – Part 8

Goal #6:  I wanted to plan for change, in scope, in the composition of the development team, in technology and in time lines.

Change on a project is inevitable.  Even on short projects of a couple of weeks, there will always be a small change that causes some consternation amongst the team.  I was conscious of four types of change:

1.  Scope. 

Everyone has dealt with this one.  You can’t fight it.  The best way to deal with it is to track it, to have good processes for handling it, and to provide feedback to management and the customer as to how each change in scope will affect the project.

Prior to the start of this project, our change tracking was minimal, if not nonexistent.  We used a web product called Gemini for tracking bugs and some feature requests for existing projects, but we didn’t track deliverables and tasks for new projects getting started.  We used VSS (horror of horrors) for our source control.  It was quite clear that as an organization, we were outgrowing these tools.

We made the decision to try something else as soon as the project was announced.  We had access to Team Foundation Server and Visual Studio Team System Team Suite without needing to spend a bunch of money, so we used that.  It was a big step forward from VSS, though it took quite a while to get up to speed on how to use it, and how to use it properly.  There were definitely some very painful moments, and we’ll all be glad when we can upgrade to TFS 2010 to get things like hierarchical task trees and better integrated reports.

We went back and forth on the best way to use the system, and I think, by the end, we were doing the best we could, though there are more features we can probably gradually integrate into future projects to make us more efficient.  We didn’t take advantage of the Estimate / Actual / Work Completed feature, and that is the one thing I would really like to change on the next project: not just to get a picture of how many tasks are outstanding, but to see how we are tracking effort against the goals.  It would also greatly improve our ability to give everyone feedback, when a scope change comes in, on how it really affects the project time line.  Right now a ten-minute bug fix carries the same weight as a 40-hour feature implementation on the task list, and that is obviously misleading and wrong.
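That last point is easy to make concrete.  The task names and estimates below are hypothetical, but they show why a raw count of open tasks misleads, and why tracking estimated effort (what the Estimate / Actual / Work Completed fields would have given us) paints a truer picture:

```python
# Why counting open tasks misleads: weight them by estimated effort instead.
# Task names and estimates below are hypothetical illustrations.
tasks = [
    {"name": "Fix tooltip typo",      "estimate_hours": 0.2,  "done": False},
    {"name": "Implement sync engine", "estimate_hours": 40.0, "done": False},
    {"name": "Update login icon",     "estimate_hours": 0.5,  "done": True},
]

open_tasks = [t for t in tasks if not t["done"]]
print("Open task count:", len(open_tasks))      # the task list view: "2 left"

remaining = sum(t["estimate_hours"] for t in open_tasks)
print("Remaining effort:", remaining, "hours")  # the truer picture: 40.2 hours
```

Two open tasks sounds the same whether they are two typo fixes or a typo fix and a forty-hour feature; only the effort number tells you which.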

2.  Team Composition

We lost only one person from the team between the beginning and the end of the project, and we gained two new people as well.  On a small team, it’s essential that in one person’s absence or departure, another can step in and fill that role, or at least understand what that person was working on so the work does not come to a grinding halt.  The only way I know how to do this is for everyone to be doing things the same way.  If everyone uses the same tools and writes code to the same standards, there’s a lot less chance that the code will be a mess when someone turns it over.  I harped a lot to the team about a few things in particular:

a) ReSharper:  Everyone had ReSharper and was expected to make that little light at the top of the file go green every day.  Okay, part of this is due to the fact that I am anal-retentive when it comes to clean code, but part of it is a matter of enforcing standards.  I can’t sit over every developer’s shoulder and watch them name their variables and set their tabs.  But ReSharper, and the lovely Ctrl+K, Ctrl+D, really left us with some pretty clean code.  It’s amazing how much easier it is to work on someone else’s code when you are not scared to open the file.

b) Nightly Builds:  We used TFS to do our nightly builds, right from the start, and constant check-ins were encouraged.  Everyone feared the words ‘Who broke the build?  UNACCEPTABLE!’ being shouted across the office.

c) Unit Tests:  As previously discussed, we could have done much better here, and we are working on making this better for the next phase.

d) Unity:  We used the Unity Application Block for dependency injection (DI).  I’ll go into more of a Unity overview in a later post, but because we had Unity set up early in the project, we were able to create mock objects and to swap code in and out easily, so that two people were rarely dependent on one another’s work.  This was especially important prior to bringing on the new people to fill a void in some of the back end code areas, so that UI development could proceed until they were up to speed.
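The pattern behind that last point is small enough to sketch.  We used the Unity Application Block in C#; the Python sketch below uses hypothetical class names, but shows the same idea: calling code depends only on an interface, and a registration step decides whether the real or the mock implementation gets handed out.

```python
from abc import ABC, abstractmethod

# Minimal dependency-injection sketch (the project used C# with Unity).
# All class and method names here are hypothetical.

class CatalogStore(ABC):
    @abstractmethod
    def list_demos(self) -> list[str]: ...

class AzureCatalogStore(CatalogStore):
    """The real back end, possibly not finished yet."""
    def list_demos(self) -> list[str]:
        raise NotImplementedError("back-end team still working on this")

class MockCatalogStore(CatalogStore):
    """Stand-in data so UI work can proceed in the meantime."""
    def list_demos(self) -> list[str]:
        return ["Sample Demo A", "Sample Demo B"]

# A tiny "container": the registration decides which implementation callers get.
registry: dict[type, type] = {CatalogStore: MockCatalogStore}

def resolve(interface: type):
    return registry[interface]()

store = resolve(CatalogStore)  # UI code only ever asks for the interface
print(store.list_demos())
```

Because the UI code only ever calls `resolve(CatalogStore)`, re-pointing the registry entry at the real implementation later requires no UI changes at all.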

I also spent some time creating a ‘Technical Overview’ document that summarized some of the technical decisions we made early in the project, some of the issues that were left to be resolved, and a quick reference for some of the more complex processes we were using.  I don’t know how much this helped the new guys, but it made me feel better that I had at least tried to help them come up to speed without them needing to spend a week constantly asking questions we should have documented.

When I started my first IT job back in 1994, my boss gave me ownership of the New Employee Orientation / Standards book, and told me to update it when I found a question it didn’t answer, as I would be handing the book off to the next new person, who would then come to me first to ask questions.  I love this chaining of documentation pattern.  It gives the new people immediate ownership and responsibility.  Of course, some people weren’t as robust in their documentation as others were, but if we held true to the pattern, everyone would see the benefit of keeping the document up to date.  It made sense back then, anyway.

3.  Technology

We knew the technology was going to change, and change rapidly during this project.  Managing this change was a constant challenge.  Again, Unity and DI played a large role in helping to resolve this issue.  There were many times where we had issues with Azure, or did not know where the data would actually reside at the end of the project, but that didn’t stop the developers from continuing their work.  We built interfaces and mocks, and swapped the managers in and out using the configuration files. 

During most of the project, there were only two of us who ever touched Azure, just to insulate the team from the environmental changes.  We defined the processes and worked through the issues, but the rest of the team barreled forward, not needing to know the specifics of how the system would work on Azure.  This was true for deployments, data storage, security and configuration.  Almost everything we built could run directly in IIS7 or in the dev environment without Azure being available.  Pieces were converted over to run on Azure one at a time as they were ready with a few configuration changes.  I can’t emphasize enough how important this is when working with dependencies that are constantly changing.  Isolate them, mock them, keep the team moving.
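The configuration-file swap can be sketched the same way.  This is a hypothetical Python miniature, not our actual code, but it shows the mechanism: one setting picks the implementation, so everything else runs unchanged with or without Azure available.

```python
import configparser

# Hypothetical sketch of config-driven swapping: a single setting chooses the
# storage implementation, so the app runs locally when Azure is unavailable.

class LocalBlobStorage:
    def save(self, name, data):
        return f"saved {name} locally"

class AzureBlobStorage:
    def save(self, name, data):
        return f"saved {name} to Azure"

PROVIDERS = {"local": LocalBlobStorage, "azure": AzureBlobStorage}

config = configparser.ConfigParser()
config.read_string("[storage]\nprovider = local\n")  # one line changes at deploy time

storage = PROVIDERS[config["storage"]["provider"]]()
print(storage.save("demo.pkg", b"..."))
```

Flipping `provider = local` to `provider = azure` at deployment time is the whole conversion step; no calling code changes.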

4.  Time Lines

The final date was not moving, but that didn’t mean that there wouldn’t suddenly be a need to give a quick demo of what we had to the customer in the middle of a big dev cycle.  The builds saved us there, and having two (I would have preferred three) environments to run the app in from early in the project allowed the devs to keep working, while the ‘PROD’ site was only updated as we achieved more stable releases.  Having the multiple environments saved our bacon more than a few times.

Wednesday, July 29, 2009

Demo Showcase Suite – A History – Part 7

This is the next in the series of entries about Demo Showcase Suite and the goals I wanted to accomplish.

5.  I wanted the team to feel the urgency of working hard and smart early in the project to avoid the death march mindset at the end

Here were the things we knew, right at the start of the project:

a)  The deadline was July 13th.  It was not moving.  No matter how many times I wished we had a few more months, weeks, days (or even hours), we were launching during the keynote on opening day of WPC.

b)  We had huge technological hurdles to overcome on areas we had never seen before.

c)  The customer wanted frequent updates on the work and samples of what we were doing.

To cope with all of this, we implemented some project management processes:

1)  We broke the project into three-week sprints.  This was our first time doing this, and I think we took the word sprint a little too seriously.  We were always up against a deadline of some sort, but that was driven by the final, immovable deadline.  When we saw our iterations slipping, we weren’t able to slide requirements to follow-on releases, but we did see that we were behind, and worked harder earlier in the project to get back on track.  We were also able to keep the developers focused on the important features instead of getting wrapped around the axle on something that wasn’t supposed to be started for a couple of months.

Iterations worked for us as well, as we were able to better manage the customer expectations for the functions they would see in the next release.  They knew what to expect, and knew what not to expect.  That went a long way in keeping down the surprises for both parties.

In the end, we did one ‘Alpha’ release, two ‘Beta’ releases, and three ‘RTM’ releases.  In retrospect, we should have called them Alpha 1, Alpha 2, Beta 1, Beta 2, Beta 3 and RTM 1 to properly set everyone’s expectations, not just as to the functionality, but as to the quality.  That’s a lesson we learned the hard way.

2) We held a team scrum every day at 1:00 PM.  We tried to keep this to 15 minutes a day, and I, as scrum master, tried to keep the team focused on three things:

i)  What did you complete since the last scrum – Just a list, not a detailed dissertation.  This helped everyone else know what was ready to test, or what dependency they had been waiting on was now complete.

ii)  What are you working on between now and the next scrum – Just a list.  If this was the same list you provided the day before, we hadn’t broken the tasks down far enough.  If that continued for more than a few days in a row, it was time to take it offline, break down the tasks some more, and add resources if needed.

iii)  What red flags are still outstanding that are keeping you from completing your work – Details as needed, especially if the task was taking much longer than expected or progress could not be made without more help.

We tracked all of the tasks in TFS, though not as consistently as I would have liked, and added details / blocked status as needed.

At the end of the scrum, I stepped in to try to work any red flag issues to get the tasks back to green flag with each developer.

I’m a morning person, and do my most effective work early in the day, so holding the scrum later in the day enabled me to break my day into two pieces.  Some days, some of the red flag issues carried over to the next morning, but usually, I was able to focus on the big picture tasks that were assigned to me in the morning.  The afternoon was spent solving problems, to keep the developers moving.  It stopped a lot of wheel spinning (I’d like to think) and made sure no one was stuck on an island.

Having the daily scrums gave me, and the rest of the team, constant feedback on how we were doing against our next milestone, and how much effort we needed to drive into the current iteration that week to get it done.  Sometimes that meant a weekend in February or March was spent in the office trying to clear up the backlog.  But we knew we couldn’t grow the schedule, so it was better to pay that price then than to wait till the last minute, when other problems would undoubtedly arise.

I know with absolute certainty that without these two processes, we could not have been successful.  I also know that far more ‘project management’ time was charged to this project than was planned because of these scrums, but we probably saved just as much dev time by stopping the wheel spinning and allowing me to focus on my tasks in the morning without constant interruptions.  With a team of more than two developers on a project, I would absolutely do this again.

Tuesday, July 28, 2009

Demo Showcase Suite – A History – Part 6

Honestly, I’m trying to do these once a day, but I took three days off for a vacation, and it’s taken me a couple of days to get caught back up.  Here’s the next installment in my series of goals that I had when we started on Demo Showcase.

4. I didn’t want to get to a week from the delivery date, need to make a change, and have to regression test everything by hand.

As I write these entries, what I thought were distinct topics all seem to be related.  This is talking about unit testing, right?  No.  Wrong.  Different topic altogether.  Application regression testing has always been a thorn in my side.  I’ve never had the chance to put a good test automation tool to work for me to help with UI regression testing.  Along with unit tests, I wanted to have a tool that would let the QAs quickly and efficiently automate their tests, so that the tests they did in March would be instantly rerunnable in July.  Sure, unit testing will get you part way there, but nothing beats a human looking at the screen and seeing whether the result is actually a reasonable one.

If there was a swing and a miss on a goal on this project, this was the big one.  In fact, I didn’t even get the bat off my shoulder.  In the post mortem discussions I’ve been having with the team, this one popped up to the top of the list for the QAs as something that directly affected their personal satisfaction with the project and with their jobs.  With the rapid pace at which we were kicking out builds each day, the QAs rarely got a chance to work on a stable version of the application for more than a few hours, and some tests would take an hour to go through by hand if they didn’t run into bugs.  They had a ‘happy path’ test they repeated each time, and there were many versions of the application where they never got to complete the full test.  If we were to look at the number of hours spent testing on this project, I think we spent far more hours testing than the value we got back from it.  That is, because so much time was spent testing the same things over and over, little time was available to test edge cases until the build really stabilized.

So coming out of this project, we still have this as an open item to find an automated regression test suite that works for what we do.  I don’t think you can really evaluate the return on this investment prior to becoming proficient in the use of the tool, so it may take a few projects before we really see the benefits.  But it is definitely something we will be carefully considering before the end of the year, and likely before the next release cycle.

Tuesday, July 21, 2009

Demo Showcase Suite - A History - Part 5

Lesson #3: I wanted unit tests.

I’ve been on a lot of projects where we took a very short-sighted view of the world, planned for the initial release, and did no planning for life after that.  It was always blamed on a lack of time in the project.  “When we get this done, we’ll do our documentation.” or “Next time, we’ll do it right, and write some test cases”.  It never works out that way.  We all know it, and every time we utter those words, we know we’re lying to ourselves.

If you have enough time after a project to go back and write tests, your business is about to go down the crapper, because the sales guys haven’t sold anything new and there’s no backlog of work to get done.  And you sure as heck know that your customer, who didn’t plan enough time for you to get your work done in the first place, isn’t going to budget for you to go back and write tests they expected you to write in the first place.  If you ever admit to a customer that you didn’t test all of your code, you’ll probably lose that contract.  They assume you have tested everything.

I started buying into the idea of Test Driven Development last year on another project that involved building a dynamic search engine written entirely in LINQ to SQL.  The engine sat over a huge database, and some of the combinations of search terms they wanted to use involved 23-table joins.  There were 50+ possible search terms, range searches, boolean searches, ‘like’ searches, and enum searches.  This all took place in a legacy .NET WinForms app built in VS2005.  I built a VS2008 DLL with LINQ to SQL, and began adding test cases, one at a time, as I added the capability to do each type of search.

The engine itself was rewritten two or three times as it evolved, sometimes for speed, sometimes for maintainability, but those test cases that I wrote on day one were still my measuring stick to see how close the new engine was to returning the right results after each refactoring.  To this day, I still trust those cases, even though I haven’t touched that app in almost a year.

At the same time that project was going on, I was working on another project that had no tests.  It was a large ASP.NET application, with a couple of DLLs.  The majority of the code was in the code-behind files of the pages, the user controls, and a few internal classes.  It was a pain in the butt to test, and every time I opened the code, there were code smell areas I wanted to fix, but was too scared to.  I couldn’t afford to regression test the whole app just to make one small change.  I’m still scared to work on it.

At the start of the Demo Showcase project, I did a lot of reading about Test Driven Development, looked into mock objects, and tried to get the team to buy into the concept. Unfortunately, I misunderstood a key premise of TDD, and made promises to the team that would prove to be inaccurate. I thought that we could have the QA group take the specs, and begin to write the test harnesses, and the developers would just have to make the tests turn green. I thought that MSTest and TFS would work together flawlessly, and take away any excuses that tests were too hard to write, or they didn’t know what was out there. I thought everyone would see the value of Mocks and Stubs and have an epiphany, and that TDD would sell itself. All I had to do was to sit back, show that my tests were working, and how great my life was, and everyone else would be jealous and follow suit. Not so much.

It took a conversation with Glenn Block and Jim Newkirk at a NerdDinner in January to help me clear up a key misconception.  The QAs could not, and should not, write the tests.  At least not the first tests.  The first tests were supposed to help the developer flesh out the functionality of the code.  When I brought this change back to the group the next day, the QAs were relieved.  The developers were disappointed.

The biggest problem we had with our TDD approach after that was depending on MSTest and TFS.  MSTest in VS2008 (and 2005) has a really annoying bug (supposedly fixed in VS2010) relating to VSMDI files getting checked out and locked by developers or testers.  I’d get our tests all organized and working, then the next thing I knew, the test lists were blank or out of date.  I’d get upset with our team, and wonder who kept wiping out all my work.  It turned out that VS2008 kept creating new versions of the VSMDI file if it couldn’t get a lock on the file, and would jump you into a new version without telling you.  Running a specific set of tests became a frustrating step, and people stopped doing it.  And when they stopped doing it, they stopped writing tests in the first place.

The other big issue I found was that if you wrote a bunch of tests too far in advance, and left NotImplementedException throws in the code, the light on the builds stayed red for days, weeks or possibly months.  People stopped trusting the tests, and stopped looking at them.  I fell into this trap as well.  I’m not sure how to fix this, except to flag tests that I know are not ready yet as ‘Ignored’.  Write them, then ignore them until the code is ready to be worked on, then re-enable them.  Put a to-do task in the code to remind yourself to come back and re-enable them.  Never take a NotImplementedException out of the code without writing a test for it first.
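The ‘write it, then ignore it’ approach isn’t specific to MSTest.  A hypothetical sketch in Python’s unittest shows the shape of it: the skipped test is written and visible, carries its to-do reason, and doesn’t turn the build red while the feature is unimplemented.

```python
import unittest

# MSTest's [Ignore] attribute has a direct analogue in Python's unittest.
# The test case and feature names here are hypothetical.

class UndoRedoTests(unittest.TestCase):
    def test_single_undo_restores_previous_state(self):
        self.assertTrue(True)  # stands in for a real test of an implemented feature

    @unittest.skip("TODO: re-enable when multi-level redo is implemented")
    def test_redo_after_three_undos(self):
        raise NotImplementedError  # would keep the build red if it actually ran

suite = unittest.defaultTestLoader.loadTestsFromTestCase(UndoRedoTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"ran={result.testsRun}, skipped={len(result.skipped)}, failures={len(result.failures)}")
# -> ran=2, skipped=1, failures=0
```

The skip reason shows up in the test output, so the not-yet-ready tests stay on everyone’s radar instead of quietly rotting.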

With regard to Mocks and Stubs, I chose Rhino Mocks, but I didn’t understand that there were two versions of it floating around, and started with the older version, which, in my opinion, is much harder to use than the Arrange, Act, Assert style of the 3.5 version.  And I didn’t truly understand how to use it correctly until I read Roy Osherove’s book ‘The Art of Unit Testing’ just last month.  I highly recommend this book, and have been pushing it down the throats of all the developers and QAs here.
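For anyone who hasn’t seen the Arrange, Act, Assert style, here is a hypothetical sketch of it using Python’s unittest.mock rather than Rhino Mocks; the names are made up, but the three-step shape is the same one the 3.5 version encourages.

```python
from unittest.mock import Mock

# Arrange-Act-Assert with mocks (hypothetical names; the project used Rhino Mocks in C#).

def publish_demo(store, notifier, demo_id):
    """Code under test: upload a demo, then announce where it landed."""
    url = store.upload(demo_id)
    notifier.send(f"Demo {demo_id} available at {url}")
    return url

# Arrange: create mocks and script the collaborator's answer
store = Mock()
store.upload.return_value = "https://example.test/demos/42"
notifier = Mock()

# Act: exercise the code under test
url = publish_demo(store, notifier, 42)

# Assert: verify both the outcome and the interactions
assert url == "https://example.test/demos/42"
notifier.send.assert_called_once_with("Demo 42 available at https://example.test/demos/42")
```

Keeping the three phases visually separate is most of the readability win; the older record/replay style interleaved them, which is what made it so much harder to follow.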

Finally, trying to break the cycle of developers not thinking they have enough time to write tests is a huge effort.  It does take time to write good tests.  It takes a lot more time when you don’t know what you are doing, when the technology and the concepts are all new.  Learning TDD and Rhino Mocks, on top of the other new technologies that were part of this project, became a daunting task for everyone.  Learning TDD was pushed to the bottom of the list as we strove for results in the early stages of the project, and the unit tests suffered.  Later in the project, we came back around and made a few more valiant attempts at it, but we definitely could have done better, and I know we will on the next one.  It’s about educating ourselves, a little at a time, on processes that do and don’t work, and making adjustments as we go.  We tried a lot of big changes on this project, and it may have been too much, too fast.  But TDD is definitely something I am not giving up on, and I will use it wherever we can in the future.

Monday, July 20, 2009

Demo Showcase Suite – A History – Part 4

A continuation of the lessons I learned from previous projects.

Lesson 2:  I wanted to accurately track scope changes

If you’ve never been burned by a scope change that wasn’t accurately tracked, this might not seem important.  This is an agile environment, you say.  You should be able to adjust your plan every couple of weeks.  You start the project knowing full well that you don’t know everything.  That’s what agile is all about.  Well, as a developer I used to work with said frequently: yes, and no.

I’m not an expert on agile.  I’m sure I’m doing it wrong, and about to give bad advice.  I know requirements will change as the product comes to life.  Feedback will drive changes.  Features that were once viewed as critical will be swept aside as new features take center stage.  I know this.  I just don’t like it.  I like having a nice stable target, and a plan to get from here to there.  I hate surprises.  In my work, surprises are never good.  I’m never pleasantly surprised by a customer request.  I’m never optimistic about a conversation that starts with ‘What would it take to…?’.  Remember, I’m not in sales.  I have to get the app built.  I rely on sales and the product owner to drum up new business.  I have to get ‘er done.

I know these changes are going to happen, and I know I’m going to grump about it as I try to protect the team.  That’s my job.  But I also know that customers have extremely short memories.  They never remember, and they hate being reminded, that the change they asked for resulted in the two week launch delay, or another feature being pushed to a follow-on release.  I’ve actually been on projects where I had multiple customers who all thought they were in charge of the product, and didn’t like the final product because someone else made a change to our marching orders without informing them.

This all comes back to the change tracking section at the top of the requirements document.  Every change has to be annotated there, even if change tracking is turned on in the document, because tracked changes can be ‘Accepted’ and the history lost.  Religious adherence to updating the change tracking section is the great equalizer in the argument over why something changed, was late, or was missing.

As time went on, and the project evolved, we began adding the changes into our change management system (TFS), but at the start, we used plain old Word documents.  I’ll get into TFS and our development infrastructure later.   That’s a whole other conversation.

Was this approach successful?  I think so.  Unlike other projects, there were no last minute surprises from the client.  There were no gaping holes in the product that were missed requirements.  The customer was (I think) quite happy.  But I haven’t had to go back to those documents yet.  Maybe we just did a good job with them in the first place, or maybe we haven’t stepped back and looked hard enough yet.

Regardless, it’s a practice I intend to keep doing, one way or another.

Friday, July 17, 2009

Demo Showcase Suite – A History – Part 3

In my last post, I detailed 10 things that were on my mind when we started the project.  I’m going to go into a little bit of depth on those now.

1.  I wanted specs that were easy to write, easy to read, and easy to maintain.

I’ve done a lot of projects, in a lot of styles.  My first project was back in 1994, working for EDS on a General Motors project called ASPEN, building the new Assembly Line Support and Production Environment.  To this day, it is still the largest project I’ve ever worked on, even though I was just a minion on the production support team running the nightly builds, managing the source code repository and fixing bugs.  It was a typical waterfall project with 60+ developers and analysts and thousands of pages of specs written and managed by an entire team of people.  That works great when you have an entire team of people to manage the docs.

Most of the rest of my career has been spent on projects with 5 or fewer people on the team, where once a spec was created, it was forgotten about, or rarely ever referred to again after the initial development was done.  On one of the projects I worked on last year, the specs were pretty sparse, the details and functionality hidden in poorly formatted emails that were chopped up and inserted into a Word doc.  Some emails never made it into the docs.  Some changes were made by a phone call request or a hallway conversation.  There was no continuity, and the specs were never maintained after development began, even when change requests came in from the customer.  Scope creep was inevitable, and it led to a lot of issues in the final release as to what was, and what wasn’t, supposed to be included in the final deal.  We had to hunt through email to figure it all out.  That memory was fresh in my mind when we started Demo Showcase, and I vowed to prevent it from happening again.

Still, with a small team, limited time, and final requirements that weren’t yet ready, I couldn’t implement full-fledged specs.  I started simple, using very simplified UML diagrams like the following, built in Visio:


I stole this concept from someone else’s blog (I don’t remember where now, or I would pass along the credit).  My wife, who is a project manager by profession, had never seen something like this, and really liked it.  It allowed me to build up a list of high and medium level functions and to group those functions into our release plan.  We could easily sit down with the customer and review what would, and wouldn’t, be in each release, and walk through the app functionality.  We took this diagram into meetings with a projector, edited it, and moved things around.  If something needed a deeper level of documentation, we added it, but in general, two levels was enough, and we never went deeper than three.  If a function needed more than three levels, it was a candidate for a ‘use case’ of its own.

It’s extremely easy to train management, customers and developers to read this diagram.  It’s almost obscenely easy.   The only problem I had was with Visio trying to move the bubbles around for me when I was adding a new one.

Once the diagram was fleshed out, we split up the work between the three of us (the Project Manager, the Product Owner, and myself) to create a set of Word documents with more detail for each function, which became, not coincidentally, the Functional Spec.


For each level 1 bubble (use case), we had a section in the document.  This document was maintained throughout the project by the QA Team and the Project Manager.  It had a change log at the top, and changes were tracked.  We did a final walkthrough of each document prior to starting development on that part of the project, which caught a lot of issues.  That’s not to say we had completely fleshed out all the details, but it was amazing how much we could put down in a short amount of time.

The QAs’ test cases came right from this document.  If there was a discrepancy between the test and the app, the document won out, or at the very least, it was updated to reflect the change.  If changes started to sneak into the app without being updated in here, the QAs knew pretty quickly.

Again, this document didn’t go down to the line level in the code, but if there was a particular architecture I wanted to use, a trick in program flow, or some other note I wanted to add, I put it in here.  No searching around for miscellaneous emails, no arguments about who told who what. 

I’m not saying this was perfectly executed all the time, but our QA’s did an excellent job of holding both the developers’ and the PM’s feet to the fire and trying to keep this in good shape.  Hopefully, on the next project, and the one after that, this becomes second nature, and it just gets done.

Of the things we did right on this project, this has to be at the top of the list.  Before we did this, we were headed down a very chaotic road, and the developers had that look on their faces that signaled we were in for a mutiny if we didn’t get our management act together.  This at least put off the mutiny for a couple of months.

Thursday, July 16, 2009

Demo Showcase Suite – A History – Part 2

Before I go into the details of what we did during the project, I’m going to cover two areas:  my role in the project, and the lessons I learned from previous projects and what I tried to do in this project to use those lessons.

First, I was the technical architect on the project.  At a small company, that can be a blurry title, and there were times where I ranged outside of what would normally be expected of the application architect.  What it meant to me was that I focused less on what the application did, and more on how it did it.  For instance, I chose the technologies, and resolved technical problems, but there were screens in the applications that I never saw until the project was almost complete.  There may still be screens out there that I have never seen.  I know for sure I have never looked at the help file in the application.  As long as it is there, I wasn’t concerned with it.  It is there, isn’t it?

In an ideal world, the application architect is the second busiest person during the first phase of the project.  The busiest should be the Project Manager / Product Owner as they gather requirements, get customers to sign off, and kick out specs.  During the first months, I spent a lot of time trying to figure out how I wanted the project to work, what technologies I was willing to work with, and which ones I couldn’t risk.  This project was all about new technologies, and we were banking a lot of our project success on technology that didn’t even exist yet, but that was due to be delivered to us by a certain date.  For example, the ability to run native code in Azure apps didn’t exist when we started in January, but was delivered in March.

Once the project gets going and gains traction, the architect should be able to slowly step aside and let the developers develop, and let the testers test.  My role then became the problem solver.  When a problem popped up that blocked development, I stepped in, took the issue, and tried to find a solution that worked within the overall architecture.  Sometimes that was a matter of interfacing with the Azure team; sometimes that was a weekend of trying to figure out how to create Undo / Redo patterns in Entity Framework with SQL Server CE.

The theory is that by the last few releases, the architecture work is done, and I could move on to the next phase, or the next project, gradually reducing my time dedicated to the project so that when the devs were done, the next project would be ready to go, and a consistent pace could be sustained throughout the project and the year.  In theory.  We’ll come back to that later.

As far as lessons go, I’d learned a lot in the last year working on those three or four other projects, and, of course, had a dozen plus years of experience on other projects.  Here were the chief thoughts on my mind when we started:

  1. I wanted specs that were easy to write, easy to read, and easy to maintain.
  2. I wanted to accurately track scope changes.
  3. I wanted unit tests.
  4. I didn’t want to get to a week from the delivery date, need to make a change, and have to regression test everything by hand.
  5. I wanted the team to feel the urgency of working hard and smart early in the project to avoid the death march mindset at the end.
  6. I wanted to plan for change: in scope, in the composition of the development team, in technology, and in time lines.
  7. I wanted a reliable build and deployment process from the start.
  8. I wanted the team to be able to provide me with feedback quickly and easily and to give both myself and the PM a heads up well in advance of a red flag issue becoming critical.  No surprises.
  9. I wanted an architecture that minimized code smell, and one I could proudly show off as the best we could possibly do.
  10. I wanted to enrich my technical knowledge base.

Only one of these goals would I consider an architectural goal, but all of them tie into making the project, and the company, successful.  Some of these goals we achieved, some we got close to achieving, and some I plan to work towards achieving on the next project with the lessons learned on this one.

I’ll go into each one of these in the next entry in this series.

Tuesday, July 14, 2009

Demo Showcase Suite – A History – Part 1

I work for a small software consulting company that had been doing work for Microsoft for a number of years, creating the original Contoso Demo Showcase demos that were available on the Microsoft Partner web site.  By October of 2008, we had created over 200 of these demos in eight different languages (about 25 created in English and then translated eight times), with each one taking anywhere from two to six weeks to create.  Much of this work was done before I ever joined the company in January 2008, but I witnessed many of the late nights and long weekends that it took to get a demo just right by manually shooting each screen, and building a WPF executable from a series of screen shots.  Some of these demos had 200-300 screen shots each, and a missed step or slightly misplaced hotspot would cost hours of work.  The developers had built some tools, and an engine to run the sims, but both needed a lot of work to make them usable for non-developer users.  I think the developers on that original effort are still having nightmares about it.  They do get the shivers when you mention it.

In October of 2008, I had just finished three projects for other customers, one small ASP.NET website, one large ASP.NET website, and a major upgrade to a very large Win Forms application in C#.  Throughout those three projects, I kept a mental list of what we did right, and what we did wrong, with the intent of applying the lessons to the next project.

Dan, one of the owners of the company I work for, who doesn’t like being called my boss, told me at the beginning of October that he was lining up my next project, and that it would be huge.  It would involve this idea they had sold to Microsoft to automate the production of these demos.  Dan and Benjamin, the other partner here, dropped some big Microsoft names into the conversation to entice the glory hound in me out of hiding.  I didn’t know anything about the project at this point, and though I was flattered by the trust they were putting in me, and excited by the possibility of working on a big, high profile project, I was nervous about being associated with this Contoso project and the late nights I had heard came with it.  They assured me those days were over.  This was a completely different project.  Yes.  It was.

I was a couple of weeks late getting started on the project as I had to wrap up one of my previous projects.  It’s never good to start out behind the eight ball, but in the scheme of things, that wasn’t a major impact on the project.  I officially started looking at the project documents on Friday, October 24th.  I remember going home all excited to be working on something new, starting from the ground up.  I returned on Monday morning, October 27th, and started digging in to the project in more detail, and started to outline my approach.  We were going to do this one right.

On Tuesday, October 28th, my boss sent me a note, asking if I had heard of this Azure thing Microsoft had just launched, and wondering if it might be worth looking into for this project.  Sure, why not?  The Microsoft PDC in Los Angeles was going on that day, and info was coming out fast.  I spent the afternoon watching the PDC presentations on my PC.  By noon the next day, I reported to Dan that Azure looked pretty cool, and that it would save us a bunch of time doing the hard things: scaling with load, synchronization, blob storage, queues, security.  All things we had dealt with on previous projects that caused us major issues.  Developers could focus on the business app, and not on the environment.  Perfect.  I don’t remember his exact words, but it was something like “Good, because the customer wants the site built on Azure.”

I had never worked on a project with an OS that was in CTP.  Hell, I had never worked on a project that involved any CTP technology.  I came from a business systems background that waited at least a year and a half after software was released before using it.  In 2007 at my last job, we were barely upgraded to Visual Studio 2005.  I didn’t know what CTP meant.  CTP is after Beta, right?  Well, not exactly.

We started the project with a Proof of Concept application.  There was the web site (at the time called Simulation Server), a web service (SimSync), a web app (SimBuilder), a WPF Simulation Engine (SimEngine) and a Windows Service (SimCompiler).  Azure wasn’t quite ready for us to use, and we were still working out the details of getting into the Azure early adopter program, so we deployed the POC in a Win2K8 / SQL Server environment.  We delivered the POC a week or two before Christmas to give the execs a chance to look it over, and to get a feel for the project before we really dug in. 

We took the time during the Christmas lull to get fully indoctrinated with Azure, to begin to flesh out a few details, to organize ourselves, and to begin to look at some of the more complicated features.  We also took the time to take a well-deserved break while the details of the system got worked out.  We knew there would be a lot of work to do after the New Year.

We were accepted into the Azure early adopter program in late December.  The execs signed off on the project in early January.  Everyone was rested and raring to go.  It was time to get coding.

To be continued…

Monday, July 13, 2009

Demo Showcase Suite Launch

This blog has been a little devoid of content for the last few months.  It’s not because I had nothing to say; it’s because everything I had to say couldn’t be said.  The three letters a blogger hates to hear are ‘N D A’, non-disclosure agreement.  There are still a few things I can’t talk about, but by and large, the ban has been lifted.

This morning, at 9:00 AM Central Time, Allison Watson, Corporate VP of the Microsoft World Wide Partner Group, announced the release of Demo Showcase Suite.  

“Microsoft’s® Demo Showcase Suite is a collection of demonstration resources that includes the new Demo Showcase application for creating your own click-through demos as well as this community site to manage your demos, search for demos, distribute demos you’ve created and stitch different demos together to make your own. Sign in with Windows Live ID and take a look through this site and download the Demo Showcase application to get started creating your own demos that you can share through Silverlight or download as standalone executables”

For the last nine months, I have been immersed in this project as the lead architect.  It’s the biggest project I’ve ever led, and a high-water mark in my career.  With a small team of developers and QAs, we turned out a product that we are all quite proud of.

You can see what the app does by going out and using it.  I’m going to focus my next few blog posts on the architecture of the product, the history of the project, and lessons and technology we learned while doing it.  For those of you that can’t wait, here’s a brief sample of the technologies we used (in no particular order):

  • Windows Azure
  • Windows Live ID Authentication
  • Unity Framework + Enterprise Application Blocks
  • Entity Framework
  • LinqToSql
  • WPF
  • MVVM in WPF
  • Silverlight 2
  • Test Driven Development
  • Microsoft Geneva Framework
  • ACS
  • SQL Server 2008
  • Custom ASP.NET Handlers
  • Object Graphs in EF using QuickGraph.NET
  • ASP.NET Dynamic Data
  • WCF
  • REST
  • RSS Feeds
  • Custom WPF Compilation Engine
  • Log4Net
  • Resharper 4.5

As you can imagine, things are still kind of hectic around here today.  I’ll try to update the blog this week with more details.  For now, go give the app a try.  If you can’t get to it because you are not a partner, please contact Microsoft Partner Group, and get enrolled today!

Thursday, July 9, 2009

Allow MSI Downloads on ASP.NET Web Site

I can't tell you how many hours I spent over the last few weeks trying to debug a problem with a web site where we were trying to allow users to download an MSI from a blob. Sometimes it would work the first time, and then the next time, the site would try to download the .aspx file the code was running in.

What made it really confusing was that this problem only happened when we were running in Azure with IE 8. In IIS 7 running on Win2K8, it was fine. With Azure and FireFox, it was fine. At one point, we had it working on everything except IE 8 running on Windows 7 with a backend of Azure.

What I finally had to do, was to create an ASP.NET Generic Handler (ashx) that did the heavy lifting for me.  To give credit where credit is due, the suggestion came from someone I met while at the latest NerdDinner in Bellevue, WA on July 7.  (I wish I could give you his name, but I do know he was on the ADO.NET Services Team at Microsoft)   This is the second time that I have attended one of those dinners, and the second time I was able to solve a problem the next day based on advice given to me by someone there.   Many thanks to Scott Hanselman for getting these events put together.

Here was the final code:

using System;
using System.Configuration;
using System.Web;
using log4net;

// UnityFactory and IBlobManager are our own types for resolving
// the blob storage client through the Unity container.
public class MsiDownload : IHttpHandler
{
    protected readonly ILog _log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    public void ProcessRequest(HttpContext context)
    {
        string msiName = ConfigurationManager.AppSettings.Get("MSIName");
        byte[] returnedBytes = null;

        var blobManager = UnityFactory.Current.Resolve<IBlobManager>();

        try
        {
            returnedBytes = blobManager.GetMsi(msiName);

            if (returnedBytes == null)
            {
                _log.ErrorFormat("Error obtaining MSI from blob storage service. Returned bytes is zero length.");
            }
        }
        catch (Exception exc)
        {
            _log.ErrorFormat("Error obtaining MSI from blob storage service. Exception type: {0}, Exception Message {1}.", exc.GetType().ToString(), exc.Message);
        }

        if (returnedBytes != null)
        {
            context.Response.Buffer = true;

            // force download, and tell the browser not to sniff the content type
            context.Response.ContentType = "application/x-msi";
            context.Response.AppendHeader("X-Content-Type-Options", "nosniff");
            context.Response.AppendHeader("content-disposition", "attachment; filename=" + msiName);
            context.Response.AppendHeader("Content-Length", returnedBytes.Length.ToString());
            context.Response.OutputStream.Write(returnedBytes, 0, returnedBytes.Length);
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
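
For reference, a generic handler like this gets wired up through an .ashx file that points at the class.  A minimal sketch, assuming a hypothetical MsiDownload.ashx at the site root and a placeholder namespace (the real project’s names may differ):

```
<%@ WebHandler Language="C#" Class="MyApp.Web.MsiDownload" %>
```

With the code-behind approach above, the .ashx file contains only that directive; linking to /MsiDownload.ashx then triggers the download instead of serving up an .aspx page.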