Wednesday, December 23, 2009

New Year’s Technical Resolutions – 2010

I’m a goal-oriented person.  Always have been.  I hear management really likes that in developers, and now that I am more management than developer, I know I like it in my developers.  The key is to make sure that the developer and management have the same goals.

To that end, I’ve spent some time over the past few weeks looking ahead and planning the company’s technology goals for 2010, making sure they mesh both with the vision for the company’s growth and with the developers’ career goals for 2010 and beyond.

Personally, I want to maintain my technical edge over the next few years, to delay the inevitable full-time move to management.  It’s not just a backup plan.  As an architect, I need to be able to see around the next corner.  I may not have to delve deep into how exactly jQuery validators work, but I’d better at least be able to read jQuery and know what it’s capable of doing.

To that end, here are my technical goals for 2010:

  1. Visual Studio 2010 / TFS 2010 – I can’t wait for this release.  I believe this will have the single biggest impact of any technology on my day-to-day work life in 2010.  We have already identified major areas where our processes fall short, and hope that technology can help to address those issues.
  2. Entity Framework 4 – After spending the last two days debugging a large project written with EF 1.0, I really hope that this version justifies the immense amount of time we’ve invested in EF and fixes the many aspects of EF that I would currently give a score of ‘WTF’ to.
  3. Azure – 2010 should be a huge year for Azure, with the full production release and the stabilization of the environment.  As an early adopter, I’ve seen a lot of breaking changes smack me in the face every Thursday morning, and I’m hoping that the pace of these slows (stopping would be nice) so I can begin to recommend this platform to my customers as a viable solution for their products.  I will also be doing deep dives into SQL Azure, the Service Bus, AppFabric and WIF as part of extending my Azure expertise.
  4. SharePoint 2010 – I really liked what I saw at PDC, and we’re champing at the bit to get going.
  5. Windows Mobile 7 – I’ve never done mobile device work before, but I have high expectations of WM7.  I’ve actually had some great ideas for iPhone apps that I’ve had to discard because I have no access to a Mac to do the development, and the more I work with Apple products (and their inability to work with anything else), the more I despise them.  If WM7 delivers, I can see dropping my iPhone and going to a Windows-based phone so I can actually work with it.  I’d have to pry my wife’s iPhone from her hands with a crowbar, but we’ll see what options come out before I even suggest it.
  6. RIA – I think this is a year away from being prod ready, but there’s definitely power there, and it’s something I want to play with in 2010.
  7. WCF Data Services – Ditto above
  8. Learn jQuery – I was supposed to do that this year; didn’t make it.

My technical reading list has been fairly vacant the last few months as I put the finishing touches on my novel “The Forgotten Road” and the sequel “Nowhere Home”.  But with the release of VS2010 coming up in the 2nd quarter of 2010, I expect that the list will fill back up.  Here are some of the titles I’m looking forward to reading:

Introducing .NET 4.0 with Visual Studio 2010

Pro C# 2010 and the .NET 4.0 Platform, Fifth Edition

Pro ASP.NET 4.0 in C# 2010, Fourth Edition

Professional Application Lifecycle Management with Visual Studio 2010

SharePoint 2010 as a Development Platform

Pro SharePoint 2010 Solution Development: Combining .NET, SharePoint, and Office

I should also be able to blog more often.  Most of the NDAs I’ve been working under will expire with the production release of Azure, so hopefully I will have topics to write about that are both timely and interesting.

Happy Holidays!

Tuesday, December 15, 2009

ACER 1420P - Another Review

After using this PC for a few weeks now, I have the following thoughts.

1. The screen is so freakin' glare-prone that it's almost unusable while commuting with overhead fluorescent lights on the train. I mean, you eventually get used to it, but it's like a railroad spike being slowly driven through your brain. You can't find a position for the screen where the view is still good and the lights aren't right in your eyes.

2. I really dislike the keyboard. I'm not the greatest typist in the world, and the sharp edges on the keys really catch my fingers and make bad typing worse. Combine that with the really small cursor keys in the lower right-hand corner, and navigating a long Word document is pretty painful.

3. I still get occasional character repetitions for no apparent reason, despite having turned the character repeat speed down significantly in the configuration. It's now to the point where, if you're using the aforementioned cursor keys, you have to wait almost a second for them to repeat, and by then you've gone past where you wanted to go.

4. On the plus side, I love the weight and the battery life. Definitely better than my Toshiba U405.

5. I haven't tried to make use of any of the advanced features of the system, partially because I don't want to spend any more time staring at that screen than I absolutely have to.

Based on the monitor, the keyboard, and the fact that I never use the touch screen, I would never buy this machine for myself or my family. I really miss my Toshiba, and would probably buy the Toshiba T-135 if given a do-over. Maybe in the new year.

Sunday, November 22, 2009

Acer 1420P Review

Fully paid attendees of Microsoft PDC received a rather neat little treat: a new Acer 1420P laptop. I've been playing with it for a couple of days, and while I haven't fully explored its capabilities, I have seen the basics.

The battery lasts forever. I've been synching my iTunes from my current Toshiba U405-2854, and I've had the Acer unplugged most of the day, but had to put the Toshiba on the grid after an hour and a half of use.

This laptop is light. I'd almost say too light. The base is too light to support the display: once the angle of the display to the vertical exceeds about 15 degrees, the laptop flips over. The hinge on the screen is a little weak as well, and once past 25 degrees to vertical, the display slowly falls to horizontal.

The keyboard is really different from my Toshiba. Not necessarily a bad thing, but I write novels, and I'm a bad typist. So every adjustment in key position is a double whammy. The keys are more comfortable for me now than I first thought they would be. They are a lot rougher in feel than the Toshiba, but a lot of people like that. Maybe I will too after a while.

I haven't done any performance tests yet, so I can't give stats on that.

It took me half a day to figure out how to disable the Tap to Click feature on the touchpad. It turns out that the version of the Synaptics drivers shipped with the unit is not up to date, and that by upgrading to the current version, you get the option to turn it off. I sent this question to Acer support before I figured it out myself, but they were completely useless (didn't read the question, and answered something I didn't ask). I also had a problem with my keys repeating while typing, and spent a few minutes changing the settings on the keyboard so I could type without every letter repeating twice.

They've done a nice job keeping the bottom of the laptop cool while working. I'm wearing shorts right now, and while it runs a little warm, it's better than the Toshiba, and a heck of a lot better than the old Dell my wife has been using.

The screen is crisp and clear, so no issues there.

I'm definitely going to pop a couple more GB of memory into it so I can run my usual suite of work apps.

I think this is a keeper. Would I have bought it for myself? Probably not. I never had ambitions to have a tablet PC. It'll be interesting to see if I use it for what Microsoft intended.

Friday, November 20, 2009

PDC Day 4

Day 4 was all about hitting as many sessions as possible.  It was also about trying to keep the brain from entering a totally cramped state.

I’ve spent a lot of time in the last year working on Windows Azure.  The one thing I hadn’t gotten to use was the Service Bus.  The Building Hybrid Cloud Applications with Windows Azure and the Service Bus session was a great introduction to practical usage of the Service Bus, and I couldn’t help but try to come up with ways to integrate it into my clients’ applications.  However cool it is, it still has a huge dependency on ACS for security, and as I said yesterday, ACS isn’t quite ready for prime time.  It’s pretty apparent that ACS has to be the focus of Azure development over the next few months.  The coolest thing was the multicast of errors from one client to another.  Spectacular solution to a problem I face today and was going to resolve with an RSS feed, but hadn’t yet due to security issues.

Next, I sat in on A Lap around Microsoft Visual Studio and Team Foundation Server 2010.  I loved the new features for burn-down charts and hierarchical tasks.  We built the iteration date functionality into our TFS by altering the database and building web sites to show us our statuses, but VS2010 is obviously a lot more complete.  We’re going to have a hard time waiting to use it in production.

I watched Brad Abrams’s excellent presentation on RIA Services next.  I had intentionally avoided learning anything about RIA Services this year until that moment, so everything was completely new to me.  It’s something that really shows promise, and I’ll want to try it out soon.  The main thing I want to make sure of is that we maintain the separation of layers; the close binding between the UI and the models is a little disconcerting.  We’ve been down that road before, and it always comes back around to bite us.  I hate to say this, because I’ve really only seen RIA for that one hour, but I couldn’t help thinking I was being Visual Basic’d.  Not that the code was written in VB.  It just seemed VBish.  Too much power that leads to bad design.  I hope my first reaction was just fear of being replaced by code automation.

I missed the lunch session I wanted to go to on Microsoft Visual C# IDE Tips and Tricks, but will definitely watch that video.  I heard it was packed.  Instead, I met with the Visual Studio team and talked through some TFS issues we’ve been having; hopefully, we’ll be able to resolve them with their help.

The most disappointing session of the conference was Scrum in the Enterprise.  The first half was not bad and worth a watch.  The second half was mind-numbing, and I, along with a bunch of others, took an early leave.  I spent a little time in the Application Server Extensibility session, but honestly didn’t understand a thing that was said there.

The final session of the day was Automating Done Done in the Team Workflows with Visual Studio Ultimate and TFS 2010.  Great session, definitely worth watching.  When they had a little problem with the demo, I couldn’t help but suggest that the problem might be connected to the VSMDI file.  That got quite a few laughs, and I was told they guaranteed that the VSMDI file could not be the cause.  I think it’s finally dead.

A quick shuttle trip to the airport, a quick (and half-decent) meal at LAX, and a lucky break to catch an early flight home meant I got into Seattle at 9:00 instead of 11:30.  Nice to be home.

Overall, PDC was a great experience, and I hope to get to do it again, ideally with more of the senior folks from my company so we can cover more ground.  The guys up on stage were pretty much all top-notch speakers, with one or two exceptions, and really show that they are some of the best and brightest in the world.  I don’t know if I could ever do something like that.

I came out inspired to get back to learning.  I’ll be drawing up a technology target chart for myself and the company in the next few days, to set up goals for the coming year.  I’ve got a bunch more videos to watch, and side reading to do.  But first, I need to get some products finished and out the door before the end of the year.

This blog entry was typed completely on my new Acer 1420P laptop, courtesy of Microsoft.  Initial review?  Not bad.  Need to figure out how to turn off Tap to Click and how to slow down key repeating.  The laptop is light and probably needs a couple more GB of memory to make it useful at work.  The screen is nice and clear.  Windows 7 will take a little getting used to.  I’m not completely sold on the keyboard compared to my Toshiba Satellite, but I think I can get used to it, as long as I can turn off Tap to Click.  Otherwise, I’m giving it to my wife.

Thursday, November 19, 2009

PDC Day 3

PDC Day 3 kicked off with the keynote, which began with a focus on how devs can use the new features in Windows 7 to maximize the user experience, and emphasized that devs need to learn new ways to code to take advantage of parallel processing and built-in capabilities to shunt processing to video cards and other areas of the system.  A lot of this talk was above my head, but I guess that’s the point.  It can’t stay that way.

Scott Guthrie launched the Silverlight 4 beta on stage, and though his demo had some technical issues, there was a collective gasp and cheer from the crowd during his demo of a built-in capability of Silverlight to make a jigsaw puzzle of a video.  Pretty freaking amazing.  Scott Hanselman demoed new OData services and ties to SL4.  Simple demo, but there’s a lot there some programmers might take for granted that was really hard, if not impossible, just a few months ago.

The last segment was on SharePoint 2010 and Office 2010.  Truth be told, I almost walked out, since I was scarred for life by my brief experience with SharePoint 2003.  But I stayed, and was glad I did.  They’ve really gone back and fixed the Visual Studio integration, and everything from debugging to deployments just works.  I’m ready to give SharePoint another chance, and with a little work, I think I will soon consider it a viable platform for solutions for my clients.

The first session of the day was on Windows Identity Foundation, formerly named Geneva.  I’ve spent a lot of time working in Geneva, but it was good to level my knowledge and get a refresher.  I don’t know how Vittorio will come across on video, but in person, he was fantastic. 

I briefly sat in on a lunch session for ASP.NET MVC – Share Your Stories From the Trenches.  Good session with everyone putting their experiences out there.  I was pretty proud to know that our approach is pretty damn close to spot on, with one exception.  Never put an ‘if’ statement in a view.  Factor that out into your ViewModel.  If you have to put a null check in to determine whether or not to show something, add a bool to your view model and bind the isVisible property of the control or group to that.  Great advice.
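That advice maps to something like the minimal sketch below. These types are hypothetical, invented for illustration, not from the session: the point is that the null check lives in the view model, and the view only binds a bool.

```csharp
// Hypothetical example of the "no 'if' in the view" advice above.
public class Order
{
    public decimal? Discount { get; set; }
}

public class OrderDetailsViewModel
{
    public Order Order { get; set; }

    // The view binds a control's visibility to this flag instead of
    // embedding 'if (Model.Order.Discount != null)' in the markup.
    public bool ShowDiscount
    {
        get { return Order != null && Order.Discount.HasValue; }
    }
}
```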

ADO.NET Data Services:  What’s new with the RESTful Data Services Framework.  First off, ADO.NET Data Services has been renamed to WCF Data Services (Pablo pointed that out in his talk).  Pablo is a very fast talker, and covers a lot of ground very quickly.  He lost me at first by using jQuery to attach to a web service, but showed off some very cool features during the demo.  Be prepared to watch this a few times to see it all.  There’s one hell of a lot of power there, but it’s pretty intimidating.
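For context, the minimal shape of a Data Services service at the time looked roughly like this. Everything here is a hypothetical, heavily simplified illustration (the CatalogContext and Product types are made up), not code from the talk:

```csharp
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;

// The reflection provider needs a key; declare it explicitly.
[DataServiceKey("Id")]
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Any class whose IQueryable<T> properties become entity sets will do.
public class CatalogContext
{
    public IQueryable<Product> Products
    {
        get { return new[] { new Product { Id = 1, Name = "Widget" } }.AsQueryable(); }
    }
}

// Exposes the context over REST URIs, e.g. GET /Catalog.svc/Products(1)
public class Catalog : DataService<CatalogContext>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // Make every entity set readable (and only readable) over the wire.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}
```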

Enabling Single Sign On to Windows Applications:  This one moved fast, and even less than 24 hours later, I can’t say I remember much.  But as soon as I have a project with an ADFS 2.0 server in place, I’m going to go back and watch this one again.

REST Services Security in Windows Azure using the Access Control Service:  Justin Smith is a very passionate speaker on this topic and a great guy.  Unfortunately, while a lot has changed in ACS since last year, it doesn’t look much more mature than it did a year ago.  They’re launching another CTP, which is understandable, as the first one didn’t quite cut it.  There’s also a management change going on with ACS, with the architect of WIF taking over the group.  I suspect that they’re 6-12 months out from a release that’s usable, which is also unfortunate, because this is a really critical piece of the puzzle.

At Ask the Experts, I spent a lot of my time with the Azure Team.  Anu, Steve and Vikram are all great guys.  Anu remembered working on some of my bugs from the early days of the CTP, and we talked for quite a while. 

What I find amazing is how accessible Microsoft has made all these guys this week.  I know that while I’m gone, the work doesn’t stop coming in, and I can’t imagine what they’ve got going on, yet they took the time to let me bring up my laptop to look at code to try to figure out a bug I’ve been working on for a few months. 

Anyway, on to Day 4.

Tuesday, November 17, 2009

PDC Day 2

Highlights from Day 2 of PDC.

The Keynote:  I got the feeling that this was less about new stuff this year and more about completing stuff from last year.  A lot of Azure announcements, and a little about server tools, but the message was clear: Microsoft is banking on Azure for the next big thing server-wise.  They released the pricing model for bigger instances of Azure, and outlined the plan for supporting tools and infrastructure.  They also talked about Microsoft Pinpoint.  I expect to hear more about that in the coming weeks.  Project Dallas is also pretty cool.

There were a couple of ‘aha’ moments for me:  System Center for Azure rocks, but won’t be available until a beta sometime in 2010.  The next version of Microsoft’s mobile platform won’t be available until MIX 2010.  They’re falling behind badly, but I’ll give them credit for not knee-jerking a half-baked solution to production before it’s ready.  The other ‘aha’ was around the new data access technology, OData.  My catch phrase for OData… “There’s gold in them there hills.”  If you can only watch part of the keynote, catch the OData portion.

After the keynote, I went into session overload, including:

  • Data Programming and Modeling for the .NET Developer. 
    • The most fun session of the day.  Don Box and Chris Anderson showed off a lot of the new functionality in EF 4.  As a guy who spends a great deal of my day working with EF 1, I’m very tempted to grab the bits for EF 4 now and push it to my clients.
  • Lessons Learned:  Migrating Applications to the Windows Azure Platform
    • Didn’t learn a whole lot here that I either didn’t already know from my own experience with Demo Showcase, or from Chris Auld’s session yesterday.  In retrospect, I should have jumped to a different session.
  • Windows Azure Present and Future
    • Pretty deep dive into the Azure story by Manuvir Das.  Gained some insight into where things are going, but again, I already knew much of it.  I wish I had gone to the Agile session.
  • Evolving ADO.NET Entity Framework in Microsoft .NET Framework 4 and Beyond
    • Awesome session.  Highly worth a rewatch to learn how to use some of the new functions.  Very, very fast pace.  These guys love their jobs.
  • Advanced Windows Presentation Foundation Application Performance Tuning and Analysis
    • I’m going to recommend this to our UI guys to help make the apps that incremental step better: the difference between sent out and shipped.
  • Quick stop at the Partner Expo.  Ran into Justin Smith.  Great guy who was really generous with his time back when we were getting ready to ship Demo Showcase and ran into a security issue.  Looking forward to his session later in the conference.  I watched a lot of his presentations from PDC08 when we were getting started with DSS, and I’m going to jokingly blame him and Steve Marx for getting me into this mess.
  • Went to an INETA Meeting.  Informal, interesting round table.  Two themes emerged:
    • a) People perceive that technology is changing faster than ever and that it is impossible to keep up.
    • b) Microsoft is making a mistake targeting big business and highly variable sites with Azure.
    • I disagree with both statements.
      • Technology always changes fast.  We’re just more aware that it is changing than we used to be.  We also need to know about more types of technologies than we used to, because a lot of organizations are smaller and you have to know about more of them in order to do your job.  There’s less compartmentalization.  People who want to compartmentalize their work, and focus on just one thing, carve too small a niche for themselves, and can’t find a home in the do-everything small companies.
      • Azure is perfect for what Microsoft is asking it to do.  It is not perfect for what the individual devs want it to do.  They aren’t going to make money spinning up VMs for http://www.joebeernink.com unless they charge a lot more for it to make entry prohibitive for guys like me.  If I am willing to pay the price, I can get almost the same service the big boys do.  So once they start charging for it, I will move to GoDaddy or something like that until I have enough business to justify the load (how long till I have that first book on the NYT Best Seller List?).
    • Scott Hanselman had the best quote of the night, but I’ve seen it misquoted a few times on Twitter already.  He said that the greatest power an architect has is to break the build.  That is, to put controls in place, like FxCop, to prevent bad code from being checked in.  What’s being quoted is that the greatest power Jr. devs have is to break the build.  Jr. devs should never break the build.  If they are truly junior, they should be mentored and guided into checking in code that never breaks the build.  If they are junior only due to seniority and not skill, then they need to gain the cred to add to the build process when needed.  If no one in the organization listens, and that situation won’t change, move on.

Looking forward to tomorrow’s keynotes and a ridiculous number of sessions at 11:30 that I want to attend.

Monday, November 16, 2009

PDC Day 1 – Architecting & Developing for the Windows Azure Platform

Day One isn’t quite over yet.  Next up is a movie on the origins of Visual Studio.  I figure if I’m going geek this week, I'm going all in.

Chris Auld from intergenz.com did a great presentation on all kinds of aspects to take into account when architecting Azure applications.  He paid special attention to economic factors and how they play into decisions on which Azure technologies to use, and when to use them.  He’s got street cred because he’s built out a very large application on Azure for selling tickets for events.

His blog is http://www.syringe.net.nz and he’s on twitter at http://twitter.com/cauld

He’ll be posting his presentation deck later this week, and it’s well worth the read, especially for architects and technical management.  Some portions of the presentation would be good to walk sales teams through, so that those trying to sell Azure as a platform understand what the advantages and disadvantages are.  The presentation should also be online soon.  It’s long (an all-day event isn’t going to be short).

I still have some Azure questions I need to resolve, but many of those are specific to the areas I’m working on, so I didn’t ask them during question period.  Hopefully I can have some productive hallway conversations.

On a side note (as I’ve already complained about this on Twitter and Facebook)… Why do people put onions on everything?  I couldn’t find a single lunch option without onions today, and as someone with a food allergy to onions, it drives me crazy, because they never label things as containing onions.  Eggs, wheat, soy, milk: no problem.  Onions: we’ll just let you bite in before you figure it out.

Monday, August 3, 2009

Demo Showcase Suite – A History – Part 9

Continuing my series recounting how we got going on Demo Showcase Suite.

7.  I wanted a reliable build and deployment process from the start

Okay, this is pretty basic, and something I’ve insisted on for each project I work on.  You can’t afford to build production software on a developer’s machine.  It’s a no-brainer, though I’ve seen it done many, many times.  What we did on this project was convert from using CruiseControl.NET to TFS Builds.  This was new for me.  Why would we do this, you ask?

First, we had converted from VSS to TFS.  I thought it would be better to be consistent and use the integrated tools we had.  Second, we were also having scaling problems with our current CruiseControl server, and we were going to need to upgrade it anyway.  So we made the leap to TFS with the plan to retire the CC.NET Server.

The other thing converting to TFS did was to take me out of being the expert on the builds, and I was able to delegate learning about it, and doing the setup to someone else.  I didn’t have time to do it, and as long as we were using something that I had put together, the likelihood of me continuing to be ‘the build guy’ was too high.  My very first job when I started in IT in 1994 was managing the nightly builds and version control.  In the 14 years leading up to the start of the project, I was always the build guy.  Now, I’m not the build guy anymore. 

We ran into some issues with deployments to Azure later when we tried to build on a 32-bit machine, so we had two build servers: a 32-bit machine for the client application and a 64-bit machine for the server apps.  This is probably not needed.  With a better understanding of Configuration Manager in Visual Studio, we probably could have had everything working on just one, but we had so many problems with the VSMDI files that by the halfway point of the project, we were seriously considering other options.  Having a split build, and running the tests only in the Test build, not Prod and QA, helped a lot as well.

As far as the deployment process went, we scripted everything to get as far down the Azure path as we could, but we still had to manually upload to Azure, deploy and swap. There were two of us who became the experts on this, and kept everyone else’s fingers out of the Azure pie, just to be safe.

We spent a lot of time building the MSI for the client to have it do everything we wanted it to do.  A lot of what we learned we were able to apply to other projects right away, and though it took far more time to get it perfect than we wanted to spend, it sure is nice to have a process we can always trust.

I’m feeling ambitious tonight (and wanting to get onto technical stuff sooner, rather than later), so I’ll cover #8 and #10 tonight.  #9, as it turns out, is a whole series unto itself.

8) I wanted the team to be able to provide me with feedback quickly and easily and to give both myself and the PM a heads up well in advance of a red flag issue becoming critical.  No surprises.

I think I’ve covered this already, but this was all about communication.

  • We implemented the scrums, as previously talked about. 
  • We used task tracking in TFS to flag blocked items.
  • When someone was worked up about something, we dug in and broke the task down so I could see and understand what the problem was.  Don’t let people stew, whether it be a technical issue or a personal one.  Get it out in the open and deal with it.
  • We had whiteboards in the middle of the work areas.  If there was a major issue, we grabbed people and whiteboarded it out with input from everyone.

I think we did really well on this one.  There were almost no surprises from within the team, and the few that did come up came from me not listening carefully enough when tasks were starting to slide in the early days of the project.  I learned to listen for the signs that trouble was brewing, and made an effort to head it off earlier.

10)  I wanted to enrich my technical knowledge base.

I had no idea how much I would learn on this project.   I still can’t believe how much I was working on that almost no one else in the world was doing.  When it came to doing the first part of the security system, I think there were two or three of us on the MSDN Forums bouncing ideas back and forth.  Dependency Injection, Rhino Mocks, WPF MVVM.  I learned so much in the first few months, I thought I was going to drown.

For the record, I am not going to claim, as I begin writing about the technical aspects of the project, to be the expert in any one area.  I am sure (absolutely positive, actually), that for each area I will get into, that there are a dozen people, if not a thousand, who know more.  I’m just going to show you what we did, for better or worse.  Some will laugh and point and snicker at some of the code.  Hopefully I can get some feedback on better ways to do things.  Hopefully someone out there getting started on Azure or MVVM or Silverlight or Demo Showcase will get some information that saves them some time or money.

If you have a specific area of the Demo Showcase architecture you want to hear about, let me know, and I’ll try to cover it in an upcoming entry.  If you have a question about how to do something in Demo Showcase, or have a suggestion for a new feature, send it in.  I’ll get it on the list.

Sunday, August 2, 2009

Comparing Crafts

I’m going to take a brief diversion from my series on Demo Showcase to discuss a parallel I found this weekend between developing software and writing. As you can probably tell by the length of the entries on this site, I love to write. I love it so much that I have a web site completely devoted to my writing at www.joebeernink.com. I write novels. I write short stories. Hell, I like writing specs. But I also like to code. I like to dig into new technology. I like to solve hard problems. I thrive when faced with tough challenges. But back to the writing.

This weekend, I attended the Pacific Northwest Writer’s Association Conference near Seattle. This is a four-day writer’s version of NerdDinner. Instead of hanging with Scott Guthrie and Scott Hanselman, I got to hang with Terry Brooks, Robert Dugoni, Will North, Joseph Finder, Richelle Mead, Caitlin Kittredge, and a thousand other writers and aspiring writers.

People often talk about the craft of writing software, and I think that has obvious connections to the craft of writing. Writing crappy software is easy. Writing crappy stories is easy. Writing good software takes education, practice, persistence, self-discipline, structure and patterns, and research. Writing is exactly the same.

You need to be skilled in the language you are writing in, and know the vocabulary. Writing a great novel doesn’t happen on the first try. Just because you have spoken English your whole life doesn’t mean that a good novel will result when you put pen to paper, or fingers to the keyboard.

You have to keep writing, even when people give you negative feedback. If we all stopped writing code every time a user found a bug, or the compilation failed, or took it personally, we’d never get better at it.

You need self-discipline to always strive to learn, to read that dull book on ASP.NET MVC, even though you think you already know ASP.NET. I mean, nothing will ever make VB 6 outdated… right?

You need to understand the structure of a good story, and the structure of a good piece of software. There are patterns and practices that have evolved, and yes, you may find another way to do something, and it doesn’t mean you are wrong, but there are some things that are just not worth reinventing. At least know what options are out there, and use the power that’s in them until you find a need to diverge. This isn’t meant to stifle progress. It simply means that not every piece of software is going to be a unique and brilliant creation. We get paid to turn out software that works. Novelists get paid to turn out writing that sells. If you don’t want anyone to read your book or use your software, go crazy.

Research the stuff that interests you. Read code from similar programs. Novelists need to read other authors from within their genres, not to copy, but to learn. Know what that code does before you copy and paste. Understand why it works. Find the patterns in plots of great novels. What tools does the author use?

None of these are spectacular revelations, or at least they shouldn’t be. But something hit me, a parallel between my career as a writer and my career as a software developer.

I started programming computers on a Commodore 64 in 1982-83. I was 11, I think. Actually, I had programmed on a PET CBM prior to that, but the C-64 really got me started. I took programming courses in high school and college, did an internship in college, and spent the years 1994-2007 working in the field, taking a few courses here and there, picking up a book when I needed to, but never really taking charge of my career. But at the end of 2007, I realized my career was at a dead end. I had let my technical knowledge stagnate, and was blaming everyone else for my lack of opportunities. I made the leap from the IT department at a large airline to a small consulting firm at the start of 2008, and vowed that I would spend a minimum of 3 months reading everything I could, as fast as I could, about the technology I was going to be working in, to get better. I wasn’t going to take courses, or expect someone to teach me.

I read on the train. I read at night until I couldn’t keep my eyes open. I read on the weekends. At the end of the three months, I realized that I had learned a lot, and had a lot more to learn. I filled up my wish list on Amazon with all books on every area of software development that I wanted to learn. I got a subscription to MSDN Magazine. I read the articles, instead of skimming them. I added dozens of blogs to my RSS feed. I read whatever I could there. I gradually brought my skills up to the point where I felt that I could talk intelligently about software development again.

Similarly, I started writing stories in grade school. I wrote them freehand until I had a Quick-Brown-Fox word processor for my C-64. Then I typed them out and printed them on my MPS 801 printer. I took a lot of English classes in high school. My schedule was too full to take classes in college, but I did write my first novel while there. I loved writing for the escape it provided from the pressure of my classes. I wrote a bit after college, but it was in fits and starts. At one point, I stopped altogether, until August of 2008. That’s when I got the laptop I am typing this on (a Toshiba U405-S2854 which I love).

I started writing a new novel on August 8th, and finished the first draft on January 1, 2009. But I missed something. I thought that writing was something I could just jump back into, and do without reading technical books on how to do it. I didn’t bother to figure out what genre I was working in. That has huge implications for what rules you need to follow. I didn’t bother to read other authors in my genre to see what was working. I didn’t practice my craft and look for feedback. I just assumed I was good, and smart, and that this writing thing wasn’t that hard.

This weekend was an eye-opener for me. The successful authors study their craft. They practice, sometimes for more than a decade, before they get something published. They treat their hobby like a business. They are the CEOs of their own corporations. They get feedback from their customers. They manage their brand. They work their asses off. You could see those in the audience who weren’t prepared to do that. They blamed agents and editors for their work not getting sold, or argued with the agents and editors about the quality of their work. For a new writer, that’s like arguing with the C# compiler that it should know what you meant, not what you entered. The compiler will be right 99.9999% of the time. If you go to three different machines, and the problem is repeatable each time, it’s your problem. If three agents tell you your writing sucks, it’s your problem. Fix it.

So why am I writing about this on a tech blog? It’s because it took me 13 years of my software career to realize that no one is going to make me a better coder / architect but me. It took me a year of writing, and a brutal slam from an agent, to realize that I need to work as hard at writing as I did at improving my tech skills. And that takes time, and a plan. I filled up my shopping cart at Amazon last night with great books on writing, and expect to spend the time between now and Christmas reading those, and using the lessons I learn from them to improve my writing.

Where does that put my technical career? Right where it was. I just need to do both. To find more hours in the day. To work smarter. To learn the lessons I’ve been talking about in all these other blog entries so that on the next project, I don’t spend days regression testing manually. To do it right the first time. There’s just so much to do and learn out there, why screw around reinventing the wheel? Hopefully, these entries help you save time you can better use to bring your skills up to date, to create that vicious, incredible cycle of learning and doing things better and faster, giving you the success and the time to do the other things in your life that mean something to you.

Good luck!

Thursday, July 30, 2009

Demo Showcase Suite – A History – Part 8

Goal #6:  I wanted to plan for change, in scope, in the composition of the development team, in technology and in time lines.

Change on a project is inevitable.  Even on short projects of a couple of weeks, there will always be a small change that causes some consternation amongst the team.  I was conscious of four types of change:

1.  Scope. 

Everyone has dealt with this one.  You can’t fight it.  The best way to deal with it is to track it, to have good processes for handling it, and for providing feedback to management and the customer as to how each change in scope will affect the project.

Prior to the start of this project, our change tracking was minimal, if not nonexistent.  We used a web product called Gemini for tracking bugs and some feature requests for existing projects, but we didn’t track deliverables and tasks for new projects getting started.  We used VSS (horror of horrors) for our source control.  It was quite clear that as an organization, we were outgrowing these tools.

We made the decision to try something else as soon as the project was announced.  We had access to Team Foundation Server and Visual Studio Team System Team Suite without needing to spend a bunch of money, so we used that.  It was a big step forward from VSS, and it took quite a while to get up to speed on how to use it, and how to use it properly.  There were definitely some very painful moments, and we’ll all be glad when we can upgrade to TFS 2010 to get things like hierarchical task trees and better integrated reports.  We went back and forth on the best way to use the system, and I think, at the end, we were doing the best we could, though there are more features that we can probably gradually integrate into future projects to make us more efficient.  We didn’t take advantage of the Estimate / Actual / Work Completed feature, and that is the one thing I would really like to change on the next project: to get a picture not just of how many tasks are outstanding, but of how we are tracking effort against the goals.  It would also greatly improve our ability to provide feedback to everyone when a scope change comes in, to see how it really affects the project time line.  Right now a ten-minute bug fix carries the same weight as a 40-hour feature implementation on the task list, and that is obviously misleading and wrong.

2.  Team Composition

We only had one change in the team in terms of losing someone from beginning to end of the project, but we did gain two new people as well.  On a small team, it’s essential that in one person’s absence or departure, another can step in and fill that role, or at least be able to understand what that person was working on so the work does not come to a grinding halt.  The only way I know how to do this is for everyone to be doing things the same way.  If everyone uses the same tools and writes code to the same standards, there’s a lot less chance that code will be a mess when someone turns it over.  I harped a lot to the team about a couple of things in particular:

a)  Everyone had ReSharper and was expected to make that little light at the top of the file go green every day.  Okay, part of this is due to the fact that I am anal-retentive when it comes to clean code, but part of it is a matter of enforcing standards.  I can’t sit over every developer’s shoulder and watch them name their variables, and set their tabs.  But ReSharper, and the lovely Ctrl+K, Ctrl+D, really left us with some pretty clean code.  It’s amazing how much easier it is to work on someone else’s code when you are not scared to open the file.

b) Nightly Builds:  We used TFS to do our nightly builds, right from the start, and constant check-ins were encouraged.  Everyone feared the words “Who Broke the Build?  UNACCEPTABLE!” being shouted across the office.

c) Unit Tests:  As previously discussed, we could have done much better here, and we are working on making this better for the next phase.

d) Unity:  We used the Unity Application Block for dependency injection (DI).  I’ll go into more of the Unity overview in a later post, but because we had Unity set up early in the project, we were able to create mock objects and swap code in and out easily, so that two people were rarely dependent on one another’s work.  This was especially important prior to bringing on the new people to fill a void in some of the back-end code areas, so that UI development could proceed until they were up to speed.  A minimal sketch of the pattern follows.
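This is not the project’s actual code; the interface and implementations below are hypothetical stand-ins for our managers, and the registration is done in code here rather than in the configuration files we really used:

```csharp
using Microsoft.Practices.Unity;

// Hypothetical abstraction standing in for one of the project's managers.
public interface IStorageManager
{
    void Save(string key, byte[] data);
}

public class AzureStorageManager : IStorageManager
{
    public void Save(string key, byte[] data) { /* talk to Azure storage */ }
}

public class InMemoryStorageManager : IStorageManager
{
    public void Save(string key, byte[] data) { /* dev/test stand-in */ }
}

public static class ContainerSetup
{
    // Swap the Azure-backed manager for the in-memory mock with one
    // registration change; consumers only ever see IStorageManager.
    public static IUnityContainer Build(bool useAzure)
    {
        var container = new UnityContainer();
        if (useAzure)
            container.RegisterType<IStorageManager, AzureStorageManager>();
        else
            container.RegisterType<IStorageManager, InMemoryStorageManager>();
        return container;
    }
}
```

Consumers resolve IStorageManager from the container and never know which implementation they got, which is what let the rest of the team keep moving while the Azure pieces churned.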

I also spent some time creating a ‘Technical Overview’ document that summarized some of the technical decisions we made early in the project, some of the issues that were left to be resolved, and a quick reference for some of the more complex processes we were using.  I don’t know how much this helped the new guys, but it made me feel better that I had at least tried to help them come up to speed without them needing to spend a week constantly asking questions we should have documented.

When I started my first IT job back in 1994, my boss gave me ownership of the New Employee Orientation / Standards book, and told me to update it when I found a question it didn’t answer, as I would be handing the book off to the next new person, who would then come to me first to ask questions.  I love this chaining of documentation pattern.  It gives the new people immediate ownership and responsibility.  Of course, some people weren’t as robust in their documentation as others were, but if we held true to the pattern, everyone would see the benefit of keeping the document up to date.  It made sense back then, anyway.

3.  Technology

We knew the technology was going to change, and change rapidly, during this project.  Managing this change was a constant challenge.  Again, Unity and DI played a large role in helping to resolve this issue.  There were many times when we had issues with Azure, or did not know where the data would actually reside at the end of the project, but that didn’t stop the developers from continuing their work.  We built interfaces and mocks, and swapped the managers in and out using the configuration files.

During most of the project, there were only two of us who ever touched Azure, just to insulate the team from the environmental changes.  We defined the processes and worked through the issues, but the rest of the team barreled forward, not needing to know the specifics of how the system would work on Azure.  This was true for deployments, data storage, security and configuration.  Almost everything we built could run directly in IIS7 or in the dev environment without Azure being available.  Pieces were converted over to run on Azure one at a time as they were ready with a few configuration changes.  I can’t emphasize enough how important this is when working with dependencies that are constantly changing.  Isolate them, mock them, keep the team moving.

4.  Time Lines

The final date was not moving, but that didn’t mean that there wouldn’t suddenly be a need to give a quick demo of what we had to the customer in the middle of a big dev cycle.  The builds saved us there, and having two (I would have preferred 3) environments to run the app in from early in the project allowed the devs to keep working, while the ‘PROD’ site was only updated as we achieved more stable releases.  Having the multiple environments saved our bacon more than a few times.

Wednesday, July 29, 2009

Demo Showcase Suite – A History – Part 7

This is the next in the series of entries about Demo Showcase Suite and the goals I wanted to accomplish.

5.  I wanted the team to feel the urgency of working hard and smart early in the project to avoid the death march mindset at the end

Here were the things we knew, right at the start of the project:

a)  The deadline was July 13th.  It was not moving.  No matter how many times I wished we had a few more months, weeks, days (or even hours), we were launching during the keynote on opening day of WPC.

b)  We had huge technological hurdles to overcome in areas we had never seen before.

c)  The customer wanted frequent updates on the work, samples of what we were doing.

To cope with all of this, we implemented some project management processes:

1)  We broke the project into 3-week sprints.  This was our first time doing this, and I think we took the word sprint a little too seriously.  We were always up against a deadline of some sort, but that was driven by the final, immovable deadline.  When we saw our iterations slipping, we weren’t able to slide requirements to follow-on releases, but we did see that we were behind, and worked harder earlier in the project to get back on track.  We were also able to keep the developers focused on the important features instead of getting wrapped around the axle on something that wasn’t supposed to be started for a couple of months.

Iterations worked for us as well, as we were able to better manage the customer expectations for the functions they would see in the next release.  They knew what to expect, and knew what not to expect.  That went a long way in keeping down the surprises for both parties.

In the end, we did one ‘Alpha’ release, two ‘Beta’ releases, and three ‘RTM’ releases.  In retrospect, we should have called them Alpha 1, Alpha 2, Beta 1, Beta 2, Beta 3 and RTM 1 to properly set everyone’s expectations not just for the functionality, but for the quality, but that’s a lesson we learned the hard way.

2) We held a team scrum every day at 1:00 PM.  We tried to keep this to 15 minutes a day, and I, as scrum master, tried to keep the team focused on three things:

i)  What did you complete since the last scrum – Just a list, not a detailed dissertation.  This helped everyone else know what was ready to test, or what dependency they had been waiting on was now complete.

ii)  What are you working on between now and the next scrum – Just a list.  If this was the same list you provided the day before, we hadn’t broken the tasks down far enough.  If that continued for more than a few days in a row, it was time to take it offline, break down the tasks some more, and add resources if needed.

iii)  What red flags are still outstanding that are keeping you from completing your work – Details as needed, especially if the task was taking much longer than expected or progress could not be made without more help.

We tracked all of the tasks in TFS, though not as consistently as I would have liked, and added details / blocked status as needed.

At the end of the scrum, I stepped in to try to work any red flag issues to get the tasks back to green flag with each developer.

I’m a morning person, and do my most effective work early in the day, so holding the scrum later in the day enabled me to break my day into two pieces.  Some days, some of the red flag issues carried over to the next morning, but usually, I was able to focus on the big picture tasks that were assigned to me in the morning.  The afternoon was spent solving problems, to keep the developers moving.  It stopped a lot of wheel spinning (I’d like to think) and made sure no one was stuck on an island.

Having the daily scrums gave me, and the rest of the team, constant feedback on how we were doing against our next milestone, and how much effort we needed to drive into the current iteration that week to get it done.  Sometimes that meant a weekend in February or March was spent in the office trying to clear up the backlog.  But we knew we couldn’t grow the schedule, so it was better to do it then than to wait until the last minute, when other problems would undoubtedly arise.

I know with absolute certainty that without these two processes, we could not have been successful.  I also know that far more ‘project management’ time was charged to this project than was planned because of these scrums, but we probably saved just as much dev time by stopping the wheel spinning, and allowing me to focus on my tasks in the morning without constant interruptions.  With a team of more than two developers on a project, I would absolutely do this again.

Tuesday, July 28, 2009

Demo Showcase Suite – A History – Part 6

Honestly, I’m trying to do these once a day, but I took 3 days off for a vacation, and it’s taken me a couple of days to get caught back up.  Here’s the next installment in my series on the goals I had when we started on Demo Showcase.

4. I didn’t want to get to a week from the delivery date, need to make a change, and have to regression test everything by hand.

As I write these entries, what I thought were distinct topics all seem to be related.  This is talking about unit testing, right?  No.  Wrong.  Different topic altogether.  Application regression testing has always been a thorn in my side.  I’ve never had the chance to put a good test automation tool to work for me to help with UI regression testing.  Along with unit tests, I wanted to have a tool that would let the QAs quickly and efficiently automate their tests, so that the tests they ran in March would be instantly rerunnable in July.  Sure, unit testing will get you part way there, but nothing beats a human looking at the screen and judging whether the result is a reasonable one.

If there was a swing and a miss on a goal on this project, this was the big one.  In fact, I didn’t even get the bat off my shoulder.  In the post mortem discussions I’ve been having with the team, this one popped up to the top of the list for the QAs as something that directly affected their personal satisfaction with the project and with their jobs.  With the rapid pace at which we were kicking out builds each day, the QAs rarely got to work on a stable version of the application for more than a few hours, and some tests would take an hour to go through by hand if they didn’t run into bugs.  They had a ‘happy path’ test they repeated each time, and there were many versions of the application where they never got to complete the full test.  If we were to look at the number of hours spent testing on this project, I would think that we spent far more hours testing than the value we got from it.  That is, since so much time was spent testing the same thing over and over, little time was available to test edge cases until the build really stabilized.

So coming out of this project, we still have this as an open item to find an automated regression test suite that works for what we do.  I don’t think you can really evaluate the return on this investment prior to becoming proficient in the use of the tool, so it may take a few projects before we really see the benefits.  But it is definitely something we will be carefully considering before the end of the year, and likely before the next release cycle.

Tuesday, July 21, 2009

Demo Showcase Suite - A History - Part 5

Lesson #3: I wanted unit tests.

I’ve been on a lot of projects where we took a very short-sighted view of the world, planned for the initial release, and did no planning for life after that. It was always blamed on a lack of time in the project. “When we get this done, we’ll do our documentation.” or “Next time, we’ll do it right, and write some test cases”. It never works out that way. We all know it, and every time we utter those words, we know we’re lying to ourselves.

If you have enough time after a project to go back and write tests, your business is about to go down the crapper: the sales guys haven’t sold anything new and there’s no backlog of work to get done. And you sure as heck know that your customer, who didn’t plan enough time for you to get your work done in the first place, isn’t going to budget for you to go back and write the tests they assumed you were writing all along. If you ever admit to a customer that you didn’t test all of your code, you’ll probably lose that contract. They assume you have tested everything.

I started buying in to the idea of Test Driven Development last year on another project that involved building a dynamic search engine completely written in LINQ to SQL. The engine sat over a huge database, and some of the combinations of search terms they wanted to use involved 23-table joins. There were 50+ possible search terms, range searches, boolean searches, ‘like’ searches, and enum searches. This all took place in a legacy .NET Win Forms app built in VS2005. I built a VS2008 DLL with LINQ to SQL, and began adding test cases, one at a time, as I added the capability to do each type of search.
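A sketch of what one of those day-one tests might have looked like. Everything here is a hypothetical, heavily simplified stand-in (the real engine ran LINQ to SQL against a large database, and its API differed), but it shows the shape of the measuring stick:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Made-up types for illustration only.
public class Order { public DateTime OrderDate { get; set; } }

public class SearchEngine
{
    private readonly IQueryable<Order> _source;
    public SearchEngine(IQueryable<Order> source) { _source = source; }

    public List<Order> DateRange(DateTime from, DateTime to)
    {
        return _source.Where(o => o.OrderDate >= from && o.OrderDate <= to).ToList();
    }
}

[TestClass]
public class SearchEngineTests
{
    [TestMethod]
    public void DateRangeSearch_ReturnsOnlyRowsInsideTheRange()
    {
        var data = new[]
        {
            new Order { OrderDate = new DateTime(2007, 6, 1) },
            new Order { OrderDate = new DateTime(2008, 6, 1) },
        }.AsQueryable();

        var results = new SearchEngine(data).DateRange(
            new DateTime(2008, 1, 1), new DateTime(2008, 12, 31));

        // Every rewrite of the engine had to keep returning exactly these rows.
        Assert.AreEqual(1, results.Count);
        Assert.AreEqual(2008, results[0].OrderDate.Year);
    }
}
```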

The engine itself was rewritten 2 or 3 times as it evolved, sometimes for speed, sometimes for maintainability, but those test cases that I wrote on day 1 were still my measuring stick to see how close the new engine was to returning the right results after each refactoring. To this day, I still trust those cases, even though I haven’t touched that app in almost a year.

At the same time that project was going on, I was working on another project that had no tests. It was a large ASP.NET application, with a couple of DLLs. The majority of the code was in code-behind files of the pages, user controls, and a few internal classes. It was a pain in the butt to test, and every time I opened the code, there were code smell areas I wanted to fix, but was too scared to. I couldn’t afford to regression test the whole app just to make one small change. I’m still scared to work on it.

At the start of the Demo Showcase project, I did a lot of reading about Test Driven Development, looked into mock objects, and tried to get the team to buy into the concept. Unfortunately, I misunderstood a key premise of TDD, and made promises to the team that would prove to be inaccurate. I thought that we could have the QA group take the specs, and begin to write the test harnesses, and the developers would just have to make the tests turn green. I thought that MSTest and TFS would work together flawlessly, and take away any excuses that tests were too hard to write, or they didn’t know what was out there. I thought everyone would see the value of Mocks and Stubs and have an epiphany, and that TDD would sell itself. All I had to do was to sit back, show that my tests were working, and how great my life was, and everyone else would be jealous and follow suit. Not so much.

It took a conversation with Glenn Block and Jim Newkirk at a NerdDinner in January to help me clear up a key misconception. The QAs could not, and should not, write the tests. At least not the first tests. The first tests were supposed to help the developer flesh out the functionality of the code. When I brought this change back to the group the next day, the QAs were relieved. The developers were disappointed.

The biggest problem we had with our TDD approach after that was depending on MSTest and TFS. MSTest in VS2008 (and 2005) has a really annoying bug (that is supposed to be fixed in VS2010) relating to VSMDI files getting checked out and locked by developers or testers. I’d get our tests all organized and working, then the next thing I knew, the test lists were blank or out of date. I’d get upset with our team, and wonder who kept wiping out all my work. It turned out that VS2008 kept creating new versions of the VSMDI file if it couldn’t get a lock on the file, and would jump you into a new version without telling you. Running a specific set of tests became a frustrating step, and people stopped doing it. And when they stopped doing that, they stopped writing tests in the first place.

The other big issue I found was that if you wrote a bunch of tests too far in advance, and left NotImplementedExceptions in the code, the light on the builds stayed red for days, weeks or possibly months. People stopped trusting the tests, and stopped looking at them. I fell into this trap as well. I’m not sure how to fix this, except to flag tests that I know are not ready yet as ‘Ignored’. Write them, then ignore them until the code is ready to be worked on, then re-enable them. Put a TODO task in the code to remind yourself to come back and re-enable them. Never take a NotImplementedException out of the code without writing a test for it first.
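
In MSTest, that pattern looks something like this (a minimal sketch; the test name is made up):

[TestMethod]
[Ignore] // TODO: re-enable once range searches are implemented
public void RangeSearch_ReturnsOnlyRowsInsideDateBounds()
{
    // The body can fail loudly until the feature exists; [Ignore]
    // keeps it out of the test run so the build light stays green.
    Assert.Fail("Not implemented yet - remove [Ignore] when ready.");
}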

With regard to Mocks and Stubs, I chose Rhino Mocks, but I didn’t understand that there were two versions of it floating around, and started with the older version, which, in my opinion, is much harder to use than the Arrange, Act, Assert style of the 3.5 version. And I didn’t truly understand how to use it correctly until I read Roy Osherove’s book ‘The Art of Unit Testing’ just last month. I highly recommend this book, and have been pushing it down the throats of all the developers and QAs here.
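
For anyone who hasn’t seen it, here’s roughly what the Arrange, Act, Assert style looks like in Rhino Mocks 3.5. This is a sketch only; IBlobManager is one of our interfaces, but the scenario is made up:

using Rhino.Mocks;

[TestMethod]
public void GetMsi_WhenBlobIsMissing_ReturnsNull()
{
    // Arrange: generate a stub and tell it what to return
    var blobManager = MockRepository.GenerateStub<IBlobManager>();
    blobManager.Stub(b => b.GetMsi("missing.msi")).Return(null);

    // Act
    var bytes = blobManager.GetMsi("missing.msi");

    // Assert
    Assert.IsNull(bytes);
    blobManager.AssertWasCalled(b => b.GetMsi("missing.msi"));
}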

Finally, trying to break the cycle of developers not thinking they have enough time to write tests is a huge effort. It does take time to write good tests. It takes a lot more time when you don’t know what you are doing, when the technology and the concepts are all new. Learning TDD and Rhino Mocks, on top of learning the other new technologies that were part of this project, became a daunting task for everyone. Learning TDD was pushed to the bottom of the list as we strove for results in the early stages of the project, and the unit tests suffered. Later in the project, we came back around and made a few more valiant attempts at it, but we definitely could have done better, and I know we will on the next one. It’s about educating ourselves, a little at a time, on processes that do and don’t work, and making adjustments as we go. We tried a lot of big changes on this project, and it may have been too much, too fast. But TDD is definitely something I am not giving up on, and we will use it wherever we can in the future.

Monday, July 20, 2009

Demo Showcase Suite – A History – Part 4

A continuation of the lessons I learned from previous projects.

Lesson 2:  I wanted to accurately track scope changes

If you’ve never been burned by a scope change that wasn’t accurately tracked, this might not seem important. This is an agile environment, you say. You should be able to adjust your plan every couple of weeks. You start the project knowing full well that you don’t know everything. That’s what agile is all about. Well, as a developer I used to work with said frequently: yes, and no.

I’m not an expert on agile. I’m sure I’m doing it wrong, and about to give bad advice. I know requirements will change as the product comes to life. Feedback will drive changes. Features that were once viewed as critical will be swept aside as new features take center stage. I know this. I just don’t like it. I like having a nice stable target, and a plan to get from here to there. I hate surprises. In my work, surprises are never good. I’m never pleasantly surprised by a customer request. I’m never optimistic about a conversation that starts with ‘What would it take to…?’. Remember, I’m not in sales. I have to get the app built. I rely on sales and the product owner to drum up new business. I have to get ‘er done.

I know these changes are going to happen, and I know I’m going to grump about it as I try to protect the team.  That’s my job.  But I also know that customers have extremely short memories.  They never remember, and they hate being reminded, that the change they asked for resulted in the two week launch delay, or another feature being pushed to a follow-on release.  I’ve actually been on projects where I had multiple customers who all thought they were in charge of the product, and didn’t like the final product because someone else made a change to our marching orders without informing them.

This all comes back to the change tracking section at the top of the requirements document. Every change has to be annotated there, even if change tracking is turned on in the document, because tracked changes can be ‘Accepted’ and the history lost. Religious adherence to updating the change tracking section is the great equalizer in the argument over why something changed, was late, or was missing.
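
For what it’s worth, the change tracking section doesn’t need to be fancy; a simple table like the one below does the job (the columns and entries here are made up for illustration):

Date        Ver  Who  Change                                        Requested By
2009-02-10  1.1  PM   Moved bulk demo delete to release 2           Customer
2009-03-02  1.2  QA   Clarified enum search behavior in SimBuilder  Dev Team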

As time went on, and the project evolved, we began adding the changes into our change management system (TFS), but at the start, we used plain old Word documents.  I’ll get into TFS and our development infrastructure later.   That’s a whole other conversation.

Was this approach successful?  I think so.  Unlike other projects, there were no last minute surprises from the client.  There were no gaping holes in the product that were missed requirements.  The customer was (I think) quite happy.  But I haven’t had to go back to those documents yet.  Maybe we just did a good job with them in the first place, or maybe we haven’t stepped back and looked hard enough yet.

Regardless, it’s a practice I intend to keep doing, one way or another.

Friday, July 17, 2009

Demo Showcase Suite – A History – Part 3

In my last post, I detailed 10 things that were on my mind when we started the project.  I’m going to go into a little bit of depth on those now.

1.  I wanted specs that were easy to write, easy to read, and easy to maintain.

I’ve done a lot of projects, in a lot of styles. My first project was back in 1994, working for EDS on a General Motors project called ASPEN, building the new Assembly Line Support and Production Environment. To this day, it is still the largest project I’ve ever worked on, even though I was just a minion on the production support team, running the nightly builds, managing the source code repository and fixing bugs. It was a typical waterfall project with 60+ developers and analysts, and thousands of pages of specs written and managed by an entire team of people. That works great when you have an entire team of people to manage the docs.

Most of the rest of my career has been on projects with 5 or fewer people on the team, where once a spec was created, it was forgotten about, or rarely ever referred to again after the initial development was done. In one of the projects I worked on last year, the specs were pretty sparse, with the details and functionality hidden in poorly formatted emails that were chopped up and inserted into a Word doc. Some emails never made it into the docs. Some changes were done by a phone call request or hallway conversation. There was no continuity, and the specs were never maintained after development began, even when change requests came in from the customer. Scope creep was inevitable, and it led to a lot of issues in the final release as to what was, and what wasn’t, supposed to be included in the final deal. We had to hunt through email to figure it all out. That memory was fresh in my mind when we started Demo Showcase, and I vowed to prevent that from happening again.

Still, with a small team, limited time, and final requirements that weren’t yet ready, I couldn’t implement full-fledged specs. I started simple, using very simplified UML diagrams like the following, built in Visio:

[Diagram: SimBuilderFunctions - a bubble diagram of the high and medium level SimBuilder functions, grouped by release]

I stole this concept from someone else’s blog (I don’t remember where now, or I would pass along the credit). My wife, who is a project manager by profession, had never seen something like this, and really liked it. It allowed me to build up a list of high and medium level functions and to group those functions into our release plan. We could easily sit down with the customer and review what would, and wouldn’t, be in each release, and walk through the app functionality. We took this diagram into meetings with a projector, edited it, and moved things around. If something needed a deeper level of documentation, we added it, but in general, two levels was enough, and we never went deeper than three. If a function needed more than three levels, it was a candidate for a ‘use case’ of its own.

It’s extremely easy to train management, customers and developers to read this diagram.  It’s almost obscenely easy.   The only problem I had was with Visio trying to move the bubbles around for me when I was adding a new one.

Once the diagram was fleshed out, we split up the work between the three of us (the Project Manager, the Product Owner, and myself) to create a set of Word documents with more detail for each function, which became, not coincidentally, the Functional Spec.

[Image: FunctionalSpec - an excerpt from one of the Functional Spec Word documents]

For each level 1 bubble (use case), we had a section in the document. This document was maintained throughout the project by the QA Team and the Project Manager. It had a change log at the top, and changes were tracked. We did a final walkthrough of each document prior to starting development on that part of the project, which caught a lot of issues. That’s not to say we had completely fleshed out all the details, but it was amazing how much we could put down in a short amount of time.

The QAs’ test cases came right from this document. If there was a discrepancy between the test and the app, the document won out, or at the very least, it was updated to reflect the change. If changes started to sneak into the app without being updated in here, the QAs knew pretty quickly.

Again, this document didn’t go down to the line level in the code, but if there was a particular architecture I wanted to use, a trick in program flow, or some other note I wanted to add, I put it in here. No searching around for miscellaneous emails, no arguments about who told whom what.

I’m not saying this was perfectly executed all the time, but our QAs did an excellent job of holding both the developers’ and the PM’s feet to the fire and trying to keep this in good shape. Hopefully, on the next project, and the one after that, this becomes second nature, and it just gets done.

Of the things we did right on this project, this has to be at the top of the list.  Before we did this, we were headed down a very chaotic road, and the developers had that look on their faces that signaled we were in for a mutiny if we didn’t get our management act together.  This at least put off the mutiny for a couple of months.

Thursday, July 16, 2009

Demo Showcase Suite – A History – Part 2

Before I go into the details of what we did during the project, I’m going to cover two areas:  my role in the project, and the lessons I learned from previous projects and what I tried to do in this project to use those lessons.

First, I was the technical architect on the project.  At a small company, that can be a blurry title, and there were times where I ranged outside of what would normally be expected of the application architect.  What it meant to me was that I focused less on what the application did, and more on how it did it.  For instance, I chose the technologies, and resolved technical problems, but there were screens in the applications that I never saw until the project was almost complete.  There may still be screens out there that I have never seen.  I know for sure I have never looked at the help file in the application.  As long as it is there, I wasn’t concerned with it.  It is there, isn’t it?

In an ideal world, the application architect is the second busiest person during the first phase of the project. The busiest should be the Project Manager / Product Owner as they gather requirements, get customers to sign off, and kick out specs. During the first months, I spent a lot of time trying to figure out how I wanted the project to work, what technologies I was willing to work with, and which ones I couldn’t risk. This project was all about new technologies, and we were banking a lot of our project success on technology that didn’t even exist yet, but that was due to be delivered to us by a certain date. For example, the ability to run native code in Azure apps didn’t exist when we started in January, but was delivered in March.

Once the project gets going and gains traction, the architect should be able to slowly step aside and let the developers develop, and let the testers test. My role then became that of the problem solver. When a problem popped up that blocked development, I stepped in, took the issue, and tried to find a solution that worked within the overall architecture. Sometimes that was a matter of interfacing with the Azure team; sometimes it was a weekend of trying to figure out how to create Undo / Redo patterns in Entity Framework with SQL Server CE.

The theory is that by the last few releases, the architecture work is done, and I can move on to the next phase, or the next project, gradually reducing my time dedicated to the project so that when the devs are done, the next project is ready to go, and a consistent pace can be sustained throughout the project and the year. In theory. We’ll come back to that later.

As far as lessons go, I’d learned a lot in the last year working on those three or four other projects, and, of course, had a dozen plus years of experience on other projects.  Here were the chief thoughts on my mind when we started:

  1. I wanted specs that were easy to write, easy to read, and easy to maintain.
  2. I wanted to accurately track scope changes.
  3. I wanted unit tests.
  4. I didn’t want to get to a week from the delivery date, need to make a change, and have to regression test everything by hand.
  5. I wanted the team to feel the urgency of working hard and smart early in the project to avoid the death march mindset at the end.
  6. I wanted to plan for change, both in scope, the composition of the development team, technology and time lines.
  7. I wanted a reliable build and deployment process from the start.
  8. I wanted the team to be able to provide me with feedback quickly and easily and to give both myself and the PM a heads up well in advance of a red flag issue becoming critical.  No surprises.
  9. I wanted an architecture that minimized code smell, and one I could proudly show off as the best we could possibly do.
  10. I wanted to enrich my technical knowledge base.

Only one of these goals would I consider an architectural goal, but all of them tie into making the project, and the company, successful. Some of these goals we achieved, some we got close to achieving, and some I plan to work towards achieving on the next project with the lessons learned on this one.

I’ll go into each one of these in the next entry in this series.

Tuesday, July 14, 2009

Demo Showcase Suite – A History – Part 1

I work for a small software consulting company that had been doing work for Microsoft for a number of years, creating the original Contoso Demo Showcase demos that were available on the Microsoft Partner web site.  By October of 2008, we had created over 200 of these demos in eight different languages (so about 25 in English that had been translated 8 times), with each one taking anywhere from two to six weeks to create.  Much of this work was done before I had ever joined the company in January 2008, but I witnessed many of the late nights and long weekends that it took to get a demo just right by manually shooting each screen, and building a WPF executable from a series of screen shots.  Some of these demos had 200-300 screen shots in each, and a missed step or slightly misplaced hotspot would cost hours of work.  The developers had built some tools, and an engine to run the sims, but both needed a lot of work to make them usable for a non-developer user. I think the developers on that original effort are still having nightmares about it.  They do get the shivers when you mention it.

In October of 2008, I had just finished three projects for other customers, one small ASP.NET website, one large ASP.NET website, and a major upgrade to a very large Win Forms application in C#.  Throughout those three projects, I kept a mental list of what we did right, and what we did wrong, with the intent of applying the lessons to the next project.

Dan, one of the owners of the company I work for, who doesn’t like being called my boss, told me at the beginning of October that he was lining up my next project, and that it would be huge.  It would involve this idea they had sold to Microsoft to automate the production of these demos.  Dan and Benjamin, the other partner here, dropped some big Microsoft names into the conversation to entice the glory hound in me out of hiding.  I didn’t know anything about the project at this point, and though I was flattered by the trust they were putting in me, and excited by the possibility of working on a big, high profile project, I was nervous about being associated with this Contoso project and the late nights that I heard had come with it.    They assured me those days were over.  This was a completely different project.  Yes.  It was.

I was a couple of weeks late getting started on the project as I had to wrap up one of my previous projects. It’s never good to start out behind the eight ball, but in the scheme of things, it didn’t have a major impact on the project. I officially started looking at the project documents on Friday, October 24th. I remember going home all excited to be working on something new, starting from the ground up. I returned on Monday morning, October 27th, started digging in to the project in more detail, and started to outline my approach. We were going to do this one right.

On Tuesday, October 28th, my boss sent me a note, asking if I’d heard of this Azure thing Microsoft had just launched, and wondered if it might be worth looking into for this project. Sure, why not? The Microsoft PDC in Los Angeles was going on that day, and info was coming out fast. I spent the afternoon watching the PDC presentations on my PC. By noon the next day, I reported to Dan that Azure looked pretty cool, and that it would save us a bunch of time doing the hard things. Scaling with load, synchronization, blob storage, queues, security. All things we had dealt with on previous projects that caused us major issues. Developers could focus on the business app, and not on the environment. Perfect. I don’t remember his exact words, but it was something like “Good, because the customer wants the site built on Azure.”

I had never worked on a project with an OS that was in CTP.  Hell, I had never worked on a project that involved any CTP technology.  I came from a business systems background that waited at least a year and a half after software was released before using it.  In 2007 at my last job, we were barely upgraded to Visual Studio 2005.  I didn’t know what CTP meant.  CTP is after Beta, right?  Well, not exactly.

We started the project with a Proof of Concept application.  There was the web site (at the time called Simulation Server), a web service (SimSync), a web app (SimBuilder), a WPF Simulation Engine (SimEngine) and a Windows Service (SimCompiler).  Azure wasn’t quite ready for us to use, and we were still working out the details of getting into the Azure early adopter program, so we deployed the POC in a Win2K8 / SQL Server environment.  We delivered the POC a week or two before Christmas to give the execs a chance to look it over, and to get a feel for the project before we really dug in. 

We took the time during the Christmas lull to get fully indoctrinated with Azure, to begin to flesh out a few details, to organize ourselves, and to begin to look at some of the more complicated features. We also took the time to take a well-deserved break while the details of the system got worked out. We knew there would be a lot of work to do after the New Year.

We were accepted into the Azure early adopter program in late December.  The execs signed off on the project in early January.  Everyone was rested and raring to go.  It was time to get coding.

To be continued…

Monday, July 13, 2009

Demo Showcase Suite Launch

This blog has been a little devoid of content for the last few months. It’s not because I had nothing to say; it was because everything I had to say couldn’t be said. The three letters a blogger hates to hear are ‘N D A’, non-disclosure agreement. There are still a few things I can’t talk about, but by and large, the ban has been lifted.

This morning, at 9:00 AM Central Time, Allison Watson, Corporate VP of the Microsoft Worldwide Partner Group, announced the release of Demo Showcase Suite.

“Microsoft’s® Demo Showcase Suite is a collection of demonstration resources that includes the new Demo Showcase application for creating your own click-through demos as well as this community site to manage your demos, search for demos, distribute demos you’ve created and stitch different demos together to make your own. Sign in with Windows Live ID and take a look through this site and download the Demo Showcase application to get started creating your own demos that you can share through Silverlight or download as standalone executables”

For the last 9 months, I have been immersed in this project as the lead architect. It’s the biggest project I’ve ever led, and a high-water mark in my career. With a small team of developers and QAs, we turned out a product that we are all quite proud of.

You can see what the app does by going out and using it.  I’m going to focus my next few blog posts on the architecture of the product, the history of the project, and lessons and technology we learned while doing it.  For those of you that can’t wait, here’s a brief sample of the technologies we used (in no particular order):

  • Windows Azure
  • Windows Live ID Authentication
  • Unity Framework + Enterprise Application Blocks
  • Entity Framework
  • LinqToSql
  • WPF
  • MVVM in WPF
  • Silverlight 2
  • ASP.NET
  • Test Driven Development
  • Microsoft Geneva Framework
  • ACS
  • SQL Server 2008
  • Custom ASP.NET Handlers
  • Object Graphs in EF using QuickGraph.NET
  • ASP.NET Dynamic Data
  • WCF
  • REST
  • RSS Feeds
  • Custom WPF Compilation Engine
  • Log4Net
  • ASP.NET AJAX
  • Resharper 4.5

As you can imagine, things are still kind of hectic around here today.  I’ll try to update the blog this week with more details.  For now, go give the app a try.  If you can’t get to it because you are not a partner, please contact Microsoft Partner Group, and get enrolled today!

Thursday, July 9, 2009

Allow MSI Downloads on ASP.NET Web Site

I can't tell you how many hours I spent over the last few weeks trying to debug a problem with a web site where we were trying to allow users to download an MSI from a blob. Sometimes it would work the first time, and then the next time, the site would try to download the .aspx file the code was running in.

What made it really confusing was that this problem only happened when we were running in Azure with IE 8. In IIS 7 running on Win2K8, it was fine. With Azure and Firefox, it was fine. At one point, we had it working on everything except IE 8 running on Windows 7 with a backend of Azure.

What I finally had to do was create an ASP.NET Generic Handler (ashx) that did the heavy lifting for me. To give credit where credit is due, the suggestion came from someone I met at the latest NerdDinner in Bellevue, WA on July 7. (I wish I could give you his name, but I do know he was on the ADO.NET Services Team at Microsoft.) This is the second time that I have attended one of those dinners, and the second time I was able to solve a problem the next day based on advice given to me by someone there. Many thanks to Scott Hanselman for getting these events put together.

Here was the final code:

using System;
using System.Configuration;
using System.Web;
using log4net;

public class MsiDownload : IHttpHandler
{
    // One log4net logger per handler type
    protected readonly ILog _log = LogManager.GetLogger(System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

    public void ProcessRequest(HttpContext context)
    {
        // The name of the MSI to serve comes from configuration
        string msiName = ConfigurationManager.AppSettings.Get("MSIName");
        byte[] returnedBytes = null;

        // Resolve our blob storage wrapper from the Unity container
        var blobManager = UnityFactory.Current.Resolve<IBlobManager>();

        try
        {
            returnedBytes = blobManager.GetMsi(msiName);
            if (returnedBytes == null)
            {
                _log.ErrorFormat("Error obtaining MSI from blob storage service. Returned bytes is null.");
            }
        }
        catch (Exception exc)
        {
            _log.ErrorFormat("Error obtaining MSI from blob storage service. Exception type: {0}, Exception Message {1}.", exc.GetType().ToString(), exc.Message);
        }

        if (returnedBytes != null)
        {
            context.Response.Buffer = true;

            // Force the download, and tell IE 8 not to sniff the content type
            context.Response.ContentType = "application/x-msi";
            context.Response.AppendHeader("X-Content-Type-Options", "nosniff");
            context.Response.AppendHeader("content-disposition", "attachment; filename=" + msiName);
            context.Response.AppendHeader("Content-Length", returnedBytes.Length.ToString());

            context.Response.OutputStream.Write(returnedBytes, 0, returnedBytes.Length);
            context.Response.Flush();
            context.Response.End();
        }
    }

    public bool IsReusable
    {
        // A fresh handler instance per request; no shared state to worry about
        get { return false; }
    }
}
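
One nice side effect of using a generic handler is that there’s nothing extra to register; you drop the ashx file into the site and link to it directly from the download page, something like the line below (the file name here is assumed to match the class above):

<a href="MsiDownload.ashx">Download the installer</a>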