Release retrospective on extreme programming practices – level 300

Tomorrow, my team will make a software release that’s been in the works
since the end of January (2.5 months of work).  We’ve been using extreme programming
practices, and I’ll lay out the good, the bad, and the ugly as I look
back on the release.  First, here is a rundown of some things we
have been doing:

  • Source control with Subversion
    Every developer also receives email notifications of every commit to
    source control.  Each developer is responsible for reviewing
    commits.
  • Continuous integration with CruiseControl.Net
    Every time code is committed to source control, CC.Net runs our
    automated build, which is based on NAnt.  Each developer receives a
    success or failure notification in his CCTray.  Broken builds are not
    acceptable and must be fixed immediately.  Code cannot be checked into
    source control while the build is broken.  Our build runs unit tests,
    integration tests, and acceptance tests, and it creates the deployment
    package that’s used for the install.
  • Automation.  We automate everything.  For example, we
    have 8 development databases, and when the database scripts are updated
    in source control, all 8 databases are dropped and recreated from the
    changed scripts (see the first sketch after this list).  Builds of
    components are propagated to the components that depend on them.
    Installation of the latest build on the dev and test environments is
    completely automated, allowing our tester to pull the latest build with
    the click of a button.
  • Test-driven development (TDD)
    We drive the design of our classes through unit tests.  The
    byproducts are a loosely coupled design and a large battery of unit
    tests that run continuously with every build (see the second sketch
    after this list).
  • Pair programming
    All production code is written by a pair of developers.  We have
    solo tasks too (like tweaks to the CC.Net build), but all new
    code and code changes are written by two developers.  We use VNC
    for pairing because it gives each of us our own mouse, keyboard, and
    display even though we are working on a single workstation.
  • Collective code ownership
    We don’t have a concept of a single person’s code.  If we need to
    change some code, we change it.  No need to consult someone for
    “permission”.
  • The simplest design that will work
    Not the simplest design that will compile.  “Work” is defined by
    the customer.  We defer longer-range designs because, in practice,
    code written for 6 months down the road is likely to be wasted work:
    in 6 months, the customer will probably decide to go in a different
    direction.  If not, it’s just as easy to add the same code 6 months
    from now.
  • Constant improvement.  We are constantly improving the
    system and the way in which it’s tested.  We have an idea wall
    (whiteboard) with a list of items we try to squeeze in that will allow
    us to go faster.
  • Iterations
    We use 2 week iterations.  We define and execute work two weeks at
    a time.  The customer is allowed to reprioritize every two
    weeks.  We know change is a part of software engineering, so we
    don’t try to fight it.
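
The database-rebuild automation mentioned above boils down to something like the C# sketch below.  This is a minimal illustration, not our actual build script (the real thing is driven from NAnt): the class name, folder layout, and connection strings are assumptions, and we handle ordering and error reporting more carefully.  The idea is simply to drop the target dev database and replay the versioned scripts from source control.

    // Hypothetical sketch: rebuild a development database from the SQL
    // scripts kept in source control.  Names and paths are illustrative.
    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Text.RegularExpressions;

    class RebuildDatabase
    {
        static void Main(string[] args)
        {
            string server    = args.Length > 0 ? args[0] : "localhost";
            string database  = args.Length > 1 ? args[1] : "AppDev1";
            string scriptDir = args.Length > 2 ? args[2] : @"..\database\scripts";

            // Connect to master so the target database can be dropped and recreated.
            using (var master = new SqlConnection(
                string.Format("Server={0};Database=master;Integrated Security=SSPI", server)))
            {
                master.Open();
                Execute(master, string.Format(
                    "IF DB_ID('{0}') IS NOT NULL DROP DATABASE [{0}]; CREATE DATABASE [{0}];",
                    database));
            }

            // Replay every versioned script, in name order, against the fresh database.
            string[] scripts = Directory.GetFiles(scriptDir, "*.sql");
            Array.Sort(scripts);
            using (var db = new SqlConnection(
                string.Format("Server={0};Database={1};Integrated Security=SSPI", server, database)))
            {
                db.Open();
                foreach (string file in scripts)
                {
                    // SqlCommand can't execute GO separators, so split each script into batches.
                    foreach (string batch in Regex.Split(File.ReadAllText(file),
                        @"^\s*GO\s*$", RegexOptions.Multiline | RegexOptions.IgnoreCase))
                    {
                        if (batch.Trim().Length > 0) Execute(db, batch);
                    }
                }
            }
            Console.WriteLine("Rebuilt {0} from {1}", database, scriptDir);
        }

        static void Execute(SqlConnection connection, string sql)
        {
            using (var command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }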
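
To show the flavor of the test-first loop, here is a hedged NUnit example.  The OrderCalculator class and its discount rule are made up for illustration (our real tests exercise our own domain classes); the point is that the test is written first, so the test dictates the constructor and method signatures, and the class then gets the simplest implementation that makes the test pass.

    // Hypothetical TDD example: the test below is written before the class exists.
    using NUnit.Framework;

    [TestFixture]
    public class OrderCalculatorTests
    {
        [Test]
        public void AppliesTenPercentDiscountToOrdersOverOneHundredDollars()
        {
            OrderCalculator calculator = new OrderCalculator(100m, 0.10m);

            decimal total = calculator.Total(150m);

            Assert.AreEqual(135m, total);
        }
    }

    // The simplest implementation that makes the test pass.
    public class OrderCalculator
    {
        private readonly decimal threshold;
        private readonly decimal rate;

        public OrderCalculator(decimal discountThreshold, decimal discountRate)
        {
            threshold = discountThreshold;
            rate = discountRate;
        }

        public decimal Total(decimal subtotal)
        {
            // Orders over the threshold get the discount; everything else is unchanged.
            return subtotal > threshold ? subtotal - (subtotal * rate) : subtotal;
        }
    }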

So now, I’ll run down the good, the bad, and the ugly:
The good:

  • We finished a week early.  Our automated testing and
    installs really paid off by allowing us to move very quickly.
    From the time we changed some code, a new build could be on the test
    environment within 30 minutes, confirmed by all automated tests
    passing.
  • Very, very few bugs.  Because of the heavy emphasis on
    automated testing, bugs are squeezed out of existence.  The only
    places where bugs crept in were where we missed an opportunity for an
    integration test or overlooked a small feature.  The code with tests
    had absolutely no bugs.
  • Easy deployment.  Because of our emphasis on automation, our
    system installs on development and test servers, and it’s super easy to
    use the same process for deployment to our hosted servers. 
  • No death march at the end of the release cycle.  We kept a
    sustainable pace throughout the release, and we avoided a last-minute
    fire drill to make a date.

The bad:

  • It was hard.  To maintain the sustainable pace, we worked
    hard all day every day.  It was mentally tiring because of the
    discipline required to work at this level of efficiency.
  • It’s hard for my wife to be able to call at any
    time.  Because we all work in the same war room and are pairing on
    all production code, when my wife calls, it’s never a good time to
    talk, and I have to help her understand our way of working and that I
    can’t take a phone call at just any time.  Family emergencies
    obviously take precedence, but my wife likes to call and chat, and
    that doesn’t work with extreme programming because we are engaged all
    the time.
  • It takes a certain type of programmer to work in this disciplined
    fashion, and not all programmers want to work this way.  It is
    taxing for us because sometimes we don’t want to pair program, but we
    know we must. 

The ugly:

  • The code we inherited.  We are improving a system with
    inadequate test coverage.  The tests that are there are integration
    tests, and when they fail, it’s a mystery to discover what part of the
    code is actually the problem.  We made vast improvements to the
    testability of the system, and we’ve employed acceptance tests with the
    FitNesse wiki to allow for automated full-system tests (see the sketch
    after this list).
  • Technical debt and politics.  We constantly made
    improvements to the code and the way we worked.  Sometimes this
    wasn’t obvious to management.  We had to make some changes to
    facilitate some needed behavior, but management wanted the behavior
    without paying for the prerequisite work.  In this case, it was
    our responsibility to push back and not allow technical debt to
    accumulate just for political reasons.  We try as much as possible
    to keep technical decisions separate from political ones.  It’s our job
    to take customer requirements and translate them into working
    software.  The manner in which we do that is our decision (the
    decision of the technical players on the software team).
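
For a sense of how the FitNesse acceptance tests hook in, here is a hedged sketch of a .NET column fixture.  The ShippingFeeFixture name, its fields, and the shipping rule are invented for illustration and are not one of our real fixtures; the shape is what matters: the wiki table supplies the inputs and the expected output, and the fixture delegates to the system under test.

    // Hypothetical FitNesse column fixture; names and rules are illustrative.
    using fit;

    public class ShippingFeeFixture : ColumnFixture
    {
        // Input columns: FitNesse writes each table cell into these fields.
        public decimal orderTotal;
        public string destination;

        // Output column ("fee?" in the wiki table): FitNesse calls this method
        // and compares the return value against the expected cell.
        public decimal Fee()
        {
            // In a real fixture this would call through to the system under test;
            // the stand-in rule below only keeps the sketch self-contained.
            if (orderTotal >= 75m) return 0m;
            return destination == "international" ? 25m : 5m;
        }
    }

A wiki table exercising it would look something like this:

    |ShippingFeeFixture|
    |orderTotal|destination|fee?|
    |50|domestic|5|
    |80|international|0|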

Overall, the employment of these extreme programming practices has been
wildly successful.  We’ve improved the system in a very short
period of time while adding functionality, and we’ve paid down
technical debt we inherited without incurring more.  We are
encouraged by these results, and we will continue these practices that
have been working for us.  We’ll look for even more ways to go
faster by improving efficiency.  We’ll increase our level of test
automation to free up our tester for more in-depth exploratory
testing.  Overall, management is pleased because we’ve produced
the business value they care about.  In the end, that’s all that matters:
delivering software that adds business value.