Using source control can’t prevent conflicting changes – level 200

For those of you who use source control (and it should be everyone), you know the title of this post to be true.  Source control is just a technology, and no technology makes all problems go away.  In my previous post about software builds, I received a comment about locking checkouts and check-in conflicts.  Rob was particularly interested in whether I check in project and solution files.  The answer is YES.  I check in everything.


If you use VSS, you use the check-out/check-in paradigm.  In this paradigm, someone checks out a file, and it is unavailable until they check it in (much like a library book).  VSS does have an option for shared checkouts so that checked-out files are not locked against change, but most VSS users I know use exclusive locking because they are afraid of merging two versions of a code file.


Subversion, the SCC system I prefer, uses a different paradigm.  To get the latest source, you check out a repository or a branch of a repository.  The “Check out” step happens only once, when setting up the working copy.  You always have the source checked out, but it is never locked.  If you need updated source from a team member’s changes, you issue the “Update” command.  When you are ready to push some of your changes to the SCC server, you issue the “Commit” command.  Every update is a merge from the source of record into your working copy, and it preserves any changes you may have made to a file while retrieving changes to the rest of the file.  Every commit is a merge of your changes into the repository.  Every code move is a merge.  No code file is ever locked against change, and everyone can work at once.
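
The day-to-day cycle looks roughly like this with the svn command-line client (a sketch only – the repository URL, project name, and commit message here are made up for illustration):

```shell
# One-time setup: create the working copy (hypothetical repository URL)
svn checkout http://svnserver/repos/myproject/trunk myproject
cd myproject

# Pull teammates' changes; Subversion merges them into any files
# you have edited locally instead of locking or overwriting them
svn update

# See which files you have changed locally
svn status

# Push your changes; this merges them into the repository
svn commit -m "Refactor order calculation"
```

Notice there is no lock step anywhere in the cycle – update and commit are both merges.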


VSS users may become afraid of making a change to the same lines of code that another developer is changing.  That is not a tooling problem; it is a communication problem.


SCC systems _cannot_ compensate for poor communication within a software team.  Communication is essential to working as a team, and if someone is doing a major refactoring, he should let the rest of the team know that some big changes are coming through the next time they update.  At the end of the day, the team should coordinate commits so that each commit is verified by the automated build before the next one is allowed.  This keeps you from committing on top of a broken build.


You may work with a distributed team, and your job is much harder because you don’t have the benefit of others with you in the war room.  You should still be constantly communicating over IM and telephone.  You must overcome that communication barrier.


In software, communication is key.

Break the dependency on HttpContext in order to test web functionality – level 300

HttpContext.Current is very useful, and it’s easy to sprinkle website code with it.  The bad side effect is that code that calls HttpContext.Current cannot be run in a test harness.  This is a big problem.  This post will show how to test code that needs HttpContext.Current.


The key is to break the dependency on HttpContext.Current.  By breaking the dependency, we can run our code in a test harness (like NUnit) and verify that it’s working correctly (test harnesses are also great for debugging).  To start breaking the dependency, consider the following code:


        public string GetLoggedInUserName()
        {
            return HttpContext.Current.User.Identity.Name;
        }


This code dives right into the ASP.NET API, and code that depends on this method will be impossible to test.  To break this dependency on HttpContext.Current, we have to know what we really need: the user name of the currently logged-in user.  We are pulling this from the IPrincipal object.  We’re going to strip out this code and put an interface in its place:


    [PluginFamily("Default")]
    public interface ICurrentHttpContext
    {
        IPrincipal Principal { get; }
    }


Next, we need a way for the original class to find a class that implements this interface at runtime.  We’ll use StructureMap for the dirt-simple linking through attributes:


    [Pluggable("Default")]
    public class CurrentHttpContext : ICurrentHttpContext
    {
        public IPrincipal Principal
        {
            get { return HttpContext.Current.User; }
        }
    }


Now we need to modify the original class with a testing constructor and a default constructor for dependency discovery:


    public class MyThingy
    {
        ICurrentHttpContext _context;

        public MyThingy(ICurrentHttpContext context)
        {
            _context = context;
        }

        public MyThingy()
        {
            _context = (ICurrentHttpContext)ObjectFactory.GetInstance(typeof(ICurrentHttpContext));
        }

        public string GetLoggedInUserName()
        {
            return _context.Principal.Identity.Name;
        }
    }


Notice here that we have a default constructor that asks StructureMap for the right implementation of ICurrentHttpContext, and for a unit test we have the constructor that accepts a mock instance.  This example shows that it is very easy to break a dependency on HttpContext.Current.  We can continue to use the fantastic services of HttpContext.Current while keeping our codebase testable. 
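
To make the payoff concrete, here is a sketch of what a unit test might look like with the testing constructor.  The FakeHttpContext class and the user name "jane" are my own inventions for illustration, not part of the original code:

```csharp
using System.Security.Principal;
using NUnit.Framework;

// A hand-rolled stub standing in for the real HttpContext (hypothetical helper)
public class FakeHttpContext : ICurrentHttpContext
{
    public IPrincipal Principal
    {
        // Return a canned principal so no web server is needed
        get { return new GenericPrincipal(new GenericIdentity("jane"), new string[0]); }
    }
}

[TestFixture]
public class MyThingyTester
{
    [Test]
    public void ShouldReturnNameOfLoggedInUser()
    {
        // Use the testing constructor to inject the stub directly
        MyThingy thingy = new MyThingy(new FakeHttpContext());
        Assert.AreEqual("jane", thingy.GetLoggedInUserName());
    }
}
```

The test never touches HttpContext.Current, so it runs anywhere NUnit runs.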

A software “build” is a lot more than just compiling the solution – level 200

Many developers don’t use source control and don’t use any automated tools.  This is extremely inefficient and troublesome.  Those on teams are forced to use source control in an effort to share the latest code with all members of the team.  In a source-controlled environment, there is a tacit agreement not to commit any code to the repository that will break the build.  If the build breaks, it hinders the velocity of the other developers on the team because they cannot move on while the build is broken.

What does “build” mean?
Some folks use the term “build” to mean compile, and that is incorrect.  On the teams that use no automated tools, the compile might be the only step in their build process, but the two are still different.  The “build” is a process of taking the source of a software system and making it ready for deployment.  Some teams will manually compile the source and stop before deploying to a development environment.  These teams are short-changing themselves because the only feedback they’ve obtained about the current bits is that there are no syntax or linking errors.  There is no verification that any part of the software functions as intended.  Next, they may manually perform some steps to get all the bits and configuration in order to deploy the system to a development environment.  Then after some manual testing, they’ve obtained some level of feedback.

Let’s compare the above with an Agile build.
Here are some steps that are often performed in the build process of an Agile team – these steps are always automated so they run fast and are repeatable:

  • Update the latest code _and dependencies_ from source control (the automated process gets the latest code from the SCC repository)
  • Compile the solution (standard compile and link)
  • Copy application files to a test location (output binaries moved into place to prep for automated testing)
  • Run automated unit tests (tests produced through TDD or otherwise – give immediate feedback on the state of the system)
  • Perform automated environment setup to prepare for an integrated test of the system
  • Run integration tests (give even more feedback that the integration points of the system are functioning correctly – might include a database)
  • Run regression tests, if you have them (verify that all past functionality still works as before – a type of integration test)
  • Tag source control with the build number if successful (only tag successful builds – discard unsuccessful ones)
  • Notify development team members of success or failure

Some teams add more steps depending on their needs, and some teams don’t have integration or regression tests suites yet.  Each build process should be developed by the team and tailored to the system.  The above are some of the more common steps that Agile teams include in a build process.
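
As a sketch, the first few of those steps might look like this in a NAnt build file.  The project name, solution path, and test assembly path here are invented for illustration:

```xml
<project name="MyProject" default="build">
  <target name="build" depends="compile, test" />

  <!-- Compile the solution (solution path is hypothetical) -->
  <target name="compile">
    <solution solutionfile="src\MyProject.sln" configuration="release" />
  </target>

  <!-- Run the automated unit tests with NUnit -->
  <target name="test" depends="compile">
    <nunit2>
      <formatter type="Plain" />
      <test assemblyname="src\MyProject.Tests\bin\release\MyProject.Tests.dll" />
    </nunit2>
  </target>
</project>
```

Because every step is a target, the whole chain runs with a single command, and a CI tool like CruiseControl.NET can kick it off after every commit.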

The point of an automated build process is to transform the current code into a working system and get feedback on the current quality as fast as possible.  If the entire process is fast, you will run it often and obtain feedback often.  If it’s slow, you won’t do it often.  The only requirement for the developer is to start the build.  Many Agile teams even automate that step by having a program kick off a build after every commit to the SCC repository.  That process is called “Continuous Integration”. 

At the end of a build, the team should be confident that if they deployed these bits to an environment, it would work.  There may still be bugs discovered, but they are confident that old bugs haven’t resurfaced and the system works at least as well as it did on the last build.  The extra testing steps in the build process ensure that the state of the software is always moving forward.  Without these steps, developers have no way of knowing if a change broke an existing feature.

Feedback is key in a build process.  The team should decide what steps can be added to the build process to generate as much feedback as possible.  My team recently inherited a system with a build duration of 25 minutes.  This is way too slow for us, and our initial goal is to reduce that duration to 10 minutes.  We’ll be able to do this by emphasizing fast unit tests and doing away with some of the really slow integration tests (those with delicate, cumbersome data-setup scenarios).

Tools my team uses in our build process:

  • Subversion (source control)
  • CruiseControl.NET (build runner and tagger – keeper of the builds)
  • CCTray (system tray notification of successful or failed builds)
  • NAnt (Xml scripting format to describe steps to be run during the build)
  • NUnit (automated test harness)
  • .NET SDK (includes compiler and linker)

Resharper 2.0 alpha has many interesting new features – level 200

You can try out the Resharper 2.0 alpha for Visual Studio 2003 or 2005.  I tried it out for Visual Studio 2005.  Immediately, some cool new features jumped out at me. 



  • Resharper runs NUnit tests for you right from the code.  It includes a graphical test-runner window similar to the NUnit GUI.

  • Many new refactorings with context-sensitive offerings.  One common one is to break a variable declaration apart from its assignment.

I tried it out for about an hour, and then I uninstalled it.  It is an alpha, and I did experience some of the known issues, which include hangs in several places:



  • Adding new class to project.

  • Opening a solution.

  • Saving a file.

  • Compiling.

These known issues are published on the download site.  After all, it’s an alpha, but when it performs as snappily as Resharper 1.5, it’ll be a great productivity tool.


For the time being, I’m evaluating another refactoring tool called JustCode!.  It has similar features to the other tools in the space, and I’m giving it a shot.  I recognize that I have a bias because I’m already familiar with the Resharper shortcut keys, but JustCode! is nicely filling in the gaps of the VS 2005 refactorings.

MSF for Agile update released – still doesn’t hit the mark – level 200

A new beta of MSF for Agile is available from Microsoft.  I read through it (more reading than I normally like to do about process-oriented stuff), and it’s better than classic MSF.

Classic MSF was very waterfall even though it called the waterfall a cycle.  There were very rigid phases of envisioning, planning, build, stabilize, deploy, and maintain.  The new MSF does a lot to emphasize Agile concepts that keep us more focused on the software than the process, but it keeps a death grip on all the documents that classic MSF had.

Documents

Some documents have been renamed and shuffled around, but here are _some_ of the documents required by the MSF Agile process:

  • Vision Statement
  • Persona
  • Scenario List
  • Quality of Service Requirement List
  • Project Checklist
  • Threat Model
  • Scenario Description
  • Iteration Plan
  • Storyboard
  • Application Diagram
  • System Diagram
  • Test Approach
  • Release Plan

I have no doubt that there are full-time folks at Microsoft who maintain these documents all the time, but the average software company isn’t the mammoth that Microsoft is.  Most of the points of all these documents can be summed up by some bullet points on a whiteboard.  My team uses a wiki to keep current information since we have stakeholders in another city, so anything that concerns them is jotted on the wiki so anyone can view or change it.  Most stuff goes on a whiteboard until it’s no longer useful.

Documents are only useful for a time, so I believe there is still too much document overhead in the MSF process.

Builds

MSF for Agile describes a daily build and an accepted build.  There is no mention of continuous integration, and it allows many check-ins before a daily integration build is done.  I believe that more feedback is desirable.  Even with dependencies from other teams, you always have a good build of those, so a build on every check-in would give more instant feedback.  This point also stresses the need for fast builds.  If your build takes hours to complete, you have no choice but to run it daily.  A build measured in minutes (10 or less) will be run very often and build confidence that the software is still working.

Work Items

The MSF for Agile process uses Visual Studio Team System terms, naturally.  They have different levels of work items:

  • Scenario Overview
  • Quality of Service Requirement Overview
  • Task Overview
  • Bug Overview
  • Risk Overview

All work items must be tracked permanently according to the process.  The one that jumps out as overkill is the task work item.  Tasks are very small, and merely recording a small task in a tool might lengthen its duration by 20%.  My team tracks at the story level and tasks out on a whiteboard when necessary.  If there is a very large story with large tasks within it, the product manager might record the large tasks (which might need to be called separate stories anyway), but we as developers concentrate on creating working software.

Conclusion

MSF for Agile misses one major point:

Software teams should be self-organizing.

Each software team is different and has different goals.  The process should be customized for each team, yet MSF for Agile reads as if it can be used out of the box.  I believe the process should stress that inappropriate items be omitted.  MSF for Agile still seems pretty heavy to me, and there is a lot of tracking that doesn’t directly validate that the software works as intended.  Tracking every work item and having every document doesn’t matter if the customer isn’t happy.  The key metric should be customer acceptance at every level, not just at the end of the release.  Automated testing wasn’t stressed either, and that is a key way to get build feedback quickly and ensure that something a customer liked yesterday isn’t broken today by an unrelated change.

Overall, it appears that MSF for Agile is trying to embrace real Agile, but it won’t let go of waterfall.  Right now, it’s sitting on the fence looking at both sides.

Rhino Mocks are strongly typed. Refactor unit tests with ease – level 200

I completed my first unit test with Rhino Mocks today.  With well-designed code, one can pick apart a section, mock the interfaces it needs, and run/debug/test it in isolation.  This is a huge advantage of a loosely-coupled design.  The alternative is a tightly-coupled design where every class knows about the API of every other class, and nothing can be run unless _everything_ is operational.  The worst of this is if you have a development resource like a database or message queue that isn’t operational at the moment.  You can’t run any of your code because three classes away something depends on it.

The biggest win, in my opinion, for a loosely-coupled design is testability.  If every dependency is hidden behind a custom interface, those interfaces can be mocked at will, and your code that uses these interfaces can be run/debugged/tested at will.  This is where mocking frameworks come in.

I currently use NMock in my automated tests.  I’ve also used the mocking framework in nunit.mocks.dll that is available with NUnit (by the way, version 2.2.5 is out).  NMock uses expectations as strings to define how it will simulate an object that you need.  You can read up on NMock here:
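
For contrast, setting up an expectation in NMock 1.x looks roughly like this.  This is a sketch; the IDataSetStore interface here simply echoes the one used in the test sample later in this post:

```csharp
using NMock;

// Minimal interface for illustration, mirroring the one used in the Rhino sample
public interface IDataSetStore
{
    object GetDataSet();
}

public class NMockStyleExample
{
    public static IDataSetStore BuildMock()
    {
        DynamicMock mock = new DynamicMock(typeof(IDataSetStore));
        // The method name is a plain string -- a rename refactoring will not update it,
        // and the compiler cannot catch a typo here
        mock.ExpectAndReturn("GetDataSet", new object());
        return (IDataSetStore)mock.MockInstance;
    }
}
```

That string is exactly what Rhino Mocks eliminates.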

Rhino Mocks serves the same purpose, but instead of strings to represent the method that should be called, it uses the actual method.  It follows a record-and-play pattern, and you don’t use a string while setting up any of it.  This is a huge time-saver.  When you want to rename a method, you don’t have to search for strings that contain that method name.  The compiler will now catch any errors resulting from the rename, and your refactoring tool will rename all calls to that method for you.  Here’s a sample test for a custom MembershipProvider I was playing with:

 

        [Test, Explicit]
        public void ShouldUpdateUserInformationInProviderWithRhinoMocks()
        {
            Rhino.Mocks.MockRepository mocks = new Rhino.Mocks.MockRepository();

            MembershipDataSet dataSet = new MembershipDataSet();
            dataSet.Users.AddUsersRow(Guid.NewGuid(), "lerma", "lerma", "lksd", "",
                DateTime.Now.ToShortDateString());

            IDataSetStore store = (IDataSetStore)mocks.CreateMock(typeof(IDataSetStore));
            Rhino.Mocks.Expect.Call(store.GetDataSet()).Return(dataSet);
            store.SaveDataSet(dataSet);

            mocks.ReplayAll();

            StructureMap.ObjectFactory.InjectStub(typeof(IDataSetStore), store);

            IMembershipProvider provider = (IMembershipProvider)StructureMap.ObjectFactory.GetInstance(
                typeof(IMembershipProvider));
            Assert.AreEqual(1, dataSet.Users.Count);

            MembershipUser user = provider.GetUser("lerma"); // call should trigger the GetDataSet() call
            user.Email = "lerma@address.com";
            provider.UpdateUser(user); // should call SaveDataSet(...)

            user = provider.GetUser("lerma"); // DataSet should be cached, so no call
            Assert.AreEqual("lerma@address.com", user.Email);

            mocks.VerifyAll(); // make sure the interaction with IDataSetStore was correct
        }

 

Note one very big difference between Rhino and NMock.  When you first create the mock, you are in record mode, and for void methods, to set up the expectation, you actually _call_ the method.  Rhino sees that and adds the expectation.  For non-void methods, you use Expect.Call to set up the return value.  When you are ready to run the code under test, you switch to play mode using the ReplayAll() method.  And then at the end there is the obvious VerifyAll() method.  Rhino supports Replay and Verify for all mocked objects at once or one by one.

In this code sample, I’m using StructureMap in my production code to link dependencies.  Here, I tell StructureMap to use the mocked object instead.  Read up on StructureMap here.

So far, I’ve found Rhino Mocks to have every feature that I currently use with NMock, plus the strongly-typed expectations, so I’m considering making a switch.  For the time being, I’ll continue to use Rhino for new tests as a longer evaluation.  To get more information on how Rhino Mocks works, read the documentation, which is very good.