How to produce a software product quickly, part 2 – level 300

This is a follow-on to part 1 of this series.  I'm talking about how to produce software quickly.  To be clear, I'm not talking about producing brittle software quickly.  Software is too expensive to be built cheaply.  That mantra makes a good tagline because it is so true.  The software I'm asked to produce is important.  It can help make or break the company.  The stakes are too high to take shortcuts.

I've seen software systems launch with a glory of "congratulations" emails flying around.  "The project was a huge success," everyone cries.  Then, two short years later, developers are threatening to quit if they are forced to attempt one more change to what is now seen as a complete and utter flop.  Then management calls for a rewrite.  Never mind any introspection about what happened.  How could a roaring success turn into a flop in two years?  No, no time for a retrospective; we need a rewrite.  And the cycle continues every two years.

That is not what I'm talking about.  The software needs to be sustainable.  It can never become so complicated that newcomers to the team have a hard time figuring it out.  It has to glow with simplicity.  Anything large and complex can't be simple, can it?  I think it can.  In this installment, I'm going to talk about a favorite mantra of mine when working on a software product.

Part 2:  Dodge as much work as possible

You are laughing at me right this moment, but I'm serious.  If I can get away with NOT doing something, I will.  There is an infinite amount of work to do on the product, and my team has to produce business value quickly.  Logically, we have to maximize business value delivered with each unit of work chosen.  Certainly product management needs to prioritize items so that what we work on actually matters, but along with feature stories, technical stories creep in.  What other type of work do we find ourselves doing that doesn't translate directly into business value?

Performance tuning

First of all, if a high measure of speed is important for the product, the customer will communicate that.  Software that flies a fighter jet has to be sufficiently responsive that when the joystick moves, the plane moves with it.  A half-second delay would be completely unacceptable.  Now think about an enterprise business application.  Think about Microsoft Outlook.  How often is there a half-second delay or more when performing an operation?  Is it a show-stopper?  Is the application unusable when the progress bar pops up to "check email"?  Absolutely not.  It is tempting to stroke our technical prowess and ponder ways to save some CPU cycles.  After all, I'm iterating over that collection twice.  Maybe I could trim it down to just once... but those operations are in different classes... hmm... could I alter my design to save that second iteration?  That sounds absurd, especially when your next operation is calling a web service in another state.  You might save a few milliseconds, but then you promptly wait a full second for the web service call to complete.  Meanwhile, you have other high-priority stories assigned.  At the next stand-up meeting you report your four hours of performance tuning, and the customer can't tell any difference in the speed of the application.  In other words, developer time was spent on something with no business value.  Database access is another area that often gets optimized.  Database access is an "out of process" operation, and it is inherently slower, by orders of magnitude, than any in-process operation.  A typical application, when profiled, will find 80%+ of processing time* in data access operations, not in-process object model manipulation or screen drawing.

*Statistics made up on the fly with 95% accuracy.
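To make the disproportion concrete, here is a minimal sketch (the collection size and the one-second delay are invented for illustration, not taken from any real profile) contrasting the in-process iteration we agonize over with a single out-of-process call:

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;
    using System.Threading;

    public class WhereTheTimeGoes
    {
        public static void Main()
        {
            List<int> orders = new List<int>();
            for (int i = 0; i < 100000; i++) orders.Add(i);

            // The "wasteful" second iteration we considered redesigning away.
            Stopwatch watch = Stopwatch.StartNew();
            long total = 0;
            foreach (int order in orders) total += order;
            watch.Stop();
            Console.WriteLine("Second in-process iteration: {0} ms", watch.ElapsedMilliseconds);

            // A stand-in for the web service call in another state.
            watch = Stopwatch.StartNew();
            Thread.Sleep(1000);
            watch.Stop();
            Console.WriteLine("One out-of-process call: {0} ms", watch.ElapsedMilliseconds);
        }
    }

The second iteration costs a millisecond or two; the remote call costs a thousand.  That disproportion is what makes this kind of tuning a poor use of developer time.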

It makes sense to optimize data access then, doesn't it?  I don't know.  Does it?  If you don't do it, what will happen?  Will the customer report that the application is too slow?  Will the customer even care?  Is it a SQL Server 2000 database with 1,000,000 records total and no more than 10 concurrent users?  If so, your database server laughs at the load you place on it every day.  It can serve up your requests with one CPU tied behind its back.

Technical stories

It's easy to accept work given by the customer as "#1 priority".  It's not so easy when the team comes up with technical stories.  Many technical stories have merit; as professionals we can see things coming, and we need to be able to responsibly allocate work for ourselves that product management would never have brought on its own.  For instance, we must take reasonable measures to secure the application.  The basic one is the database connection string.  How do we store and secure it?  The customer doesn't know about databases or connection strings.  The customer may love the application, but if a script kiddie can find the connection string, access our database, and call our delete stored procedures, then the software carries a big risk.  Explain that to the customer, and they will understand the time spent on securing the connection string.  Judgement comes into play, though, because we could symmetrically encrypt the connection string, but then a savvy developer could probably still hack our software.  Judgement:  Is the customer paying for protection from ill-meaning savvy developers in the company?  This technical story could quickly explode if it's taken too far.
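As one sketch of what a reasonable middle ground can look like in .Net 2.0, the built-in protected configuration support encrypts the connectionStrings section in place, so the plain-text string never sits in the config file (the exe path below is made up for the example):

    using System.Configuration;

    public class ConnectionStringProtector
    {
        public static void Protect()
        {
            // Open the application's configuration file (path is illustrative).
            Configuration config =
                ConfigurationManager.OpenExeConfiguration(@"C:\MyApp\MyApp.exe");

            ConfigurationSection section = config.GetSection("connectionStrings");
            if (!section.SectionInformation.IsProtected)
            {
                // DPAPI ties the encryption to the machine: no key for us to
                // manage, but no protection from an admin on the same box either.
                section.SectionInformation.ProtectSection(
                    "DataProtectionConfigurationProvider");
                config.Save();
            }
        }
    }

That's about ten minutes of work.  It stops the script kiddie without pretending to stop the determined insider, which matches the judgement call above.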

Back to data access.  Wouldn't it be great if we could somehow detect which table columns were changed so that when we did an update, we only updated those columns?  To that, I'd say "that would be terrible... if it took more than 10 minutes to implement".  If my product were a super-high-traffic site, maybe.  If my product is an enterprise app with a maximum of 10 users actively on the system at any given time, then no.  1000 users of a system in any given day might not even translate into 10 database operations happening concurrently (during the same second).  In this case, there is absolutely no business value derived from this technical story.  If accepted, we are effectively wasting time.  We need to eliminate waste.
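For contrast, the ten-minute version is the naive UPDATE that writes every column whether it changed or not.  A sketch (the table and column names are invented):

    using System.Data.SqlClient;

    public class ContactWriter
    {
        public void Save(SqlConnection connection, int id, string firstName, string lastName)
        {
            // Writes all columns every time. Wasteful in theory; invisible
            // in practice at ten concurrent operations per second.
            string sql = "UPDATE Contact SET FirstName = @FirstName, " +
                         "LastName = @LastName WHERE ContactId = @ContactId";
            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@FirstName", firstName);
                command.Parameters.AddWithValue("@LastName", lastName);
                command.Parameters.AddWithValue("@ContactId", id);
                command.ExecuteNonQuery();
            }
        }
    }

If profiling ever proves this is the bottleneck, a change-tracking story earns its place in the backlog.  Until then, it is waste.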

Frameworks

"To build our application right, we need to build a framework first."  Other people have written about this, and I've been there.  In fact, I've been a framework writer.  Boy, did I crank out a lot of code that nobody uses!  If a framework is the deliverable, then ok.  If an application is the deliverable, then we'll be building the application, not a framework.  Besides the fact that it's hard to know what to build before something exists to use it, a framework is a technical story that the customer doesn't benefit from.  I am a big fan of using frameworks to build the application quickly, though. 

Build vs. buy

I default to buy and then entertain convincing arguments to build.  Think about the extremes and then work your way back.  Would you build the .Net Framework?  No, you'd buy it (obtain it).  Would you build your own web framework, or would you use ASP.NET, Struts, WebWork, Rails, etc.?  Would you build your own ADO.NET provider, or would you use DataReader and DataAdapter?  Would you build your own data access plumbing, or would you use an O/R Mapper or code generator to build this mechanical, boring code?  Would you build all the screens of the application yourself, or would you buy a UI controls library?  All these questions have the same thing in common.  Commonality!  The .Net Framework is used in all .Net applications.  Web frameworks abstract away HTTP plumbing.  ADO.NET providers handle the binary communication with a database.  O/R Mappers deliver the transition from a rich object model to the relational storage view of the data.  UI control libraries abound to get a nice look and feel by leveraging the UI expertise of the industry.

What is left?  I'll make an assertion: the only thing that should be left is the code that is unique to your product.  This is the code that makes the application important.  Everybody does UI.  Everybody does data access.  Only you deliver application X.  Your customer needs application X for a specific purpose, and that purpose is modeled by you.  It is the one thing you can't buy.  You can't buy the distinct business value you are delivering with your custom software.  In fact, that value is the only thing custom about the software.  It is what matters, though.  By defaulting to buy, I can dodge quite a bit of work.  I don't have to spend time on fancy UI controls.  I don't have to spend time on boring data access plumbing.  I can focus solely on providing unique business value.

Not created here syndrome

This is the fear of tools and libraries.  Essentially the fear of the unknown.  If it's not from Microsoft and not from us, then we're not using it.  I firmly believe that if Microsoft hadn't delivered VSS, many more shops would never have begun using source control.  There are so many tools and libraries available that ignoring them can be irresponsible.  It's different in the Java world.  The beginning of a project starts with the selection of tools and libraries.  Often for web apps, they'll choose Struts, Spring, and Hibernate.  This combination gives them the shell of an app, and developers are able to focus on the object model that makes the software valuable.  Hibernate is very common for Java apps.  Microsoft doesn't have an O/R Mapper.  Once it does, no one will ever write data access code in enterprise apps again (mark my words). 

In the .Net world, it can be a struggle because some folks think that Microsoft is the only entity capable of producing a quality library.  Not created here syndrome leads to 3 times as much work as necessary.

YAGNI:  You ain't gonna need it

If the customer specifically asks for a function, build it.  If not, don't.  Let's say your customer needs a Windows app for managing contacts (I know, trivial example).  The customer needs to be able to add, edit, and delete contacts.  Say you start working on the add feature, and you speculate that you should probably make a screen so that several contacts can be added quickly.  It seems like a logical extension to the feature, and it seems that it could provide value.  The danger is that while you work on the multi-add screen, the edit screen isn't getting done.  A savvy customer will quickly question and correct this type of behavior.  It might have been a logical variation of the feature, but it's not the most important thing.  Like I've said before, there is an infinite amount of work to do.  The art of product delivery is focusing on the small subset of work that will be valued the most.  Essentially, if I demo an incremental build to a customer and I have to point something out, then we put in a feature that could have been deferred in exchange for something more important.  The customer will come to the demo asking if three things are done:  add, edit, and delete.  Until those three things are done, the product team has no right to insert other work in front of the key stories.  With the YAGNI mantra, I assume that if the feature isn't specifically requested, it isn't going to be needed.  Maybe it'll be needed later, but that's just speculation.  When the priority is to deliver value quickly, I have to be able to defer nonessential work.

Conclusion

It's somewhat of an art to be able to filter that infinite pile of potential work down to the small subset that will satisfy the customer.  Ratholes and scope creep are very dangerous to the timely delivery of software, so I always keep one question in mind:  "Can we put this off till later?"

How to produce a software product quickly, part 1 – level 300

This is harder than it sounds.  I’m thinking about this topic because I’m the manager of a software product team.  I’m responsible for the product’s health and speedy delivery.  Because of that, I need to steer the team in the direction with the shortest path to the finish line.  Some of the things I’m focusing on are as follows:

Part 1: Eliminate Waste

I think there is merit in the "Lean" notion of software development.  Earlier in my career, I worked for Dell, Inc. as a software developer.  At the time I was pretty pleased to be working for the world's largest computer manufacturer, but being there taught me a lot about waste and how easy it is to indulge in it.  I'm sure other large companies have these problems, but I observed so much waste that it hurt my morale.  All the talk about "work smarter, not harder" was hard to apply inside that work culture.  Mechanical tasks were being done by humans, and manual tasks often had to be repeated several times.  I remember spending days working on items that our business partners were never able to benefit from.

Logically, if we eliminate waste, all that will be left will be tactical and strategic tasks that have a direct impact on the business.  In software, what things could be waste?  Manual tasks:  database migrations, build delivery, pre-production software installations, manual refactoring (without the aid of a smart IDE), typing code instead of generating it, reporting status, slow communication, etc.

Database migration

This screams to be automated.  Perhaps there are some testing databases with realistic data preloaded, and suppose these are used for reviewing an incremental release with stakeholders.  After the stakeholders finish reviewing the current build, they will have changed the data in the database, and over time it won't be so realistic.  For every build review, it'd be nice to have that realistic database back, so we restore it from backup, detach/attach, etc., to get a fresh database for the stakeholders.  The key is to not spend human time on such a task.  This task is repeated every one to two weeks, and human time is often the most expensive part of software development.  A quick batch script could easily automate the refresh of this database and free up human time for more critical thinking.
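For instance, here is a minimal sketch of such a refresh, written in C# (a batch file calling osql would do the same job; the server, database, and backup path are invented for the example):

    using System.Data.SqlClient;

    public class DemoDatabaseRefresher
    {
        public static void Refresh()
        {
            // Connect to master so we can restore over the demo database.
            using (SqlConnection connection = new SqlConnection(
                "Server=DEMOSERVER;Database=master;Integrated Security=SSPI"))
            {
                connection.Open();
                string restore =
                    @"RESTORE DATABASE DemoDb " +
                    @"FROM DISK = 'C:\backups\DemoDb_realistic.bak' WITH REPLACE";
                using (SqlCommand command = new SqlCommand(restore, connection))
                {
                    command.CommandTimeout = 600;   // restores can take a while
                    command.ExecuteNonQuery();
                }
            }
        }
    }

Run it before each build review, and the stakeholders always get the realistic data back without anyone burning an afternoon on it.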

Build delivery

To demonstrate the incremental build, how do we install it?  Who builds it?  Does a "build master" build it in release mode?  Why should a human have anything to do with this mundane task?  CCNet and NAnt are more than capable of building and delivering the software package in a zip file.  Extract the zip file on the demo machine and run.  Again, this type of activity is not worthy of human attention.  Make the machine work. 

Pre-production software installations

All software is different.  Some products have client components, server components, and distributed components.  Mature software teams have environments set up for testing, and these environments are for testing an incremental build.  How does the incremental build get installed?  If there are multiple servers with distributed services, who sets it all up?  I don't mean to sound like a broken record, but this task doesn't require critical thinking.  Leave it to the machine to deploy to the testing environment.  The task that requires some thought is putting together the deployment script for the machine to run.  Invest some time in an install script using NAnt, MSBuild, or good old DOS commands, and then you can turn it over to a machine to reliably perform over and over again.  In fact, wouldn't your testers appreciate a command they can run any time they are ready for the next build?  Why not have the next build in 2 minutes rather than scheduling an environment refresh?

Manual refactoring

If you've read any of the other posts on my blog, you see that I'm a fan of tools.  I especially love Resharper because of all the time it saves me.  I remember not using it, too.  I remember renaming a public property and then using CTRL+SHIFT+F to do a solution-wide string search for the property.  For a popular property, this might take a few minutes.  With Resharper, it is sub-second.  That's right.  No more search and replace.  Looking back, why did it take me so long to demand a better tool?  What about pulling a method from a concrete class up to an interface?  I'd never do it manually now when a tool can do it with a few keystrokes.  Again, it's trading human time for cheaper (and faster) machine time. 

Typing code instead of generating it

I'm not talking about software generators.  I'm talking about micro-generation.  If I need a class with 3 fields, a constructor, and some properties, I can type every character, and I have in the past.  It is much quicker to let a tool do it for me.  Resharper and CodeRush both use micro-generation to throw in standard constructors and properties, and they do standard code completion too.  In fact, I let Resharper name my variables for me.  It guesses so well that I have very descriptive variable names after hitting only four keys.
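For illustration, this is the sort of class I mean (the names are invented): three fields, a constructor, and some properties, all of it mechanical, and all of it something a tool can stamp out in seconds:

    public class Contact
    {
        private string _firstName;
        private string _lastName;
        private string _email;

        public Contact(string firstName, string lastName, string email)
        {
            _firstName = firstName;
            _lastName = lastName;
            _email = email;
        }

        public string FirstName
        {
            get { return _firstName; }
            set { _firstName = value; }
        }

        public string LastName
        {
            get { return _lastName; }
            set { _lastName = value; }
        }

        public string Email
        {
            get { return _email; }
            set { _email = value; }
        }
    }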

Reporting status

This can take quite a bit of time.  Often a stakeholder or project manager interrupts developers to inquire about status.  There is no need for this.  The software team already tracks status somewhere, whether in an Excel spreadsheet, on a whiteboard, or on a story wall.  Wherever status is available, just make it more broadly available.  Welcome your stakeholders to take frequent looks at it.  There is no need for in-person interruption just for status. 

Slow communication

Manual gathering of status is a form of slow communication.  I'll throw out a tip on how to slow down communication if it happens too quickly at your company. <tongueInCheek>Give every member of the software team their own office and make sure all conference rooms are scarce resources and hard to book.  In fact, locate members of the software team in different parts of the building, or maybe in a different time zone.  That should slow down communication sufficiently.</tongueInCheek> Slow communication will slow the production of software.  This is a form of waste.  Waiting for the answer to a question is wasteful.  To eliminate this, locate all members of the team in the same room without physical barriers.  Product managers too.  This will foster instant communication.

Conclusion

Eliminating waste is key to a productive team.  Identifying waste takes some critical thought, though.  Some teams are so busy with wasteful tasks that they can’t slow down to think about remedies.

[tags: software, lean, eliminatewaste, tools, productivity]

Review James Shore’s new book “The Art of Agile Development” online – level 200

James Shore is taking lessons learned and teaming up to write a book called “The Art of Agile Development”.  He’s posting sections on his website for review.  You can read a section and post what you think to his Yahoo group.  I’ve followed James’ blog for quite a while, and his normal diary has been very useful to me.  He has a great deal of insight into how to manage a software team.

I’d recommend this reading for anyone involved in a software project.

Microsoft republishes Guidelines for Test-Driven Development – level 200

If you remember the Test-Driven Development article that appeared on MSDN in October, you remember the hubbub its inaccuracies caused, and how it was soon pulled from the web.

Microsoft has published new Guidelines for Test-Driven Development.  Read the article to see why I think they got it right this time. 🙂

Many thanks to Paul Schafer and Rob Caron.

Tech Ed 2006: I’ll be facilitating BoF “Agile Development with .Net” – level 000

If you are going to Tech Ed 2006:

  • Come to Party with Palermo
  • Stop by my Birds of a Feather session on "Agile Development with .Net"

The session will be on Tuesday at 7:45 PM.

This BoF will not seek to convince people of the merits of Agile: there are plenty of print resources that weigh in on that topic.  It will focus on what has been working and what hasn't for real teams practicing some or many of the Agile practices.  I personally will have some good success stories as well as a few "if we had only. . ." stories.

I hope you will come and share what has/has not been working for your team.

Integration testing demonstrated – level 200

In this post, I'll talk about and demonstrate integration testing.  If you are just starting out with integration testing, you want to test small before you test big.  Full-system tests are good, but if they fail, they don't give much of a hint as to where the failure is.  Smaller integration tests will help narrow the area where the failure lies.

 

A few rules to live by

  • An integration test must be isolated in setup and teardown.  If it requires some data to be in a database, it must put it there.  Environmental variables should not cause the test to fail randomly.
  • It must also run fast.  If it is slow, build time will suffer, and you will run fewer builds – leading to other problems.
  • Integration tests should be order-independent.  It should not matter what order you run them in; they should all pass.
  • Feel free to make up rules that objectively result in fewer bugs.

 

Testing a custom SiteMapProvider

In my example, I have a custom SiteMapProvider (PageInfoSiteMapProvider).  This site map provider gets its data from inside my EZWeb application, specifically through the IPageConfigProvider interface.  I use StructureMap for service location, so one of the things the integration test will validate is that my interface implementations can be resolved correctly.  I'm going to focus on an integration test for one method on the site map provider, FindSiteMapNode(url).

 

Here is the constructor and method on my custom site map provider:

        public PageInfoSiteMapProvider()
        {
            _provider = (IPageConfigProvider) ObjectFactory.GetInstance(typeof (IPageConfigProvider));
            ICurrentContext context = (ICurrentContext) ObjectFactory.GetInstance(typeof (ICurrentContext));
            IConfigurationSource config = (IConfigurationSource) ObjectFactory.GetInstance(typeof (IConfigurationSource));

            _applicationPath = context.ApplicationPath;
            _defaultPage = config.DefaultPage;
        }

        public override SiteMapNode FindSiteMapNode(string rawUrl)
        {
            VirtualPath path = new VirtualPath(rawUrl, _applicationPath, _defaultPage);
            PageInfo config = _provider.GetPageConfig(path);
            SiteMapNode node = MakeNodeFromPageInfo(config);
            return node;
        }

 

This is simple enough.  The site map provider is just a wrapper around the interface call.  Notice in the constructor that I'm using a call to StructureMap's ObjectFactory class to resolve the interfaces that I need.  I need the current HttpContext and some stuff in the web.config file.  Obviously, in my integration test I don't have the ASP.NET runtime, and I don't have the web.config file, so I'll need to simulate things (mock, stub, fake, whatever you want to call it).  In my integration test, I'm going to have to use fake implementations of these interfaces.

Here is the integration test fixture that tests this provider all the way down to the point where it reads the data from the xml file on disk.  I've chosen this scope because it's not too large, and it's not too small.

    [TestFixture]
    public class PageInfoSiteMapProviderTester
    {
        [SetUp]
        public void Setup()
        {
            IConfigurationSource source = new TestingConfigurationSource();
            ICurrentContext context = new TestingCurrentContext(source);

            ObjectFactory.InjectStub(typeof (IConfigurationSource), source);
            ObjectFactory.InjectStub(typeof (ICurrentContext), context);
        }

        [Test]
        public void ShouldGetRootSiteMapNode()
        {
            string xmlFileContext = @"<?xml version=""1.0"" encoding=""utf-16""?>
                <PageInfo xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
                  <VirtualPath>/</VirtualPath>
                  <Title>My home page</Title>
                  <Template />
                  <Theme />
                  <Plugin />
                  <HasParent>false</HasParent>
                  <Children />
                  <Links />
                  <Editors />
                </PageInfo>";

            string fileName = FilePageConfigProvider._fileName;
            string fileSubDir = FilePageConfigProvider._filesDir;

            string fileDirectory = Environment.CurrentDirectory + @"\" + fileSubDir;
            string path = fileDirectory + @"\" + fileName;

            FileHelper helper = new FileHelper();
            helper.SaveFileContents(path, xmlFileContext);

            PageInfoSiteMapProvider provider = new PageInfoSiteMapProvider();
            SiteMapNode node = provider.FindSiteMapNode("/mywebapplication/default.aspx");

            Assert.AreEqual("My home page", node.Title);
            Assert.AreEqual("/myWebApplication/Default.aspx", node.Url);
            Assert.AreEqual("/", node.Key);
        }
    }

 

This is my entire integration test to make sure that the SiteMapNode object is put together correctly.  Notice that I set up data with the xml file.  Then I call the provider and assert on the results.

 

Faking the "cruise missile" with StructureMap

If my code were in charge of launching a cruise missile correctly, then I would have to fake out the cruise missile when testing my code.  In fact, my code might not ever run in its true environment until the world plunged into war again.  On a smaller scale, I have to fake the two interfaces that are not reproducible in my test context.  Direct your attention to the Setup method.  You'll notice that I'm creating two fake instances and instructing StructureMap to "InjectStub".  Because of this, when my code asks for an instance of one of those interfaces, the class I created in Setup will be returned.

 

Here are the stub classes:

    public class TestingCurrentContext : CurrentContext
    {
        public TestingCurrentContext(IConfigurationSource source) : base(source) { }

        public override string MapPath(string path)
        {
            string executingDir = Environment.CurrentDirectory;
            string combinedPath = executingDir + @"\" + path;
            return combinedPath;
        }

        public override string ApplicationPath
        {
            get { return "/myWebApplication"; }
        }
    }

    public class TestingConfigurationSource : ConfigurationSource
    {
        public override string ClientFilesPath
        {
            get { return @""; }
        }

        public override string DefaultPage
        {
            get { return "Default.aspx"; }
        }
    }

 

These classes are simple enough.  They merely return expected environmental settings so that I can test my code.

 

Integration testing is hard at first

If your style of coding isn't loosely coupled, then it will be difficult to do automated integration testing.  Seams must exist in the code where fake implementations can be inserted.  In my case, I had a web.config file and the HttpContext of the web application getting in the way, so I stub these out for my test.  My tool list for integration testing is:

  • NUnit for organizing and running the tests.
  • StructureMap for resolving interfaces and for stubbing interfaces at hard boundaries.

 

My way or the highway

Just kidding.  I'm not saying that this is the "golden" way of doing integration testing.  There are many ways, and that's why software engineering is engineering and not assembly-line work.  The above approach has worked well for me, and I put interfaces where they make sense (everywhere) in my code to maintain flexibility.  I'm open for critique or questions.

StructureMap 1.0 released – Don’t create a loosely coupled system without it – level 200

StructureMap is very easy to use, and it makes creating loosely coupled OO systems a breeze (OK, that's exaggerating; it still requires engineering).  I use it in every project I work on, and I like it better than Spring.Net.  Spring needs Xml configuration for every mapping between an interface and a class, which causes the Xml file to grow and grow and grow.  StructureMap uses Attributes to tag the interface and the class definition; at runtime, it hooks them up.  This type of metadata is much easier to manage.
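As a rough sketch of that attribute-based wiring (the interface and class here are invented, and the exact attribute usage may differ by version, so check the StructureMap documentation):

    using StructureMap;

    // Tag the interface with the key of its default concrete type...
    [PluginFamily("Default")]
    public interface IEmailSender
    {
        void Send(string to, string subject, string body);
    }

    // ...and tag the implementation with the matching key.
    [Pluggable("Default")]
    public class SmtpEmailSender : IEmailSender
    {
        public void Send(string to, string subject, string body)
        {
            // real SMTP work would go here
        }
    }

    public class EmailExample
    {
        public static void Run()
        {
            // At runtime, StructureMap matches the attributes up.
            IEmailSender sender =
                (IEmailSender) ObjectFactory.GetInstance(typeof (IEmailSender));
            sender.Send("someone@example.com", "hi", "hello");
        }
    }

No Xml entry needed; adding a new implementation is a matter of tagging the new class.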

Jeremy Miller recently released version 1.0 of StructureMap.  I was pleased he included a feature I explicitly asked for.  If you need a port to .Net 2.0, let me know.  I've done it.

Unit testing demonstrated – level 300

I use NUnit for my automated tests.  Because of that, all my tests are unit tests, right?

WRONG!  The name of the testing framework has no bearing on the type of test you have.  NUnit is a framework for running automated tests.  You _can_ write unit tests with it, but you can also write integration tests as well as full-system tests with it.  A unit test is a special type of developer test and can be done with or without NUnit.
 

 

A unit test tests a single unit of code.

How big is a unit?  Well, that's up to you, and there is no scientific answer.  Typically, you only give a class a single responsibility.  The class may have several methods, since the class may need to do several things to accomplish that single responsibility, and it may have to collaborate with several other classes to accomplish its purpose.  A unit of code is an identifiable chunk of code needed to accomplish part of a responsibility.  Is that vague enough for you?  In my example below, I'll clear this up a bit.

 

A unit test isolates the code being tested.

A class will need to talk to other classes.  That's a given.  Sometimes this is OK for testing, and sometimes it just gets in the way.  It might be OK to talk to a class that just builds a string (like StringBuilder), but it's not OK to talk to a class that grabs information from a configuration file.  In a unit test, you need to take environmental dependencies out of the equation so that a pass or failure truly depends on the code being tested.  You don't want the test failing because the configuration file wasn't in the right spot.  There are plenty of techniques available for this.  To start, you need to code against interfaces and use fake objects like stubs and mocks.  I like the Rhino Mocks framework for this.

 

Here are some dependencies that frequently need to be simulated for unit testing (a sketch of the technique, using the system clock, follows this list):

  • Config files
  • Registry values
  • Databases
  • Environment variables
  • Machine name
  • System clock (Yup.  Even that has the potential to get in your way.)
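The system clock is a good illustration of the pattern that applies to every item on that list: put an interface in front of the dependency so a test can substitute a fake.  Here is a minimal sketch, with names of my own invention rather than from any library:

    using System;

    // The seam: production code asks this interface for the time
    // instead of calling DateTime.Now directly.
    public interface ISystemClock
    {
        DateTime Now { get; }
    }

    // The real implementation used when the application runs.
    public class RealSystemClock : ISystemClock
    {
        public DateTime Now
        {
            get { return DateTime.Now; }
        }
    }

    // A test can freeze time at whatever moment exercises the logic.
    public class FakeSystemClock : ISystemClock
    {
        private DateTime _frozen;

        public FakeSystemClock(DateTime frozen)
        {
            _frozen = frozen;
        }

        public DateTime Now
        {
            get { return _frozen; }
        }
    }

Any class that makes a decision based on the current time takes an ISystemClock through its constructor, and the unit test hands in the fake.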

 

Example:

This example shows a real web user control that I've unit-tested.  This is not theoretical; this is inside my EZWeb software.  The purpose of the following screen is to maintain a few pieces of information for the page being viewed.  The user can set the title of the page and some other things.

I've used the Model-View-Presenter pattern to make unit testing this easier.  Obviously, if all my code is in the code-behind of the ASCX, then I won't be able to test any of it, because I can't run that code outside of the ASP.NET runtime.  If you aren't familiar with the MVP pattern, take some time to read up on it.  The presenter is the controlling class that will be tested.  The code-behind becomes very dumb: it implements my view interface and is responsible for taking information and setting the correct control.  The view is very small, and all the intelligence is in the controlling class (the Presenter).  The presenter is where the bugs will hide, so I'll unit test that class.  The model is represented by an interface, IPageConfig, that you'll see being referenced.

 

The following example shows a unit test of the code that gets data from the model and publishes it to the view.  The textboxes and drop-downs need to be set properly.  This is not the full code; the full code also reacts to the save button being clicked by taking the modified information and saving it.
 

Here is the view interface:

namespace Palermo.EZWeb.UI
{
    public interface IPagePropertiesView
    {
        string Title { get; set; }
        bool HasParent { get; set; }
        string Template { get; set; }
        string Theme { get; set; }
        string Plugin { get; set; }
        string Parameter { get; set; }
        bool IsPostback { get; }
        DictionaryList GetTemplateChoices();
        void SetTemplatesDropDown(DictionaryList list);
        DictionaryList GetThemeChoices();
        void SetThemesDropDown(DictionaryList list);
        DictionaryList GetPluginChoices();
        void SetPluginDropDown(DictionaryList list);
        void EnableTitle(bool enabled);
        void EnableTemplate(bool enabled);
        void EnableTheme(bool enabled);
        void EnablePlugin(bool enabled);
        void EnableParameter(bool enabled);
        void ReloadParent();
    }
}

 

Here is the code-behind that implements the view interface (truncated):

    public partial class PageAdministration : UserControl, IPlugin, IPagePropertiesView
    {
        private PagePropertiesPresenter _presenter;

        public string Title
        {
            get { return txtTitle.Text; }
            set { txtTitle.Text = value; }
        }

        public bool HasParent
        {
            get { return Convert.ToBoolean(ddlHasParent.SelectedValue); }
            set { ddlHasParent.SelectedValue = value.ToString(); }
        }

        public string Template
        {
            get { return ddlTemplate.SelectedValue; }
            set { ddlTemplate.SelectedValue = value; }
        }

. . .

 

Here is the part of the Presenter that we'll be focusing on:

    public class PagePropertiesPresenter
    {
        private readonly IPagePropertiesView _view;
        private ICurrentContext _context;

        public PagePropertiesPresenter(IPagePropertiesView view)
        {
            _view = view;
            _context = (ICurrentContext) ObjectFactory.GetInstance(typeof (ICurrentContext));
        }

        // testing constructor
        public PagePropertiesPresenter(IPagePropertiesView view, ICurrentContext context)
        {
            _view = view;
            _context = context;
        }

        public virtual void LoadConfiguration()
        {
            if (!_view.IsPostback)
            {
                IPageConfig config = _context.GetPageConfig();
                _view.Title = config.Title;
                _view.HasParent = config.HasParent;

                DictionaryList templates = removeBadItems(_view.GetTemplateChoices());
                _view.SetTemplatesDropDown(templates);
                _view.Template = config.Template;

                DictionaryList themes = removeBadItems(_view.GetThemeChoices());
                _view.SetThemesDropDown(themes);
                _view.Theme = config.Theme;

                DictionaryList plugins = removeBadItems(_view.GetPluginChoices());
                _view.SetPluginDropDown(plugins);
                _view.Plugin = config.Plugin;

                _view.Parameter = config.Parameter;
            }
        }

. . .

 

Notice that the LoadConfiguration() method checks for postback (through the view) and then uses the MODEL to set pieces of information on the VIEW.  You may think that this code is boring, but it's essential for the behavior of the screen.

 

Now for the test.  Note that we're simulating the view and the ICurrentContext interface, since these are collaborators.  The ICurrentContext provides the MODEL to the Presenter:

    [TestFixture]
    public class PagePropertiesPresenterTester
    {
        [Test]
        public void ShouldSetAllInformationOnPage()
        {
            MockRepository mocks = new MockRepository();
            IPageConfig mockConfig = (IPageConfig) mocks.CreateMock(typeof (IPageConfig));
            IPagePropertiesView view = (IPagePropertiesView) mocks.CreateMock(typeof (IPagePropertiesView));
            ICurrentContext context = (ICurrentContext) mocks.CreateMock(typeof (ICurrentContext));

            Expect.Call(view.IsPostback).Return(false);
            Expect.Call(context.GetPageConfig()).Return(mockConfig);

            string title = "fake title";
            Expect.Call(mockConfig.Title).Return(title);
            view.Title = title;

            bool hasParent = false;
            Expect.Call(mockConfig.HasParent).Return(hasParent);
            view.HasParent = hasParent;

            string selectedItem = "foo";

            Expect.Call(mockConfig.Template).Return(selectedItem);
            DictionaryList list = new DictionaryList();
            list.Add("first", "first");
            list.Add("Foo", selectedItem);
            list.Add(".svn", ".svn");
            list.Add("_something", "_something");

            Expect.Call(view.GetTemplateChoices()).Return(list);
            view.SetTemplatesDropDown(null);
            LastCall.On(view).Constraints(new PropertyIs("Count", 2)); // items starting with . and _ should be stripped out
            view.Template = selectedItem;

            Expect.Call(mockConfig.Theme).Return(selectedItem);
            Expect.Call(view.GetThemeChoices()).Return(list);
            view.SetThemesDropDown(null);
            LastCall.On(view).Constraints(new PropertyIs("Count", 2));
            view.Theme = selectedItem;

            Expect.Call(mockConfig.Plugin).Return(selectedItem);
            Expect.Call(view.GetPluginChoices()).Return(list);
            view.SetPluginDropDown(null);
            LastCall.On(view).Constraints(new PropertyIs("Count", 2));
            view.Plugin = selectedItem;

            string fakeParameter = "faky";
            Expect.Call(mockConfig.Parameter).Return(fakeParameter);
            view.Parameter = fakeParameter;

            mocks.ReplayAll();

            PagePropertiesPresenter presenter = new PagePropertiesPresenter(view, context);
            presenter.LoadConfiguration();

            mocks.VerifyAll();
        }

. . .

 

Your first thought might be that this unit test method is too long.  It certainly pushes my comfort level as well.  I could have chosen a smaller one for this example, but I chose my largest one instead.  I've seen other unit testing examples that are so trivial that they don't demonstrate much.  In this sample, I chose one of the most difficult things to unit test:  a UI screen.  Notice that I'm using Rhino Mocks to set up my fake objects.  The call to "mocks.VerifyAll()" does a check to ensure that the collaborators were called with the correct input.  After all, my presenter method is in charge of getting information from the MODEL and publishing it to the VIEW.  If you spend some time going over this test, you can see some of the rules the code has to live by.  One of the side effects of the unit test is documentation of the code (developer documentation).  At this point, I can refactor my method knowing that I have this test as a safety net.

 

How do I unit test my legacy code?

Change it.  This screen started out several years ago with all the code in the code-behind class.  It was impossible to test that way.  I had to refactor to the MVP pattern to enable testing.  I had to break some things apart by inserting an interface so that I'd have a seam to break dependencies.  In short, you must refactor your existing code to get it to a point where it is testable.  The reason it's not testable is that it's tightly coupled with its dependencies.  I hope by now that the words "loosely coupled" are recognized as "good" and "tightly coupled" as "bad".  Testable code is loosely coupled.  Loosely coupled code is testable.  And now the big leap:  testable code == good.

Automated testing with .Net – an overview – level 200

Reality

In reality, developers don't like to do much testing.  Developers aren't testers.  We typically write code while making certain assumptions about variables and, knowing the expected behavior of the syntax, we might run it once and call it done.  Typically, bugs hide where the code wasn't rigorously tested, or in paths that weren't tested at all.  I think everyone agrees that without testing, the software will have bugs (and often even after testing).  I heard of a crazy management quote that is very sad: "If you have to test it, you aren't building it right."  I sure am glad that manager wasn't involved with automobile development!  In reality, we need testing.

 

Types of common developer testing

  • Actually running the code written
  • Simple console application or WinForms test harnesses
  • Running the application through the UI locally or on a development server

 

Types of testing

  • Unit testing
  • Integration testing
  • Acceptance testing
  • Load testing
  • Performance testing
  • Security testing
  • Exploratory testing
  • And many more

 

Approaches

  • Do all manual testing, with or without the help of small tools.
    • With this approach, the cost is high because every run of the test requires human time.  For every release, every test case must be repeated.
    • This approach doesn't scale.  When more testing is needed, that directly calls for more human time.  Human time is very expensive compared to computing time.
  • Automate all testing.
    • The cost is low for about 80%, but then goes up.
    • Some types of tests are hard or impossible to automate effectively, such as security, exploratory, and concurrency testing.  This type of testing needs more human attention and isn't easily repeatable.
    • To fully succeed at this approach, testers need to be highly skilled at scripting.
  • A pragmatic approach – automate the testing that is easiest:
    • Unit testing
    • Integration testing
    • Acceptance testing

Unit testing

What is a unit test?  From Wikipedia:  a ". . . unit test is a procedure used to validate that a particular module of source code is working properly".  A unit test is a very particular type of test with some fundamental constraints.  If a test violates a constraint, it ceases to be a unit test.

 

Characteristics of a unit test

  • A unit test isolates and tests a single responsibility.
  • A unit test isolates a failure.
  • A unit test simulates other collaborators using various methods, such as fake objects.
  • A unit test can run and pass in any environment.

When an automated test reaches out to a file, registry key, database, web service, external configuration, environment variable, etc., it becomes an integration test and is polluted with dependency on its environment.  At that point, the binary can no longer be sent to another location with the expectation that the tests continue to pass. 

 

Unit testing is hard.  It requires constant vigilance to ensure application code is loosely coupled.  Because a unit test must test a piece of code in isolation, the code must be designed so that collaborators can be easily separated.  In short, calling out to a static method from inside a constructor is a tight coupling, and this type of code will hinder unit testing.
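The testing constructor on the presenter shown earlier is one cure for this.  In miniature, and with invented names, the difference looks like this:

    public interface IReportService
    {
        string[] FetchReportNames();
    }

    // Stands in for any static lookup: a singleton, a config read, etc.
    public static class GlobalServices
    {
        public static IReportService ReportService;
    }

    // Tightly coupled: the constructor reaches out to a static,
    // so a unit test has no seam to slide a fake through.
    public class TightPresenter
    {
        private IReportService _service;

        public TightPresenter()
        {
            _service = GlobalServices.ReportService;
        }
    }

    // Loosely coupled: the collaborator comes in through the constructor,
    // so a test can pass a stub or mock.
    public class LoosePresenter
    {
        private IReportService _service;

        public LoosePresenter(IReportService service)
        {
            _service = service;
        }
    }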

 

Integration testing

What is integration testing?  An integration test is a developer test that combines several units of code and tests the interaction among them.  A good integration strategy will include tests along all integration boundaries.  If a unit of code is tested at the unit level and then tested during interaction with all collaborators, the developer has confidence that the code will work well when the entire system is assembled.  Integration testing is the second step in a "test small, test medium, test large" strategy.  Larger integration tests are certainly useful, but large tests alone don't help pinpoint a problem when they fail. 

 

Characteristics of a good integration test

  • It only includes a few classes.  It typically focuses on a single class and then includes collaborators.  If a class has 3 collaborators, the test would include 4 classes (the first class and its 3 collaborators).  This test would aim to fake the collaborators twice removed so that the focus of the test can remain narrow.  This is a subjective rule, and the developer should use good judgment regarding how many classes to include in an integration test.
  • It is completely isolated in setup and teardown, and should not be environmentally sensitive.  If the test requires a file to be on the drive, then setup for the test should place the required file there first.  If the test needs data in a database, the setup should insert the required records first.  It should not depend on any environment setting being present before the test is run.  It should run on any developer workstation.
  • It must run fast.  If it's slow, the build time will suffer, and you will run fewer builds.
  • It must be order-independent.  If an integration test causes a subsequent test to fail, that is indicative of a test not owning its own environmental setup.

 

Acceptance testing

What is acceptance testing?  An evaluation of acceptance criteria on the system to ensure the system meets the customers' needs.  Before a piece of software can be written, the software team must have acceptance criteria.  These are often called requirements.  Acceptance testing seeks to execute the requirements on the system for a pass/fail result.

 

Characteristics of a good acceptance test

  • Easily understood by non-developers as well as the customer.
  • Easily created by non-developers as well as the customer.
  • When it passes, the customer is assured that the system behaves as needed.
  • Very expressive, using the jargon of the business domain.
  • Is repeatable.

 

Automated acceptance testing is the final big gain in the suite of automated tests.  The acceptance tests actually define requirements in an objective and executable way.  If a tester finds a bug, he can write an acceptance test that fails in the presence of the bug.  When the bug is fixed, the test will pass.  Acceptance tests become an important part of a regression test suite over time.

 

Stay tuned for more detailed information on the following testing topics:

  • Unit testing
  • Integration testing
  • Acceptance testing

Code analysis from an XP project – level 300

I’ve posted on a retrospective of my team’s current release, and I’ve run a few code analysis numbers to get a baseline trend.

I normally don't do code analysis, since working code is our real goal, but here it is:
I analyzed our latest component, which is part of a larger software product.  This component delivers tremendous business value, and it was developed from scratch by my team using XP methods.  Here are the stats:

  • Statements: 6600
  • Production statements: 2500.  The rest is test code.
  • Number of classes: 141, of which 71 are production classes.  The rest are test classes.
  • 7.5 methods per class.
  • About 5 lines of code per method on average.
  • Maximum cyclomatic complexity of any method: 6.  Average: 1.5.

We have a few methods that are close to 20 lines of code, but the number of those can be counted on one hand.

This release has seen very few bugs.
I don't see any value in using code metrics as a direct measurement of the quality of the code.  The numbers may trend together, but there is no proven causality between the two.  It is, however, interesting to look at the trends from time to time.

  • We ended up with 2 times as much test code as production code.
  • Our classes ended up very small.  Our methods even smaller.
  • Our method cyclomatic complexity averaged between 1 and 2.
  • We ended up with about 5 actual bugs in the release.  This might seem unreal, but I credit all the automated test coverage for this result.

I know some of you will cringe at the thought of writing two times as much test code as production code, but given the results we have achieved, I consider it worth it.