Unit testing demonstrated – level 300

I use NUnit for my automated tests.  Because of that, all my tests are unit tests, right?

WRONG!  The name of the testing framework has no bearing on the type of test you have.  NUnit is a framework for running automated tests.  You _can_ write unit tests with it, but you can also write integration tests as well as full-system tests with it.  A unit test is a special type of developer test and can be done with or without NUnit.
 

 

A unit test tests a single unit of code.

How big is a unit?  Well, that’s up to you, and there is no scientific answer.  Typically, you give a class a single responsibility.  The class may have several methods, since it may need to do several things to accomplish that single responsibility.  The class may have to collaborate with several other classes to accomplish its purpose.  A unit of code is an identifiable chunk of code needed to accomplish part of a responsibility.  Is that vague enough for you?  In my example below, I’ll clear this up a bit.

 

A unit test isolates the code being tested.

A class will need to talk to other classes.  That’s a given.  Sometimes this is OK for testing, and sometimes it just gets in the way.  It might be OK to talk to a class that just builds a string (like StringBuilder), but it’s not OK to talk to a class that grabs information from a configuration file.  In a unit test, you need to take environmental dependencies out of the equation so that a pass or failure is truly dependent on the code being tested.  You don’t want the test failing because the configuration file wasn’t in the right spot.  There are plenty of techniques available for this.  To start, you need to code against interfaces and use fake objects like stubs and mocks.  I like the Rhino Mocks framework for this.

 

Here are some dependencies that will frequently need to be simulated for unit testing:

  • Config files
  • Registry values
  • Databases
  • Environment variables
  • Machine name
  • System clock (Yup.  Even that has the potential to get in your way.)
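For example, the system clock can be pushed behind an interface so a test can pin “now” to a known value.  This is just a sketch; the names here (ISystemClock, FakeClock, Greeter) are my own illustration, not part of NUnit or EZWeb:

```csharp
using System;

// Abstraction over the system clock so tests can control "now".
public interface ISystemClock
{
    DateTime Now { get; }
}

// Production implementation: delegates to the real clock.
public class SystemClock : ISystemClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

// Test fake: always returns the time the test dictates.
public class FakeClock : ISystemClock
{
    private readonly DateTime _fixedTime;

    public FakeClock(DateTime fixedTime)
    {
        _fixedTime = fixedTime;
    }

    public DateTime Now
    {
        get { return _fixedTime; }
    }
}

// Code under test depends on the interface, never on DateTime.Now directly.
public class Greeter
{
    private readonly ISystemClock _clock;

    public Greeter(ISystemClock clock)
    {
        _clock = clock;
    }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}
```

A unit test can now construct Greeter with a FakeClock pinned to any hour and get a deterministic result, no matter when the test actually runs.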

 

Example:

This example will show a real web user control that I’ve unit-tested.  This is not theoretical.  This is inside my EZWeb software.  The purpose of the following screen is to maintain a few pieces of information for the page being viewed.  The user can set the title of the page and some other things.

I’ve used the Model-View-Presenter pattern to make unit testing this easier.  Obviously, if all my code is in the code-behind of the ASCX, then I won’t be able to test any of it, because I can’t run that code outside of the ASP.NET runtime.  If you aren’t familiar with the MVP pattern, take some time to read up on it.  The presenter is the controlling class that will be tested.  The code-behind becomes very dumb.  The code-behind will implement my view interface and be responsible for taking information and setting the correct control.  The view is very small, and all the intelligence is in the controlling class (the presenter).  The presenter is where the bugs will hide, so I’ll unit test that class.  The model is represented by an interface, IPageConfig, that you’ll see being referenced.

 

The following example shows a unit test of the code that gets data from the model and publishes it to the view.  The textboxes and drop-downs need to be set properly.  This is not the full code.  The full code also reacts to the save button being clicked, taking the modified information and saving it.
 

Here is the view interface:

namespace Palermo.EZWeb.UI
{
    public interface IPagePropertiesView
    {
        string Title { get; set; }
        bool HasParent { get; set; }
        string Template { get; set; }
        string Theme { get; set; }
        string Plugin { get; set; }
        string Parameter { get; set; }
        bool IsPostback { get; }

        DictionaryList GetTemplateChoices();
        void SetTemplatesDropDown(DictionaryList list);
        DictionaryList GetThemeChoices();
        void SetThemesDropDown(DictionaryList list);
        DictionaryList GetPluginChoices();
        void SetPluginDropDown(DictionaryList list);

        void EnableTitle(bool enabled);
        void EnableTemplate(bool enabled);
        void EnableTheme(bool enabled);
        void EnablePlugin(bool enabled);
        void EnableParameter(bool enabled);

        void ReloadParent();
    }
}

 

Here is the code-behind that implements the view interface (truncated):

    public partial class PageAdministration : UserControl, IPlugin, IPagePropertiesView
    {
        private PagePropertiesPresenter _presenter;

        public string Title
        {
            get { return txtTitle.Text; }
            set { txtTitle.Text = value; }
        }

        public bool HasParent
        {
            get { return Convert.ToBoolean(ddlHasParent.SelectedValue); }
            set { ddlHasParent.SelectedValue = value.ToString(); }
        }

        public string Template
        {
            get { return ddlTemplate.SelectedValue; }
            set { ddlTemplate.SelectedValue = value; }
        }

. . .

 

Here is the part of the Presenter that we’ll be focusing on:

    public class PagePropertiesPresenter
    {
        private readonly IPagePropertiesView _view;
        private ICurrentContext _context;

        public PagePropertiesPresenter(IPagePropertiesView view)
        {
            _view = view;
            _context = (ICurrentContext)ObjectFactory.GetInstance(typeof(ICurrentContext));
        }

        //testing constructor
        public PagePropertiesPresenter(IPagePropertiesView view, ICurrentContext context)
        {
            _view = view;
            _context = context;
        }

        public virtual void LoadConfiguration()
        {
            if (!_view.IsPostback)
            {
                IPageConfig config = _context.GetPageConfig();
                _view.Title = config.Title;
                _view.HasParent = config.HasParent;

                DictionaryList templates = removeBadItems(_view.GetTemplateChoices());
                _view.SetTemplatesDropDown(templates);
                _view.Template = config.Template;

                DictionaryList themes = removeBadItems(_view.GetThemeChoices());
                _view.SetThemesDropDown(themes);
                _view.Theme = config.Theme;

                DictionaryList plugins = removeBadItems(_view.GetPluginChoices());
                _view.SetPluginDropDown(plugins);
                _view.Plugin = config.Plugin;

                _view.Parameter = config.Parameter;
            }
        }

. . .

 

Notice that the LoadConfiguration() method checks for postback (through the view) and then uses the MODEL to set pieces of information on the VIEW.  You may think that this code is boring, but it’s essential for the behavior of the screen.

 

Now for the test.  Note that we’re simulating the view and the ICurrentContext interface since these are collaborators.  The ICurrentContext provides the MODEL to the Presenter:

    [TestFixture]
    public class PagePropertiesPresenterTester
    {
        [Test]
        public void ShouldSetAllInformationOnPage()
        {
            MockRepository mocks = new MockRepository();
            IPageConfig mockConfig = (IPageConfig)mocks.CreateMock(typeof(IPageConfig));
            IPagePropertiesView view = (IPagePropertiesView)mocks.CreateMock(typeof(IPagePropertiesView));
            ICurrentContext context = (ICurrentContext)mocks.CreateMock(typeof(ICurrentContext));

            Expect.Call(view.IsPostback).Return(false);
            Expect.Call(context.GetPageConfig()).Return(mockConfig);

            string title = "fake title";
            Expect.Call(mockConfig.Title).Return(title);
            view.Title = title;

            bool hasParent = false;
            Expect.Call(mockConfig.HasParent).Return(hasParent);
            view.HasParent = hasParent;

            string selectedItem = "foo";

            Expect.Call(mockConfig.Template).Return(selectedItem);
            DictionaryList list = new DictionaryList();
            list.Add("first", "first");
            list.Add("Foo", selectedItem);
            list.Add(".svn", ".svn");
            list.Add("_something", "_something");

            Expect.Call(view.GetTemplateChoices()).Return(list);
            view.SetTemplatesDropDown(null);
            LastCall.On(view).Constraints(new PropertyIs("Count", 2)); // items beginning with '.' or '_' should be stripped out
            view.Template = selectedItem;

            Expect.Call(mockConfig.Theme).Return(selectedItem);
            Expect.Call(view.GetThemeChoices()).Return(list);
            view.SetThemesDropDown(null);
            LastCall.On(view).Constraints(new PropertyIs("Count", 2));
            view.Theme = selectedItem;

            Expect.Call(mockConfig.Plugin).Return(selectedItem);
            Expect.Call(view.GetPluginChoices()).Return(list);
            view.SetPluginDropDown(null);
            LastCall.On(view).Constraints(new PropertyIs("Count", 2));
            view.Plugin = selectedItem;

            string fakeParameter = "faky";
            Expect.Call(mockConfig.Parameter).Return(fakeParameter);
            view.Parameter = fakeParameter;

            mocks.ReplayAll();

            PagePropertiesPresenter presenter = new PagePropertiesPresenter(view, context);
            presenter.LoadConfiguration();

            mocks.VerifyAll();
        }

. . .

 

Your first thought might be that this unit test method is too long.  It certainly pushes my comfort level as well.  I could have chosen a small one for this example, but I chose my largest one instead.  I’ve seen other unit testing examples that are so trivial that they don’t demonstrate much.  In this sample, I chose one of the most difficult things to unit test:  a UI screen.  Notice that I’m using Rhino Mocks to set up my fake objects.  The call to “mocks.VerifyAll()” does a check to ensure that the collaborators were called with the correct input.  After all, my presenter method is in charge of getting information from the MODEL and publishing it to the VIEW.  If you spend some time going over this test, you can see some of the rules the code has to live by.  One of the side-effects of the unit test is documentation of the code (developer documentation).  At this point, I can refactor my method knowing that I have this test as a safety net.

 

How do I unit test my legacy code?

Change it.  This screen started out several years ago with all the code in the code-behind class.  It was impossible to test this way.  I had to refactor to the MVP pattern to enable testing.  I had to break some things away by inserting an interface so that I’d have a seam to break dependencies.  In short, you must refactor your existing code to get it to a point where it is testable.  The reason it’s not testable is that it’s tightly coupled with its dependencies.  I hope by now that “loosely coupled” is recognized as “good” and “tightly coupled” as “bad”.  Testable code is loosely coupled.  Loosely coupled code is testable.  And now the big leap:  testable code == good.
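As a sketch of the seam technique, here is a class that once read the configuration file directly, reworked so that the dependency sits behind an interface.  ISettings, StubSettings, and TitleProvider are invented names for illustration; only ConfigurationManager is real .NET API:

```csharp
// Before: a method tightly coupled to the configuration file.
//
//     public string GetTitle()
//     {
//         return ConfigurationManager.AppSettings["title"];
//     }

// After: the dependency is pushed behind an interface, which is the seam.
public interface ISettings
{
    string Get(string key);
}

// Production implementation: reads the real config file.
public class AppConfigSettings : ISettings
{
    public string Get(string key)
    {
        return System.Configuration.ConfigurationManager.AppSettings[key];
    }
}

// Test stub: returns a canned value, so no config file needs to exist.
public class StubSettings : ISettings
{
    private readonly string _value;

    public StubSettings(string value)
    {
        _value = value;
    }

    public string Get(string key)
    {
        return _value;
    }
}

// The class under test only knows about the interface.
public class TitleProvider
{
    private readonly ISettings _settings;

    public TitleProvider(ISettings settings)
    {
        _settings = settings;
    }

    public string GetTitle()
    {
        string title = _settings.Get("title");
        return (title == null || title.Length == 0) ? "Untitled" : title;
    }
}
```

A unit test constructs TitleProvider with a StubSettings, so a pass or failure depends only on TitleProvider’s own logic.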

Automated testing with .Net – an overview – level 200

Reality

In reality, developers don’t like to do much testing.  Developers aren’t testers.  We typically write code while making certain assumptions about variables, and, knowing the expected behavior of the syntax, we might run it once and call it done.  Typically, bugs hide where the code wasn’t rigorously tested or in paths that weren’t tested at all.  I think everyone agrees that without testing, the software will have bugs (and often even after testing).  I heard of a crazy management quote that is very sad: “If you have to test it, you aren’t building it right.”  I sure am glad that manager wasn’t involved with automobile development!  In reality, we need testing.

 

Types of common developer testing

  • Actually running the code written
  • Simple console application or WinForms test harnesses
  • Running the application through the UI locally or on a development server

 

Types of testing

  • Unit testing
  • Integration testing
  • Acceptance testing
  • Load testing
  • Performance testing
  • Security testing
  • Exploratory testing
  • And many more

 

Approaches

  • Do all manual testing, with or without the help of small tools.
    • With this approach, the cost is high because every run of the test requires human time.  For every release, every test case must be repeated.
    • This approach doesn’t scale.  When more testing is needed, that directly calls for more human time.  Human time is very expensive compared to computing time.
  • Automate all testing.
    • The cost is low for about 80% of the tests, but then goes up.
    • Some types of tests are hard or impossible to automate effectively, such as security, exploratory, and concurrency testing.  This type of testing needs more human attention and isn’t easily repeatable.
    • To fully succeed at this approach, testers need to be highly skilled at scripting.
  • A pragmatic approach – automate the testing that is easiest.
    • Unit testing
    • Integration testing
    • Acceptance testing

Unit testing

What is a unit test?  From Wikipedia:  a “. . . unit test is a procedure used to validate that a particular module of source code is working properly”.  A unit test is a unique type of test with some fundamental constraints.  If a test violates a constraint, it ceases to be a unit test.

 

Characteristics of a unit test

  • A unit test isolates and tests a single responsibility.
  • A unit test isolates a failure.
  • A unit test simulates other collaborators using various methods such as fake objects.
  • A unit test can run and pass in any environment.

When an automated test reaches out to a file, registry key, database, web service, external configuration, environment variable, etc., it becomes an integration test and is polluted with dependency on its environment.  At that point, the binary can no longer be sent to another location with the expectation that the tests continue to pass.

 

Unit testing is hard.  It requires constant vigilance to ensure application code is loosely coupled.  Because a unit test must test a piece of code in isolation, the code must be designed so that collaborators can be easily separated.  In short, calling out to a static method from inside a constructor is tight coupling.  This type of code will hinder unit testing.
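Here is a sketch of that coupling and the remedy.  IReportStore and ReportController are invented names; the commented-out constructor shows the service-locator style seen in the presenter example earlier:

```csharp
// Tightly coupled: the constructor reaches out to a static service locator,
// so a test has no way to substitute the collaborator:
//
//     public ReportController()
//     {
//         _store = (IReportStore)ObjectFactory.GetInstance(typeof(IReportStore));
//     }

public interface IReportStore
{
    int CountReports();
}

public class ReportController
{
    private readonly IReportStore _store;

    // Loosely coupled: the collaborator is handed in, so a test can pass a fake.
    public ReportController(IReportStore store)
    {
        _store = store;
    }

    public bool HasReports()
    {
        return _store.CountReports() > 0;
    }
}
```

With the injected constructor, a unit test can pass in a fake IReportStore and exercise ReportController in complete isolation.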

 

Integration testing

What is integration testing?  An integration test is a developer test that combines several units of code and tests the interaction among them.  A good integration strategy will include tests along all integration boundaries.  If a unit of code is tested at the unit level and then tested during interaction with all collaborators, the developer has confidence that the code will work well when the entire system is assembled.  Integration testing is the second step in a “test small, test medium, test large” strategy.  Larger integration tests are certainly useful, but large tests alone don’t help to pinpoint a problem if they fail.

 

Characteristics of a good integration test

  • It only includes a few classes.  It typically focuses on a single class and then includes collaborators.  If a class has 3 collaborators, the test would include 4 classes (the first class and its 3 collaborators).  This test would aim to fake the collaborators twice removed so that the focus of the test can remain narrow.  This is a subjective rule, and the developer should use good judgment regarding how many classes to include in an integration test.
  • It is completely isolated in setup and teardown.  It should not be environmentally sensitive.  If the test requires a file to be on the drive, then setup for the test should place the required file there first.  If the test needs data in a database, the setup should insert the required records first.  It should not depend on any environment setting being present before the test is run.  It should run on any developer workstation.
  • It must run fast.  If it’s slow, the build time will suffer, and you will run fewer builds.
  • It must be order-independent.  If an integration test causes a subsequent test to fail, it is indicative of a test not owning its own environmental setup.
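Here is a sketch of an integration test that owns its setup and teardown, written with NUnit.  The fixture, file name, and scenario are invented for illustration:

```csharp
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ImportFileTester
{
    private string _path;

    [SetUp]
    public void SetUp()
    {
        // The test owns its environment: it creates the file it needs.
        _path = Path.Combine(Path.GetTempPath(), "import-test.txt");
        File.WriteAllText(_path, "alpha,beta");
    }

    [TearDown]
    public void TearDown()
    {
        // Clean up so no subsequent test depends on leftover state.
        if (File.Exists(_path))
        {
            File.Delete(_path);
        }
    }

    [Test]
    public void ShouldReadBothFields()
    {
        string[] fields = File.ReadAllText(_path).Split(',');
        Assert.AreEqual(2, fields.Length);
    }
}
```

Because setup writes the file and teardown deletes it, this test runs on any workstation, in any order, with no environment prepared beforehand.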

 

Acceptance testing

What is acceptance testing?  An evaluation of acceptance criteria on the system to ensure the system meets the customers’ needs.  Before a piece of software can be written, the software team must have acceptance criteria.  These are often called requirements.  Acceptance testing seeks to execute the requirements on the system for a pass/fail result.

 

Characteristics of a good acceptance test

  • Easily understood by non-developers as well as the customer.
  • Easily created by non-developers as well as the customer.
  • When it passes, the customer is assured that the system behaves as needed.
  • Very expressive, using the jargon of the business domain.
  • Is repeatable.

 

Automated acceptance testing is the final big gain in the suite of automated tests.  The acceptance tests actually define requirements in an objective and executable way.  If a tester finds a bug, he can write an acceptance test that fails in the presence of the bug.  When the bug is fixed, the test will pass.  Acceptance tests become an important part of a regression test suite over time.

 

Stay tuned for more detailed information on the following testing topics:

  • Unit testing
  • Integration testing
  • Acceptance testing

Writing software is too EASY these days – level 300

Writing software is too easy these days.  Since it’s so easy, there is a bunch of bad software out there.  It’s easy to write bad software.  It’s hard to write good software.

 

Everyone needs software these days, from the home consumer to the large enterprise.  For the home consumer, we have shrink-wrap software developed by large teams, such as Microsoft Office, Quicken, etc.  As a business grows, however, it’s more likely to hire or contract some programmers to write custom software to drive efficiency in the business.

 

This is where the “software is too hard/easy” argument comes into play.  Some complain that software is too hard.

 

Paul Reedman in 2004 blogged how he thinks software development is still too hard.  Paul focuses on Java technologies.  He complains that research has focused on the language as the way to solve problems instead of the tools.

 

Steve at furrygoat.com ponders whether writing software is too hard because it’s a little difficult to learn how and it takes some effort.

 

Rocky Lhotka declares that software is too hard.  He laments that software developers have to worry about similar tasks from project to project (plumbing).  In this article, the tools are to blame because they provide too much flexibility (require too many decisions), whereas a good tool would reduce the creation of software to merely configuration (my interpretation).

 

I declare that writing software is too EASY.  For the semi-technical folks that need to make something to help out the business, we have tools like MS Access, Excel, and even Word macros.  You can do some pretty cool stuff with MS Access using linked tables to SQL Server!

 

When it comes to custom software, it requires a properly skilled person.  Software is a craft.  It borders on engineering and overlaps with it as well, but it still has some maturing to do.  An unskilled person is dangerous when attempting custom software.  There are plenty of products a company can buy and then let the semi-skilled office worker configure, but custom software is just that: custom.  You wouldn’t have a custom chopper built by someone who wasn’t very good, would you?

 

Tools today make custom software too easy to develop.

This statement may seem controversial, but I believe it.  I’ve seen too many unskilled people take a programming tool and throw together something that seems to work initially.  The business folks can’t tell a fraud from an expert or evaluate the work, so they use it and then feel the pain later when they actually have to maintain it.

 

Software isn’t a toy.  You don’t “play around” with writing software.  Companies rely heavily on the custom software they pay for.  My company could be sued out of business if the software screwed up.  Software is serious, from bank transactions to supply lines to taking orders on the web.  The business relies on the software.  If it’s buggy, the company could be losing money and not even know it.  If it was developed by an unskilled person, it’ll be hell to maintain, and it’ll suck money out of the company later.

 

There’s been some talk on different types of “programmers” – Mort/Elvis/Einstein (most recently by Scott Bellware, and seconded by Sam Gentile).  The thought is that these types of programmers aren’t valid anymore and that there needs to be one type: the good type.

 

The only type of programmer should be the good type.

This is what it comes down to.  Companies should not trust an unskilled person to be a programmer for them.  Ultimately it’s the manager’s decision, but it is also his responsibility.  A manager should know what a good programmer is.  That is a big problem in a lot of companies.  The management has no way to know if a programmer is good or not.  What happens is that an unskilled person throws a piece of crap together that might work initially, but after that person leaves and another has to maintain the application, it is revealed that it makes no sense how the ball of mess keeps working.  At that point, management couldn’t bribe a good developer to maintain it – and management has to pay for the software to be rewritten.  What a waste.  This happens all too often, though.  How many times have we been on a rewrite project?

 

If you have to rewrite a piece of software from scratch, take the time to learn from past mistakes.

If you’ve ever been on a rewrite project, you’ve heard project managers belabor the faults of the previous edition and claim that “we’re going to do it right this time”.  What fails to happen is a critical retrospection of the method of development of the initial package and of what caused the software to go to hell.  This step doesn’t happen, and the rewrite occurs using the same methods.  To then expect a different result would be insanity.

 

It’s too easy.

It’s too easy for an unskilled person to throw a screen together and deploy it.  It’s too easy for Joe Blow to create a database application that pulls entire tables over to the client to modify one record (but it works – initially).  It’s too easy for a newbie to get excited about a new technology and completely screw up an application with web service calls to itself and overdone XML sent to SQL Server 2000.  It’s too easy for a database guy to throw tons of business logic into stored procedures, call them from ASP, and call it an application (until a skilled programmer looks at it later and has a heart attack).

 

If software is to be easy, then it must also be disposable.

That’s right.  If an unskilled person throws together a piece of crap that is then used for a while by a business, the management must know that if they need to change it and the original author isn’t there, they’ll have to pay for a total rewrite.  As in the car insurance business, if the cost to repair a vehicle is greater than the cost to replace it, then the car is totaled.

 

Apprentice, Journeyman, Master

These are time-tested levels of skill in many disciplines.  An apprentice is not trusted to work alone because the apprentice is unskilled.  The apprentice could have the best of intentions and know exactly what the customer wants, but he’ll make a mess of things if left to work alone.  Not all apprentices make it to Journeyman (where they are trusted to produce good work), and not all Journeymen make it to Master (where they are given the responsibility of ensuring quality and training the others).

 

I’m not the only one who thinks software is too easy.

Code analysis from an XP project – level 300

I’ve posted a retrospective of my team’s current release, and I’ve run a few code analysis numbers to get a baseline trend.

I normally don’t do code analysis, since working code is our real goal, but here it is.  I analyzed our latest component, which is part of a larger software product.  This component delivers tremendous business value, and it was developed from scratch on my team using XP methods.  Here are the stats:

  • Statements: 6600
  • Production statements: 2500.  The rest is test code.
  • Number of classes: 141, of which 71 are production classes.  The rest are test classes.
  • 7.5 methods per class.
  • About 5 lines of code per method on average.
  • Maximum cyclomatic complexity of any method: 6.  The average is 1.5.

We have a few methods that are close to 20 lines of code, but the number of those can be counted on one hand.  This release has seen very few bugs.

I don’t see any value in using code metrics as a direct measurement of the quality of the code.  There may be a correlation, but there is no causality between the two.  It is, however, interesting to look at the trends from time to time.

  • We ended up with two times as much test code as production code.
  • Our classes ended up very small.  Our methods even smaller.
  • Our method cyclomatic complexity averaged between 1 and 2.
  • We ended up with about 5 actual bugs in the release.  This might seem unreal, but I credit all the automated test coverage for this result.

I know some of you will cringe at the thought of writing two times as much test code as production code, but given the results we have achieved, I consider it worth it.

TDD makes refactoring easy – level 300

Here is the main reasoning for this:  TDD states that a unit test is written before a unit of code.  Each unit of code will have a unit test.  When a unit of code needs to be refactored (changed in structure without affecting behavior), the unit test will preserve the behavior of the code.  The unit can be refactored quickly, and the unit tests will assert that the code still behaves as originally intended.
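As a minimal sketch of that safety net (the class and figures are invented for illustration), a pinned-down behavior looks like this:

```csharp
using NUnit.Framework;

// A small class whose behavior is pinned down by a unit test.
public class PriceCalculator
{
    public decimal Total(decimal[] prices)
    {
        decimal total = 0;
        foreach (decimal price in prices)
        {
            total += price;
        }
        return total;
    }
}

[TestFixture]
public class PriceCalculatorTester
{
    [Test]
    public void ShouldSumAllPrices()
    {
        PriceCalculator calculator = new PriceCalculator();
        Assert.AreEqual(6, calculator.Total(new decimal[] { 1, 2, 3 }));
    }
}
```

The body of Total can now be restructured any way I like; if the behavior drifts, ShouldSumAllPrices fails immediately and points straight at the regression.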

Consider refactoring without unit tests or with only integration tests.  First, without tests.  You refactor a piece of code that doesn’t have a test.  Now, to ensure that you didn’t break anything, you have to run your application with some scenarios that are sure to exercise the changed code.  This is more time-consuming than merely running the unit tests for the unit in question.

Suppose you have _some_ tests for the unit, but they aren’t unit tests.  The tests involve several other pieces of code as well.  If all is well, the tests will still pass, but if they fail, you will have to debug into the test to find out what’s really going on, since the test has a larger scope.  A unit test’s scope is only that of the code unit, and if the test fails, it pinpoints the area of failure.

TDD reveals poor class design – level 300

If a class is well-designed, it will be easy to unit test.  Good design speeds up unit testing, and TDD reveals bad design.  In this way, TDD speeds up unit testing.

For a very simple example, consider a controller class that takes a domain object and saves it to the database using a data access class.  When test-driving this code, it will be impossible to create a unit test if the class is poorly designed.  Consider this poor design:  the SaveXX() method instantiates the data access class and passes the domain object to it for saving.  With TDD it would be impossible to write a unit test for that, because you can’t fake the data access class, yet you must test the controller class apart from its dependency on the data access layer.

Because of the major roadblock in writing a test for poorly designed code, TDD forces the developer to come up with a better design.  This better design will be a cinch to unit test because it will be loosely coupled.  Loose coupling is essential for unit testing.

To write a test for this controller class, I’ll have to define an interface for this portion of the data access layer.  I’ll inject this interface into the constructor of the controller class and call it in my SaveXX() method.  With this design, I can fake the interface in my test and assert that it was called with the correct object.  This will be my test.  Next, I’ll write the code that makes the test pass.
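A sketch of that design, with hypothetical names (Customer, ICustomerRepository, CustomerController) standing in for the real classes:

```csharp
// Hypothetical names: ICustomerRepository is the data access seam,
// CustomerController is the controller under test.
public class Customer
{
    public string Name;
}

public interface ICustomerRepository
{
    void Save(Customer customer);
}

public class CustomerController
{
    private readonly ICustomerRepository _repository;

    public CustomerController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public void SaveCustomer(Customer customer)
    {
        // Delegate to the injected interface; no direct database dependency.
        _repository.Save(customer);
    }
}

// Hand-rolled fake: records the call so the test can assert on it.
public class FakeCustomerRepository : ICustomerRepository
{
    public Customer Saved;

    public void Save(Customer customer)
    {
        Saved = customer;
    }
}
```

The test constructs CustomerController with a FakeCustomerRepository, calls SaveCustomer, and asserts that the fake received the same Customer object; no database is involved.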

Poorly designed code is hard to test, and thinking about testability will force a more thoughtful design.  I would even go as far as to say that, at the class level, a testable class is a well-designed class.

TDD speeds up unit testing – level 300

Before reading this post, I recommend reading a previous post of mine along with the comments and trackbacks to related posts.

First, you may wonder why I’m contending that TDD speeds up unit testing if TDD itself stresses unit testing so much.  One can unit test without employing TDD.  I can write some code and then write a unit test for it.  If I use this method, I’m likely to have a hard time unit testing the code, because it may call out to a database or the file system or a web service or some other object that is difficult to create in isolation.  If I think about how I’m going to test the code as I’m writing it, I’ll end up with code that’s easier to test, because I’ll depend on interfaces and employ other techniques to ensure my code is loosely coupled.

Because my goal was a piece of code for which it was easy to write a unit test, I’ll end up with code designed for that.  At this point, I’m blurring the lines between just unit testing and TDD.  I didn’t write my unit test first, but I _thought_ about how I would write my unit test first.  I imagined the method of testing before writing the code.  From this point, it’s a snap to then actually write the test that’s floating around in my head before coding the production code.

From a pragmatic standpoint, it probably doesn’t matter which code is actually typed first if you’ve already decided on the unit test in your head.  You’ve written the unit test in your head before the production code.  The typing is then just semantics.

If you aren’t thinking about unit testing at all (if you don’t currently do unit testing), then this whole topic is worthless for you, but if you are attempting to write unit tests after you design your code, I’m sure you can relate that it’s often difficult because you have to pull in other classes just to test a single class.  If this happens, then it’s not a unit test at all.  It’s an integration test because it tests multiple things.  The difference between unit testing and integration testing is a topic for another post, but the difference is very important, and if it is not understood, then arguments for/against TDD don’t make much sense at all.

The above is how TDD speeds up writing unit tests (if you have already been doing unit testing).

If the goal is code with unit tests, then the code will be designed for easy unit testing.  To design for easy unit testing, a developer has to think about the test first.  This leads to mental TDD, and after the code is designed, the unit test can be written quickly.  The next step would naturally be to just go ahead and type the test first instead of keeping it in your brain.  If at this point you prefer to continue to type the test after the production code, go ahead.  You will have benefited some by focusing more on how to test the code.

How to design a single method – level 200

What?  I know how to design a single method!  What can
Jeffrey Palermo possibly have to tell me about how to design a single
method?

Take a look at your product’s code base.  Are any of the following true?

    * You have a method that you can’t view without scrolling.
    * You have a method that takes 10 arguments.
    * You have a method that not only returns a result but causes disastrous things to happen if called more than once.
    * You have a method that throws exceptions if you don’t set certain properties first.
    * You have a method that has 5 nested foreach/if/try-catch code blocks.
    * You have a method whose name doesn’t give you a clue as to what it does.
    * You have a method that will not throw an exception because it swallows the exception without handling it.

I could go on and on (ok, actually I’m out of ideas right now), but
chances are that you can find a method that meets one of the criteria
above.  Find one of those methods, and fix it.  Here are general
rules I try to live by when designing a single method:

    * The name of the method should describe what the method actually does.
    * A method should do one and only one thing.
    * A method should either DO something or RETURN something.  If it returns something, you should be able to call the method over and over with no negative consequences.
    * The method should be viewable without scrolling.  Fewer than 10 lines of code is desirable (my pain point).
    * The method should take few arguments.  If the method needs more, refactor to require an object that contains all the information required (a parameter object).
    * The method should depend only on its parameters or on members passed into the class constructor.
    * A method should have only one indented code block – don’t nest code blocks.  Extract a method first.
    * Don’t try/catch in a method unless you can handle the exception or add value.  Let the exception bubble up to a point where you can actually do something about it.
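To make a couple of these rules concrete, here’s a hypothetical before-and-after (all names invented): a long argument list collapsed into a parameter object, and a method that only RETURNS something, so calling it repeatedly is harmless.

```csharp
using System;

// Before (sketch):
//   decimal Quote(string id, int qty, decimal price, decimal taxRate, ...)
// After: the arguments travel together as one parameter object.
public class QuoteRequest
{
    public int Quantity;
    public decimal UnitPrice;
    public decimal TaxRate;
}

public class QuoteCalculator
{
    // A pure query: it returns a result and changes nothing,
    // so there are no negative consequences to calling it twice.
    public decimal Quote(QuoteRequest request)
    {
        decimal subtotal = request.Quantity * request.UnitPrice;
        return subtotal + subtotal * request.TaxRate;
    }
}
```

Note that `Quote` also depends only on its parameter, per the rule above, which makes it trivial to unit test.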

If you have any good points for method development, please feel free to leave a comment.

Design patterns are only useful if… – level 200

Design patterns are only useful if the team has a shared vocabulary.  In other words, when you say the name of the design pattern, the others on the team must know what that means.  Otherwise, it’s like a foreign language.


I was having lunch today with a co-worker, and we discussed the meaning of design patterns.  A singleton is only a singleton if we all agree that we are going to call it “singleton”.  An “observer” pattern is only that if I can say “observer” and the other members of the team know what I’m talking about.  Design patterns are a shared vocabulary.  It’s a way to speak in shortened jargon and convey meaning. 


Every field has its own jargon, and in software, design patterns are one of our categories of jargon.  In football, you can tell a receiver to run a bootleg pattern.  Some teams have unique names for pass routes as well.  The words only have value because everyone on the team shares the vocabulary.


Consider this scenario:  A developer goes out and reads several design pattern books and then comes back and starts using the new terms in design conversations.  This has no value since the new terms have no meaning to the rest of the team.


Design patterns are useful when they can be used to convey specific meaning in design conversations.  On my team, if I say “I’m going to stub the web service call”, everyone knows that I am going to make a class that will fake the behavior of the web service so that the client code can function without the SOAP call actually occurring.  A stub obviously isn’t the most glamorous design pattern, but all these small pieces of jargon help when communicating on a software team.
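For what it’s worth, “stub the web service call” might look something like this hypothetical sketch (names invented; the real class would make the SOAP call):

```csharp
using System;

// The interface both the real proxy and the stub implement.
public interface IQuoteService
{
    decimal GetQuote(string symbol);
}

// The stub fakes the web service's behavior so client code can run
// without any SOAP call actually occurring.
public class StubQuoteService : IQuoteService
{
    public decimal GetQuote(string symbol)
    {
        return 10.5m; // canned answer, no network traffic
    }
}

// Client code that works the same against the stub or the real service.
public class PortfolioValuer
{
    private readonly IQuoteService _quotes;

    public PortfolioValuer(IQuoteService quotes)
    {
        _quotes = quotes;
    }

    public decimal ValueOf(string symbol, int shares)
    {
        return _quotes.GetQuote(symbol) * shares;
    }
}
```

The point of the vocabulary: on a team that shares it, the single word “stub” conveys everything this block of code does.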


It is very helpful for design patterns to be a shared understanding throughout the industry, since we communicate about software beyond team boundaries.  I love that there are plenty of books on the topic these days, but it’s a shame that more developers don’t share the terminology.