How to produce a software product quickly, part 3 – level 300

In my first two installments of this series, I talked about:

  • Part 1: Eliminate Waste
  • Part 2: Dodge as much work as possible

These first two guidelines are very important.  The first drives efficiency, and the second drives effectiveness.  Even doing these first two well can leave a team spinning its wheels, however.  If parts 1 and 2 are fully implemented, we are now certain that the team is focusing on the small portion of work that delivers great value.  We are also sure that the team's time isn't being wasted with nonessential or repetitive tasks.  What's next?

Part 3: Use Powerful Tools

A skilled craftsman needs good tools.  First, the craftsman needs the correct tools.  Second, the craftsman needs those tools to be powerful.  A craftsman plowing along with inadequate tools is exerting more effort than necessary.  Why use a hammer and nail when a nail-gun is available?  Why use a screwdriver and wood screws instead of a power drill?

Workstation power

This is the most obvious, but sometimes neglected, tool for a software developer.  Does it make sense to have a highly paid professional on staff waiting on a mediocre workstation?  Of course not.  The newest member of my team came on board with a 3.4 GHz Pentium D (dual-core) workstation with a 1 GHz FSB, a 10,000 RPM SATA drive, and 2 GB of RAM.  He had never seen Visual Studio 2005 install in only 10 minutes.  Compared to what I'm paying him, his workstation cost pennies.  He spends less time waiting on compiles, and the local build runs very fast as well.  In my mind, it is well worth the cost.

IDE & related tools

Many programmers with whom I talk use Visual Studio.  Just Visual Studio.  Some haven't been exposed to other tools for software development.  First and foremost, Visual Studio is a pretty good solution/project linking and compiling package.  It's pretty poor at helping the coding process, though.  There are a few neat extras you can unlock through SHIFT+ALT+F10 in 2005, but they are sparse.  Install Resharper into the IDE, and it comes alive.  It does use more RAM, but RAM is cheap, and the boost it provides is more than worth it.  Without Resharper, Visual Studio is just a glorified text editor that compiles.  With Resharper, you know about compile errors on the fly, and unused code is grayed out, so you know it's safe for deletion. 

Team tools

It's one thing to give an individual programmer the best tools money can buy.  It's another to focus on the team tools.  There are some tools that all members of the team depend on.  Besides the network connection and the printer, I'm talking about a revision control system and a build server.  It's important to have a quality revision control system.  The team needs to be able to rely on the branching and merging capabilities as well as the commit transactions.  My team uses Subversion.  It has proven itself time and time again. 

The build server is another shared asset.  It must be fast as well since every member of the team will have to wait on it at some point.  The build server can either have the build script run on it manually, or a CI server can run the build when new code is checked in.  Either way, the build needs to run fast, be configurable, and be reliable.  My team uses CruiseControl.Net with NAnt and MSBuild.
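
To make this concrete, here is a rough sketch of what a CruiseControl.Net project definition in ccnet.config can look like when the CI server watches Subversion and hands the build off to NAnt.  The server URL, paths, and project name are hypothetical, and a real configuration will have more to it:

<cruisecontrol>
    <project name="MyProduct">
        <!-- Watch Subversion; a commit triggers a build. -->
        <sourcecontrol type="svn">
            <trunkUrl>http://svnserver/repos/myproduct/trunk</trunkUrl>
            <workingDirectory>C:\builds\myproduct</workingDirectory>
        </sourcecontrol>
        <triggers>
            <intervalTrigger seconds="60" />
        </triggers>
        <tasks>
            <!-- Hand off to the NAnt build script. -->
            <nant>
                <baseDirectory>C:\builds\myproduct</baseDirectory>
                <buildFile>myproduct.build</buildFile>
                <targetList>
                    <target>compile</target>
                </targetList>
            </nant>
        </tasks>
    </project>
</cruisecontrol>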

Other general, programming and debugging tools

There are many tools that help with a single part of development.  The IDE is the tool that stays open all day long, but other tools are just as critical for specialized tasks.  There is no way I can list them all, but I'll give a run-down of the tools I use:

  • Resharper – productivity add-in for Visual Studio (refactoring, code generation, etc).
  • Lutz Roeder's Reflector – disassemble the .Net Framework.  It's better than documentation.
  • CruiseControl.Net – build server for .Net
  • RealVNC – share the desktop.  Good for pair programming or informal presentations.
  • RssBandit – Keep up with the programming industry through RSS news, blogs, etc.
  • GAIM – Universal IM client.  Use it to shoot over snippets of code, svn URLs, etc. to team members.  Supports all IM services.
  • Google Desktop – Launch programming tools quickly without using the start menu.  Find code files easily.
  • TestDriven.Net – run/debug unit tests right from the code window.  Integrates with NCover for code coverage metrics.
  • Windows Server Admin Pak – Remote desktop to many servers with the same MMC snap-in.
  • TortoiseSVN – Windows client for Subversion.
  • NAnt – build scripting.  In fact, you can script anything with XML using this.  We even use it for zipping up files.
  • NUnit – unit testing framework.
  • StructureMap – IoC container.  Uses attributes for linking interfaces with implementations.  Minimizes Xml configuration.
  • Who's Locking – find out what process has your precious assembly locked.
  • DebugView – hook into the debug stream locally or on a remote server.
  • DiskMon – monitor what's going on on the disk.
  • RegMon – see what keys are being read.
  • FileMon – see what files are being read.
  • Subversion – fast becoming the standard for source control.
  • Ethereal – monitor network traffic.  Find out exactly when and how often your application is communicating over the network.
  • VirtualPC or VMWare – test the software on all supported platforms.
  • Visual XPath – quickly create the correct XPath expression to get a node from an XML document.
  • Regulator – Hash out regular expressions quickly with this tool by Roy Osherove.

The list is never ending

Always be on the lookout for tools that will save time or speed you up.  Tools are always improving.  It's important to talk with other professionals from different companies to see what tools are helping them.  Chances are that those tools can help you as well.

GAIM & MSN crash bug fixed – level 100

I use GAIM for all my instant messaging.  I used to use Trillian, but I switched.  I use MSN, Yahoo, AOL, and GoogleTalk (Jabber).  I dropped ICQ some time ago because no one I knew still used it.

Recently, some MSN server changes caused GAIM to crash when logging in.  The GAIM team has fixed this bug and has released a beta with the bug-fix.  I've installed it with no problems, and I'm on universal IM once again.

Martin Fowler evolves his Model-View-Presenter pattern – level 300

I subscribe to Martin's MVP pattern.  If you are new to it, please have a read.  It's a variation of Model-View-Controller that puts more behavior in the controller and less in the view.  I have tended to vary the amount of logic that belongs in the view depending on the scenario.  Martin has now split the pattern into two.  The first, Supervising Controller, leans toward balancing the logic, putting UI-specific behavior in the view and application behavior in the controller.  The second, Passive View, seeks to make the view as thin as possible and renders the view very passive.  In this case, the controller has every bit of behavior, including setting every single field.

I'm glad he made the split because it really is two different ways to do it.  I tend to throw a domain object at the view and say "here, show this," whereas Passive View would say to set each field individually and not let the view know about the domain object.  In Supervising Controller (which I favor), the view can know about the domain object and how to bind it to its GUI elements.

As with all patterns, they have advantages and drawbacks.  The worst thing we can do is be dogmatic about one and declare its applicability to all scenarios.  I've used Supervising Controller in ASP.NET and WinForms, and I like the way it separates behavior from visual goo.  I also like how it pulls behavior into a class that's easily tested.

How to produce a software product quickly, part 2 – level 300

This is a follow-on to part 1 of this series.  I'm talking about how to produce software quickly.  To be clear, I'm not talking about producing brittle software quickly.  "Software is too expensive to be built cheaply."  That mantra is a good tagline, but it is also true.  The software I'm asked to produce is important.  It can help make or break the company.  The stakes are too high to take shortcuts.

I've seen software systems launch with a flurry of "congratulations" emails flying around.  "The project was a huge success," everyone cries.  Then, two short years later, developers are threatening to quit if they are forced to attempt one more change to what is now seen as a complete and utter flop.  Then management calls for a rewrite.  Never mind that there has been no introspection about what happened.  How could a roaring success turn into a flop in two years?  No, no time for a retrospective; we need a rewrite.  And the cycle repeats every two years.

That is not what I'm talking about.  The software needs to be sustainable.  It can never become so complicated that newcomers to the team have a hard time figuring it out.  It has to glow with simplicity.  Anything large and complex can't be simple, can it?  I think it can.  In this installment, I'm going to talk about a favorite mantra of mine when working on a software product.

Part 2:  Dodge as much work as possible

You are laughing at me right this moment, but I'm serious.  If I can get away with NOT doing something, I will.  There is an infinite amount of work to do on the product, and my team has to produce business value quickly.  Logically, we have to maximize business value delivered with each unit of work chosen.  Certainly product management needs to prioritize items so that what we work on actually matters, but along with feature stories, technical stories creep in.  What other type of work do we find ourselves doing that doesn't translate directly into business value?

Performance tuning

First of all, if a high measure of speed is important for the product, the customer will communicate that.  Software that flies a fighter jet has to be sufficiently responsive that when the joystick moves, the plane moves with it.  A half-second delay would be completely unacceptable.  Now think about an enterprise business application.  Think about Microsoft Outlook.  How often is there a half-second delay or more when performing an operation?  Is it a show-stopper?  Is the application unusable when the progress bar pops up to "check email"?  Absolutely not.

It is tempting to stroke our technical prowess and ponder ways to save some CPU cycles.  After all, I'm iterating over that collection twice.  Maybe I could trim it down to just once. . . but those operations are in different classes. . . hmm. . . could I alter my design to save that second iteration?  That sounds absurd, especially when your next operation is calling a web service in another state.  You might save a few milliseconds, but then you promptly wait a full second for the web service call to complete.  Meanwhile, you have other high-priority stories assigned.  The next stand-up meeting includes your 4 hours of performance tuning, and the customer can't tell a difference in the speed of the application.  In other words, developer time was spent on something with no business value.

Database access is another area that often gets optimized.  Database access is an "out of process" operation and is inherently slower by orders of magnitude than any in-process operation.  A typical application, when profiled, will find 80%+ of processing time* in data access operations, not in-process object model manipulation or screen drawing.

*Statistics made up on the fly with 95% accuracy.

It makes sense to optimize data access then, doesn't it?  I don't know.  Does it?  If you don't do it, what will happen?  Will the customer report that the application is too slow?  Will the customer even care?  Is it a SQL Server 2000 database with 1,000,000 records total and no more than 10 concurrent users?  If so, your database server laughs at the load you place on it every day.  It can serve up your requests with one CPU tied behind its back.

Technical stories

It's easy to accept work given by the customer as "#1 priority".  It's not so easy when the team comes up with technical stories.  Many technical stories have merit; we, as professionals, can see things coming, and we need to be able to responsibly allocate work for ourselves that otherwise would not have been brought up by product management.  For instance, we must take reasonable measures to secure the application.  The basic example is the database connection string.  How do we store and secure it?  The customer doesn't know about databases or connection strings.  The customer may love the application, but if a script kiddie can find the connection string, access our database, and call our delete stored procedures, then we have a big risk in the software.  Explain that to the customer, and they will understand the time spent on securing the connection string.  Judgement comes into play, though, because we could symmetrically encrypt the connection string, but then a savvy developer could probably still hack our software.  Judgement:  Is the customer paying for protection from ill-meaning savvy developers in the company?  This technical story could quickly explode if it's taken too far.
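
As one illustration of a reasonable middle ground, .Net 2.0 can encrypt the connectionStrings section of a config file using the machine-level DPAPI provider, so there is no key to manage and a copied config file is useless on another machine.  This is only a sketch; the class name is made up, and where you call it depends on your deployment:

using System.Configuration;

public class ConnectionStringProtector
{
    public static void Protect()
    {
        // Open this application's configuration file.
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

        ConfigurationSection section = config.GetSection("connectionStrings");

        if (!section.SectionInformation.IsProtected)
        {
            // DPAPI ties the ciphertext to this machine,
            // so the encrypted section cannot be read elsewhere.
            section.SectionInformation.ProtectSection(
                "DataProtectionConfigurationProvider");
            config.Save();
        }
    }
}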

Back to data access.  Wouldn't it be great if we could somehow detect which table columns were changed so that when we did an update, we only updated those columns?  To that, I'd say "that would be terrible. . . if it took more than 10 minutes to implement".  If my product were a super-high-traffic site, maybe.  If my product is an enterprise app with a maximum of 10 users actively on the system at any given time, then no.  1,000 users of a system in a given day might not even translate into 10 database operations happening concurrently (during the same second).  In this case, there is absolutely no business value derived from this technical story.  If accepted, we are effectively wasting time.  We need to eliminate waste.

Frameworks

"To build our application right, we need to build a framework first."  Other people have written about this, and I've been there.  In fact, I've been a framework writer.  Boy, did I crank out a lot of code that nobody uses!  If a framework is the deliverable, then ok.  If an application is the deliverable, then we'll be building the application, not a framework.  Besides the fact that it's hard to know what to build before something exists to use it, a framework is a technical story that the customer doesn't benefit from.  I am a big fan of using frameworks to build the application quickly, though. 

Build vs. buy

I default to buy and then entertain convincing arguments to build.  Think about the extremes and then work your way back.  Would you build the .Net Framework?  No, buy it (obtain it).  Would you build your own web framework, or would you use ASP.NET, Struts, WebWork, Rails, etc.?  Would you build your own ADO.NET provider, or would you use DataReader and DataAdapter?  Would you build your own data access plumbing, or would you use an O/R mapper or code generator to produce this mechanical, boring code?  Would you build every screen of the application completely by hand, or would you buy a UI controls library?

All these questions have the same thing in common:  commonality!  The .Net Framework is used in all .Net applications.  Web frameworks abstract away HTTP plumbing.  ADO.NET providers handle the binary communication with a database.  O/R mappers deliver the transition from a rich object model to the relational storage view of the data.  UI control libraries abound, delivering a nice look and feel by leveraging the UI expertise of the industry.  What is left?

I'll make an assertion.  The only thing that should be left is the code that is unique to your product.  This is the code that makes the application important.  Everybody does UI.  Everybody does data access.  Only you deliver application X.  Your customer needs application X for a specific purpose, and that purpose is modeled by you.  It is the one thing you can't buy.  You can't buy the distinct business value you are delivering with your custom software.  In fact, that value is the only thing custom about the software.  It is what matters, though.  By defaulting to buy, I can dodge quite a bit of work.  I don't have to spend time on fancy UI controls.  I don't have to spend time on boring data access plumbing.  I can focus solely on providing unique business value.

Not created here syndrome

This is the fear of tools and libraries:  essentially, the fear of the unknown.  If it's not from Microsoft and not from us, then we're not using it.  I firmly believe that if Microsoft hadn't delivered VSS, many more shops would never have begun using source control.  There are so many tools and libraries available that ignoring them can be irresponsible.  It's different in the Java world.  The beginning of a project starts with the selection of tools and libraries.  Often for web apps, they'll choose Struts, Spring, and Hibernate.  This combination gives them the shell of an app, and developers are able to focus on the object model that makes the software valuable.  Hibernate is very common for Java apps.  Microsoft doesn't have an O/R mapper.  Once they do, no one will ever write data access code in enterprise apps again (mark my words).

In the .Net world, it can be a struggle because some folks think that Microsoft is the only entity capable of producing a quality library.  Not created here syndrome leads to 3 times as much work as necessary.

YAGNI:  You ain't gonna need it

If the customer specifically asks for a function, build it.  If not, don't.  Let's say your customer needs a Windows app for managing contacts (I know, trivial example).  The customer needs to be able to add, edit, and delete contacts.  Say you start working on the add feature, and you speculate that you should probably make a screen so that several contacts can be added quickly.  It seems like a logical extension of the feature, and it seems that it could provide value.  The danger is that while you work on the multi-add screen, the edit screen isn't getting done.  A savvy customer will quickly question and correct this type of behavior.  It might have been a logical variation of the feature, but it's not the most important thing.  Like I've said before, there is an infinite amount of work to do.  The art of product delivery is focusing on the small subset of work that will be valued the most.  Essentially, if I demo an incremental build to a customer and I have to point out something, then we built a feature that could have been deferred in exchange for something more important.  The customer will come to the demo asking if three things are done:  add, edit, and delete.  Until those three things are done, the product team has no right to insert other work in front of the key stories.  With the YAGNI mantra, I assume that if a feature isn't specifically requested, it isn't going to be needed.  Maybe it'll be needed later, but that's just speculation.  When the priority is to deliver value quickly, I have to be able to defer nonessential work.

Conclusion

It's somewhat of an art to be able to filter that infinite pile of potential work down to the small subset that will satisfy the customer.  Ratholes and scope creep are very dangerous to the timely delivery of software, so I always keep one question in mind:  "Can we put this off till later?"

How to produce a software product quickly, part 1 – level 300

This is harder than it sounds.  I’m thinking about this topic because I’m the manager of a software product team.  I’m responsible for the product’s health and speedy delivery.  Because of that, I need to steer the team in the direction with the shortest path to the finish line.  Some of the things I’m focusing on are as follows:

Part 1: Eliminate Waste

I think there is merit in the “Lean” notion of software development.  Earlier in my career, I worked for Dell, Inc. as a software developer.  At the time I was pretty pleased to be working for the world’s largest computer manufacturer, but being there taught me a lot about waste and how to indulge in it.  I’m sure other large companies have these problems, but I observed so much waste that it hurt my morale.  All the talk about “work smarter, not harder” was hard to apply inside the work culture.  Mechanical tasks were being done by humans, and manual tasks often had to be repeated several times.  I remember spending days working on items that our business partners were never able to benefit from.

Logically, if we eliminate waste, all that will be left will be tactical and strategic tasks that have a direct impact on the business.  In software, what things could be waste?  Manual tasks:  database migrations, build delivery, pre-production software installations, manual refactoring (without the aid of a smart IDE), typing code instead of generating it, reporting status, slow communication, etc.

Database migration

This screams to be automated.  Perhaps there are some testing databases with realistic data preloaded.  Suppose these are used for reviewing an incremental release with stakeholders.  After the stakeholders are finished reviewing the current build, they will have changed the data in the database, and over time it won’t be so realistic.  For every build review, it’d be nice to have that realistic database back, so we restore it from backup, detach/attach, etc., to get a fresh database for the stakeholders.  The key is to not spend human time on such a task.  This task is repeated every week or two, and human time is often the most expensive part of software development.  A quick batch script could easily automate the refresh of this database and free up human time for more critical thinking.
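
As a sketch of what I mean (the server, database, and backup path are all made up), a batch file like this brings the stakeholder database back in seconds:

@echo off
rem Kick everyone off, restore the baseline backup, and reopen the database.
osql -E -S DEMOSERVER -Q "ALTER DATABASE DemoDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE"
osql -E -S DEMOSERVER -Q "RESTORE DATABASE DemoDb FROM DISK = 'C:\backups\DemoDb_baseline.bak' WITH REPLACE"
osql -E -S DEMOSERVER -Q "ALTER DATABASE DemoDb SET MULTI_USER"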

Build delivery

To demonstrate the incremental build, how do we install it?  Who builds it?  Does a “build master” build it in release mode?  Why should a human have anything to do with this mundane task?  CCNet and NAnt are more than capable of building and delivering the software package in a zip file.  Extract the zip file on the demo machine and run it.  Again, this type of activity is not worthy of human attention.  Make the machine do the work.
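
A NAnt target along these lines is all it takes to produce that zip file on every build; the paths and names here are illustrative:

<target name="package" depends="compile">
    <!-- Zip the compiled output, stamped with the CCNet build label. -->
    <zip zipfile="${build.dir}/product-${CCNetLabel}.zip">
        <fileset basedir="${build.dir}/bin">
            <include name="**/*" />
        </fileset>
    </zip>
</target>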

Pre-production software installations

All software is different.  Some products have client components, server components, distributed components.  Mature software teams have environments set up for testing, and these environments are for testing an incremental build.  How does the incremental build get installed?  If there are multiple servers with distributed services, who sets it all up?  I don’t mean to sound like a broken record, but this task doesn’t require critical thinking.  Leave it to the machine to deploy to the testing environment.  The task that requires some thought is putting together the deployment script for the machine to run.  Invest some time in an install script using NAnt, MSBuild or good old DOS commands, and then you can turn it over to a machine to reliably perform over and over again.  In fact, wouldn’t your testers appreciate a command to run any time they are ready for the next build?  Why not have it in 2 minutes rather than scheduling an environment refresh?
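
The deployment script can be as humble as a NAnt target that copies the package to the test environment and unpacks it.  This is just the shape of the thing; the share names and properties are hypothetical:

<target name="deploy.test" depends="package">
    <!-- Push the packaged build to the test server and unpack it. -->
    <copy file="${build.dir}/product-${CCNetLabel}.zip" todir="\\testserver\deploy" />
    <unzip zipfile="\\testserver\deploy\product-${CCNetLabel}.zip" todir="\\testserver\apps\product" />
</target>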

Manual refactoring

If you’ve read any of the other posts on my blog, you see that I’m a fan of tools.  I especially love Resharper because of all the time it saves me.  I remember not using it, too.  I remember renaming a public property and then using CTRL+SHIFT+F to do a solution-wide string search for the property.  For a popular property, this might take a few minutes.  With Resharper, it is sub-second.  That’s right.  No more search and replace.  Looking back, why did it take me so long to demand a better tool?  What about pulling a method from a concrete class up to an interface?  I’d never do it manually now when a tool can do it with a few keystrokes.  Again, it’s trading human time for cheaper (and faster) machine time.

Typing code instead of generating it

I’m not talking about full-blown software generators.  I’m talking about micro-generation.  If I need a class with 3 fields, a constructor, and some properties, I can type every character, and I have in the past.  It is much quicker to let a tool do it for me.  Resharper and CodeRush both use micro-generation to throw in standard constructors and properties, and they do standard code completion too.  In fact, I let Resharper name my variables for me.  It guesses so well that I have very descriptive variable names after hitting only 4 keys.

Reporting status

This can take quite a bit of time.  Often a stakeholder or project manager interrupts developers to inquire about status.  There is no need for this.  The software team already tracks status somewhere, whether it be in an Excel spreadsheet, on a whiteboard, or on a story wall.  Wherever status is available, just make it more broadly available.  Welcome your stakeholders to take frequent looks at it.  There is no need for in-person interruption just for status.

Slow communication

Manual gathering of status is a form of slow communication.  I’ll throw out a tip on how to slow down communication if it happens too quickly at your company.  <tongueInCheek>Give every member of the software team their own office and make sure all conference rooms are scarce resources and hard to book.  In fact, locate members of the software team in different parts of the building, or maybe in a different time zone.  That should slow down communication sufficiently.</tongueInCheek>  Slow communication slows the production of software.  That is a form of waste: waiting for the answer to a question is wasteful.  To eliminate it, locate all members of the team in the same room without physical barriers.  Product managers too.  This will foster instant communication.

Conclusion

Eliminating waste is key to a productive team.  Identifying waste takes some critical thought, though.  Some teams are so busy with wasteful tasks that they can’t slow down to think about remedies.

[tags: software, lean, eliminatewaste, tools, productivity]

If it takes forever to start your app with the debugger, check for thrown exceptions – level 300

Overview of Exceptions
There are quite a few things that are just laws of object-oriented development, and one of them is that exceptions should be avoided.  If you can prevent an exception from being thrown, do it.  In the world of managed runtimes, particularly Java’s JRE and .Net’s CLR, objects are “thrown” to communicate errors.  The language limits the objects that can be thrown to ones that derive from System.Exception in .Net, or java.lang.Throwable in Java.  When an object is “thrown”, the runtime stops, assembles the call stack and some other information, and gives code at every level of the call stack an opportunity to catch the thrown object (exception) and do something with it.  If the exception is never caught, the runtime will catch it and terminate the program.

Clearly, an exception being thrown in code is a bad thing, and it signals an unstable state in the program.  It may be a huge bug, or the network may have gone down.  Either way, an exception is thrown.  Proper error handling will catch the exception at a point high enough in the call stack that the program can actually make a decision to do something about it.

Swallowing exceptions (wrapping code in a try/catch where the catch block is empty) leads to less feedback.  An exception will happen, but it will be swallowed, and you won’t know about it.  As soon as you start swallowing exceptions, they will keep happening without being noticed.  Debuggers pay special attention to exceptions, so swallowed exceptions (thrown, immediately caught, and ignored) will slow down the debugger with each occurrence.

Solution
Go to the Debug menu in Visual Studio and select Exceptions (CTRL+ALT+E is the shortcut).  Check the checkbox for “Common Language Runtime Exceptions”.  Now when you start your debugger, it will break when a managed exception is thrown, on the line from which the exception originates.  You can use this technique to find all the exceptions that are happening in your software right under your nose.  If you refactor to keep those exceptions from happening, you’ll see a marked improvement in debugger load time (provided there were a large number of exceptions happening previously).

Alternatives
The alternative to frequent debugging is automated tests.  When each small piece of code is verified independently, you don’t have much occasion to run the full application in debug mode.  If you have unit tests as part of your automated test suite, a failure will point to the exact place where you have the problem.

Rule to live by
Fail fast.  If your software is going to fail, make it fail quickly so that you can get the feedback earlier and fix it earlier.  Don’t hide the problem by ignoring it or burying it in a log file that’s already verbose.  If you are unfamiliar with this concept, read this article by James Shore.

Don’t use an exception as a return value.  In an ideal situation, your application should run with zero exceptions.  You may have a library that swallows an exception internally, and that is unfortunate, but keep your application code clean.  If you can anticipate an exception happening, perform some checks to avoid it being thrown.
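
For instance, parsing an integer from user input can go either way; here is a minimal sketch of both approaches:

using System;

public class ParsingExample
{
    public static void Main()
    {
        string input = "not a number";
        int value;

        // Wasteful: the exception is thrown, caught, and swallowed,
        // and the debugger pays a price for every occurrence.
        try
        {
            value = int.Parse(input);
        }
        catch (FormatException)
        {
            value = 0;
        }

        // Better: a simple check, and no exception is ever thrown.
        if (!int.TryParse(input, out value))
        {
            value = 0;
        }

        Console.WriteLine(value);
    }
}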

How to keep an eye on exceptions
Use Perfmon.  Watch the “# of Exceps Thrown” counter in the “.NET CLR Exceptions” category.  The number should be zero in an ideal situation.  If you have an app that can’t avoid some exceptions, you can watch “# of Exceps Thrown / sec” instead.  This number should be close to zero.  If your application is constantly throwing exceptions under ideal circumstances, you have some work to do.
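
If you want to watch the same counter from code instead of the Perfmon UI, a sketch like this works.  The instance name “MyApp” is hypothetical; it is the process name of the application being watched:

using System;
using System.Diagnostics;
using System.Threading;

public class ExceptionCounterWatcher
{
    public static void Main()
    {
        PerformanceCounter counter = new PerformanceCounter(
            ".NET CLR Exceptions", "# of Exceps Thrown", "MyApp");

        while (true)
        {
            // In an ideal situation, this number never moves.
            Console.WriteLine("Exceptions thrown: {0}", counter.NextValue());
            Thread.Sleep(5000);
        }
    }
}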

[tags: exceptions, programming, c#, java, failfast, objectoriented, development, .net, clr]

Build and publish .Net 2.0 projects with NAnt and the MSBuild task – level 200

When .Net 2.0 first came out, I was left using the <exec /> task to call msbuild.exe to build my solution.  The NAnt <solution /> task is specific to .Net 1.1 because Microsoft changed the structure of project files to be MSBuild scripts.

I may be a little behind on this, but NAntContrib now contains an <msbuild /> task that can either just build your solution or execute an entire MSBuild script.  Here is an excerpt from my NAnt build script:

<target name="compile">
    <echo message="Build Directory is ${build.dir}" />
    <msbuild project="src/xxx.sln">
        <arg value="/property:Configuration=release" />
        <arg value="/t:Rebuild" />
    </msbuild>
</target>

The task brings the full power of MSBuild to NAnt.  At first, I thought about converting the entire build to MSBuild, but I have so much invested in NAnt that I can’t justify the effort of conversion yet since we won’t gain anything.  For now, we’ll have a mixed build process.

Later, when we need to package with ClickOnce, we can use the <msbuild /> task with the publish target:

<msbuild project="src/xxx.sln">
    <arg value="/p:Configuration=${project.config};ApplicationVersion=${project.version}.${CCNetLabel};MinimumRequiredVersion=${project.version}.${CCNetLabel};UpdateRequired=true" />
    <arg value="/t:publish" />
</msbuild>

This node builds the ClickOnce deployment package for our application.  CruiseControl.Net executes this NAnt script, so every time we commit code to Subversion, we build the ClickOnce install package, commit it back to SVN, and tag it.  No matter what version we need, it’s right at our fingertips.
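
The commit-and-tag step is nothing exotic; it is just a NAnt target shelling out to svn, something like this, where the repository URL and property names are made up:

<target name="tag.package">
    <!-- Import the ClickOnce output into a tag named after the build. -->
    <exec program="svn">
        <arg value="import" />
        <arg value="${publish.dir}" />
        <arg value="http://svnserver/repos/product/tags/${project.version}.${CCNetLabel}" />
        <arg value="-m" />
        <arg value="ClickOnce package for build ${CCNetLabel}" />
    </exec>
</target>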

By the way, I now always edit my build scripts in VS 2005 since Resharper 2.0 helps with NAnt scripts.  I can rename targets, find usages, and jump from target to target just as I can in code.  In short, it has refactoring and navigation support for the build scripts.

Another Winforms testing framework from Thoughtworks! – level 200

Testing WinForms UIs is tough.  Manual testing is slow and difficult to repeat.  Vivek Singh has just released a new version of SharpRobo, a WinForms testing framework.  Vivek recommends running these tests through NUnit, and from practice, I’ve found that whatever I can test with NUnit, I can test with FIT, so I’m excited to try it out.  Thoughtworks has previously produced NUnitForms.

Selenium and FIT are good for testing web apps, but I am dealing with a Windows application at this point.  Kudos to Vivek for this library.

Review James Shore’s new book “The Art of Agile Development” online – level 200

James Shore is taking the lessons he has learned and teaming up to write a book called “The Art of Agile Development”.  He’s posting sections on his website for review.  You can read a section and post what you think to his Yahoo group.  I’ve followed James’ blog for quite a while, and his normal diary has been very useful to me.  He has a great deal of insight into how to manage a software team.

I’d recommend this reading for anyone involved in a software project.