IronPython for ASP.NET CTP downloaded – level 100

I searched MSDN this morning for “ironpython” and found a CTP had been posted yesterday.  It installs into Visual Studio and makes it possible to write ASP.NET websites using the dynamic language Python.  This is very interesting because it’s the first dynamic language with full tool support in Visual Studio.

I personally am going to investigate it for use in testing.  If some of my code depends on the system clock, it would be so nice to just fake out the system clock at runtime with the power of a dynamic language instead of having to fake it out with more intrusive techniques such as putting it behind an interface.
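For contrast, the "intrusive" technique I'd rather avoid looks something like this (a minimal sketch; ISystemClock, SystemClock, and OrderProcessor are hypothetical names used for illustration):

```csharp
// Putting the clock behind an interface so a test can substitute a fake.
// All type names here are hypothetical.
public interface ISystemClock
{
    DateTime Now { get; }
}

public class SystemClock : ISystemClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

public class OrderProcessor
{
    private readonly ISystemClock _clock;

    // A test can pass in a FakeClock : ISystemClock with a canned time.
    public OrderProcessor(ISystemClock clock)
    {
        _clock = clock;
    }

    public bool IsAfterHours()
    {
        return _clock.Now.Hour >= 17;
    }
}
```

It works, but every clock-dependent class now carries an extra constructor parameter just for testability; that is the intrusiveness a dynamic language could avoid.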

Faking interfaces with events or delegates using Rhino Mocks – level 300

I think mock objects themselves are a 300-level topic.  I wish it weren’t so, but from the folks I talk to, the average developer doesn’t use them.  Fakes, stubs, mocks (whatever you want to call them – I know they overlap, and I understand the semantic differences among them) are critical for testing.  It’s important to isolate the code under test, and to do that, we have to fake out the other classes the current class talks to.  Not all of the classes, as a rule, but the ones that might give our test unpredictable results.

Phil Haack lays out an example of faking an event on an interface.  While I prefer to use plain delegates for my view-controller notification, events are multicast delegates, so they work just as well.  Give it a read.

http://haacked.com/archive/2006/06/23/UsingRhinoMocksToUnitTestEventsOnInterfaces.aspx
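The gist of the technique, roughly, is below (a sketch against the Rhino Mocks record/replay API; IView, Presenter, and the Save event are hypothetical stand-ins for whatever your view interface defines):

```csharp
// Sketch: raising a fake event on a mocked interface with Rhino Mocks.
// IView and Presenter are hypothetical types for illustration.
MockRepository mocks = new MockRepository();
IView view = mocks.CreateMock<IView>();

// Record an expectation for the event subscription and grab a raiser.
view.Save += null;
IEventRaiser saveRaiser = LastCall.IgnoreArguments().GetEventRaiser();

mocks.ReplayAll();

// The real presenter subscribes to view.Save in its constructor.
Presenter presenter = new Presenter(view);

// Fire the fake event, then assert on how the presenter reacted.
saveRaiser.Raise(view, EventArgs.Empty);
```

The trick is that subscribing `null` during the record phase gives Rhino Mocks a hook, and the IEventRaiser lets the test fire the event on demand.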

NHibernate: Casting is a state change and makes a persistent object dirty immediately – level 200

NHibernate knows when an object under its watch has changed.  As soon as the object changes, it is “dirty”.  Some other changes might cause an object to be dirty as well.  One that my team recently encountered is a cast.  We use an enum of type byte.  It has only a few items (fewer than 255), so we use a tinyint in our database.  When our mapping uses type=”byte”, NHibernate casts from the byte to our Enum type when hydrating the object.  This cast counts as a change because when NHibernate checks the value, it finds an Enum, not a byte. 

To get around this cast (implicit or not), we use the fully qualified type name of the Enum in the mapping.  NHibernate understands Enums natively, so just put in the enum type, and you are off to the races.  Note that if you are using an Enum that is nested inside a public class, you need to follow .Net’s rules for fully-qualified type names.

See MSDN’s documentation for this: 

http://msdn2.microsoft.com/en-us/library/w3f99sx1.aspx

TopNamespace.SubNameSpace.ContainingClass+NestedEnum,MyAssembly
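In the mapping file, that comes out something like this (a sketch; the property, column, and type names are hypothetical):

```xml
<!-- Instead of type="byte", name the enum type itself so NHibernate
     handles the conversion natively.  OrderStatus here is a
     hypothetical enum nested inside the Order class. -->
<property name="Status"
          column="Status"
          type="MyApp.Domain.Order+OrderStatus, MyApp.Domain" />
```

Note the `+` separating the containing class from the nested enum, per the .Net rules linked above.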

NHibernate: ICriteria may cause update – know about AutoFlush mode – level 300

This topic is for those already using NHibernate.  Judging by the forum, that is a whole load of people!

As always, my blog posts stem from experience, and this is no different.  It's been a year since I first tried out NHibernate, and since then I've used it on 4 large, unrelated applications.  This latest application of NHibernate is by far the most exciting, however, because we are able to take advantage of the full power of the library.  The others were always tempered by a few things that couldn't be changed, which hampered seamless data access.  My team no longer has to slow down to think about what SQL to write.  We stay in C#, and we're going faster and faster.  For the performance-minded, the NHibernate SQL is pretty darned fast (because there is nothing special about it – just CRUD).  We run about 120 database tests in 2.5 seconds – not bad.

Last week, I learned something new about NHibernate – AutoFlush mode.  This is important because NHibernate keeps only one instance of your domain object per ISession instance, so if you ask for the same object multiple times from the same ISession, you'll get the same domain object instance.  Imagine this scenario: 

  • You pull some objects into memory.
  • The user modifies one object.
  • You query for a list using ICriteria (the object the user modified is a match for this list)

What should the system do?  Should the fresh query refresh all the objects and throw away the user's changes?  NHibernate's default answer is "no".  It is configured to "autoflush" by default: when it detects that pending changes might be inadvertently thrown away by a fresh query, it automatically updates the modified object in the database first.  If you open up SQL Profiler, you'll see the UPDATE commands amidst the SELECTs.  If you instead set the flush mode to "NEVER", you'll get a big fat exception, and you can write some code to handle the times when you need to run a fresh query after a persistent object has been modified.
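The scenario above can be sketched like so (against the NHibernate 1.x API; Order, its Status property, and OrderStatus are hypothetical):

```csharp
// session is an open NHibernate ISession.
// Order and OrderStatus are hypothetical types for illustration.
Order order = (Order) session.Load(typeof(Order), orderId);
order.Status = OrderStatus.Shipped;   // the object is now dirty

// With the default flush mode (auto), NHibernate flushes the dirty
// object (an UPDATE) before running this query, so the fresh results
// can't silently discard the pending change.
IList orders = session.CreateCriteria(typeof(Order))
    .Add(Expression.Eq("Status", OrderStatus.Shipped))
    .List();

// To take control yourself, turn autoflush off and flush explicitly
// at the point you choose:
//   session.FlushMode = FlushMode.Never;
//   ...
//   session.Flush();
```

The explicit `session.Flush()` call is how you handle those times yourself once autoflush is off.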

Applications remain simple in the absence of god code – level 100

Imagine two scenarios: 

  • You build a system where one class is responsible for coordinating actions of many.  This one class may observe many conditions or events and act appropriately.
  • Each small event or condition is encapsulated by an object.  You have many classes, but the responsibility of each class is small.  No one class has too much to do.

I call the first scenario “god code”.  This code rules from on high.  This code might have to track multiple conditions in class or global variables to keep track of what is going on.  This code is very busy all the time and has its hand in everything. 

god code leads to overly complex applications.  A remedy for god code is to push behavior down into the smaller classes being worked on.  Empower the smaller classes to take some responsibility for themselves.  They are quite capable. 
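A small sketch of the difference (hypothetical types; the god version interrogates the order from on high, while the refactored version lets the order answer for itself):

```csharp
// god code: the coordinator reaches into Order and owns its rules.
public class OrderCoordinator
{
    public void Process(Order order)
    {
        if (order.Items.Count > 0 && order.Total > 0 && !order.IsShipped)
        {
            // ...plus dozens more conditions about other classes...
        }
    }
}

// Remedy: push the behavior down into the class that owns the data.
public class Order
{
    public IList Items;
    public decimal Total;
    public bool IsShipped;

    public bool IsReadyToProcess()
    {
        return Items.Count > 0 && Total > 0 && !IsShipped;
    }
}
```

Now the coordinator shrinks to `if (order.IsReadyToProcess()) { ... }`, and the rule lives next to the data it inspects.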

Typing is fast; Design is slow – level 200

I’ve been with my current company for over two months now, so I’m fully ramped up and integrated with my new team.  Since I arrived, we’ve hired one more developer and a tester to round out the team.  Last week we finalized some plans to prepare the application for a huge influx of new features.  The application does about 1/5 of what it needs to do long-term, so we need to be able to add these new features quickly now that we have formed a complete team.  My challenge was to find a way to facilitate that. 

The hump to get over was the set of assumptions the current application made (assumptions that were perfectly valid at the time).  The business had changed direction, but the software hadn’t.  Consequently, new features had to work around the (now false) assumptions.  Adding features became slow, and debugging even slower.

The solution:  rematch the software with the business.  Another way to put it would be that the application was full, and we need to make room in the codebase for new features.  This means a lot of new code.  It might sound alarming at first, but typing the code isn’t what takes so long.  What takes so long is hashing out the design.  Yes, the code IS the design, and the code is what is important, but I would argue that the code is a mere textual representation of the design.  You can’t have code without design.  Typing characters in the IDE is very fast when the design is known (because one can see the text in one’s head before typing the words).  The design has been hashed, changed, and rehashed over time.  It’s been refined and tested, so now that it is known, the code representing it can be typed very quickly in a manner that leaves ample room (i.e. loose coupling) for new features and changes down the road.

We can’t throw away code!  We spent a lot of money on that code!!!

The above statement is a fallacy.  The company spent money designing, not typing.  The level of communication required with business partners was the majority of the cost.  Technical design and whiteboarding carry some cost.  The typing was cheap! 

How to produce a software product quickly, part 4 – level 300


In my first three installments of this series, I discussed:

Please read these first three to get the full context.

Part 4: Constantly Improve the Design

The inverse of this guideline is to never improve the design.  Sadly, I’ve seen my fair share of systems where the design is never improved.  Instead, new features are wiggled into the existing design, even though there really isn’t room.  I’m sure you, the reader, can recall an application that grew on its own without a critical eye on the evolving design until, eventually, the application was bankrupt and had to be rewritten.  I like the analogy of the credit card.  There is only so much you can charge to the card without paying something back.  If the indebtedness continues without restraint, financial bankruptcy occurs.  In America, the law allows the wiping of debts and a fresh start (with horrible credit).  In software, every shortcut is a charge to the credit card.  Every feature added “around” existing code (because of fear of changing existing code) is a charge.  Every assumption proven wrong, every unrealized assumption, every smelly class, every method with a cyclomatic complexity above X (pick your threshold) is a charge to the application credit card.  A design left to itself will run up a huge technical debt and eventually max out the credit card.  At that point, even if you try to pay down the technical debt, the payment will be less than the monthly interest charge, and the battle is lost.  Management then calls for a rewrite, and everyone rushes in to write another application. . . using the same habits and techniques.

Recognizing Design Decay

It’s quite easy to recognize design decay.  Refer to this article on code smells for a little background.  If you smell something in the code, don’t be afraid to investigate.  Look for the root cause of the smell.  If the smell is duplicated code, look for a way to refactor out the duplication.  The key is to identify the design problem so that it can be discussed.  In a team environment, it’s important to discuss smells as an opening for improvement.  Weird bugs are another symptom of design decay.  If the system attracts strange bugs caused by unrelated code in other parts of the system, there may be unintentional coupling.  Confusing code is a smell.  If you can’t look at the code and know what it is doing, it may be rotten code.  Well-designed code will read like a flowing sonnet to a programmer.  Decaying code won’t read at all and has to be stepped through (in a debugger) just to get a feel for what purpose it serves.  If you’re not using Resharper (for C#), you don’t know what it’s like to always know exactly who is calling a particular public method; you have to compile just to find things wrong with the code.  Do a small experiment:  download the Resharper trial and open an existing solution with it.  Then walk through a code file and review the warnings Resharper flags.  It will find private methods that are dead (not called by anything), member and local variables that aren’t initialized or even used, etc.  Resharper has on-the-fly code checking to keep you safe from most code bacteria so that you can focus on the actual design.  In conclusion, always be on the lookout for code that is calling out for improvement.

Don’t Fear Code Changes

In many programming shops, this is easier said than done.  If your software has no automated tests, there is cause for fear.  You have no feedback.  If you make a code change, it might take a long time for you to know that the change didn’t have any side effects.  Fear of changing code is a design smell.  It means that you don’t fully understand what the code does.  If you understood completely what the code does, it would be trivial to change it.  The solution is to implement practices that make this fear go away.  If you had test coverage for the code in question, it would be easier to change.  If the code were simpler (a 10-line method instead of a 300-line method), it would be trivial to change.  Not fearing code change might not be an option when confronted with a mountain of technical debt, but conquering this fear is key to producing software quickly.  Applications that are write-only end up rotting and dying.  Software that is easy to change can endure for a long time.
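The feedback loop that dispels the fear can be as small as this (an NUnit sketch; Invoice and its discount rule are hypothetical):

```csharp
using NUnit.Framework;

[TestFixture]
public class InvoiceTester
{
    // With a test like this in place, changing the discount logic is
    // no longer scary: run the suite and know within seconds whether
    // the change had side effects.
    [Test]
    public void LargeOrdersGetTenPercentDiscount()
    {
        Invoice invoice = new Invoice(1000m); // hypothetical type
        Assert.AreEqual(900m, invoice.DiscountedTotal());
    }
}
```

One small test doesn't conquer a mountain of technical debt, but each one makes the next change cheaper.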

Ruthlessly Refactor

Use every opportunity to improve the design of existing code.  Even a little technical debt is bad and incurs interest charges.  Refactoring is not an investment in your code, it is a payment.  Slinging code on the credit card runs up technical debt.  Refactoring actually pays for it.  Paying as you go is the best balance for a sustainable software product.  Refactor to get your technical debt to zero, and constant refactoring will ensure your monthly balance is paid off every month even if you do encounter a small amount of technical debt from time to time.

Improve the Design

Wasn’t that the point of this article, you say?  Yes, but it is hard to do if you are used to living in a world of technical indebtedness.  Your team may be so busy producing code that you don’t have time to improve the way you produce code.  <analogy>One lumberjack sawed away at a tree all day.  Another sharpened his saw after every tree and delivered several trees in a day.</analogy>  The smart lumberjack took time to improve his methods and saw his productivity soar above his peer’s.  Are we sometimes too busy doing things to find time to improve the way we do things?  This is counterproductive.  Producing software has to include constant retrospection to identify things that need to be improved.  If aspects of the software are hard to work with, take the time to improve that portion of the software so that you can move more quickly through it.  We can talk about improving the design all day, but eventually we actually have to do it, and that may include throwing away or replacing some code.  Don’t consider this a loss.  Lines of code don’t cost anything.  They just represent the design ideas that you spent so much money fleshing out.

Research and Development

Car manufacturers have entire departments dedicated to finding better ways to do things.  If a car company spent all its resources producing cars and no time finding better ways to produce cars, it would quickly go out of business.

Apply this lesson to your software team and constantly improve your design.

Podcasts I listen to – level 100

I was listening to Internet radio before the term "podcast" was coined.  With the term, we have an explosion of Internet radio and listenership.  I've been asked several times what podcasts I follow.  I'll admit, there is limited time for listening, so over time, I have dropped some shows and picked up some others.  Here is my current list:

How to produce a software product quickly, part 3 – level 300

In my first two installments of this series, I talked about:

These first two guidelines are very important.  The first drives efficiency, and the second drives effectiveness.  Even doing these first two well can leave a team spinning its wheels, however.  If parts 1 and 2 are fully implemented, we are now certain that the team is focusing on the small portion of work that delivers great value.  We are also sure that the team's time isn't being wasted with nonessential or repetitive tasks.  What's next?

Part 3: Use Powerful Tools

A skilled craftsman needs good tools.  First, the craftsman needs the correct tools.  Second, the craftsman needs those tools to be powerful.  A craftsman plowing along with inadequate tools is exerting more effort than necessary.  Why use a hammer and nail when a nail-gun is available?  Why use a screwdriver and wood screws instead of a power drill?

Workstation power

This is the most obvious, but sometimes neglected, tool for a software developer.  Does it make sense to have a highly-paid professional on staff using a mediocre workstation?  The answer, of course, is no: the programmer will just spend time waiting on the machine.  The newest member of my team came on board with a 3.4GHz Pentium D (dual core) workstation: 1GHz FSB, a 10,000 RPM SATA drive, and 2GB of RAM.  He had never seen Visual Studio 2005 install in 10 minutes before.  Compared to what I'm paying him, his workstation cost pennies.  He spends less time waiting on compiles, and the local build runs very fast as well.  In my mind, it is well worth the cost.

IDE & related tools

Many programmers with whom I talk use Visual Studio.  Just Visual Studio.  Some haven't been exposed to other tools for software development.  First and foremost, Visual Studio is a pretty good solution/project linking and compiling package.  It's pretty poor at helping with the coding process, though.  There are a few neat extras you can unlock through SHIFT+ALT+F10 in 2005, but they are sparse.  Install Resharper into the IDE, and it comes alive.  It does use more RAM, but RAM is cheap, and the boost it provides is more than worth it.  Without Resharper, Visual Studio is just a glorified text editor that compiles.  With Resharper, you know about compile errors on the fly, and unused code is grayed out, so you know it's safe to delete. 

Team tools

It's one thing to give an individual programmer the best tools money can buy.  It's another to focus on the team tools.  There are some tools that all members of the team depend on.  Besides the network connection and the printer, I'm talking about a revision control system and a build server.  It's important to have a quality revision control system.  The team needs to be able to rely on the branching and merging capabilities as well as the commit transactions.  My team uses Subversion.  It has proven itself time and time again. 

The build server is another shared asset.  It must be fast as well since every member of the team will have to wait on it at some point.  The build server can either have the build script run on it manually, or a CI server can run the build when new code is checked in.  Either way, the build needs to run fast, be configurable, and be reliable.  My team uses CruiseControl.Net with NAnt and MSBuild.
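For a taste of what such a build script can do beyond compiling, here is a minimal NAnt target of the kind a CI server can run on every check-in (a sketch; the project name and paths are hypothetical):

```xml
<?xml version="1.0"?>
<!-- Minimal NAnt build file sketch; names and paths are hypothetical. -->
<project name="MyApp" default="package">
  <target name="package">
    <!-- NAnt's built-in zip task: bundle build output for deployment. -->
    <zip zipfile="build\MyApp.zip">
      <fileset basedir="bin">
        <include name="**/*" />
      </fileset>
    </zip>
  </target>
</project>
```

A real script would chain targets for compiling, running the NUnit suite, and packaging, so a single command (or check-in) exercises the whole pipeline.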

Other general, programming and debugging tools

There are many tools that help with a single part of development.  The IDE is the tool that stays open all day long, but other tools are just as critical for specialized tasks.  There is no way I can list them all, but I'll give a run-down of the tools I use:

  • Resharper – productivity add-in for Visual Studio (refactoring, code generation, etc).
  • Lutz Roeder's Reflector – decompile .Net assemblies, including the Framework itself.  It's better than documentation.
  • CruiseControl.Net – build server for .Net
  • RealVNC – share the desktop.  Good for pair programming or informal presentations.
  • RssBandit – Keep up with the programming industry through RSS news, blogs, etc.
  • GAIM – Universal IM client.  Use it to shoot over snippets of code, svn urls, etc. to team members.  Supports all IM services.
  • Google Desktop – Launch programming tools quickly without using the start menu.  Find code files easily.
  • TestDriven.Net – run/debug unit tests right from the code window.  Integrates with NCover for code coverage metrics.
  • Windows Server Admin Pak – Remote desktop to many servers with the same MMC snap-in.
  • TortoiseSVN – Windows client for Subversion.
  • NAnt – build scripting.  In fact, you can script anything with XML using this.  We even use it for zipping up files.
  • NUnit – unit testing framework.
  • StructureMap – IoC container.  Uses attributes for linking interfaces with implementations.  Minimizes Xml configuration.
  • Who's Locking – find out what process has your precious assembly locked.
  • DebugView – hook into the debug stream locally or on a remote server.
  • DiskMon – monitor what's going on on the disk.
  • RegMon – see what registry keys are being read.
  • FileMon – see what files are being read.
  • Subversion – fast becoming the standard for source control.
  • Ethereal – monitor network traffic.  Find out exactly when and how often your application is communicating over the network.
  • VirtualPC or VMWare – test the software on all supported platforms.
  • Visual XPath – quickly create the correct xpath to get a node from an xml doc.
  • Regulator – Hash out regular expressions quickly with this tool by Roy Osherove.

The list is never ending

Always be on the lookout for tools that will save time or speed you up.  Tools are always improving.  It's important to talk with other professionals from different companies to see what tools are helping them.  Chances are that those tools can help you as well.