What makes a good general-purpose development platform?
Easy to install
Easy to configure
Integrates well with simple tools
Easily extended to make simple tools
Easy to debug
Easy to create test automation
All configuration stores easily in source control
Others I’m forgetting
I’ve heard many times that SharePoint can be used as a development platform. Technically, that statement is correct, but there is so much friction involved that many get frustrated in the process. Consider the dependencies:
Must run on a server OS, not XP or Vista. Full stop.
After that one, I don’t see a need to go on. In a team environment, every developer needs to have a dedicated development environment. What does that mean? Everything necessary to build and run the system fits on the developer workstation. Why not run a server OS on the developer workstation? Perhaps. I’ve done it before, but there is other friction associated with that. Why not use a server OS VM to run Sharepoint on the box? Perhaps, but again, more friction.
A development environment should be a pleasure to work with, and that requires minimizing friction. The harder it is to write a batch file that completely builds, tests, and deploys the system, the harder it is to develop for that platform.
My purpose for this post isn’t to say that “SharePoint sucks”; notice the worst I say is that it’s not a good development platform. For content management and list-based stores, it’s great, but as a development platform, it leaves plenty to be desired.
In the comments below, astute readers clarify that SharePoint excels as a content management system but not as a general development platform. I would yield that point. In fact, it makes more sense to me for Microsoft to market the product for that niche and have users singing its praises than to present it as a higher-level ASP.NET platform and have users (see comments below) lamenting the pain involved.
I'm going to present you with a very exciting new feature of StructureMap that is available in the 2.0 release. You can download StructureMap from SourceForge.
There are plenty of articles about StructureMap's service location capabilities, and the documentation is quite good, but I'll throw out the bread-and-butter usage scenario for service location. Here it is.
I have an ICustomerRepository interface and an "impl" class that implements it, CustomerRepository. As the names suggest, the interface returns an array of Customer objects given a whole or partial phone number. Here is the full code.
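The full listing was an image in the original post, so it isn't reproduced here. A minimal sketch of the shape described above, assuming an in-memory store; the method name GetCustomersByPhoneNumber and the Customer fields are my own illustration, not from the post:

```csharp
using System;
using System.Collections.Generic;

public class Customer
{
    public string Name;
    public string PhoneNumber;
}

public interface ICustomerRepository
{
    // Returns every customer whose phone number contains the given digits.
    Customer[] GetCustomersByPhoneNumber(string wholeOrPartialPhoneNumber);
}

public class CustomerRepository : ICustomerRepository
{
    private readonly List<Customer> _customers;

    public CustomerRepository(List<Customer> customers)
    {
        _customers = customers;
    }

    public Customer[] GetCustomersByPhoneNumber(string wholeOrPartialPhoneNumber)
    {
        // A real implementation would query a data store; this sketch filters in memory.
        List<Customer> matches = new List<Customer>();
        foreach (Customer customer in _customers)
        {
            if (customer.PhoneNumber.Contains(wholeOrPartialPhoneNumber))
                matches.Add(customer);
        }
        return matches.ToArray();
    }
}
```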
Now that you get the gist of it, I'm going to use StructureMap to enable the Model-View-Presenter pattern in a cleaner way with ASP.NET UserControls. First consider the following usage:
I have a page that has a complicated piece which has been delegated to a usercontrol-presenter pair. The page, of course, will embed this usercontrol in the appropriate place. Immediately you can spot the problem: The "Foo" view is responsible for creating the "Widget" view. Then the Widget view creates its presenter and uses it. This is too much responsibility for both views.
Now consider an alternative:
Now notice that the FooPresenter is responsible for creating and using WidgetPresenter. WidgetPresenter, in turn, uses StructureMap to service-locate its view, which implements IWidgetView. Because Widget.ascx.cs implements IWidgetView, this wiring works easily. The StructureMap change that made this possible is the following usage of the configuration. Here is the code:
Now that we can service-locate ASP.NET usercontrols, we can inject them as well. The next step is having the WidgetPresenter receive IWidgetView in the constructor. StructureMap will chain-create the usercontrol in order to create WidgetPresenter. Now, our presenter is in control, and the views can concentrate on presenting information to the user and not worry about control flow.
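The configuration listing in the post was a screenshot. The general shape of StructureMap's attribute-based registration at the time looked roughly like this; PluginFamily, Pluggable, and ObjectFactory are real StructureMap API, but treat the exact wiring and member names as my sketch, not the author's verbatim code:

```csharp
using System.Web.UI;
using StructureMap;

// Mark the interface as a plugin family with a default concrete key.
[PluginFamily("Default")]
public interface IWidgetView
{
    void ShowMessage(string message); // illustrative member
}

// The usercontrol's code-behind registers itself as the default plugin.
[Pluggable("Default")]
public partial class Widget : UserControl, IWidgetView
{
    public void ShowMessage(string message)
    {
        // Set a label or other control on the usercontrol here.
    }
}

public class WidgetPresenter
{
    private readonly IWidgetView _view;

    // StructureMap chain-creates the usercontrol to satisfy this constructor.
    public WidgetPresenter(IWidgetView view)
    {
        _view = view;
    }
}

// Service location (from FooPresenter, for example):
// WidgetPresenter presenter = ObjectFactory.GetInstance<WidgetPresenter>();
```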
2.0 Framework assemblies have been NGENed to reduce JIT compilation. This helps with memory usage when running multiple AppDomains in a single process (multiple applications on a single web server).
Another new feature in ASP.NET 2.0 is the AppDomain unloader. This is configurable by time of inactivity. If an application is idle for that amount of time, the AppDomain is unloaded to free up memory. This is configurable in the <hostingEnvironment /> element (idleTimeout setting). He demonstrated this feature by attaching the EventLog with the health monitoring feature to write an event when the AppDomain unloads. This feature is great for hosters because it allows them to serve more customers on a single server (customers who have a small application with light traffic).
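In web.config, that setting looks roughly like this (20 minutes is an example value; the default is Infinite):

```xml
<system.web>
  <hostingEnvironment idleTimeout="00:20:00" />
</system.web>
```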
Small assemblies still take up 64 KB of memory. Some customers run into problems when loading many, many (hundreds of) small assemblies in a worker process. This leads to memory waste because there will be unused space lost between assemblies. This situation arises when an application has way too many assemblies. The real solution is to refactor the solution to collapse code into fewer assemblies, but hosters can’t always compel their customers to do that, so Microsoft is providing a mitigating solution for this scenario.
This scenario can also happen with applications that use the App_Code directory, where many small assemblies are created at runtime.
SGEN is a new utility to combine assemblies into a smaller number.
Using Web Application Projects avoids this problem scenario because developers decide how many assemblies their application needs.
Large directory structures: ASP.NET relies on file-change notifications. These can hiccup with large directory structures. The problem is compounded when the files are on a network share or network-attached storage.
One thing we can do is change the registry to set FCN (file-change notifications) to “1”. This stops ASP.NET from spamming the network share with FCNs. A side effect is that changing the web.config won’t cause an AppDomain recycle automatically. This is a solution for allowing the application files to live on a network share, but it’s a change to operational behavior. No longer would the website be recycled automatically; when changing files, the recycle would have to be done manually.
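The registry value in question looks roughly like this. Hedge: on later framework versions this is exposed as the FCNMode value under the ASP.NET key; confirm the exact key and supported values for your framework version before touching production servers, and back up the registry first.

```
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET
    FCNMode (DWORD) = 1    ; 1 disables file-change notifications
```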
For most of us, this problem will never come up, but it does occur with very large directory structures.
ASP.NET also recycles AppDomains when too many directory changes or too many file changes occur.
Other file-system changes:
New in ASP.NET 2.0, a directory deletion now causes an AppDomain recycle. Being aware of this can be beneficial if you need to delete directories.
A workaround for this is to make a directory junction using linkd. This makes a virtual folder node. File-change notifications don’t flow over virtual folders, so ASP.NET won’t receive the event, and the AppDomain won’t be recycled.
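linkd.exe ships in the Windows Resource Kit; the usage is `linkd <junction> <target>` (the paths below are examples):

```shell
REM Replace the physical uploads folder with a junction pointing to a folder
REM outside the web root, so deletions there don't recycle the AppDomain.
linkd C:\inetpub\wwwroot\myapp\uploads D:\data\myapp-uploads
```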
Asynchronous page processing: If a page has a long-running task (like calling a third-party vendor’s web service), the page can be set to run asynchronously. This releases the current thread back to the thread pool while the long-running call happens. When the long-running call finishes, the request grabs a thread again and completes. This allows the thread to be available in the thread pool for other requests when, otherwise, it would be sitting there doing nothing. This helps application performance with a smaller number of threads.
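A minimal sketch of the ASP.NET 2.0 asynchronous page API. The page must also declare Async="true" in its @ Page directive; the vendor service proxy and the control name are assumptions for illustration:

```csharp
using System;
using System.Web.UI;

public partial class SlowPage : Page
{
    private SomeVendorService _service = new SomeVendorService(); // hypothetical web-service proxy

    protected void Page_Load(object sender, EventArgs e)
    {
        // Register begin/end handlers; the request's thread returns to the
        // thread pool between Begin and End.
        AddOnPreRenderCompleteAsync(BeginVendorCall, EndVendorCall);
    }

    private IAsyncResult BeginVendorCall(object sender, EventArgs e, AsyncCallback callback, object state)
    {
        return _service.BeginGetData(callback, state);
    }

    private void EndVendorCall(IAsyncResult result)
    {
        // Runs when the long-running call completes, on a freshly acquired thread.
        resultLabel.Text = _service.EndGetData(result); // resultLabel is an assumed Label control
    }
}
```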
Unhandled exception behavior:
ASP.NET 1.1 suppressed unhandled exceptions on child threads.
ASP.NET 2.0: The process is terminated. This is a big change.
If this is a real problem for you: go to the aspnet.config file (in the framework directory) and set <legacyUnhandledExceptionPolicy enabled="true" />
To track down the error, you can trap the unhandled exception event and log what happened before the process shuts down.
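A sketch of trapping that event and logging before the process dies (C# 2.0 anonymous-delegate style; the event log source name "MyWebApp" is an example):

```csharp
using System;
using System.Diagnostics;

// Registered once at startup, e.g. from Application_Start in Global.asax.
public static class CrashLogger
{
    public static void Register()
    {
        AppDomain.CurrentDomain.UnhandledException +=
            delegate(object sender, UnhandledExceptionEventArgs args)
            {
                Exception ex = args.ExceptionObject as Exception;
                string message = (ex != null) ? ex.ToString() : "Unknown unhandled exception";
                // Log before the runtime terminates the worker process.
                EventLog.WriteEntry("MyWebApp", message, EventLogEntryType.Error);
            };
    }
}
```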
Caching
ASP.NET sets a limit on cache size per process. This is a problem if you try to shove 1 GB of stuff into the cache. Cache scavenging starts at 90% usage of the cache.
We can use a perf counter to track this: ASP.NET Apps v2.0.50727 – Cache Total Entries. What might happen is that you shove a lot of stuff into the cache, and the system immediately starts to clean it up. We can also avoid sliding expirations, because they can cause stuff never to drop out of the cache.
Cache limits: 800 MB is a normal limit for cache. As an ASP.NET application runs over many days, virtual memory can become fragmented. For a 32-bit box with 2 GB of memory, 800 MB is safe given fragmentation. The x64 architecture makes this limit go away.
You can go into the application pool settings for IIS and change the limits. There is also a new <caching /> config section that can be used.
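For example, the new config section exposes the limits directly (values here are illustrative; 838860800 bytes is the 800 MB figure mentioned above):

```xml
<system.web>
  <caching>
    <cache privateBytesLimit="838860800"
           percentagePhysicalMemoryUsedLimit="90" />
  </caching>
</system.web>
```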
First, you can download the VS 2005 add-in here. There isn’t an automatic port of a VS 2005 website, but it was easy enough to do. Here were my steps:
Add new web application project to my solution.
In Win Explorer, copy the entire contents of my web site project to the web application folder.
Delete my web site folder.
Remove the website from my solution.
Show all files in the web application.
Explicitly include everything except bin and obj
Add any assembly references necessary to the new project.
Set any post-build events that you’ve been jerry-rigging up to this point.
Run a build. You’ll notice it fails on control declarations in code-behind files.
Right click on the web project and run “Convert to Web Application”. This adds an explicit partial class to your code-behinds that hold your control declarations from the markup file.
Run the build again. It passed for me at this point. I ran my application, and all was well.
Run all unit tests and integration tests. They all passed for me.
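For reference, the explicit partial class that “Convert to Web Application” generates per page looks roughly like this (the page and control names are examples, not from my project):

```csharp
// Default.aspx.designer.cs (generated by "Convert to Web Application")
public partial class _Default
{
    // The control declarations that VS 2003 used to inject into the
    // code-behind itself now live in this generated partial class.
    protected System.Web.UI.WebControls.Label messageLabel;
    protected System.Web.UI.WebControls.Button submitButton;
}
```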
This project model is so much easier to use for real web applications (that aren’t just web _sites_). Kudos to the ASP.NET team for getting this patch out.
If you are using ReSharper 2.0 (beta), you’ll notice a slight difference in navigating to files. CTRL+N will locate the ascx.cs and ascx.designer.cs files since they are C# code files. To get to the ascx files, you’ll need to use CTRL+SHIFT+N.
One of the biggest criticisms of VS 2005 was the radical change in the way web applications had to be set up. For the folks doing simpler websites, this change was welcome, but for the folks doing complex web applications, it caused a bit of trouble. It’s clear that some folks like the new way, and some like the way VS 2003 handled it (minus the mandatory control declarations).
I use NUnit for my automated tests. Because of that, all my tests are unit tests, right?
WRONG! The name of the testing framework has no bearing on the type of test you have. NUnit is a framework for running automated tests. You _can_ write unit tests with it, but you can also write integration tests as well as full-system tests with it. A unit test is a special type of developer test and can be done with or without NUnit.
A unit test tests a single unit of code.
How big is a unit? Well, that’s up to you, and there is no scientific answer. Typically, you only give a class a single responsibility. The class may have several methods, since the class may need to do several things to accomplish that single responsibility. The class may have to collaborate with several other classes to accomplish its purpose. A unit of code is an identifiable chunk of code needed to accomplish part of a responsibility. Is that vague enough for you? In my example below, I’ll clear this up a bit.
A unit test isolates the code being tested.
A class will need to talk to other classes. That’s a given. Sometimes this is ok for testing, and sometimes this just gets in the way. It might be ok to talk to a class that just builds a string (like StringBuilder), but it’s not ok to talk to a class that grabs information from a configuration file. In a unit test, you need to take environmental dependencies out of the equation so that a pass or failure is truly dependent on the code being tested. You don’t want the test failing because the configuration file wasn’t in the right spot. There are plenty of techniques available for this. To start, you need to code against interfaces and use fake objects like stubs and mocks. I like the Rhino Mocks framework for this.
Here are some dependencies that will frequently need to be simulated for unit testing:
Config files
Registry values
Databases
Environment variables
Machine name
System clock (Yup. Even that has the potential to get in your way.)
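As one example, the system-clock dependency can be broken with a small interface; every name below is my own illustration, not from EZWeb:

```csharp
using System;

public interface ISystemClock
{
    DateTime Now { get; }
}

// The production implementation just delegates to the real clock.
public class RealSystemClock : ISystemClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// A stub for tests: "now" is whatever the test says it is.
public class StubSystemClock : ISystemClock
{
    private readonly DateTime _fixedTime;
    public StubSystemClock(DateTime fixedTime) { _fixedTime = fixedTime; }
    public DateTime Now { get { return _fixedTime; } }
}

// Code under test depends on the interface, not on DateTime.Now directly.
public class GreetingService
{
    private readonly ISystemClock _clock;
    public GreetingService(ISystemClock clock) { _clock = clock; }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}
```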
Example:
This example will show a real web user control that I’ve unit-tested. This is not theoretical. This is inside my EZWeb software. The purpose of the following screen is to maintain a few pieces of information for the page being viewed. The user can set the title of the page and some other things.
I’ve used the Model-View-Presenter pattern to make unit testing this easier. Obviously, if all my code is in the code-behind of the ASCX, then I won’t be able to test any of it, because I can’t run that code outside of the ASP.NET runtime. If you aren’t familiar with the MVP pattern, take some time now to read up on it. The presenter is the controlling class that will be tested. The code-behind becomes very dumb. The code-behind will implement my view interface and be responsible for taking information and setting the correct control. The view is very small, and all the intelligence is in the controlling class (the presenter). The presenter is where the bugs will hide, so I’ll unit-test that class. The model is represented by an interface, IPageConfig, that you’ll see being referenced.
The following example shows a unit test of the code that gets data from the model and publishes it to the view. The textboxes and drop-downs need to be set properly. This is not the full code. The full code also reacts to the save button being clicked, taking the modified information and saving it.
Here is the view interface:
namespace Palermo.EZWeb.UI
{
    public interface IPagePropertiesView
    {
        string Title { get; set; }
        bool HasParent { get; set; }
        string Template { get; set; }
        string Theme { get; set; }
        string Plugin { get; set; }
        string Parameter { get; set; }
        bool IsPostback { get; }
        DictionaryList GetTemplateChoices();
        void SetTemplatesDropDown(DictionaryList list);
        DictionaryList GetThemeChoices();
        void SetThemesDropDown(DictionaryList list);
        DictionaryList GetPluginChoices();
        void SetPluginDropDown(DictionaryList list);
        void EnableTitle(bool enabled);
        void EnableTemplate(bool enabled);
        void EnableTheme(bool enabled);
        void EnablePlugin(bool enabled);
        void EnableParameter(bool enabled);
        void ReloadParent();
    }
}
Here is the code-behind that implements the view interface (truncated):
Notice that the LoadConfiguration() method checks for postback (through the view) and then uses the MODEL to set pieces of information on the VIEW. You may think that this code is boring, but it’s essential for the behavior of the screen.
Now for the test. Note that we’re simulating the view and the ICurrentContext interface, since these are collaborators. The ICurrentContext provides the MODEL to the Presenter:
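The test itself was posted as a screenshot. A sketch of its record/replay shape with Rhino Mocks follows; the presenter class name, the ICurrentContext member, and the exact expectations are my assumptions based on the description, not the original code:

```csharp
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class PagePropertiesPresenterTester
{
    [Test]
    public void ShouldPublishPageConfigurationToTheView()
    {
        MockRepository mocks = new MockRepository();
        IPagePropertiesView view = mocks.CreateMock<IPagePropertiesView>();
        ICurrentContext context = mocks.CreateMock<ICurrentContext>();
        IPageConfig pageConfig = mocks.CreateMock<IPageConfig>();

        // Record the expected conversation between presenter, MODEL, and VIEW.
        Expect.Call(view.IsPostback).Return(false);
        Expect.Call(context.GetPageConfig()).Return(pageConfig);
        Expect.Call(pageConfig.Title).Return("My page title");
        view.Title = "My page title"; // a property set recorded as an expectation
        mocks.ReplayAll();

        PagePropertiesPresenter presenter = new PagePropertiesPresenter(view, context);
        presenter.LoadConfiguration();

        // Fails the test if any expected call was missed or received wrong input.
        mocks.VerifyAll();
    }
}
```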
Your first thought might be that this unit test method is too long. It certainly pushes my comfort level as well. I could have chosen a small one for this example, but I chose my largest one instead. I’ve seen other unit testing examples that are so trivial that they don’t demonstrate much. In this sample, I chose one of the most difficult things to unit test: a UI screen. Notice that I’m using Rhino Mocks to set up my fake objects. The call to “mocks.VerifyAll()” does a check to ensure that the collaborators were called with the correct input. After all, my presenter method is in charge of getting information from the MODEL and publishing it to the VIEW. If you spend some time going over this test, you can see some of the rules the code has to live by. One of the side effects of the unit test is documentation of the code (developer documentation). At this point, I can refactor my method knowing that I have this test as a safety net.
How do I unit test my legacy code?
Change it. This screen started out several years ago with all the code in the code-behind class. It was impossible to test that way. I had to refactor to the MVP pattern to enable testing. I had to break some things away by inserting an interface so that I’d have a seam to break dependencies. In short, you must refactor your existing code to get it to a point where it is testable. The reason it’s not testable is that it’s tightly coupled with its dependencies. I hope by now that the words “loosely coupled” are recognized as “good” and “tightly coupled” as “bad”. Testable code is loosely coupled. Loosely coupled code is testable. And now the big leap: testable code == good.
RJ Dudley has a great quickstart on getting my latest release of EZWeb up and running. If you need a website fast and easy, check out the quickstart and EZWeb. If you need a website easily modified by a family member or friend, this one is for you.
This is a warning that the VirtualPathUtility class in .NET 2.0 takes some time to understand the rules under which it operates. It wasn’t obvious to me, and I had to do some investigation to learn its behavior. I expected the following test to pass. It didn’t.
string parentPath = "/websiteRoot";
string subDirectory = "newPage";
string expected = "/websiteRoot/newPage";
string actual = VirtualPathUtility.Combine(parentPath, subDirectory);
Assert.AreEqual(expected, actual);
I expected it to actually “Combine” the base path and the path that was relative to the base path. After all, if these combine, the product will be the sum of both, right? Not really. Here’s my test output:
String lengths differ. Expected length=20, but was length=8.
Strings differ at index 1.
expected:<“/websiteRoot/newPage”>
but was:<“/newPage”>
————^
Thanks to the first comment below, I realized that this method tries to act like a web browser in resolving relative paths. If I had added a “/” to the end of “/websiteRoot”, the test would have passed. As it stands, it assumes that “websiteRoot” is a page and not a directory.
I’ve done quite a bit with Whidbey and .NET 2.0 since Beta 1 hit in mid-2004. I was one of the early adopters that submitted bug reports as well, and I’ve tech-edited several .NET 2.0 books. I’m mostly impressed with .NET 2.0, but there are some aspects that I’m disappointed with (you can read past ones on my blog).
I’ve been wrapping some of my existing code with the built-in providers in ASP.NET. I’m finished with the MembershipProvider, but all I needed was 4 of the methods, and the abstract class has over 20. What a waste. I’m in the process of wrapping my code with the SiteMapProvider, and that one looks more civilized.
Microsoft has touted its Provider pattern as a way to configure application behavior. They’ve touted it as a custom mix of Strategy and Plugin. It’s true that if you change the configuration file, a different provider will be hooked up, and the behavior of the application will change.
What’s not widely known is that _every_ provider is a Singleton. There is no getting around it. The biggest implication of this is that where before only a few had to worry about writing thread-safe code, now even the hobbyist has to be aware of threads when creating a provider. There is no way to configure them away from being singletons, either.
Using them with ASP.NET is the biggest concern because every request runs on a different thread. Handlers are per-thread, but IHttpModule(s) have AppDomain scope just like providers. What this means is that providers are not pluggable components... they are services and must be treated as such. They are entities that will serve multiple customers. Each customer doesn’t get his own instance of the provider; one instance is shared for the life of the application. I’d prefer to have one instance per use.
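To make the implication concrete, here is a minimal illustration of the discipline a provider author now needs; this is my sketch of shared mutable state under concurrent requests, not one of the built-in providers:

```csharp
public class VisitCountingProvider
{
    // One instance serves every request, so this field is shared across threads.
    private int _visitCount;
    private readonly object _sync = new object();

    public int RecordVisit()
    {
        // Without the lock, concurrent requests could interleave the
        // read-increment-write and lose counts.
        lock (_sync)
        {
            _visitCount++;
            return _visitCount;
        }
    }
}
```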
Early on in the Beta cycle, Rob Howard had the same thoughts as me: “This is a disastor waiting to happen. We you have an API that is funtionally wrapped up into a package a static methods (I know the Role Provider class isn’t static, but it’s access is) you have a service. . . ” 4/16/2004
You can verify my facts here by reading this article on MSDN (search for “thread”).