Code the Town! Investing in the next generation of programmers in Austin, TX

Austin, TX is a hotbed for technology.  You can find a user group for just about any technology and purpose, meeting almost any day of the week.

And now there is a group at the intersection of technology, giving back to the community, and helping the next generation of programmers.  Code the Town is a group that does just that.  Clear Measure and other companies are sponsors of the group.  The official description is:

“This is a group for anyone interested in volunteering to teach Hour of Code https://hourofcode.com/us in the Austin and surrounding area school districts. The goal is to get community volunteers to give the age appropriate Hour of Code to every student at every grade level. We want to have our own community prepare students for a technology-based workforce. We also want to build a community of professionals and students that have a passion for coding and teaching. We want to begin the Hour of Code in the high schools first. High school students would then be prepared to teach the younger students.  Once this group has momentum, it will be able to form motivated teams and use software projects done for local non-profit organizations to not only reinvest in our community but also to help our youth gain experience in software engineering.  Whether you are a student, parent, educator, or software professional, please join our Meet Up! This will be fun! And it will have a profound impact on the next generation.”

The long term vision is to create a sustainable community of professionals, educators, parents, and students that continually gives back to local community organizations through computers and technology while continually pulling the next generation of students into computer programming.
It all starts with some volunteers to teach students the basics of computer programming.  In the 1990s, the web changed the world.  Now we have hand-held smartphones and other devices (TVs, bathroom scales, etc.) that are connected to computer systems via the internet.  In the next decade, almost every machine will be connected to computer systems, and robotics will be a merging of mechanical engineering and computer science.  Those who know how to write computer code will have a big advantage in a workforce where the divide between those who build/create and those who service what is created may grow even wider than it already is.
Code the Town will focus on introducing students to computer programming and then pull them together with their parents, their teachers, and willing community professionals to work on real software projects for local non-profits.  In this fashion, everyone gets something, everyone gives something, and everyone benefits.  If you are interested in this vision, please come to the first meeting of Code the Town by signing up for the Meetup group.

What are the Alt.Net principles? – my answer

I’ll be attending the AltNetConf.  Convenient for me that it’s in Austin, TX.  It’s an open space conference, and I consider it the founding conference of a conversation that is “Alt.Net”.  I’ll be proposing the topic: What are the Alt.Net principles?

Alt.Net isn’t well defined yet.  It’s not even at a point where I can explain what it is and have others agree with me.

Since I know the guy who originally coined the term, I have some insight into what David meant, but I’m going to propose some principles that, together, should be the definition of Alt.Net.

First, Alt.Net inherits Agile, and IS Agile.  Therefore:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan 

Extended principles:

  • Excellence and quality
    In a world of wizard-generated, disposable software (disposed 2 years later by necessity, not choice), Alt.Net focuses us on excellence in the software we create.  While the world may pay for and accept software that lives for 2 years before becoming utterly unmaintainable, we don’t accept shoddy work.  We know that we can deliver high quality software faster than others can deliver low quality software, so we accept no less than the highest in quality.  We strive for excellence through solid engineering practices and a high level of software education.  Coincidentally, Extreme Programming helps in this area, but Alt.Net does not specifically _mean_ XP.
  • Alternative Vendors
    A common theme in many .Net shops is that they are “Microsoft” shops.  In other words, if it doesn’t come from Microsoft, they won’t use it.  This makes no sense.  Microsoft is not a large enough company to be the vendor to the whole world.  .Net is a great platform, and we as a community have chosen the platform and choose to participate in this segment of the industry.  We strongly believe that 3rd party vendors complement the .Net platform in a way that can contribute to excellent working software.  In fact, some 3rd party offerings are superior to Microsoft’s offering.  For instance, in striving for excellence in an e-commerce website, a team may choose a mature O/R Mapper like NHibernate to accelerate team speed and produce a flexible data layer; however, Alt.Net does not _mean_ ORM.  Open source software is a source of excellent 3rd party alternatives built on the .Net platform, and it should be used over Microsoft alternatives when it contributes to excellence; however, Alt.Net does not _mean_ open source.
  • Joy in our work
    We know that we will produce better software if a team is motivated and morale is high; therefore, we use libraries, tools, and practices that add joy to the working environment.  We abandon or correct libraries, tools, and practices that make it a drag to work on the software.  For instance, many find that Visual Studio is a bit slow to work with and that adding Resharper to the IDE adds a high level of joy while working with .Net code; however, Alt.Net does not _mean_ Resharper.
  • Knowledge
    We know that we will never know everything there is to know.  We strive for a greater understanding of software through studying successes and failures of the past as well as the present.  We educate ourselves through many sources in order to bring the most value to our clients.  We keep up with other platforms, such as Java and Ruby, so that we can apply their good ideas to .Net development and increase the quality of our .Net software.  The technology is always changing, but the knowledge accumulates.  We know that the knowledge applies no matter how the technology changes.  With knowledge comes humility because without humility, knowledge would pass us by.

The above are principles, so they are intentionally abstract.  Below, I’ll list some items that are concrete.  These items apply the principles but are more directly applicable:

  • Read more than just MSDN Magazine and MS Press.  Authors like Feathers, Fowler, Martin, Evans, etc. have a lot to give (Knowledge).
  • Use Resharper.  It makes working with Visual Studio a (Joy).  But if another vendor comes along that does even better than JetBrains, consider switching.
  • Use NUnit over MSTest, Subversion over TFS SCC, Infragistics/Telerik over in-the-box controls, and RedGate over in-the-box SQL tools.  Each of these is a better alternative to what Microsoft provides (Alternative Vendors).  Use NHibernate over hand-rolled stored procedures and especially over DataAdapter/DataSet, but if EntityFramework proves to actually be superior to NHibernate in a meaningful way, consider using it.
  • Use a responsible application architecture.  Don’t put everything in Page_Load like you see demonstrated at MSDN Events (see the sketch after this list).  Use knowledge to create an application that can stand the test of time and not be rewritten every 2 years.  Deliver (high quality and excellence).
  • Automate every repetitive task: builds, tests, deployments, etc. (excellence and joy)
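
To illustrate the Page_Load point above, here is a minimal sketch of a code-behind that only delegates, so the behavior lives in a plain class that can be unit tested without spinning up ASP.NET.  All of the type names here are invented for illustration:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Hypothetical domain bits, just enough to make the point.
public class Order
{
    private readonly decimal _total;
    public Order(decimal total) { _total = total; }
    public decimal Total { get { return _total; } }
}

public interface IOrderRepository
{
    Order GetById(int id);
}

// A stand-in implementation; a real one would talk to the database.
public class InMemoryOrderRepository : IOrderRepository
{
    public Order GetById(int id)
    {
        return new Order(42.50m);
    }
}

// The real behavior lives in a plain class that is easy to test in isolation.
public class OrderScreenController
{
    private readonly IOrderRepository _repository;

    public OrderScreenController(IOrderRepository repository)
    {
        _repository = repository;
    }

    public decimal GetOrderTotal(string orderId)
    {
        return _repository.GetById(int.Parse(orderId)).Total;
    }
}

// The code-behind only wires up and delegates.
public partial class OrderPage : Page
{
    protected Label lblTotal;

    protected void Page_Load(object sender, EventArgs e)
    {
        OrderScreenController controller = new OrderScreenController(new InMemoryOrderRepository());
        lblTotal.Text = controller.GetOrderTotal(Request.QueryString["orderId"]).ToString();
    }
}

The point isn’t the specific names; it’s that the screen is a thin shell and the behavior sits somewhere a test can reach it.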

The concrete examples could go on and on, and I hope AltNetConf produces a long list.  I’ll be interested in having my proposed principles accepted by the community there or revised into something better.  Either way, I’d like to get to a point where there is an accepted definition of Alt.Net.

Focus on the core: the most important part of the application

Technologies are coming and going faster than ever before.  In this environment, how can we provide companies with a good return on their software investment?  Looking back, J2EE was all the rage.  Software executives were banking on J2EE and making significant investments.  The same thing happened with COM+, ASP 3.0, etc.  Managers were projecting significant savings by using these.  Now, where are the savings?  Many applications written with these technologies are being rewritten with newer ones.

Why?  Because the applications had no core.  By core, I mean the center of the application that describes the business domain.  Typically, these are classes and interfaces.  Creating classes using COM+ or J2EE doesn’t an application core make.  The core doesn’t care about surrounding technology.  The core is your domain model.  By its design, it’s the most important part of the application, but, done well, it’s portable.

Look around and see if you can relate to this:  A software team focuses much energy on making the database as good as possible, and they create stored procedures to pull back the data as quickly as possible.  They also consider the use cases of the screens that are necessary.  Using technology X for the presentation, they make database table designs and stored procedures that return exactly what the screen needs to show to the user.  Perhaps J2EE or COM+ is used in the passage of information from the database to the UI.  Perhaps Enterprise Java Beans or COM+ components perform some transformation or calculations necessary for the screens.

Take a step back and remove the screens.  Remove the database.  Is there any application left?  Can you point to any business rules or domain concepts left in the application after the presentation and storage components are removed?  In my experience, I’ve had to answer “no” more than once.  This is absolutely the wrong way to develop software.

Software systems should be resistant to climate change.  The technology climate is always changing.  The core is the most important part of the application, and it should be insulated against changes on the outside.  Over time presentation technologies have changed many, many times.  Data access technologies haven’t sat still either.  We still have the relational database, but the manner of using it is constantly changing. 

Software health check:  Take away the code used to talk to the database.  Take away every screen.  You should still have an application.  You should be left with your application core or domain model and domain services.  Everything should be intact. 

Handle changes in technology gracefully.  If your application has a healthy core, you will be able to upgrade to the next-generation UI.  You’ll be able to change your data access to use LINQ or an ORM without much impact.  If you don’t have a healthy core, any change in technology requires almost a wholesale rewrite.
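
As a rough sketch of what a healthy core can look like (the types below are invented for illustration), the domain model and the interfaces it needs compile without any reference to a UI or a database:

using System.Collections.Generic;

// The core: plain classes that describe the business domain.
public class Invoice
{
    private readonly List<InvoiceLine> _lines = new List<InvoiceLine>();

    public void AddLine(InvoiceLine line)
    {
        _lines.Add(line);
    }

    public decimal Total()
    {
        decimal total = 0;
        foreach (InvoiceLine line in _lines)
        {
            total += line.Amount;
        }
        return total;
    }
}

public class InvoiceLine
{
    private readonly decimal _amount;

    public InvoiceLine(decimal amount)
    {
        _amount = amount;
    }

    public decimal Amount
    {
        get { return _amount; }
    }
}

// The core declares what it needs from the outside world; ADO.NET, an O/R mapper,
// or a web service can implement this interface at the edge of the application.
public interface IInvoiceRepository
{
    Invoice GetById(int invoiceId);
    void Save(Invoice invoice);
}

Swap the UI from one presentation technology to the next, or the data access from stored procedures to LINQ or an ORM, and nothing in this core has to change.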

Any software system is a large investment for a company.  That software is expected to last a LONG time.  By focusing on the core of the application (domain model), the software will be able to weather changes in the technology climate.  By creating a healthy core, your software will be able to drop and adopt technologies as necessary.  Let’s stop rewriting and rewriting and start creating healthy software.

The inspiration for this post came from Jim Shore’s thoughts.

Baking requirements – Developing with raw ingredients is waste

I have learned an important lesson from my combined experiences at all the places I’ve worked.  That is:  raw requirements cause waste.  A term I’ve used (and have heard others use) is that requirements are either “baked” or “not baked”.  For a development team to plan an iteration, or a scope of delivery, the requirements need to be baked.  If we pull the development team into a planning session, we ensure the requirements are fully baked before the meeting.  Developers will be asking specific questions about the details of the requirements, and answers need to be readily available.

A big cause of waste is when a project manager inaccurately declares the requirements as actionable and the entire team meets.  This is the most expensive meeting you can have.  As soon as the developers ask questions, a discussion ensues among business stakeholders on what the requirements should be.  At this point, the developers sit and listen until the stakeholders finish defining what the system should do.

The above is a strong indicator that the requirements aren’t baked.  There are holes in the analysis, and it comes out as soon as a developer asks a question about the expected behavior.

TIP:  Project Managers:  ensure the requirements are fully baked BEFORE you take up the ENTIRE team’s time.  You may need help from the architect or tester, but ensure the center is not raw when the whole team is pulled in.

UPDATE:  Scott Bellware was a bit confused about the context of this post (see comment below), so I thought others might be too.  This post is about behavioral requirements for a single user story.  Very small scope.  Before the team can estimate, this story must be “baked”.  Otherwise, the coding is guesswork.

Martin Fowler evolves his Model-View-Presenter pattern – level 300

I subscribe to Martin's MVP pattern.  If you are new to it, please have a read.  It's a variation of Model-view-controller that puts more behavior in the controller and less in the view.  I have tended to vary the amount of logic that belongs in the view depending on the scenario.  Martin has split the pattern into two:  one part leans toward balancing the logic and putting UI-specific behavior in the view and application behavior in the controller.  Read it here.  The other seeks to make the view as thin as possible and renders the view very passive.  In this case, the controller has every bit of behavior, including setting every single field.  Read it here.

I'm glad he made the split because it really is two different ways to do it.  I tend to throw a domain object at the view and say "here, show this", whereas PassiveView would say to set each field individually and not to let the view know about the domain object.  In Supervising Controller (which I favor), the view can know about the domain object and how to bind it to its GUI elements.
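
Here is a minimal Supervising Controller sketch (the names are mine, not from Martin's articles): the presenter hands the whole domain object to the view and keeps the application behavior where a test can reach it.

using System;

public class Customer
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}

// The view is thin, but it does know how to bind a domain object to its own widgets.
public interface ICustomerView
{
    void BindTo(Customer customer);
    event EventHandler SaveClicked;
}

// The supervising controller owns the application behavior and can be unit tested
// against a fake ICustomerView.
public class CustomerPresenter
{
    private readonly ICustomerView _view;
    private readonly Customer _customer;

    public CustomerPresenter(ICustomerView view, Customer customer)
    {
        _view = view;
        _customer = customer;
        _view.SaveClicked += new EventHandler(OnSaveClicked);
    }

    public void Initialize()
    {
        _view.BindTo(_customer);   // "here, show this"
    }

    private void OnSaveClicked(object sender, EventArgs e)
    {
        // validate, persist, etc. -- none of this needs a real UI to test
    }
}

A Passive View version would drop BindTo and instead expose one property per field, with the presenter setting each of them individually.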

As with all patterns, they have advantages and drawbacks.  The worst thing we can do is be dogmatic about one and declare its applicability to all scenarios.  I've used Supervising Controller in ASP.NET and WinForms, and I like the way it separates behavior from visual goo.  I also like how it pulls behavior into a class that's easily tested.

If it takes forever to start your app with the debugger, check for thrown exceptions – level 300

Overview of Exceptions
There are quite a few things that are just laws of object-oriented development, and one of those is that exceptions should be avoided.  If you can prevent an exception from being thrown, do it.  In the world of managed runtimes, particularly Java’s JRE and .Net’s CLR, objects are “thrown” to communicate errors.  In a try/catch block, the language limits the objects that can be thrown to ones that derive from System.Exception in .Net, or java.lang.Throwable in Java.  When an object is “thrown”, the runtime stops and assembles the call stack and some other information and gives code at all levels of the call stack an opportunity to catch the thrown object (exception) and do something with it.  If the exception is never caught, the runtime will catch it and terminate the program.

Clearly, exceptions being thrown in code is a bad thing, and it signals an unstable state in the program.  It may be a huge bug, or the network may have gone down.  Either way, an exception is thrown.  Proper error handling will catch the exception at a point high enough in the call stack where the program can actually make a decision to do something about it.

Swallowing exceptions (wrapping code in a try/catch where the catch block is empty) leads to less feedback.  An exception will happen, but it will be swallowed, and you won’t know about it.  As soon as you start swallowing exceptions, they will start happening without being noticed.  Debuggers pay special attention to exceptions, so swallowed exceptions (thrown, immediately caught, and ignored) will slow down the debugger with each occurrence.
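
A contrived sketch of that anti-pattern (the file name is made up for illustration):

using System;
using System.IO;

public class SwallowedExceptionExample
{
    public string ReadSettings()
    {
        try
        {
            // Throws FileNotFoundException on every call while the file is missing.
            return File.ReadAllText("settings.xml");
        }
        catch (Exception)
        {
            // Swallowed: the program limps along, the debugger pays for every throw,
            // and nobody ever finds out the file wasn't there.
            return string.Empty;
        }
    }
}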

Solution
Go to the Debug menu in Visual Studio and select Exceptions (CTRL+ALT+E is the shortcut).  Check the checkbox for “Common Language Runtime Exceptions”.  Now when you start your debugger, it will break when a managed exception is thrown.  It will break on the line from which the exception originates.  You can use this technique to find all the exceptions that are happening in your software right under your nose.  If you refactor to keep those exceptions from happening, you’ll see a marked improvement in debugger load time (provided there were a large number of exceptions happening previously).

Alternatives
The alternative to frequent debugging is automated tests.  When each small piece of code is verified independently, you don’t have much occasion for running the full application in debug mode.  If you have unit tests as part of your automated test suite, a failure will point to the exact place where you have the problem.

Rule to live by
Fail fast.  Fail fast.  If your software is going to fail, make it fail quickly so that you can get the feedback earlier and fix it earlier.  Don’t hide the problem by ignoring it or burying it in a log file that’s already verbose.  If you are unfamiliar with this concept, read this article by James Shore. 

Don’t use an exception as a return value.  In an ideal situation, your application should run with 0 exceptions.  You may have a library that swallows an exception, and that would be unfortunate, but keep your application code clean.  If you can anticipate an exception happening, perform some checks to avoid it being thrown.
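
For instance, rather than letting a parse failure throw and treating the exception as the "invalid input" signal, use the check-first members the framework already provides:

using System;

public class QuantityParser
{
    // Exception as a return value: FormatException is thrown for every bad input.
    public int ParseTheExpensiveWay(string input)
    {
        try
        {
            return int.Parse(input);
        }
        catch (FormatException)
        {
            return 0;
        }
    }

    // Check first: no exception is ever thrown for bad input.
    public int ParseTheCheapWay(string input)
    {
        int value;
        if (int.TryParse(input, out value))
        {
            return value;
        }
        return 0;
    }
}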

How to keep an eye on exceptions
Use Perfmon.  Watch the counter “.NET CLR Exceptions\# of Exceps Thrown”.  The number should be zero in an ideal situation.  If you have an app that can’t avoid some exceptions, you can watch “# of Exceps Thrown / sec”.  This number should be close to zero.  If your application is constantly throwing exceptions under ideal circumstances, you have some work to do.
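
You can also read the same counter from code with System.Diagnostics if you want to log or assert on it; a rough sketch (it assumes the current process name maps to a single counter instance):

using System;
using System.Diagnostics;

public class ExceptionCounterMonitor
{
    public static void Main()
    {
        string instanceName = Process.GetCurrentProcess().ProcessName;

        // The same counter Perfmon shows under the ".NET CLR Exceptions" category.
        using (PerformanceCounter counter =
            new PerformanceCounter(".NET CLR Exceptions", "# of Exceps Thrown", instanceName))
        {
            Console.WriteLine("Exceptions thrown so far: {0}", counter.NextValue());
        }
    }
}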

[tags: exceptions, programming, c#, java, failfast, objectoriented, development, .net, clr]

Blogging from Tech Ed 2006 – Brian Button on how to write frameworks – level 300

Brian Button from the patterns and practices team gave a session bright and early Friday morning that was great.  He covered how to write frameworks.  What’s interesting is that he focused on writing frameworks as a non-technical problem, since he’s seen too many frameworks that solve the wrong problem very well.

  • Rules for Framework development
    • Clients come before frameworks.
    • Be sure that you are solving the right problem.
    • Quality, quality, quality.
    • Be an enabler:  You can’t solve every problem, so solve many problems but allow for extensibility points.
    • Creating frameworks is a people problem.

Brian stressed that it’s impossible to write a good framework without a real application using it.  It will end up missing the mark.  He suggests, instead, writing several applications and harvesting a framework from the common parts of the applications.  This resonates with me because I believe it’s very difficult to write code that’s intended for reuse.  I write code for use, and if it needs to be used elsewhere, I’ll harvest it into a shared library.

Brian was very bold at this Microsoft conference, and I applaud him for this.  He stressed automated testing, and he said that here and now there is NO EXCUSE for not writing automated tests.  Bravo!  Brian contends that functional testing is a waste of a tester’s time.  The thought is that testers are too valuable for functional testing that could be covered with automated testing.  Testers should be testing the harder things.

Brian shares my frustration about sealed classes in the .Net Framework.  He has encountered parts of the framework that are sealed, and when he needs to extend them, he can’t.  Sealed classes are hard to write testable code against.  He made a good point that I hadn’t thought of before:  If you seal a class, you are saying “I can predict the future, and this is all that this class will ever need to do.”

Finally, Brian advocates these in creating a framework:

  • Do open planning.
  • Practice open prioritization.
  • Show visible progress.
  • Don’t allow surprises.

Brian recommends:

  • Resist the urge to create something new.  Harvest frameworks from existing applications.
  • Encourage quality throughout the project.  Lead by example and testers rule!

Microsoft republishes Guidelines for Test-Driven Development – level 200

If you remember the article on Test-Driven Development that appeared on MSDN in October, you remember the hubbub it caused because of its inaccuracies, and how it was soon pulled from the web.

Microsoft has published new Guidelines for Test-Driven Development.  Read the article to see why I think they got it right. 🙂

Many thanks to Paul Schafer and Rob Caron.

StructureMap v1.1 (for .Net 2.0) released on sourceforge – level 000

Not long ago, Jeremy Miller released version 1.0 of his excellent dependency injection tool, StructureMap.  I’ve ported it to .Net 2.0, and I’ve added generics to the bread-and-butter class, ObjectFactory.  Download the .Net 2.0 release on sourceforge.net.  StructureMap supports creating very loosely coupled applications through service location and dependency injection.


Coding to interfaces is fine and dandy, but at some point, you need an instance of the concrete class that implements the interface.  Typing “new SomeClass” tightly couples your code, so what do you do?

Slap an attribute on the interface and the concrete class, and then all you have to type is:

IMyInterface instance = ObjectFactory.GetInstance<IMyInterface>();

That’s all it takes, and you have an instance of the interface you are binding against.  Using this pattern, you can harvest reuse in your applications and test any component in isolation.
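
Putting the pieces together, here is a rough sketch of the attribute-based wiring.  The attribute names are from my recollection of StructureMap’s attribute registration, so treat them as an assumption and check the StructureMap documentation for the exact spelling:

using StructureMap;

// Mark the interface as a plugin family and name its default instance key.
[PluginFamily("Default")]
public interface IMyInterface
{
    void DoWork();
}

// Mark the concrete class as the pluggable type for that key.
[Pluggable("Default")]
public class MyConcreteClass : IMyInterface
{
    public void DoWork()
    {
        // real work happens here
    }
}

public class Example
{
    public static void Run()
    {
        // The caller never types "new MyConcreteClass()".
        IMyInterface instance = ObjectFactory.GetInstance<IMyInterface>();
        instance.DoWork();
    }
}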

Integration testing demonstrated – level 200

In this post, I’ll talk about and demonstrate integration testing.  If you are just starting out with integration testing, you want to test small before you test big.  Full-system tests are good, but if they fail, they don’t give much of a hint as to where the failure is.  Smaller integration tests will help narrow the area where the failure lies.

A few rules to live by

  • An integration test must be isolated in setup and teardown.  If it requires some data to be in a database, it must put it there.  Environmental variables should not cause the test to fail randomly.
  • It must also run fast.  If it is slow, build time will suffer, and you will run fewer builds – leading to other problems.
  • Integration tests should be order-independent.  It should not matter the order you run them.  They should all pass.
  • Feel free to make up rules that objectively result in fewer bugs.

Testing a custom SiteMapProvider

In my example, I have a custom SiteMapProvider (PageInfoSiteMapProvider).  This site map provider gets its data from inside my EZWeb application, specifically, the IPageConfigProvider interface.  I use StructureMap for service location, so one of the things that an integration test will validate is that my interface implementations can be resolved correctly.  I’m going to focus on an integration test for one method on the site map provider, FindSiteMapNode(url).
 

Here is the constructor and method on my custom site map provider:

        public PageInfoSiteMapProvider()
        {
            _provider = (IPageConfigProvider)ObjectFactory.GetInstance(typeof(IPageConfigProvider));
            ICurrentContext context = (ICurrentContext)ObjectFactory.GetInstance(typeof(ICurrentContext));
            IConfigurationSource config = (IConfigurationSource)ObjectFactory.GetInstance(typeof(IConfigurationSource));

            _applicationPath = context.ApplicationPath;
            _defaultPage = config.DefaultPage;
        }

        public override SiteMapNode FindSiteMapNode(string rawUrl)
        {
            VirtualPath path = new VirtualPath(rawUrl, _applicationPath, _defaultPage);
            PageInfo config = _provider.GetPageConfig(path);
            SiteMapNode node = MakeNodeFromPageInfo(config);
            return node;
        }

 

This is simple enough.  The site map provider is just a wrapper around the interface call.  Notice in the constructor that I’m using a call to StructureMap’s ObjectFactory class to resolve the interfaces that I need.  I need the current HttpContext and some stuff in the web.config file.  Obviously, in my integration test I don’t have the ASP.NET runtime, and I don’t have the web.config file, so I’ll need to simulate things (mock, stub, fake, whatever you want to call it).  In my integration test, I’m going to have to use fake implementations of these interfaces.

Here is the integration test fixture that tests this provider all the way down to the point where it reads the data from the xml file on disk.  I’ve chosen this scope because it’s not too large, and it’s not too small.

    [TestFixture]
    public class PageInfoSiteMapProviderTester
    {
        [SetUp]
        public void Setup()
        {
            IConfigurationSource source = new TestingConfigurationSource();
            ICurrentContext context = new TestingCurrentContext(source);

            ObjectFactory.InjectStub(typeof(IConfigurationSource), source);
            ObjectFactory.InjectStub(typeof(ICurrentContext), context);
        }

        [Test]
        public void ShouldGetRootSiteMapNode()
        {
            string xmlFileContext = @"<?xml version=""1.0"" encoding=""utf-16""?>
                <PageInfo xmlns:xsi=""http://www.w3.org/2001/XMLSchema-instance"" xmlns:xsd=""http://www.w3.org/2001/XMLSchema"">
                  <VirtualPath>/</VirtualPath>
                  <Title>My home page</Title>
                  <Template />
                  <Theme />
                  <Plugin />
                  <HasParent>false</HasParent>
                  <Children />
                  <Links />
                  <Editors />
                </PageInfo>";

            string fileName = FilePageConfigProvider._fileName;
            string fileSubDir = FilePageConfigProvider._filesDir;

            string fileDirectory = Environment.CurrentDirectory + @"\" + fileSubDir;
            string path = fileDirectory + @"\" + fileName;

            FileHelper helper = new FileHelper();
            helper.SaveFileContents(path, xmlFileContext);

            PageInfoSiteMapProvider provider = new PageInfoSiteMapProvider();
            SiteMapNode node = provider.FindSiteMapNode("/mywebapplication/default.aspx");

            Assert.AreEqual("My home page", node.Title);
            Assert.AreEqual("/myWebApplication/Default.aspx", node.Url);
            Assert.AreEqual("/", node.Key);
        }
    }

 

This is my entire integration test to make sure that the SiteMapNode object is put together correctly.  Notice that I have data setup with the xml file.  Then I call the provider and assert on the results.

 

Faking the “cruise missile” with StructureMap

If my code was in charge of launching a cruise missile correctly, then I would have to fake out the cruise missile when testing my code.  In fact, my code might not ever run in its true environment until the world plunged into war again.  On a small scale, I have to fake the two interfaces that are not reproducible in my test context.  Direct your attention to the Setup method.  You’ll notice that I’m creating two fake instances and instructing StructureMap to “InjectStub”.  Because of this, when my code asks for an instance of one of those interfaces, the class I created in Setup will be returned.

 

Here are the stub classes:

    public class TestingCurrentContext : CurrentContext
    {
        public TestingCurrentContext(IConfigurationSource source) : base(source) { }

        public override string MapPath(string path)
        {
            string executingDir = Environment.CurrentDirectory;
            string combinedPath = executingDir + @"\" + path;
            return combinedPath;
        }

        public override string ApplicationPath
        {
            get { return "/myWebApplication"; }
        }
    }

    public class TestingConfigurationSource : ConfigurationSource
    {
        public override string ClientFilesPath
        {
            get { return @""; }
        }

        public override string DefaultPage
        {
            get { return "Default.aspx"; }
        }
    }

 

These classes are simple enough.  They merely return expected environmental settings so that I can test my code.

 

Integration testing is hard at first

If your style of coding isn’t loosely coupled, then it will be difficult to do automated integration testing.  Seams must exist in the code where fake implementations can be inserted.  In my case, I had a web.config file and the HttpContext of the web application that get in the way.  I stub these out for my test.  My tool list for integration testing is:

  • NUnit for organizing and running the tests.
  • StructureMap for resolving interfaces and for stubbing interfaces at hard boundaries.

 

My way or the highway

Just kidding.  I’m not saying that this is the “golden” way of doing integration testing.  There are many ways, and that’s why software engineering is engineering and not assembly-line work.  The above approach has worked well for me, and I put interfaces where they make sense (everywhere) in my code to maintain flexibility.  I’m open for critique or questions.