Software Engineering Reality: Momentum

In a recent discussion with James Shaw, one of our engineering department Vice Presidents, we explored the concept of momentum as it pertains to computer programming.  The topic arose during one of the many times we struggled to provide a good-enough estimate.  This estimate was for making a change to a software system we were inheriting: a code base we had not explored, and one we had never modified, built, or tested before.  As you can imagine, an estimate in this environment is complete trash.  It is similar, I would imagine, to an estimate of what it would take to reopen the police station in Haiti after the devastation several years ago, given without first visiting the region.

When thinking about how we could begin setting expectations for how long a software change might take, we recalled the anecdote of one of the folks at the client saying “it took Bob about a week the time we had to do something like this before”.  While useful information, providing an estimate of a week would be a sure disaster.  As we pondered why this was true, we discovered an appropriate way to describe the force at play.


Newton’s first law states that a body remains at rest or in uniform motion in a straight line unless acted upon by a force.  It’s amazing how this applies to software engineering as well as many other human endeavors.  In fact, I hear momentum referred to during sporting events quite frequently – as when an interception kills the momentum of a scoring streak by the opposing team.

James and I analyzed momentum in software development for the purpose of providing estimates.  We remembered the many times in our careers when nontrivial enhancements to software could be completed in very short periods of time, and what the factors were.  We also remembered the times when seemingly trivial enhancements took inordinate amounts of time.  A common element of each was the presence or absence of momentum.  That is, when a software engineer’s brain has been engulfed in a code base and problem set for an extended period of time, accomplishing many tasks, there is good momentum.  The marginal effort of each successive task decreases until it approaches some minimally viable floor of effort.  When going 100 MPH on a software problem, each mile marker passes by relatively quickly.  In contrast, when starting from a standstill, the first task absorbs the cost of acceleration.

In normal circumstances, such as everyday context switching, momentum can be regained quickly.  In cases where we are taking over a software system written by an unsophisticated team, gaining momentum can be much more difficult.  For instance, environment friction can be a huge factor in the cost of gaining and maintaining momentum.  How long does it take to prepare the environment for programming?  How long does it take to integrate changes and prepare for testing?  What is involved in understanding where to make changes?

We did not come up with an actual answer for how to estimate a change to a previously unknown code base, but we were able to articulate the momentum factor at play.  Have you, dear reader, noticed this factor at work in your environment?  What builds/kills your momentum?

Coding forward: the opposite of coding backward

I am a big advocate of coding forward.  Coding forward is the style of coding used in test-driven development.  In TDD, we write a test as we would like the code to work.  We even allow compile errors because we are modeling the API of the code the way we would like it to read – even before it exists.  Then we begin to fill in the holes, creating the methods that don’t exist yet and making them work right.

I like to carry that into all coding, not just test-first coding.  For instance, if I am in an MVC controller and I need to call a new method that I am imagining in my head, I like to just write the call to that method that doesn’t yet exist.  For instance:
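The original example here was a C# MVC controller; as a stand-in, here is a minimal sketch of the same idea in Python, with hypothetical names.  The call to the mapper is written first, before the mapper exists, and only afterward is the missing piece filled in:

```python
# Hypothetical domain model and controller, sketched to show "coding forward":
# the call to map_to_view_model was written before the function existed.
class Conference:
    def __init__(self, name: str, attendee_count: int):
        self.name = name
        self.attendee_count = attendee_count

class ConferenceController:
    def show(self, conference: Conference) -> dict:
        # Coding forward: this line models the API we want to read,
        # written before map_to_view_model was created.
        return map_to_view_model(conference)

# Filled in afterward, once the missing piece was pointed out
# (by the compiler in C#, by a failing test here).
def map_to_view_model(conference: Conference) -> dict:
    return {"Name": conference.name, "AttendeeCount": conference.attendee_count}
```

Until `map_to_view_model` is written, running the scenario fails loudly, which is exactly the reminder the style relies on.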


Here, I know I need to map my domain model to a strongly-typed view model for use in an MVC view.  The method to do it doesn’t exist yet.

A common style is to stop coding, go create the mapping method, and then come back.  I find this to be cognitively disjointed and prone to losing my train of thought.  When I stop coding and jump down into the stack of methods & classes that need to exist for a top-level solution to work, I have to make sure I keep track of my own “call stack” – a development stack of items to come back to.  If, instead, I continue coding forward to the end of the scenario, the compiler will remind me of the missing pieces because the code won’t compile – or the page/function won’t run.  Automated tests do this as well, because the test won’t pass until all the necessary code is in place.

I have noticed myself doing this, and I realized that it is a distinctly different style from that of many programmers.  JetBrains ReSharper helps tremendously with this style because of its navigation and code generation features.  I’m not sure it would be as convenient without R#.  Creating a new class and then flicking it out to a new code file is just a couple of shortcut keys with R#, so it’s pretty frictionless to code forward.

Happy coding (forward)

Maiden Name Modeling: determine the right structure

Working with one of our software engineering teams today, I was reminded of some principles of modeling that I have come to take for granted.  The topic I’m writing about in this post took me a while to learn, and my hope is that at least one other person will find it useful.

When modeling a domain model, data model, or any other data structure representing information from the real world, there are an infinite number of possibilities, and it is up to the software designer to choose the structure for a data model.  I’ll show two ways to model the same data in a real scenario.

Maiden Name Modeling

My nickname for this technique is Maiden Name Modeling, after the example that best illustrates it.  Here is the requirement:

A congressional legislator needs a way to track contacts.  These contacts are typically constituents, but sometimes they are donors, judges, etc.  An application built on this data model will allow office clerks to maintain contacts in the legislator’s jurisdiction.  It will also allow the lookup and updating of information and notes on the contact.  Many times, a person will be a contact for many legislators, but the information differs a bit from legislator to legislator.  For instance, the contact may be a business, but the business location or phone number on file may be different for each legislator.

Sometimes a client won’t know how to describe the data characteristics.  And in an age when there are many, many database tables containing information about “people”, we modelers need some tools to decide which structure to use in which scenario.

Question to ask:  Here is a scenario: Amy Smith is a contact for legislator Bob Parker.  She gets married and becomes Amy Pumpels.  She then reaches out to another legislator Sammy Berkins and gets entered into the database as one of his contacts.  Should her name and other information automatically be overwritten in the record for Bob Parker?

If the answer is “no”, then the maiden name model is the most appropriate for the scenario.  Even though the same person is represented as a contact for the two legislators, it is appropriate for two independent records to be used.  This is because there is no business relationship between the two concepts.  They are completely independent.  In other words, if the person “Amy Smith” disappeared from Bob Parker’s contact list, Bob would be upset.  He would be searching for this person, and the record for Amy Pumpels would quietly hide the fact that “Smith” had been deleted from the database.

Here is a diagram of this model.
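The shape of the maiden name model can also be sketched in code.  Here is a minimal sketch in Python with hypothetical field names: every legislator owns its contact rows outright, so the same person can appear twice with different data, by design:

```python
# Sketch of the maiden name model (hypothetical field names): each
# Legislator owns independent Contact records, so duplicates across
# legislators are intentional.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contact:
    name: str
    contact_type: str = "Constituent"
    notes: str = ""

@dataclass
class Legislator:
    name: str
    contacts: List[Contact] = field(default_factory=list)

bob = Legislator("Bob Parker")
bob.contacts.append(Contact("Amy Smith"))

# Amy marries and later reaches out to a second legislator; a brand-new
# row is created -- Bob's record intentionally still says "Amy Smith".
sammy = Legislator("Sammy Berkins")
sammy.contacts.append(Contact("Amy Pumpels"))
```

Because each legislator owns its records outright, an update or deletion for one legislator can never surprise another.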

Master Name Model

Another way to represent the same type of data is with a master name model.  You might have heard of master name indexes that seek to de-duplicate data for people of all sorts so that there is one place in the company to keep track of names, addresses, phone numbers, etc.  This is useful in many scenarios.  Here is a way to understand whether this structure is more appropriate to the situation.

Question to ask: Here is a scenario: Amy Smith is a contact for legislator Bob Parker.  She gets married and becomes Amy Pumpels.  She then reaches out to another legislator Sammy Berkins and gets entered into the database as one of his contacts.  Should her name and other information automatically be overwritten in the record for Bob Parker?

If the answer is that Amy Smith should no longer exist in any legislator’s contact list, then this is a tip-off.  A UI feature that might accompany this model is a screen that selects an existing contact and adds a Type and Notes.  In this scenario, the users maintain a shared group of Contacts, and a Contact is attached to a Legislator along with a Type and Notes specific to the relationship.  Here is what it looks like.
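The master name model can be sketched the same way (again a Python sketch with hypothetical names): one shared Contact record, linked to each legislator through a LegislatorContact row that carries the relationship-specific Type and Notes:

```python
# Sketch of the master name model (hypothetical field names): a single
# shared Contact master record, with per-relationship data held on the
# LegislatorContact link.
from dataclasses import dataclass

@dataclass
class Contact:
    name: str

@dataclass
class LegislatorContact:
    contact: Contact          # the shared master record
    contact_type: str         # Type specific to this relationship
    notes: str = ""

amy = Contact("Amy Smith")
for_bob = LegislatorContact(amy, "Constituent")
for_sammy = LegislatorContact(amy, "Donor")

# One update to the master record is visible to every legislator.
amy.name = "Amy Pumpels"
```

With this structure, “Amy Smith” ceases to exist everywhere the moment the master record changes, which is exactly the answer to the question above.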


Danger of many-to-many relationships

Many-to-many relationships have always been hard to manage because of the ownership issue: what object owns the relationship?  For the database, there is no concept of ownership.  In the database, we just store the current state and structure of the data – there are no hints around how it is used.  Any application using and modifying the data must establish usage constraints in order to present an understandable records-management paradigm.

We do this by eliminating many-to-many scenarios in the application: in the object model.  In the above diagram, you see that Legislator has a one-to-many relationship with LegislatorContact.  LegislatorContact, in turn, has a many-to-one relationship with Contact.  This is important: Contact has no relationship with Legislator or LegislatorContact, and LegislatorContact has no relationship with Legislator.  In the object model, we do not represent these possible relationships, in order to keep the application code simple and consistent.  Through this modeling, we ensure that application code uses these objects in only one manner.

In domain-driven design terms, Legislator and Contact are aggregate roots, and LegislatorContact is a type belonging to the Legislator aggregate that can only be accessed through a Legislator.  With domain-driven design, we constrain the model with rules that make things simpler by taking away possible usage scenarios.  For instance, it’s ok for a subordinate member of an aggregate to have a dependency on another aggregate root, but not on classes owned by that aggregate root.  And it’s ok for an aggregate root to directly depend on another aggregate root, but it is not ok for an aggregate root like Contact to directly depend on a subordinate type of the Legislator aggregate.
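These aggregate rules can be sketched in code as well.  A minimal sketch in Python (hypothetical names; the real model would be a C# domain model): application code can only reach a LegislatorContact through its owning Legislator, and the link refers to the Contact aggregate root by identity only:

```python
# Sketch of the aggregate constraint (hypothetical names).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contact:              # aggregate root; knows nothing of legislators
    contact_id: int
    name: str

@dataclass
class _LegislatorContact:   # subordinate member of the Legislator aggregate
    contact_id: int         # points at the Contact root by identity only
    contact_type: str
    notes: str = ""

@dataclass
class Legislator:           # aggregate root and sole gateway to its links
    name: str
    _links: List[_LegislatorContact] = field(default_factory=list)

    def add_contact(self, contact: Contact, contact_type: str, notes: str = ""):
        # The link is created inside the aggregate, never handed out raw.
        self._links.append(_LegislatorContact(contact.contact_id, contact_type, notes))

    def contact_ids(self) -> List[int]:
        return [link.contact_id for link in self._links]

amy = Contact(1, "Amy Smith")
bob = Legislator("Bob Parker")
bob.add_contact(amy, "Constituent")
```

Because Contact holds no reference back to Legislator or the link type, the many-to-many relationship in the database never appears in the object model, and usage stays one-directional.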

With these modeling constraints, we eliminate in the application the many-to-many concept that is possible in the data, so that application code can be drastically simpler and one-directional.


There is no “one way” to model data or objects.  I hope that this post has helped with one common decision point that has occurred over and over in my career.  I would love to hear your comments.  Have you encountered a decision point similar to this?

Developers driving on ice

Today, all the schools are out, and it is a good old “ice day” in Austin, TX.  For northerners, know that Austin doesn’t really have many plows or equipment to speak of to combat this, because it happens only about every two years.

If you are from a northern state, or have driven in Colorado to go skiing, you might have experience driving in icy conditions.  No car _really_ does well on ice; the point is to _not_ drive on the ice in the first place.  And if you have never done it before, you have no idea what to expect.  Without the past experience, you don’t know how to prepare for the encounter, what to avoid, how to handle it – or when to avoid it altogether.

Coding on ice

The same is true every day when engineering software.  Because this is such a new profession, we have a short supply of experienced software engineers who have been through the tough challenges before.  Because of the growth in the industry, companies are forced to hire developers who have executed a few projects but lack experience with the wide range of situations that can occur in a software engineering project, or with the many years of production operations in the life of a software system.

Without the prior experience of being in a certain situation before, developers don’t know what to expect, and have to figure out on-the-fly how to handle a new situation.

The point

I don’t pretend to have experienced everything that is possible in the software world.  Few people could, and I continually turn to Fred Brooks (teaching link) for his timeless wisdom in this area.

One particularly hairy situation can be integration with other systems that were built before widespread networks were common.  These systems are very difficult to deal with, and if one has started one’s career with websites and easy-to-use web services, these can catch one by surprise.


This is just a short post to reflect on the similarities drawn from something as simple as driving on ice and creating software in an unfamiliar situation.  Without past experience to draw from, we can get turned around, or find ourselves off the road.

Drawing from others’ experience is a good move: reading the works of others in an area, educating ourselves, etc.  And the best option is to find someone who has been through the challenge before so that you don’t have to go through the jungle yourself.  There is no shame in asking for help and admitting that you’ve never dealt with a problem quite like this before.  It’s liberating to be able to say “I don’t know” or “I haven’t done anything like this before”.  There is no software engineer in the world who has seen everything.  And the more I learn, the more I discover just how much I have yet to learn.

My current preferred continuous integration build script–psake

I first learned continuous integration and build script principles from Steve Donie back in the last decade, and I’m eternally grateful.  To this day, the basic outline of the build scripts I deploy has the same general flow that he taught me and implemented using NAnt and

Today, we look back at the practice of forcing a procedural programming language into XML and chuckle at how naïve we were as an industry.  Now we use PowerShell for modern scripting on the Windows platform.  It was a brilliant move for James Kovacs to essentially port the build script concepts to PowerShell with the psake library.

I’ve been speaking on setting up basic software configuration management (SCM) at conferences and user groups for years, and I try to maintain an “Iteration Zero” Visual Studio solution template that includes scripting and the structure necessary for continuous integration right out of the gate.  This build script is the one from that project, and it’s the template I use for every new software system.  It’s been modified a bit over the years.  It came from the one I used for CodeCampServer back in the day, and it is, of course, used in every project we do at our software engineering practice at Clear Measure.

The full file can be found here.

# required parameters :
# 	$databaseName

Framework "4.0"

properties {
    $projectName = "IterationZero"
    $unitTestAssembly = "UnitTests.dll"
    $integrationTestAssembly = "IntegrationTests.dll"
    $fullSystemTestAssembly = "FullSystemTests.dll"
    $projectConfig = "Release"
    $base_dir = resolve-path .
    $source_dir = "$base_dir\src"
    $nunitPath = "$source_dir\packages\NUnit."
    $build_dir = "$base_dir\build"
    $test_dir = "$build_dir\test"
    $testCopyIgnorePath = "_ReSharper"
    $package_dir = "$build_dir\package"
    $package_file = "$build_dir\latestVersion\" + $projectName + ".zip"
    $databaseName = $projectName
    $databaseServer = "localhost\sqlexpress"
    $databaseScripts = "$source_dir\Core\Database"
    $hibernateConfig = "$source_dir\hibernate.cfg.xml"
    $schemaDatabaseName = $databaseName + "_schema"
    $connection_string = "server=$databaseServer;database=$databaseName;Integrated Security=true;"
    $cassini_app = 'C:\Program Files (x86)\Common Files\Microsoft Shared\DevServer\10.0\WebDev.WebServer40.EXE'
    $port = 1234
    $webapp_dir = "$source_dir\UI"
}

task default -depends Init, CommonAssemblyInfo, Compile, RebuildDatabase, Test, LoadData
task ci -depends Init, CommonAssemblyInfo, Compile, RebuildDatabase, Test, LoadData, Package

task Init {
    delete_file $package_file
    delete_directory $build_dir
    create_directory $test_dir
    create_directory $build_dir
}

task ConnectionString {
	$connection_string = "server=$databaseServer;database=$databaseName;Integrated Security=true;"
	write-host "Using connection string: $connection_string"
	poke-xml $hibernateConfig "//e:property[@name = 'connection.connection_string']" $connection_string @{"e" = "urn:nhibernate-configuration-2.2"}
}

task Compile -depends Init {
    msbuild /t:clean /v:q /nologo /p:Configuration=$projectConfig $source_dir\$projectName.sln
    delete_file $error_dir
    msbuild /t:build /v:q /nologo /p:Configuration=$projectConfig $source_dir\$projectName.sln
}

task Test {
	copy_all_assemblies_for_test $test_dir
	exec {
		& $nunitPath\nunit-console.exe $test_dir\$unitTestAssembly $test_dir\$integrationTestAssembly /nologo /nodots /xml=$build_dir\TestResult.xml
	}
}

task RebuildDatabase -depends ConnectionString {
    exec {
		& $base_dir\aliasql\aliasql.exe Rebuild $databaseServer $databaseName $databaseScripts
	}
}

task LoadData -depends ConnectionString, Compile, RebuildDatabase {
    exec {
		& $nunitPath\nunit-console.exe $test_dir\$integrationTestAssembly /include=DataLoader /nologo /nodots /xml=$build_dir\DataLoadResult.xml
    } "Build failed - data load failure"
}

task CreateCompareSchema -depends SchemaConnectionString {
    exec {
		& $base_dir\aliasql\aliasql.exe Rebuild $databaseServer $schemaDatabaseName $databaseScripts
	}
}

task SchemaConnectionString {
	$connection_string = "server=$databaseServer;database=$schemaDatabaseName;Integrated Security=true;"
	write-host "Using connection string: $connection_string"
	poke-xml $hibernateConfig "//e:property[@name = 'connection.connection_string']" $connection_string @{"e" = "urn:nhibernate-configuration-2.2"}
}

task CommonAssemblyInfo {
    $version = ""
    create-commonAssemblyInfo "$version" $projectName "$source_dir\CommonAssemblyInfo.cs"
}

task Package -depends Compile {
    delete_directory $package_dir
	#web app
    copy_website_files "$webapp_dir" "$package_dir\web"
    copy_files "$databaseScripts" "$package_dir\database"
	zip_directory $package_dir $package_file
}

task FullSystemTests -depends Compile, RebuildDatabase {
    copy_all_assemblies_for_test $test_dir
    & $cassini_app "/port:$port" "/path:$webapp_dir"
    & $nunitPath\nunit-console-x86.exe $test_dir\$fullSystemTestAssembly /framework=net-4.0 /nologo /nodots /xml=$build_dir\FullSystemTestResult.xml
    exec { taskkill /F /IM WebDev.WebServer40.EXE }
}

function global:zip_directory($directory,$file) {
    write-host "Zipping folder: " $directory
    delete_file $file
    cd $directory
    & "$base_dir\7zip\7za.exe" a -mx=9 -r $file
    cd $base_dir
}

function global:copy_website_files($source,$destination){
    $exclude = @('*.user','*.dtd','*.tt','*.cs','*.csproj','*.orig', '*.log')
    copy_files $source $destination $exclude
	delete_directory "$destination\obj"
}

function global:copy_files($source,$destination,$exclude=@()){
    create_directory $destination
    Get-ChildItem $source -Recurse -Exclude $exclude | Copy-Item -Destination {Join-Path $destination $_.FullName.Substring($source.length)}
}

function global:Copy_and_flatten ($source,$filter,$dest) {
  ls $source -filter $filter -r | Where-Object{ !$_.FullName.Contains("$testCopyIgnorePath") -and !$_.FullName.Contains("packages") } | cp -dest $dest -force
}

function global:copy_all_assemblies_for_test($destination){
  create_directory $destination
  Copy_and_flatten $source_dir *.exe $destination
  Copy_and_flatten $source_dir *.dll $destination
  Copy_and_flatten $source_dir *.config $destination
  Copy_and_flatten $source_dir *.xml $destination
  Copy_and_flatten $source_dir *.pdb $destination
  Copy_and_flatten $source_dir *.sql $destination
  Copy_and_flatten $source_dir *.xlsx $destination
}

function global:delete_file($file) {
    if($file) { remove-item $file -force -ErrorAction SilentlyContinue | out-null }
}

function global:delete_directory($directory_name) {
  rd $directory_name -recurse -force -ErrorAction SilentlyContinue | out-null
}

function global:delete_files_in_dir($dir) {
	get-childitem $dir -recurse | foreach ($_) {remove-item $_.fullname}
}

function global:create_directory($directory_name) {
  mkdir $directory_name -ErrorAction SilentlyContinue | out-null
}

function global:create-commonAssemblyInfo($version,$applicationName,$filename) {
"using System;
using System.Reflection;
using System.Runtime.InteropServices;

// <auto-generated>
//     This code was generated by a tool.
//     Runtime Version:2.0.50727.4927
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>

[assembly: ComVisibleAttribute(false)]
[assembly: AssemblyVersionAttribute(""$version"")]
[assembly: AssemblyFileVersionAttribute(""$version"")]
[assembly: AssemblyCopyrightAttribute(""Copyright 2010"")]
[assembly: AssemblyProductAttribute(""$applicationName"")]
[assembly: AssemblyCompanyAttribute(""Headspring"")]
[assembly: AssemblyConfigurationAttribute(""release"")]
[assembly: AssemblyInformationalVersionAttribute(""$version"")]" | out-file $filename -encoding "ASCII"
}

function script:poke-xml($filePath, $xpath, $value, $namespaces = @{}) {
    [xml] $fileXml = Get-Content $filePath
    if($namespaces -ne $null -and $namespaces.Count -gt 0) {
        $ns = New-Object Xml.XmlNamespaceManager $fileXml.NameTable
        $namespaces.GetEnumerator() | %{ $ns.AddNamespace($_.Key,$_.Value) }
        $node = $fileXml.SelectSingleNode($xpath,$ns)
    } else {
        $node = $fileXml.SelectSingleNode($xpath)
    }
    Assert ($node -ne $null) "could not find node @ $xpath"
    if($node.NodeType -eq "Element") {
        $node.InnerText = $value
    } else {
        $node.Value = $value
    }
    $fileXml.Save($filePath)
}


AliaSQL – the new name in automated database change management

Along with this post, please make sure to read Eric Coffman’s very thorough post introducing all of his work on AliaSQL.

Way back in 2006, Kevin Hurwitz and I both worked at a start-up company focused on Sarbanes-Oxley compliance software.  While the business model didn’t quite pan out, we had a killer team, and we created some innovations that have gained widespread adoption even to this day.  Among them are:

While any artifacts from 2006 are long gone, these tools and patterns live on to this day, and many folks around the world have adopted them.  I do have to give credit where credit is due.  In 2007, Kevin and I were both working with Eric Hexter on projects at Callaway Golf Interactive, and Eric materially contributed to a large rewrite of the automated database migrations.  He was also very involved in labeling it “Tarantino”, honoring the famous film director.  And to this day, Tarantino has been widely adopted as a simple and targeted way to perform automated database migrations in continuous integration environments.

Reviewing the problem

In many teams, source control is normal, but databases are left out in the cold.  Perhaps a massive SQL file is exported from time to time as the DDL definition of the database, but deploying database changes across test, staging and production environments is still an issue and is error-prone.

Several common mistakes exist when managing the database change management process.  The first is developers sharing a development database.  The second is developers maintaining local databases that are synced manually.

When sharing a development database, changes to this database become a blocking issue.  Working on branches becomes a problem because when a database change happens, at least one developer ends up working with a version of the code that conflicts with the new database change.  The external environmental change ends up wasting the time of at least one team member.

When each developer maintains isolated databases that are synced manually, the team invariably has to have a meeting in order to figure out what the needed database changes are for a given build that needs to be deployed to production.  Having a repeatable QA process is difficult here.

The problem manifests itself when production deployments happen and some database change is left out or performed differently than intended.  This can result in deployment-time troubleshooting and ad-hoc code changes or database changes in a heroic effort to salvage the deployment.

The solution

The premise of automated database migrations is to have a process that executes in exactly the same fashion in every successive environment, so that by the time the production deployment happens, there is no chance of it not working properly.  In addition, the process should not vary based on which feature branch or hot-fix branch in source control is being worked.  And the process should scale to an unlimited number of pre-production and production environments.

One school of thought with database migrations uses a process that examines the target database and then generates appropriate scripts to run based on the state of that database.  In fact, Microsoft’s DACPAC in the database project works like this.  At a philosophical level, I don’t like this approach because it doesn’t allow the QA process to vet the scripts that will actually execute from environment to environment, and there is no opportunity to mix in data transforms between multi-step table transformations, like the merging or splitting of columns.

In addition, I reject migration philosophies that believe rollbacks are possible.  Perhaps rollbacks could be performed for completely reversible operations, but as soon as a migration includes a DROP COLUMN operation, the rollback scenario is broken because there is no way to reverse the deletion of the data in the column.  Besides, once an install package has failed to install properly, how can one trust it to faithfully do the right thing in a rollback attempt?
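The core of the script-based approach can be made concrete with a small sketch.  This is illustrative only, not AliaSQL’s actual source: record every script that has run in a journal table, and execute only the ones the target database has not yet seen, in name order.  Python’s sqlite3 stands in for SQL Server here, and the table and script names are hypothetical:

```python
# Illustrative sketch of script-journaling migrations (not AliaSQL's source).
import sqlite3

def apply_migrations(connection, scripts):
    """scripts: list of (name, sql) pairs; applied in sorted name order."""
    connection.execute(
        "CREATE TABLE IF NOT EXISTS AppliedScripts (Name TEXT PRIMARY KEY)")
    applied = {row[0] for row in connection.execute("SELECT Name FROM AppliedScripts")}
    for name, sql in sorted(scripts):
        if name in applied:
            continue  # this environment already ran the script; skip it
        connection.execute(sql)
        connection.execute("INSERT INTO AppliedScripts VALUES (?)", (name,))
    connection.commit()

conn = sqlite3.connect(":memory:")
scripts = [
    ("0001_CreateContact.sql", "CREATE TABLE Contact (Id INTEGER, Name TEXT)"),
    ("0002_AddType.sql", "ALTER TABLE Contact ADD COLUMN Type TEXT"),
]
apply_migrations(conn, scripts)
apply_migrations(conn, scripts)  # re-running applies nothing new
```

Because the same ordered scripts run unchanged in every environment, the exact SQL that QA vetted is the SQL that reaches production.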

Introducing AliaSQL, the new simple standard in SQL-based database migrations

Right now, you can download AliaSQL (pronounced “ey-lee-us-Q-L”) from Nuget.  Eric Coffman was a Tarantino user for several years and then started encountering issues because Tarantino hadn’t been maintained in a few years.  So he forked the project.  Tarantino was great, and I, and others, poured many hours into it.  It includes much more than just database migrations, and that’s one of the reasons a new project is warranted – to provide focus.

Tarantino still has a dependency on SQL Server 2008.  SQL Server 2012 isn’t supported, and SQL Azure has some issues.  The SQL SMO dependency was a great idea in its time, but AliaSQL does away with this dependency and achieves broad compatibility as a result.

How to upgrade from Tarantino to AliaSQL

The good news is that AliaSQL is 100% backward compatible with Tarantino database migrations.  This is absolutely intentional.  The process and philosophy of the original Tarantino (which was actually NAnt-script-based) from 2006 are preserved while taking advantage of a significant rewrite that provides more detailed logging, transaction support, and broader database compatibility.

If you have an application and a build script that currently uses Tarantino, I encourage you to make the simple and trivial upgrade.  You can check out my Iteration Zero sample project to see how easy it is to make the upgrade.  The recommended way to get AliaSQL.exe is from a Nuget search, but you can also directly download just the EXE here.

Then, just update your build script (a psake build script is shown here).

task RebuildDatabase -depends ConnectionString {
    exec {
-        & $base_dir\tarantino\DatabaseDeployer.exe Rebuild $databaseServer $databaseName $databaseScripts
+        & $base_dir\aliasql\aliasql.exe Rebuild $databaseServer $databaseName $databaseScripts
    }
}


Notice that for an application already using Tarantino, the only change is the path to aliasql.exe.  All other major behavior is exactly the same as well.

AliaSQL differences from Tarantino

Although backward compatibility is excellent, you will immediately notice some key differences:

  • Immediate compatibility with SQL Server 2012 as well as automatic compatibility with future versions.  This was accomplished by breaking the SQL SMO dependency.
  • Transactions are added: with Tarantino, when a script failed, the database was left in an inconsistent state because transactions were not used.  AliaSQL wraps each SQL file in a transaction, with batches split by GO lines.  This enables transactional rollback if something goes wrong while executing scripts.
  • New TestData command that executes scripts in a TestData folder for the purpose of SQL statements that include test data for non-production environments.
  • New Baseline command that initializes an existing database for future management by AliaSQL
  • AliaSQL Kickstarter Nuget package that creates a database-specific Visual Studio project to contain SQL scripts and provide a quick console project for the execution of AliaSQL from within Visual Studio
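The transaction behavior in that list can be sketched briefly.  This is illustrative only, not AliaSQL’s source: split the file into batches on GO lines, run the batches inside one transaction, and roll everything back if any batch fails.  Python’s sqlite3 stands in for SQL Server:

```python
# Illustrative sketch of transaction-per-script execution (not AliaSQL's source).
import re
import sqlite3

def run_script_transactionally(connection, script_text):
    # Split the script into batches on lines containing only GO.
    batches = [b.strip() for b in re.split(r"(?im)^\s*GO\s*$", script_text) if b.strip()]
    try:
        for batch in batches:
            connection.execute(batch)
        connection.commit()
    except Exception:
        connection.rollback()  # leave the database as it was before the script
        raise

conn = sqlite3.connect(":memory:")
good_script = """
CREATE TABLE Contact (Id INTEGER, Name TEXT);
GO
INSERT INTO Contact VALUES (1, 'Amy Smith');
"""
run_script_transactionally(conn, good_script)
```

If a later batch in a script fails, none of that script’s earlier batches survive, which is exactly the inconsistent-state problem Tarantino had.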

An upgrade illustrated

Before AliaSQL, still running Tarantino.

Tarantino build

The automated build after upgrading to AliaSQL:

AliaSQL build

Notice the augmented logging, which specifies that a transaction was used.


In closing, go out and download AliaSQL now and upgrade your old Tarantino applications.  It’s a quick, drop-in upgrade, and you’ll be immediately ready for SQL Server 2012, SQL Azure, and future versions.

And finally, check out the project documentation and get involved on Github!

Thanks so much Eric Coffman, for grabbing the reins and creating this new tool that continues the heritage of this popular approach to automated database change management/database migrations.

.Net Rocks road trip–Austin stop

This past weekend, the .Net Rocks crew came to Austin on their national road trip.  It was a great time.

On Friday, there were lots of folks at the Xamarin/.Net Rocks all-afternoon event.  Richard Campbell did some of his famous storytelling, and Carl got the audience loud and rowdy for the show recording.

The start of the .Net Rocks show

Later that night, the road trip folks came over to my house, and we tackled this big hunk of pure rib-eye.


I cut this up into 1” steaks, added a little seasoning, and served them medium rare.  They turned out really good with some Rebecca Creek Texas whiskey.

Here’s the crew that tackled this hunk of steak.

[Photo: the crew]

Ok, there were a few sides, but those aren’t as interesting.

Then, on Saturday, we had a Humanitarian Toolbox hackathon hosted at the Clear Measure offices.  We supported a non-profit called Humanity Road and helped them publish information for disaster recovery first responders across the country.  It was a big data processing task, and we wrote some code to get all the data into markdown format to publish on the wiki.

It was a great weekend overall, and everyone had a great time.  Thanks to .Net Rocks, Xamarin, and Humanitarian Toolbox for coming to town.

Solution to GoToMeeting video conferencing quality

In a previous post, I spoke about a problem with GoToMeeting when video feeds are enabled.  The problem manifests as audio AND video degrading to the point of unusability.  Throughout this time, we as a company evaluated many other options for online meetings that include video.  This is very important to us since our clients are busy, and we have staff in Austin, Dallas, and the Toronto area.  Also, using video allows us to create a more personal relationship with our clients.  In my experience, you naturally build trust more quickly with people you can see, and adding video to conference calls is a great tool.

There were two issues we were experiencing with G2M video meeting quality. 

    1. Intel SpeedStep interference. 
      Running three screens and video conferencing is a big tax on the CPU.  It works fine initially, but 15 minutes into the meeting the CPU gets hotter, and if not cooled enough, the CPU steps down in speed, sometimes to 0.79 GHz.  When this happens, there just isn’t enough horsepower for G2M to do what it needs to do.  The first solution was to ensure proper airflow to the machines running the software so that the CPU stays cool.
    2. Lower the CPU priority of the g2mvideoconference.exe process. 
      If the CPU is going to be starved for any reason, we wanted the video feed to be the thing that suffered.  Currently, the audio cuts out to the point of unusability.  The fix for this is to change the CPU priority to “Low”.  The picture below shows how to do this using the Task Manager window (you can get to Task Manager by pressing CTRL+SHIFT+ESC all at the same time).


I want to figure out a way to have this process launch with Low CPU priority automatically.  If anyone knows a registry setting or some other way to automate this, I would be grateful. 
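One candidate I plan to test is the Image File Execution Options PerfOptions key, which Windows documents as a per-executable CPU priority override.  The sketch below is unverified on my machine: the process name is taken from Task Manager, and the priority values (1 = Idle, 5 = Below Normal) come from Microsoft’s documentation, so double-check both before relying on it.

```shell
REM Unverified sketch: set a persistent CPU priority for g2mvideoconference.exe
REM via Image File Execution Options (CpuPriorityClass: 1 = Idle, 5 = Below Normal).
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\g2mvideoconference.exe\PerfOptions" /v CpuPriorityClass /t REG_DWORD /d 5 /f
```

If this works, the priority would apply every time the process launches, with no Task Manager step needed.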

Onion Architecture: Part 4 – After Four Years

In 2008, I coined a new pattern name called Onion Architecture.  You can read the previous parts here: part 1, part 2, part 3.  Over these four years, I’ve spoken about this pattern at user groups and conferences, and it’s even published in one of the chapters of ASP.NET MVC in Action from Manning.

I’ve been overwhelmed by the traction this pattern name has enjoyed.  Folks from all over the country have written and talked about the pattern.  Some of the ones I’ve noticed are here (please comment with more – I welcome it).


Back in 2008, I defined four tenets of Onion Architecture:

  • The application is built around an independent object model
  • Inner layers define interfaces.  Outer layers implement interfaces
  • Direction of coupling is toward the center
  • All application core code can be compiled and run separate from infrastructure
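The four tenets can be sketched in a few lines of code.  The example below is a minimal illustration with invented names (shown in Java for brevity; the C# version is identical in shape): the inner layer defines the interface it needs, the outer layer implements it, and the core compiles with no reference to infrastructure.

```java
// Core (inner layer): an independent object model plus the interfaces it defines.
// Nothing here references infrastructure, so it compiles and runs on its own.
interface UserRepository {              // interface defined by the inner layer
    String nameOf(int userId);
}

class GreetingService {                 // core logic depends only on the interface
    private final UserRepository users;
    GreetingService(UserRepository users) { this.users = users; }
    String greet(int userId) { return "Hello, " + users.nameOf(userId); }
}

// Infrastructure (outer layer): implements the core's interface.
// A real implementation might wrap a database; this one is in-memory.
class InMemoryUserRepository implements UserRepository {
    public String nameOf(int userId) { return userId == 1 ? "Alice" : "Unknown"; }
}

public class OnionDemo {
    public static void main(String[] args) {
        // Coupling points toward the center: composition happens at the outer edge.
        GreetingService service = new GreetingService(new InMemoryUserRepository());
        System.out.println(service.greet(1)); // prints "Hello, Alice"
    }
}
```

Note that no IoC container is involved; the dependency direction is what matters, not the tool that wires it up.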


Although there has been significant adoption of this pattern, I have received countless questions about how to implement it in various environments.  I mostly get asked about how it relates to domain-driven design.  First, onion architecture works well with and without DDD patterns.  It works well with CQRS, forms over data, and DDD.  It is merely an architectural pattern where the core object model is represented in a way that does not accept dependencies on less stable code.

CodeCampServer was an original sample of onion architecture, but it also grew as a sample of how to do ASP.NET MVC in various ways, how to use Portable Areas, and how to use MvcContrib features like input builders.  If you are just looking for onion architecture, it has too much going on.  I have pushed a much simpler solution that represents onion architecture concepts.  I have intentionally not included a UI input form or an IoC container, which most people associate with onion architecture.  Onion architecture works just fine without the likes of StructureMap or Castle Windsor.  Please check out the code here and let me know if this presents a simple approach – that is the goal.

When there is enough interest, I will continue this series with more parts.  CQRS definitely deserves some addressing within this architecture, and so do object models that support task-based UIs.

Get the code here at my BitBucket repository.

How to configure SQL Server 2012 for remote network connections

SQL Server 2012, especially when using a named instance, changes the way old SQL Server veterans like me manage connectivity to the server.  We are so used to relying on port 1433 for SQL Server that setting up a new SQL Server 2012 database server might give us some fits.

I have one software engineering team that just did this recently.  Our dev and test environments are in Windows Azure, and then our production environment is in a specialized data center on the east coast of the U.S.

To make a long story short, we are using Azure’s point-to-site VPN so that none of the servers can be contacted from the outside world.  The only way to get to them is to VPN in.  Then, the servers respond to pings, and it’s as if they are on the LAN.  It works great – except connecting SQL Server Management Studio to a new Windows Server 2012 box running SQL Server 2012 wouldn’t work.  All the old tricks of using SQL Server Configuration Manager and enabling TCP/IP bore no fruit, and neither did opening port 1433 on the firewall.

Just for completeness, we were using a named instance, not a default instance.  A quick look at netstat revealed that SQL Server wasn’t listening on port 1433 at all.

Notice how port 1434 is listening, however.  This is the SQL Server Browser service.  A search through the documentation reveals that Microsoft has made a bit of a change to the way SQL Server operates regarding ports.  These named instances now use dynamic ports – I think for security reasons.  This TechNet article explains that to connect, the client first sends a UDP packet to port 1434 to resolve the dynamic port, then connects as normal.

The appropriate Windows Firewall rule to establish is a program rule, not a port rule.

By adding SQL Server’s exe as a program rule in the firewall, it now works.
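For reference, the same rules can be scripted with netsh instead of clicking through the Windows Firewall UI.  The sketch below assumes a default install path for a SQL Server 2012 named instance I’m calling MYINSTANCE; the instance name and path are placeholders, so adjust them to match your server.

```shell
REM Program rule for the database engine (covers the dynamic TCP port)
netsh advfirewall firewall add rule name="SQL Server (MYINSTANCE)" dir=in action=allow program="C:\Program Files\Microsoft SQL Server\MSSQL11.MYINSTANCE\MSSQL\Binn\sqlservr.exe" enable=yes

REM UDP 1434 so the SQL Server Browser can answer port-resolution requests
netsh advfirewall firewall add rule name="SQL Server Browser" dir=in action=allow protocol=UDP localport=1434
```

The program rule is the key piece: because the engine’s TCP port can change, allowing the executable itself is more durable than allowing any fixed port.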

It’s been several versions since something like this has changed in SQL Server, so hopefully this article will help someone else using SQL Server 2012 for the first time.