Code the Town! Investing in the next generation of programmers in Austin, TX


Austin, TX is a hotbed for technology.  You can find a user group for just about any technology and purpose meeting almost any day of the week.

And now, there is a group that intersects with giving back to the community and helping the next generation of programmers.  Code the Town is a group that does just that.  The official description is:

“This is a group for anyone interested in volunteering to teach Hour of Code https://hourofcode.com/us in the Austin and surrounding area school districts. The goal is to get community volunteers to give the age appropriate Hour of Code to every student at every grade level. We want to have our own community prepare students for a technology-based workforce. We also want to build a community of professionals and students that have a passion for coding and teaching. We want to begin the Hour of Code in the high schools first. High school students would then be prepared to teach the younger students.  Once this group has momentum, it will be able to form motivated teams and use software projects done for local non-profit organizations to not only reinvest in our community but also to help our youth gain experience in software engineering.  Whether you are a student, parent, educator, or software professional, please join our Meet Up! This will be fun! And it will have a profound impact on the next generation.”

Read this full post at http://codebetter.com/jeffreypalermo/2015/08/26/code-the-town/

Visual Studio 2013 Crashes On Startup–my solution

When preparing for my master developer boot camp class I’m teaching next week for a client, my Visual Studio 2013 installation started crashing on startup.  This had happened to me before, and I reinstalled VS to get it working again.  After some googling here and here, I uninstalled GitExtensions.  Now my VS is working again.  There is some commentary on the offending configuration, but at this point in my career I have a very low tolerance for buggy software – especially bugs that render the computer unusable – and for programmers, the IDE _is_ the computer.

My IDE is working fine again, and I’m once again sticking with Atlassian SourceTree for most tasks and TortoiseGit when I just need that right click convenience.

How Windows 10 Ushers in Changes

I am attending the Microsoft //Build/ Tour in Austin.  I have the pleasure of participating in the afternoon Q&A panel.  Thanks to Ryan Joy for that.  Jerry Nixon, Neil Hutson, Daniel Buchner and crew are doing a great job distilling the Build conference content down to a single day.  The big news, of course, is Windows 10.  What is not explicitly shared is the 5 and 10-year vision for the industry and the world.  Clearly, the Windows 10 roadmap was built on the basis of some critical assumptions and bets.  Google is betting that the web and browsers will continue to grow.  They are doing pretty well penetrating education with Chromebook computers.  Microsoft is betting that people want to interact with information and other people in many ways.  It appears that Microsoft is also betting that if they can make computers of all shapes and sizes that are easy to develop for, developers will create another 16 million applications for the new Windows.  16 million is the number shared for current native Win32 applications, dating back to the early 1990s.

Market Changes

There are some key market changes.  Some great, some unfortunate.  First, Apple was on a roll bringing the true tablet and mobile form factors to computing.  Who knew that doctors would be dictating patient visits by pressing a button on their iPhones?  So Apple really shared a vision that the whole world latched onto regarding the smartphone.  Not a new idea, but the execution and the selection of a touch-sensitive screen instead of physical buttons was the game-changing idea.  Google and Microsoft and others have followed, and now we have a phone/computer that resides in the pockets of millions of people.  But sadly, with the death of Steve Jobs, Apple hasn’t moved with the same momentum or visionary direction that Steve Jobs brought to the table.  Apple has a great computing base, but it is unclear whether Apple will share a next-generation vision or will continue just refining existing product categories.  In addition, Google’s success with Android has led the market to expect some basic things from a smartphone.  What was once a premium feature is now just expected, and cellular carriers are offering these smartphones as the free phone with a new data plan.  This makes smartphones a commodity item, available to every cellular customer.  This space is no longer game-changing.  It’s now normal.  Computing on a phone is now normal.

Web applications are also changing – or the demand for them is.  At the turn of the century, and for the last 15 years, there has been a huge push to give native applications broader and more convenient reach by turning them into web applications.  Few applications survived this transition.  Most were completely rewritten in order to make them into web applications.  The SPA movement and now mobile and tablet apps are already chipping away at what would have been a pure web application 5 years ago.  Developers have realized that in some scenarios, we need the power of a native platform rather than the more generic capabilities of the browser, which was designed for reach and for runtime deployment.  Web applications are no longer highly desired.  Instead, many folks would rather use a native app on their mobile phone or tablet.  And to the extent that installation is super easy, on the desktop as well – as we observe with the success of the Adobe AIR runtime for the desktop.  With business applications delivered via a browser, we have found that users can easily introduce environment/runtime variances, causing more support issues than necessary.  In addition, the testing costs are multiplied unless you can really lock down a single browser to support.

New Industry Trend

In the business world that Clear Measure lives in, we have found that when the users of the application are employees of the client, we can establish a required client environment.  This would indicate that a certain type of application does not need the reach strengths of browser apps because we have control of the computer that would be used to access the application.

There is an obvious trend toward using simple computing form factors to do computing.  When we observe family members doing tons of computing on phones and tablets, as well as the popularity of Ultrabook laptops, we know that the common computing world is changing.  And what happens in the normal population quickly invades business, regardless of whether it was part of IT strategy.

How Windows 10 Can Create a Tidal Wave

First, Microsoft isn’t necessarily creating a tidal wave with Windows 10.  They are just trying to meet the demand that they see in the market.  Their bet is that the world is ready for a common Windows across all types of computers, large and small; some with UIs, some without.  The bet is that people will continue to buy small and convenient computers and not just laptops.  The bet is also that developers will see demand from their customers and employers to build more accessible applications.  We have been in a business era where the human had to approach the computer and use the application.  The consumer world has already shifted to one where the application stands ready with the human and often proactively interacts with the human via notifications.  The bet is that the business world is ready for this.  That the business world is ready to arm their employees with applications wherever they go.  That they are ready to break up their large systems so that functions needed on the go can be provided and exposed to smartphones and tablets.  And with wearables clearly taking off in the public, as demonstrated by Fitbit and other bands, Microsoft is betting that folks will appreciate doing business computing via more form factors.  This is the bet with HoloLens and Microsoft Band.  Starting in the consumer product world, these new ideas can be made available in business, and Windows 10 is being positioned to allow and support that.

The Trend That I See Moving

In 2007, I was really early in the shift from ASP.NET WebForms to ASP.NET MVC.  MVC is pretty common now, and it’s no longer the application platform of the future.  It will be around for a long time to come, and I think the web is now the standard mechanism for publishing and accessing information.  It will be around for as long as paper books.  But this is different from business computing.  Already, we see the limitations of the web for business computing.  For a long time, I wondered why I wasn’t excited about Google’s Angular SPA framework.  It clearly is successful, and lots of people are using it, including us, but I have never seen it as the future of business computing.  With Windows 10 around the corner, I am much more excited about native computing than browser-based computing.  Google has made a bet on the web, and so Angular fits into their vision.  But I think that era has played out and has hit a wall with business computing.  I think this is demonstrated by the sheer number of native apps that even Google has developed and published just to achieve the goals of their own products.

With Windows 10, I see Microsoft recognizing and supporting the trend that I see: a trend toward longer-lived systems that support interaction from any number of computer types, as well as new form factors that have not been invented yet.

What I mean is illustrated by what happened when applications were migrated from 1990s desktop applications to the web.  Too many were rewritten completely from scratch in the business world.  I know this because I and my team have been hired too frequently over the years to do this type of work.  What is sad is that these web applications out in the business world were too often written with the same wrong assumptions that made the 1990s applications impossible to migrate forward.  When a developer assumes that the UI technology is not going to change, he couples the business logic of the application to the UI frameworks.  The huge wave coming is business wanting to take more advantage of mobile native computers, and web applications are not always the best fit for that.  So these applications will need to be extended, and many of them are written in such a way that if the MVC UI is taken away, there is no application left!

This new world of computing will usher in a new standard architecture where the application is built in the absence of UI frameworks, which will continue to change.  This new way of building applications will expose functions via APIs that any UI can call.  And with this application architecture, we can make UI apps for all the different types of computers both now and in the future.  And Windows 10 encourages this new way of developing applications. 
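As a sketch of what that architecture can look like (the types and names here are hypothetical, not from any particular product): the business logic lives in a plain class library with no UI references, and a thin API layer exposes it to whatever client calls it.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Web.Http;  // ASP.NET Web API; only the controller needs it

// Core assembly: plain C#, no reference to MVC, WPF, or any other UI framework.
public class Invoice
{
    public decimal Amount { get; set; }
    public bool IsPaid { get; set; }
}

public class InvoiceService
{
    // Pure business logic; callable from a web API, a desktop app, a phone
    // app, or a form factor that has not been invented yet.
    public decimal CalculateBalanceDue(IEnumerable<Invoice> invoices)
    {
        return invoices.Where(i => !i.IsPaid).Sum(i => i.Amount);
    }
}

// Thin, replaceable API layer: the only code that knows about HTTP.
public class InvoicesController : ApiController
{
    private readonly InvoiceService _service = new InvoiceService();

    public decimal GetBalanceDue()
    {
        var invoices = new List<Invoice>(); // data access omitted for brevity
        return _service.CalculateBalanceDue(invoices);
    }
}
```

The key design choice is that deleting the controller leaves a complete, testable application behind; a desktop or mobile UI can be layered on without touching the core.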

C# is the next-generation cross-platform language

I remember computer science class at Texas A&M University where I was learning Java as a write-once-run-anywhere language.

Now, so many years later, C# is emerging as the language to build

  • Windows apps
  • Web apps
  • Linux apps
  • Mac apps
  • iOS apps
  • Android apps
  • Windows Phone apps
  • XBox apps
  • SPA apps
  • JavaScript apps
  • Database apps
  • Cloud apps
  • and more

Today, Microsoft announced so many things.  I’ll just list them here and let you follow the links.

Prediction follow-up–Microsoft now licensing Windows Azure to private data centers

Back in early 2013, I predicted that Microsoft would start licensing Windows Azure to private data centers.  As a software company, Microsoft’s strength is in software and software platforms.  It has been dogfooding Azure to prove it out, but Microsoft never wanted to transform itself into a data center infrastructure company.  Just as Windows Server is a computer operating system, Windows Azure is a data center operating system.  Now, Microsoft has officially announced that for the past 18 months, it has been working on the Microsoft Cloud Platform System (I predict the name will change because nobody licenses something that gets confused with “Child Protective Services”, CPS).  This is essentially a cabinet-sized computer you can buy, prebuilt on all Dell hardware, with the capacity to run a 2,000-VM Windows Azure.

I had no idea what their actual plans were at the time, but I’m glad they are doing this because now data centers all around the world can start hosting for their customers on a consistent Microsoft data center operating system.  Now, Azure and OpenStack are the two cloud platforms to choose from for private data centers.

I am really excited about this because at Clear Measure, we have already built a strong competency in Windows Azure not only on the software engineering side, led by James Shaw, but also with our DevOps division, led by Paul Drew.  And with Andrew Siemer, former chief architect of Dell.com, we are also facilitating AzureAustin, a user group focused on Windows Azure from a software engineering standpoint.

New release of AliaSQL supports Everytime scripts

Eric Coffman and I paired this morning to tie off a great new feature that Eric had implemented: the concept of everytime scripts – scripts that are considered version-controlled (sprocs, functions, views, etc.).  These scripts need to have a canonical version in source control but must be re-run in an environment whenever they change.
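Conceptually, an everytime script is re-executed whenever its contents change, and a simple way to detect that is to compare a hash of the script file against the hash recorded at its last run.  The sketch below illustrates the idea only – it is not AliaSQL’s actual implementation, and the type name is made up.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public class EverytimeScriptTracker
{
    // Hash of each script's contents as of its last successful run.
    private readonly Dictionary<string, string> _appliedHashes =
        new Dictionary<string, string>();

    // Run if we've never seen this script, or if its contents have changed
    // since the last time it was applied.
    public bool ShouldRun(string scriptPath)
    {
        string currentHash = ComputeHash(File.ReadAllText(scriptPath));
        string lastHash;
        return !_appliedHashes.TryGetValue(scriptPath, out lastHash)
            || lastHash != currentHash;
    }

    // Record the hash after the script executes successfully.
    public void MarkApplied(string scriptPath)
    {
        _appliedHashes[scriptPath] = ComputeHash(File.ReadAllText(scriptPath));
    }

    private static string ComputeHash(string contents)
    {
        using (var sha = SHA256.Create())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(contents));
            return Convert.ToBase64String(hash);
        }
    }
}
```

An unchanged sproc or view script is skipped on every deployment, while an edited one is detected and re-run automatically – which is exactly the behavior you want for version-controlled database objects.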

Check it out on Nuget or Github

The IterationZero project has been updated so that you can see the proper use case.

Developer interested in business? Join me next week for JWMI event

As many of you know, I went back to school in 2011-2012 at the Jack Welch Management Institute and earned my MBA. My entire career has been spent writing business software, and now I've layered on business education to help that business software have an even greater impact. JWMI, my master's alma mater, is visiting Austin to give a seminar. Here's the invitation. Please contact me if you'd like to come. The invitation is open to all.

---------------------

We invite you to join us on August 28th for a special community leadership breakfast event hosted by the Jack Welch Management Institute at Strayer University.

As one of the most respected and celebrated CEOs of all time, Jack Welch transformed GE into the world's most admired and successful company with his innovative management techniques. He continues to impact the business world as a bestselling author, speaker, commentator and advisor to many of today’s top organizations.

By attending this complimentary event you will learn Jack Welch’s 8 Leadership Lessons to Live By, have the chance to win a signed copy of Jack’s book Winning, and network with other business professionals from the Austin area.

Don’t miss this opportunity to learn actionable guiding principles that will help you win in life and at work.

8 Leadership Lessons to Live By
Thursday, August 28th
8:00 a.m. to 10:00 a.m.

Breakfast, training session, networking

-----------------------------

More info about JWMI:

 

https://www.youtube.com/watch?v=UUQP7S0UM2U

Software Engineering Reality: Momentum

In a discussion recently with James Shaw, one of our engineering department Vice Presidents, we explored the concept of momentum as it pertains to computer programming.  The topic arose during one of the many times we struggled to provide a good-enough estimate.  This estimate was for making a change to a software system we were inheriting – a code base we had not explored, and one we had never modified, built, or tested before.  As you can imagine, an estimate in this environment is complete garbage.  Similar, I would imagine, to an estimate for what it would take to reopen the police station in Haiti after the devastation several years ago, without first visiting the region.

When thinking about how we could begin setting expectations for how long a software change might take, we recalled the anecdote of one of the folks at the client saying “it took Bob about a week the time we had to do something like this before”.  While useful information, providing an estimate of a week would be a sure disaster.  As we pondered why this was true, we discovered an appropriate way to describe the force at play.

MOMENTUM

Newton’s first law states that a body remains at rest or in uniform motion in a straight line unless acted upon by a force.  It’s amazing how this applies to software engineering as well as many other human endeavors.  In fact, I hear momentum referred to during sporting events quite frequently – as when an interception kills the momentum of a scoring streak by the opposing team.

James and I analyzed momentum in software development for the purpose of providing estimates.  We remembered the many times in our careers when nontrivial enhancements to software were completed in very short periods of time, and what the factors were.  And we also remembered the times when seemingly trivial enhancements took inordinate amounts of time.  A common element of each was the presence or absence of momentum.  That is, when a software engineer’s brain has been engulfed in a code base and problem set for an extended period of time, accomplishing many tasks, there is good momentum.  The marginal effort of each successive task decreases until it approaches some minimal floor of effort.  That is, when going 100 MPH on a software problem, each mile marker passes by relatively quickly.  In contrast, when starting from a stand-still, the first task absorbs the cost of acceleration.

In normal circumstances, such as everyday context switching, momentum can be regained quickly.  In cases where we are taking over a software system written by an unsophisticated team, gaining momentum can be much more difficult.  For instance, environment friction can be a huge factor in the cost of gaining and maintaining momentum.  How long does it take to prepare the environment for programming?  How long does it take to integrate changes and prepare for testing?  What is involved in understanding where to make changes?

We did not come up with an actual answer for how to estimate a change to a previously unknown code base, but we were able to articulate the momentum factor at play.  Have you, dear reader, noticed this factor at work in your environment?  What builds/kills your momentum?

Coding forward: the opposite of coding backward

I am a big advocate of coding forward.  Coding forward is the style of coding used in Test-driven development.  In TDD, we write a test as we would like the code to work.  We even allow compile errors because we are modeling the API of the code the way we would like it to read – even before it exists.  Then, we begin to fill in the holes, creating the methods that didn’t exist yet, and making them work right.

I like to carry that into all coding, not just test-first coding.  For instance, if I am in an MVC controller and I need to call a new method that I am imagining in my head, I like to just write the call to that method that doesn’t yet exist.  For instance:

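The original screenshot is unavailable, so here is a reconstruction of the idea (the type and method names are illustrative).  Note that this snippet intentionally references a view model and a mapper that do not exist yet – that is the point of coding forward.

```csharp
public class CustomerController : Controller
{
    private readonly ICustomerRepository _repository;

    public CustomerController(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Details(int id)
    {
        Customer customer = _repository.Get(id);

        // Coding forward: CustomerDetailsViewModel and MapToViewModel don't
        // exist yet.  I write the call the way I want it to read, and the
        // compiler's errors become my to-do list of pieces to fill in.
        CustomerDetailsViewModel model = CustomerMapper.MapToViewModel(customer);

        return View(model);
    }
}
```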

Here, I know I need to map my domain model to a strongly-typed view model for use in an MVC view.  The method to do it doesn’t exist yet.

A common style is to stop coding, go create the mapping method, and then come back.  I find this to be cognitively disjointed and prone to losing my train of thought.  When I stop coding and jump down into the stack of methods & classes that need to exist for a top-level solution to work, I have to make sure I keep track of my own “call stack” of development items to come back to.  If, instead, I continue coding forward to the end of the scenario, the compiler will remind me of the missing pieces because it won’t compile – or the page/function won’t run.  Automated tests do this also, because the test won’t pass until all necessary code is in place.

I have noticed myself doing this, and I realized that it is a distinctly different style from that of many programmers.  JetBrains ReSharper helps tremendously with this style because of its navigation and code generation features.  I’m not sure it would be as convenient without R#.  Creating a new class and then flicking it out to a new code file is just a couple of shortcut keys with R#, so it’s pretty frictionless to code forward.

Happy coding (forward)

Maiden Name Modeling: determine the right structure

Working with one of our software engineering teams today, I was reminded of some principles of modeling that I have come to take for granted.  The topic of this post took me a while to learn, and my hope is that at least one other person will find it useful.

When modeling a domain model, data model, or any other data structure representing information for the real world, there are an infinite number of possibilities, and it is up to the software designer to choose the structure for a data model.  I’ll show two ways to model the same data in a real scenario.

Maiden Name Modeling

My nickname for this technique is Maiden Name Modeling, after the best example of it.  Here is the requirement:

A congressional legislator needs a way to track contacts.  These contacts are typically constituents, but sometimes they are donors, judges, etc.  An application built on this data model will allow office clerks to maintain contacts in the legislator’s jurisdiction.  It will also allow the lookup and updating of information and notes on the contact.  Many times, a person will be a contact for many legislators, but the information differs a bit from legislator to legislator.  For instance, the contact may be a business, and the business location or phone number on record may differ from one legislator to another.

Sometimes a client won’t know how to describe the data characteristics.  And in an age when there are many, many database tables containing information about “people”, we modelers need some tools to decide which structure to use in which scenario.

Question to ask:  Here is a scenario: Amy Smith is a contact for legislator Bob Parker.  She gets married and becomes Amy Pumpels.  She then reaches out to another legislator Sammy Berkins and gets entered into the database as one of his contacts.  Should her name and other information automatically be overwritten in the record for Bob Parker?

If the answer is “no”, then the maiden name model is the most appropriate for the scenario.  Even though the same person is represented as a contact for the two legislators, it is appropriate for two independent records to be used.  This is because there is no business relationship between the two concepts.  They are completely independent.  In other words, if the person “Amy Smith” disappeared from Bob Parker’s contact list, Bob would be upset.  He would be searching for this person, and Amy Pumpels would be quietly hiding the fact that “Smith” has been deleted from the database.
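In code, the maiden name model might look like the following (the types are illustrative, not from a real system).  Each legislator owns its contact records outright, so updating Amy’s record for one legislator never touches her record for another:

```csharp
using System.Collections.Generic;

public class Legislator
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Each legislator owns its own Contact rows outright; there is no
    // shared "person" record behind them.
    public List<Contact> Contacts { get; set; }
}

public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }   // "Amy Smith" here; "Amy Pumpels" in another legislator's list
    public string Phone { get; set; }
    public string Type { get; set; }
    public string Notes { get; set; }
}
```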

Here is a diagram of this model.

Master Name Model

Another way to represent the same type of data is with a master name model.  You might have heard of master name indexes that seek to de-duplicate data for people of all sorts so that there is one place in the company to keep track of names, addresses, and phone number, etc.  This is useful in many scenarios.  Here is a way to understand if this structure is more appropriate to the situation.

Question to ask: Here is a scenario: Amy Smith is a contact for legislator Bob Parker.  She gets married and becomes Amy Pumpels.  She then reaches out to another legislator Sammy Berkins and gets entered into the database as one of his contacts.  Should her name and other information automatically be overwritten in the record for Bob Parker?

If the answer is “yes” – that Amy Smith should become Amy Pumpels in every legislator’s contact list – then this is a tip-off that the master name model is appropriate.  A UI feature that might accompany this model is a screen that selects an existing contact and adds a Type and Notes.  In this scenario, the user will maintain a shared group of Contacts, and they will be attached to a Legislator along with a Type and Notes specific to the relationship.  Here is what it looks like.


Danger of many-to-many relationships

Many-to-many relationships have always been hard to manage because of the ownership issue: what object owns the relationship?  For the database, there is no concept of ownership.  In the database, we just store the current state and structure of the data – there are no hints around how it is used.  Any application using and modifying the data must establish usage constraints in order to present an understandable records-management paradigm.

We do this by eliminating many-to-many scenarios in the application: in the object model.  In the above diagram, you see that Legislator has a one-to-many with LegislatorContact.  Then LegislatorContact has a many-to-one relationship with Contact.  This is important: Contact has no relationship with Legislator or LegislatorContact.  And LegislatorContact has no relationship with Legislator.  In the object model, we do not represent these possible relationships in order to make the application code simple and consistent.  Through this modeling, we ensure that application code uses these objects in only one manner. 

In domain-driven design terms, Legislator and Contact are aggregate roots, and LegislatorContact is a type belonging to the Legislator aggregate that can only be accessed through a Legislator.  With domain-driven design, we constrain the model with rules that make things simpler by taking away possible usage scenarios.  For instance, it’s ok for a subordinate member of an aggregate to have a dependency on another aggregate root, but not on classes owned by that aggregate root.  And it’s ok for an aggregate root to depend directly on another aggregate root, but it is not ok for an aggregate root like Contact to depend directly on a subordinate type of the Legislator aggregate.
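Here is a sketch of those constraints in code (hypothetical types): LegislatorContact is reachable only through Legislator, holds a reference to the Contact aggregate root, and nothing points back the other way.

```csharp
using System.Collections.Generic;

// Aggregate root: the only entry point to LegislatorContact.
public class Legislator
{
    private readonly List<LegislatorContact> _contacts = new List<LegislatorContact>();
    public IReadOnlyList<LegislatorContact> Contacts { get { return _contacts; } }

    public void AddContact(Contact contact, string type, string notes)
    {
        _contacts.Add(new LegislatorContact(contact, type, notes));
    }
}

// Subordinate member of the Legislator aggregate.
public class LegislatorContact
{
    public LegislatorContact(Contact contact, string type, string notes)
    {
        Contact = contact;
        Type = type;
        Notes = notes;
    }

    // May reference another aggregate root...
    public Contact Contact { get; private set; }
    public string Type { get; private set; }
    public string Notes { get; private set; }
    // ...but holds no reference back to Legislator.
}

// Aggregate root: knows nothing of Legislator or LegislatorContact.
public class Contact
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

Because application code can only create a LegislatorContact through Legislator.AddContact, the many-to-many relationship in the database is reduced to a one-way path in the object model.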

With these modeling constraints, we eliminate the many-to-many concept that is possible from the data in the application so that application code can be drastically simpler and one-way.

Conclusion

There is no “one way” to model data or objects.  I hope that this post has helped with one common decision point that has occurred over and over in my career.  I would love to have your comments.  Have you encountered a decision point similar to this?