Tech Ed 2005 Day 2 – Test-Driven Development is Design!

Scott Bellware, along with Darren Norton, facilitated a Birds of a Feather session on Test-Driven Development.  There was a very large turnout that left standing room only.  Most of the attendees were interested in TDD but weren’t practitioners, so the session started with an overview.  James Newkirk even came, but he hid out in the back until he was discovered.  James Avery and Dave Donaldson also hid in the back and didn’t say much.

There was some good discussion about how to keep TDD from slowing down a project. We made it clear that TDD is a rigorous methodology and requires a learning curve: just as you had to learn .Net before you became more productive with it, the same is true of TDD.

Tech Ed 2005 Day 2 – Dealing with Data in Service-Oriented Architectures

This session was not about SOA itself but about dealing with data in an SOA. The issue John brings to light is that many systems cannot have a single source of data because of sheer volume; think of huge databases that grow 40% per year. The data has to be partitioned in some manner, but after partitioning, how is it queried? That becomes a problem.

Architects fall into two camps: “object-siders” and “data-siders”.

Object-siders start with objects; data-siders think of everything in terms of the database.

John speaks of the progression from the beginnings of OO, through components, to SO in terms of evolution. I’m not sure I agree with this, because evolution implies that SO is a superset of what came before, but I think it’s just another layer above and around OO and components.

In this presentation, data is owned by different services. Data is partitioned among different services that use each other to get a job done. It’s clear which service owns which data. This presentation is very high-level and very abstract, and he doesn’t go into much detail about where actual applications fit into his model. He talks only of services and how they talk to each other. In my experience, services don’t talk to other services. Applications talk to services, and services are owned by applications, but service code doesn’t engage in two-way interaction with other service code. The consumption direction is one-way.

One thing I really liked is that John refuted the idea that SO kills OO. He emphasizes that services are built with OO components.

John concludes that architecture is very hard and is a game of trade-offs. He advises us to “be very careful” when trying to factor existing systems into services.

Tech Ed 2005 Day 2 – Anatomy of a network hack: How to get your network hacked in 10 easy steps

Jesper Johansson gave a session on the Anatomy of a network hack: How to get your network hacked in 10 easy steps. He set up a local network with several machines and hacked through a SQL injection attack, through an outer domain controller, to the corporate domain controller, and took over the entire network. He used several command-line tools and several built-in Windows tools to accomplish the hack. The guy next to me got very depressed and declared after the talk, “I’m going to unplug all my servers.” Security is a very real concern, and IT Pros need to be experts on security, but the problem is that if you don’t know how to hack, you don’t know how to secure against those hacks. I don’t pretend to know a lot of hacks, but I’ve committed to knowing application security. If the server on which my application runs is vulnerable, then we may be sunk anyway. It was a great session, and Jesper is a great speaker.

How to get your network hacked in 10 easy steps:
1. Don’t patch anything.
2. Run unhardened applications.
3. Use one admin account, everywhere.
4. Open lots of holes in the firewall.
5. Don’t restrict internal traffic.
6. Allow all outbound traffic.
7. Don’t harden servers. Run them in the default configuration.
8. Reuse your passwords everywhere.
9. Use high-level service accounts in multiple places.
10. Assume everything is OK.

Tech Ed 2005 Day 2 – morning

I woke up later than I wanted this morning, but I didn’t intend to make the keynote anyway.  By the way, there isn’t any significant wireless access at the convention center.   I haven’t been able to connect at all.  I wasn’t planning on it anyway because I always bring my Sprint phone with the USB cable, and I can get on the Net anywhere I have signal.  I’ll never count on wireless access at a conference.  In my hotel it’s pretty good.

New INETA liaison for south Texas – level 000

I’ve recently accepted a new position as the INETA liaison for south Texas.  I’ll be the contact point for several user groups to help them with INETA-related matters.  I look forward to contributing more to the developer community beyond leading the Austin .Net User Group.

Tech Ed 2005 Day 1 – Smart Client Architecture

Billy Hollis and Rocky Lhotka gave a session on smart client architecture. They opened with an overview of how smart clients differ from other types of applications. Billy Hollis began by recognizing that developers have been pushed to the web for some time, and now smart clients say “go back to the desktop”.

Billy Hollis and Rocky brought a lot of energy to the presentation, and the audience was quite engaged.

Applications started out on centralized mainframes that were as big as a room. Fast-forward a number of years, and PCs allowed decentralized client-server models. Client-server allowed distributed systems, but it couldn’t make the jump to the web because it required a constant connection. At the same time, COM came around, but there were issues with the stateless Internet. Then came the web model, which went back to a centralized web server and turned the PC into a dumb terminal for the web. This was a step back architecturally. .Net solves the issues that prevented client-server apps from making the jump to the web, and this will help overcome the limitations of the browser.

The following are some aspects of a smart client. Smart clients emphasize intelligence of the UI. They take advantage of the client machine, yet defer processing to a server. There are some questions you can ask yourself to determine whether you need a smart client. Do you control the client OS? Do you need offline support? Would a better UI lead to better productivity? Do you need tablet features? Do you need to access local resources? What security do you need? Do you have low-bandwidth requirements?

There are some requirements in order to use a smart client: broadband, .Net Windows Forms, a deployment story (copy-and-run, ClickOnce, etc.), web services / remoting, code access security, and authentication / authorization. If you have all of these, then you have a good environment for a smart client application. Code Access Security is very important in this list.

When you do web-based applications, you are trained not to use state because it means more data on the wire. Developers need to change their way of thinking. With a smart client, use as much state as you need. Other than the UI, the other principles still apply. A domain model will function in the same way, and the database will be the same. The presentation to the user is the main thing that is different. Billy presented a layered architecture as a basis for the smart client, including local storage on the client for caching. What hits me is that if you have a rich domain model, the only things that change are the view and the controller. To build the client layer, you want to componentize as much logic as possible.

Billy took a jab at VB developers. He stated that a lot of those developers reuse functionality by copying and pasting in the UI. Billy is a VB guy himself, but he pointed out a cultural issue that needs to be addressed. By componentizing, you pull logic out of the individual views.

Some of the key technologies needed to effectively implement smart clients are extender providers and dynamic loading of .Net assemblies. Extender providers become more standardized in 2.0, but they can be done in 1.1. You will also want to dynamically load assemblies, because this allows you to load only the components you need at any one point in time.

Billy Hollis demoed an implementation of the observer pattern for IsDirty checking. He used VS 2005 and added a DirtyChecker component to watch over all controls with the DirtyChecking property set to true. The component raises an event when some data changes, and you can run some code in response. In his example, when the data is dirty, the save button is enabled. The DirtyChecker is a component Billy wrote himself, and it is available on his website.
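I haven’t seen Billy’s actual code, so this is only my own sketch of the shape such a component might take (all names here are my invention, not his implementation):

```csharp
using System;
using System.Windows.Forms;

// Hypothetical dirty-checker: subscribes to each watched control's change
// event and raises one IsDirtyChanged event the form can react to.
public class DirtyChecker
{
    public event EventHandler IsDirtyChanged;
    private bool isDirty;

    public bool IsDirty
    {
        get { return isDirty; }
    }

    // Watch a TextBox; the same pattern extends to other control types.
    public void Watch(TextBox textBox)
    {
        textBox.TextChanged += delegate
        {
            isDirty = true;
            if (IsDirtyChanged != null)
                IsDirtyChanged(this, EventArgs.Empty);
        };
    }

    // Call after a save to mark the data clean again.
    public void Reset()
    {
        isDirty = false;
    }
}
```

A form would then wire it up with something like `checker.IsDirtyChanged += delegate { saveButton.Enabled = true; };`, which matches the save-button behavior from the demo.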

Rocky then took over and moved from UI to data transport. Rocky emphasized that UIs always change: customers always change their minds, and they blame the developer. UI designers have the worst job in the world, according to Rocky, and the person building the UI ought to be able to focus on it and add glitz. He contends that if a UI developer ever has to touch remoting, or even SQL, then something is wrong; keeping the UI developer away from those details should be the goal. This allows the UI developer to focus on one thing and do a good job of it.

Rocky goes on with the following (paraphrased). Most applications exist as a way to move data back and forth between the user and the database, so what should you use to connect the UI with the database? On the server, you can host some data. You also have to choose the transport for that data (for getting it across the network), and certain hosts require certain transports. In the future, Microsoft will provide a unified view of host and transport in the Indigo package. He says you will be able to configure it for messages or method calls.

Today we have some competing technologies: RPC, async messaging, or services (SO). Rocky made a crack at SOA, saying that using that buzzword will make your salary go up, but nothing else right now (tongue in cheek). Options today are numerous. Web services don’t work perfectly for client-server scenarios, but you gain a lot of benefit from using them. DCOM is also an option, and it’s the fastest one; if performance in that space is the main focus, then Enterprise Services with DCOM is where you want to be. Most of us don’t need that number of transactions/sec, so we can fall back to remoting. Remoting was created for use with client-server, and it’s the easiest method for client-server. No matter the choice, moving to Indigo later won’t be much of a problem.

Rocky talks about a “DataPortal” design pattern whose goal is to abstract away the communication between the business logic and the data access code. You don’t want the UI or business logic to know what the data access code knows. The more places you interact with communication protocols, the more code you’ll have to change when Indigo comes. The DataPortal pattern puts a DataPortal Façade below the business logic, and the façade knows how to contact the DataPortal to get data. You don’t care what communication mechanism is used; the DataPortal Façade is what knows remoting, or DCOM, or web services.
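As a rough sketch of the idea (my own invented names, not Rocky’s actual code), the façade might look like this: business objects call Fetch, and only the façade knows which transport carries the call.

```csharp
using System.Data;

// Which wire the call travels over; chosen from configuration.
public enum Transport { Remoting, WebServices, EnterpriseServices }

public static class DataPortalFacade
{
    // Set once from config; business logic never reads this.
    public static Transport Current = Transport.Remoting;

    // The one entry point the business layer knows about.
    public static DataSet Fetch(string criteria)
    {
        switch (Current)
        {
            case Transport.Remoting:
                return FetchViaRemoting(criteria);
            case Transport.WebServices:
                return FetchViaWebService(criteria);
            default:
                return FetchViaEnterpriseServices(criteria);
        }
    }

    private static DataSet FetchViaRemoting(string criteria)
    { /* remoting call would go here */ return null; }

    private static DataSet FetchViaWebService(string criteria)
    { /* web service call would go here */ return null; }

    private static DataSet FetchViaEnterpriseServices(string criteria)
    { /* DCOM / Enterprise Services call would go here */ return null; }
}
```

When Indigo arrives, only the private methods change; the business layer never notices.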

Some options for moving and managing data are DataSets, data-transfer objects (DTOs), or domain objects. A DTO is just an object that contains data for the purpose of moving it. The ultimate DTO is the DataSet: it contains data but no intrinsic logic or business behavior, so it fills the DTO role. One scenario where these are used is a data-centric model (fat UI). This model makes it easy to put business logic in the UI, and it relies on DTOs; it requires discipline to keep business logic out of the UI.

Another option is distributed objects. This puts the network boundary between the UI and the domain layer. This is better but still has a flaw: the objects tend to just pump tabular views of data in and out. Client-side objects help this situation. The objects reside physically on the client and help encapsulate logic while aiding the UI.

Rocky demoed some code where he showed a façade layer that could use any transport based on an enum. It abstracted away all transport and data details. Very nice demo.

Microsoft is starting to beef up support for smart clients, and it’s a great application option regardless. Avalon is the next generation of smart client technology, and XAML will be a big part of it.

Tech Ed 2005 Day 1 – Microsoft Visual C# Under the Covers: An In-Depth Look at C# 2.0

I attended Anders Hejlsberg’s session on Microsoft Visual C# Under the Covers: An In-Depth Look at C# 2.0. He started with an overview of the new language features. They include generics, anonymous methods, nullable types, iterators, partial types and more, yet the language is 100% backwards compatible. Generics required a change to the runtime, but the others did not.

Sometimes with aggregates or groups of objects, we have to use an IList class or implement a custom aggregate class (a custom collection). Using ArrayList as-is works, but you lose strong typing to your custom object. To get a strongly-typed list, the developer has to create a brand new class just for that purpose. Generics solve this problem with type parameters. With a generic type parameter, you get type checking, no boxing, and no downcasts. Another benefit is increased sharing of the generic collection code, because the type can be substituted. C# generics are different from C++ templates because the code is instantiated at run time instead of compile time, and C# generics are checked at declaration, not instantiation. They work for both reference types and value types. With a type parameter, you can choose to accept any type or put constraints on the parameter to accept only a subset of types. Type parameters aren’t limited to classes, structs, interfaces, and delegates; you can also put a type parameter on a method. The example he gave was as follows:

class Utils {
    public static T[] CreateArray<T>(T value, int size) {
        T[] result = new T[size];
        for (int i = 0; i < size; i++)
            result[i] = value; // fill each slot with the supplied value
        return result;
    }
}
This also supports type inference: if you leave out the type parameter, the compiler infers it from the arguments you pass. Very interesting! Definitely, the most popular use of generics will be consuming the built-in generic collections such as List<T>, Dictionary<K,V>, SortedDictionary<K,V>, Stack<T>, and Queue<T>. Instead of using an ArrayList, I’d use List<T>. If you still need a custom collection for smarter aggregates, you can inherit from Collection<T> and implement just the additional functionality you need.
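Here’s my own quick sketch (not Anders’ demo code) of what consuming this looks like, including type inference on a generic method:

```csharp
using System.Collections.Generic;

class GenericsDemo
{
    // A generic method like the CreateArray example above.
    static T[] CreateArray<T>(T value, int size)
    {
        T[] result = new T[size];
        for (int i = 0; i < size; i++)
            result[i] = value;
        return result;
    }

    static void Main()
    {
        // Explicit type parameter:
        string[] names = CreateArray<string>("x", 3);

        // Type inference: the compiler infers T = int from the argument 42.
        int[] numbers = CreateArray(42, 3);

        // Strongly-typed collections: no casts, no boxing.
        List<int> list = new List<int>();
        list.Add(1);
        int first = list[0];          // no downcast needed

        Dictionary<string, int> counts = new Dictionary<string, int>();
        counts["answer"] = 42;
    }
}
```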

With anonymous methods, we can substitute a body of code for the delegate name. If I want a body of code to run as a parameter to another method, or anywhere else a delegate is expected, I just use the following:

delegate() { return something; }

With anonymous methods, the delegate type is automatically inferred, and the code block may omit the parameter list. The most useful scenario is event handlers. Instead of creating a separate event handler method, I can just add a code block to the click event like so:

button.Click += delegate { MessageBox.Show("Howdy"); };

The compiler does the method creation for you.

Anders did a speed test using anonymous methods with generics. The ArrayList of ints took longer than the ArrayList of strings. Using the generic List<string> was faster than both ArrayList uses, but List<int> was significantly faster still, because there was no boxing and not even an object involved. Anders explained that the x86 code that executes deals with ints natively. Very cool.
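The boxing difference is easy to see in code; this is my own illustration of why the int cases diverge, not the benchmark Anders ran:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class BoxingDemo
{
    static void Main()
    {
        // ArrayList stores object references, so each int is boxed on Add
        // and must be downcast (and unboxed) on the way out.
        ArrayList untyped = new ArrayList();
        untyped.Add(42);                    // boxing allocation
        int fromUntyped = (int)untyped[0];  // downcast + unboxing

        // List<int> stores ints directly: no boxing, no casts.
        List<int> typed = new List<int>();
        typed.Add(42);
        int fromTyped = typed[0];

        Console.WriteLine(fromUntyped == fromTyped); // True
    }
}
```

Strings are already reference types, which is why ArrayList penalizes ints more than strings.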

After the generics demo, he went on to nullable types. He addressed the disparity between the .Net type system and relational database type systems. C# 2.0 supports nullable types, which means value types can now be null in addition to their normal range of values. The native types are not changed, but new types exist: there is an int and an int?. With an int, one memory slot is the value, but an int? also stores a boolean flag denoting whether the value is null or not. System.Nullable<T> is the enabler for these nullable types, and C# 2.0 has language features that use this class via the “?” marker after a value type, such as int? and double?, both of which can now be assigned null.
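A minimal sketch of the syntax (my own example):

```csharp
using System;

class NullableDemo
{
    static void Main()
    {
        int? count = null;        // int? is shorthand for System.Nullable<int>
        if (count == null)
            count = 0;            // plain ints convert implicitly to int?

        double? price = 9.99;
        if (price.HasValue)       // check the flag before reading the value
            Console.WriteLine(price.Value);

        // Note: "int i = price;" won't compile -- you must unwrap with
        // price.Value (or a cast) to get back to the plain value type.
    }
}
```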

Iterators are also an addition to the language and introduce the “yield” keyword. An iterator method must return IEnumerator or IEnumerable. It incrementally computes and returns a sequence of values using “yield return” and “yield break”. Using these keywords causes the compiler to generate the underlying IL for you.
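For example (my own sketch), the compiler turns this method into a state machine behind the scenes:

```csharp
using System.Collections.Generic;

class IteratorDemo
{
    // An iterator: computes the sequence lazily, one value per request.
    static IEnumerable<int> Evens(int max)
    {
        for (int i = 0; i <= max; i += 2)
            yield return i;   // hand back one value, then resume here
    }

    static void Main()
    {
        foreach (int n in Evens(6))
            System.Console.Write(n + " ");   // prints 0 2 4 6
    }
}
```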

Partial types bring the ability to split a class among many physical files. This is similar to include files, but it’s handled by the compiler, which merges the parts into one class when building IL. If decompiled, you can’t tell it was written as separate partial classes. The “partial” keyword is placed before “class” in the class definition to invoke this feature. The second purpose of partial classes is to solve a code-generation problem: if you change a generated class and regen, you’re in trouble because your changes are gone. If the generated class is partial, then you can own another partial class that the compiler merges in, and since your code is separate, regens don’t mess you up. The compiler understands the sum of the partial classes.
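A minimal sketch (the two halves would normally live in separate files):

```csharp
// File 1 -- typically produced by a designer or code generator:
public partial class Customer
{
    public string Name;
}

// File 2 -- hand-written; regenerating file 1 never touches this code.
// The compiler merges both declarations into a single Customer class.
public partial class Customer
{
    public string Greet()
    {
        return "Hello, " + Name;
    }
}
```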

C# 2.0 also supports static classes. Many developers create classes with only static methods, and 2.0 adds the “static” keyword as a class modifier to denote that no instance of that class can be created.
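For example (my own sketch):

```csharp
// The "static" class modifier: no constructors, no instances, and the
// compiler rejects any instance members you try to declare.
public static class MathUtils
{
    public static int Square(int x)
    {
        return x * x;
    }
}

// MathUtils m = new MathUtils();  // won't compile: cannot instantiate a static class
```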

The change to accessor accessibility will probably be quite useful. When you find yourself with a public property but you want the setter to be internal to the assembly, you can now set the accessibility level on the individual accessor.
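The scenario looks like this (my own example):

```csharp
public class Order
{
    private int status;

    // Anyone can read Status, but only code in this
    // assembly can set it -- new in C# 2.0.
    public int Status
    {
        get { return status; }
        internal set { status = value; }
    }
}
```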

I enjoyed this talk tremendously. It’s always good to learn about a language from one of the main engineers of that language. Great session!

Tech Ed 2005 Day 1 – Microsoft Visual C# 2005: IDE Tips and Tricks

In the Microsoft Visual C# 2005: IDE Tips and Tricks session, Lucas Hoban and Anson Horton showed VS 2005 and various features it contains. They started with the Whidbey class designer. It has some similarities with UML, but it goes much further, since VS keeps the diagram in sync with the code. The diagram and code can share the screen, and changes to one affect the other; changes in code are immediately shown in the diagram. I’m quite excited about this particular feature because it enables design-documentation generation based on the code, and the diagram will stay in sync. This is much different from UML diagrams, where if I change an inheritance relationship, I have to manually alter the diagram.

They also went over some of the refactoring features. Some of them are basic but necessary. The Rename refactoring does a smart search and replace for you instead of requiring a manual process. VS 2005 also includes static code analysis (FxCop). You can use the warnings FxCop finds as a springboard to the refactoring features.

I was most impressed with the built-in cyclomatic complexity analysis. Cyclomatic complexity is a measure of how many possible paths execution can take through a particular method. The metric you want is 1: by default there is one path through code, and when you add an “if” statement, the metric goes to 2 because there are now two paths. High cyclomatic complexity denotes smelly code that is hard to debug, and you need to refactor to smaller methods. Spaghetti methods are also hard to test because there are multiple paths through the code.
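To make the metric concrete, here’s my own illustration of how decision points add paths:

```csharp
class PricingDemo
{
    // Cyclomatic complexity 1: straight-line code, a single path.
    static decimal BasePrice(decimal price)
    {
        return price;
    }

    // Cyclomatic complexity 3: each "if" adds a path, so there are
    // now four possible executions through three paths' worth of
    // branches to cover in tests. Pile on more nested conditionals
    // and you get the spaghetti methods mentioned above.
    static decimal DiscountedPrice(decimal price, bool member, bool holiday)
    {
        if (member)
            price *= 0.9m;
        if (holiday)
            price *= 0.95m;
        return price;
    }
}
```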

IntelliSense is also beefed up in 2005. When starting a new class, just type “c” and an IntelliSense window pops up; complete the word with Tab, and “class” is inserted. The list also includes code snippets, which insert the opening and closing braces along with a fill-in section for the class name. Another common one is “prop”: hitting Tab inserts a private field as well as the property wrapper, and you just tab to the next section of the template to fill in the types and names. If you worked with Beta 1 last year, you’re familiar with most of these features, but the IDE is a lot less buggy from what I can tell.

The IDE includes a code snippet manager where you can manage the snippets. You can modify a snippet or add your own to improve your coding experience. A snippet is just an XML file with the information for building the snippet.
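From what I can tell, a snippet file looks roughly like this; I’m writing the schema from memory, so treat the element names and namespace as a sketch rather than gospel:

```xml
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Property with backing field</Title>
      <Shortcut>prop</Shortcut>
    </Header>
    <Snippet>
      <!-- Declarations define the tab-through fill-in sections -->
      <Declarations>
        <Literal>
          <ID>type</ID>
          <Default>int</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[private $type$ _value;
public $type$ Value
{
    get { return _value; }
    set { _value = value; }
}]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
```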

I saw a feature that will probably be an aid in Test-Driven Development. When writing a test, we call a method that doesn’t yet exist. VS will detect that the method doesn’t exist and offer a “smart tag” that, when activated, generates the method stub in the target class. Some ReSharper-style features are also included in the new IDE; inserting a using statement is now supported with a smart tag.

Exceptions appear in tooltips now, and type names are colorized (ReSharper style).

Tech Ed 2005 Day 1 – Opening keynote

The keynote was good, but it ran too long. I did expect a bit more excitement from Steve Ballmer than he showed. He didn’t do any shouting at this keynote. He went over some stuff about Windows Server and Windows Mobile and announced Windows Update for all Microsoft products instead of just a select few.

He hinted at a Release 2 of Windows Server 2003 that would include more features like built-in virtualization instead of a stand-alone Virtual Server product.

He demoed a web application called Virtual Earth that’s pretty much a copy of Google Maps, except for a hybrid view that shows the satellite picture below a translucent map view. Windows Mobile also has some pretty cool features, like remote wipe if the device is lost.

Visual Studio Tools for Office looks really good. With it we can create a Windows Forms application that can be hosted inside an Office application, the most obvious host being Outlook. WS-Management will also be big for server administrators: it will allow management of servers besides Windows servers, and they showed a demo using Sun servers. Steve then covered security vulnerabilities and showed a slide comparing how many more vulnerabilities SuSE and Red Hat Linux have than Windows Server 2003. He noted that Active Directory is the most popular directory, and then the keynote was over. It took quite a while to get out of the keynote room because of the bottleneck at the exits.

Tech Ed 2005 Palermo After Party

The Palermo After Party will be Friday evening at the JW Marriott.  The time is still TBD, but pass the word.  This is after the conference and will be the final Tech Ed blowout before we all have to go back home.