Billy Hollis and Rocky Lhotka gave a session on smart client architecture. They started with an overview of how smart clients differ from other types of applications. Billy Hollis opened by recognizing that developers have been pushed to the web for some time, and that smart clients now say “go back to the desktop”.
Billy Hollis and Rocky brought a lot of energy to the presentation, and the audience was quite engaged.
Applications started out on centralized mainframes that were as big as a room. Fast-forward a number of years, and PCs allowed decentralized client-server models. Client-server allowed distributed systems, but it couldn’t make the jump to the web because it required a constant connection. At the same time, COM came around, but it had issues with the stateless Internet. Then came the web model, which went back to a centralized web server and turned the PC into a dumb terminal. Architecturally, this was a step back. .Net solves the issues that kept client-server apps from making the jump to the web, and it helps get past the limitations of the browser.
The following are some aspects of a smart client. Smart clients emphasize intelligence in the UI. They take advantage of the client machine, yet defer processing to a server. There are some questions you can ask yourself to determine whether you need a smart client: Do you control the client OS? Do you need offline support? Would a better UI lead to better productivity? Do you need tablet features? Do you need to access local resources? What security do you need? Do you have low-bandwidth requirements?
There are some requirements for using a smart client. You need broadband, .Net Windows Forms, a deployment story (copy-and-run, ClickOnce, etc.), web services / remoting, Code Access Security, and authentication / authorization. If you have all of these, then you have a good environment for a smart client application. Code Access Security is especially important in this list.
When you build web-based applications, you are trained not to use state because it means more data on the wire. Developers need to change their way of thinking: with a smart client, use as much state as you need. Other than the UI, the usual principles still apply. A domain model will function in the same way, and the database will be the same. The presentation to the user is the main thing that is different. Billy presented a layered architecture as a basis for the smart client, including local storage on the client for caching. What hits me is that if you have a rich domain model, the only things that change are the view and the controller. To build the client layer, you want to componentize as much logic as possible.
Billy took a shot at VB developers. He stated that a lot of those developers reuse functionality by copying and pasting in the UI. Billy is a VB guy himself, but he pointed out that this is a cultural issue that needs to be addressed. By componentizing, you pull logic out of the individual views.
Some of the key technologies needed to effectively implement smart clients are extender providers and dynamic loading of .Net assemblies. Extender providers become more standardized in 2.0, but they can be done in 1.1. You will also want to dynamically load assemblies, because this allows you to load only the components you need at any one point in time.
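The dynamic-loading idea isn’t specific to .Net assemblies. As a rough, language-agnostic sketch (here in Python, using `importlib` in place of `Assembly.Load`; the `load_component` helper is my own invention, not from the session), loading a component only at the point of use might look like:

```python
import importlib


def load_component(module_name, attr_name):
    """Load a named component only when first requested,
    mirroring the idea of dynamically loading an assembly."""
    module = importlib.import_module(module_name)  # deferred load
    return getattr(module, attr_name)


# Load the JSON encoder component only at the point of use.
encoder_cls = load_component("json", "JSONEncoder")
print(encoder_cls().encode({"loaded": True}))
```

The payoff is the same as in the .Net case: startup stays fast and memory stays low because components the user never touches are never loaded.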
Billy Hollis demoed an implementation of the observer pattern for IsDirty checking. He used VS 2005 and added a DirtyChecker component to watch over all controls with the DirtyChecking property set to true. The component raises an event when some data changes, and you can run code in response. In his example, the Save button is enabled when the data is dirty. The DirtyChecker is a component Billy wrote himself, and it is available on his website.
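Billy’s actual DirtyChecker source lives on his site, but the observer idea behind it can be sketched in a few lines of Python (the class and method names here are my own, not his):

```python
class DirtyChecker:
    """Watches a set of fields and notifies observers when a value changes."""

    def __init__(self):
        self._values = {}
        self._observers = []  # callbacks to run when data goes dirty

    def subscribe(self, callback):
        self._observers.append(callback)

    def set(self, field, value):
        old = self._values.get(field)
        self._values[field] = value
        if old != value:  # data changed, so it is now dirty
            for notify in self._observers:
                notify(field, value)  # e.g. enable the Save button


# Usage: enable a (hypothetical) Save button when data becomes dirty.
dirty_fields = []
checker = DirtyChecker()
checker.subscribe(lambda field, value: dirty_fields.append(field))
checker.set("name", "Billy")
print(dirty_fields)
```

The views never poll their controls; they just subscribe once, which is exactly the decoupling the observer pattern buys you.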
Rocky then took over and moved from the UI to data transport. Rocky emphasized that UIs always change. Customers always change their minds, and they blame the developer. UI designers have the worst job in the world, according to Rocky, and the person building the UI ought to be able to focus on it and add glitz. He contends that if a UI developer ever has to touch remoting, or even SQL, then something is wrong; keeping those concerns out of the UI should be the goal. This allows the UI developer to focus on one thing and do a good job of it. Rocky went on with the following (paraphrased). Most applications exist as a way to move data back and forth between the user and the database, so what should you use to connect the UI with the database? On the server, you can host some data. You also have to choose the transport for that data (for getting it across the network). Certain hosts require certain transports. In the future, Microsoft will provide a unified view of the host and transport in the Indigo package, which he says can be configured for messages or method calls.
Today we have some competing technologies: RPC, async messaging, or services (SO). Rocky made a crack at SOA, saying that using that buzzword will make your salary go up, but nothing else right now (tongue in cheek). The options today are numerous. Web services don’t work perfectly for client-server scenarios, but you gain a lot of benefit from using them. DCOM is also an option, and it’s the fastest one; if raw performance is the main focus, then Enterprise Services with DCOM is where you want to be. Most of us don’t need that number of transactions/sec, so we can fall back to remoting. Remoting was created for use with client-server, and it is the easiest method for it. No matter the choice, moving to Indigo later won’t be much of a problem.
Rocky talked about a “DataPortal” design pattern whose goal is to abstract away the communication between the business logic and the data access code. You don’t want the UI or business logic to know what the data access code knows. The more places you interact with communication protocols, the more code you’ll have to change when Indigo comes. The DataPortal pattern puts a DataPortal Façade below the business logic; the façade knows how to contact the DataPortal to get data, so callers don’t care which communication mechanism is used. The DataPortal Façade knows remoting, or DCOM, or web services.
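Rocky’s real DataPortal is part of his CSLA framework, so the following is only a sketch of the façade idea in Python (the channel classes and `fetch` method are invented names for illustration): the business layer talks to one object, and the transport behind it can be swapped without touching any callers.

```python
class RemotingChannel:
    def fetch(self, criteria):
        return {"via": "remoting", "criteria": criteria}


class WebServiceChannel:
    def fetch(self, criteria):
        return {"via": "webservice", "criteria": criteria}


class DataPortal:
    """Facade: business logic asks for data; the configured channel
    decides how the request crosses the network."""

    def __init__(self, channel):
        self._channel = channel  # swap channels without changing callers

    def fetch(self, criteria):
        return self._channel.fetch(criteria)


# Business code only ever sees the DataPortal.
portal = DataPortal(RemotingChannel())
print(portal.fetch("customer=42"))
```

When Indigo (or anything else) arrives, only a new channel class is needed; the business logic above the façade is untouched, which is exactly the point Rocky was making.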
Some options for moving and managing data are datasets, data transfer objects (DTOs), or domain objects. A DTO is just an object that contains data for the purpose of moving it around. The ultimate DTO is the DataSet: it contains data but no intrinsic logic or business behavior, so it fills the DTO role well. One scenario where these are used is a data-centric model (fat UI). This model relies on DTOs and makes it easy to put business logic in the UI, so it requires discipline to keep business logic out of the UI.
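To make the DTO idea concrete, here is a minimal sketch in Python (the `CustomerDTO` type and its fields are invented for illustration): fields only, no behavior, ready to serialize across the wire.

```python
from dataclasses import asdict, dataclass


@dataclass
class CustomerDTO:
    """A data transfer object: just fields, no business behavior."""

    customer_id: int
    name: str


dto = CustomerDTO(customer_id=42, name="Acme")
print(asdict(dto))  # plain dict, ready for the transport layer
```

Any business rules about customers belong in the domain layer, not here; the DTO exists purely so data can cross the network boundary.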
Another option is distributed objects. This puts the network boundary between the UI and the domain layer. This is better, but it still has a flaw: the objects tend to just pump tabular views of data in and out. Client-side objects help this situation. The objects reside physically on the client and help encapsulate logic while aiding the UI.
Rocky demoed some code at www.lhotka.net/go/teched05.aspx where he showed the Façade layer that could use any transport based on an enum. It abstracted away all transport and data details. Very nice demo.
Microsoft is starting to beef up support for smart clients, but it’s a great application option regardless. Avalon is the next generation of smart client UI, and XAML will be big there.