I think mock objects themselves are a 300-level topic. I wish it weren’t so, but from the folks I talk to, the average developer doesn’t use them. Fakes, stubs, mocks (whatever you want to call them – and I know they overlap, and I understand the semantic differences among them) are critical for testing. It’s important to isolate code under test, and in order to do that, we have to fake out other classes the current class talks to. Not all the classes as a rule, but the ones that might give our test unpredictable results.
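To make the idea concrete, here is a minimal hand-rolled fake (all names are hypothetical, not from any post referenced here): the class under test depends on an `IClock` abstraction instead of `DateTime.Now`, so a test can substitute a fake clock and get predictable results every run.

```csharp
using System;

// The seam: code under test talks to this interface, not to DateTime directly.
public interface IClock
{
    DateTime Now { get; }
}

// A fake that always reports the same moment in time.
public class FakeClock : IClock
{
    private readonly DateTime _fixedTime;
    public FakeClock(DateTime fixedTime) { _fixedTime = fixedTime; }
    public DateTime Now { get { return _fixedTime; } }
}

public class OrderProcessor
{
    private readonly IClock _clock;
    public OrderProcessor(IClock clock) { _clock = clock; }

    // Business rule under test: orders don't ship on weekends.
    public bool ShipsToday()
    {
        DayOfWeek day = _clock.Now.DayOfWeek;
        return day != DayOfWeek.Saturday && day != DayOfWeek.Sunday;
    }
}
```

In a test, `new OrderProcessor(new FakeClock(new DateTime(2007, 6, 2)))` (a Saturday) behaves the same no matter when the test runs; the real clock, an unpredictable collaborator, has been faked out.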
Phil Haack lays out an example of faking an event on an interface. While I prefer plain delegates for my view-controller notification, events are multicast delegates, so they work as well. Give it a read.
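The gist of the technique (this sketch uses hypothetical names, not Phil Haack's actual code): a hand-rolled fake implements the view interface's event and exposes a method to raise it, so a test can simulate the user action and observe how the controller reacts.

```csharp
using System;

public interface IOrderView
{
    event EventHandler SaveClicked;
}

// The fake view: implements the interface and lets the test fire the event.
public class FakeOrderView : IOrderView
{
    public event EventHandler SaveClicked;

    // The test calls this to simulate the user clicking Save.
    public void RaiseSaveClicked()
    {
        if (SaveClicked != null)
            SaveClicked(this, EventArgs.Empty);
    }
}

public class OrderPresenter
{
    public bool Saved;

    public OrderPresenter(IOrderView view)
    {
        // React to the view's notification, whoever raises it.
        view.SaveClicked += delegate { Saved = true; };
    }
}
```

The presenter never knows it is wired to a fake; the test simply calls `RaiseSaveClicked()` and asserts on the presenter's state.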
NHibernate knows when an object under its watch has changed. As soon as the object changes, it is “dirty”. Some other changes can cause an object to be dirty as well. One that my team recently encountered is a cast. We use an enum with an underlying type of byte. It has only a few items (fewer than 255), so we use a tinyint in our database. When our mapping uses type=”byte”, NHibernate casts from the byte to our Enum type when hydrating the object. That cast registers as a change: when NHibernate checks the value, it finds an Enum where it loaded a byte, so the object looks dirty.
To get around this cast (implicit or not), we use the fully qualified type name of the Enum in the mapping. NHibernate understands Enums natively, so just put in the enum type, and you are off to the races. Note that if you are using an Enum that is nested inside a public class, you need to follow .NET’s rules for fully qualified type names (a nested type is separated from its containing type with a “+”).
See MSDN’s documentation for this:
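For illustration, a mapping fragment along these lines (class, property, and assembly names are hypothetical; `OrderStatus` is assumed to be a byte-backed enum nested inside the public class `Order`):

```xml
<!-- Instead of type="byte", name the Enum itself so NHibernate
     hydrates and compares the property as the Enum type. -->
<property name="Status"
          column="Status"
          type="MyApp.Domain.Order+OrderStatus, MyApp.Domain" />
```

Note the “+” separating the nested Enum from its containing class, per .NET's fully qualified type name rules.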
This topic is for those already using NHibernate. Judging by the forum, that is a whole load of people!
As always, my blog posts stem from experience, and this is no different. It's been a year since I first tried out NHibernate, and since then I've used it on 4 large, unrelated applications. This latest application of NHibernate is by far the most exciting, however, because we are able to take advantage of the full power of the library. The others were always tempered by a few things that couldn't be changed, which hampered seamless data access. My team no longer has to slow down to think about what SQL to write. We stay in C#, and we're going faster and faster. For the performance-minded, the NHibernate SQL is pretty darned fast (because there is nothing special about it – just CRUD). We run about 120 database tests in 2.5 seconds – not bad.
Last week, I learned something new about NHibernate – AutoFlush mode. This is important because NHibernate keeps only 1 instance of your domain object per ISession instance, so if you ask for the same object multiple times from the same ISession instance, you'll get the same domain object instance. Imagine this scenario:
- You pull some objects into memory.
- The user modifies one object.
- You query for a list using ICriteria (the object the user modified is a match for this list).
What should the system do? Should the fresh query refresh all the objects and throw away the user's changes? NHibernate's default answer is "no". It is configured to "autoflush" by default: when it detects that some changes might inadvertently be thrown away by a fresh query, it automatically updates the modified object in the database first. If you open up SQL Profiler, look for UPDATE commands amidst the SELECTs. If you choose to set the flush mode to "Never", then you'll get a big fat exception instead, and you can write some code to handle the times when you need to run a fresh query after a persistent object has been modified.
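The scenario above can be sketched against the ISession API roughly like this (the entity, its id, and the surrounding setup are hypothetical; this assumes a configured session factory and an open session, so it is an illustration rather than a runnable program):

```csharp
using NHibernate;

public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

public class AutoFlushDemo
{
    public void Run(ISession session)
    {
        // 1. Pull an object into memory; the session now tracks it.
        Customer customer = session.Get<Customer>(42);

        // 2. The user modifies it; the in-memory object is now dirty.
        customer.Name = "Changed by the user";

        // 3. A fresh query that could match the dirty object. With the
        //    default FlushMode.Auto, NHibernate issues an UPDATE for the
        //    customer before running the SELECT, so the query results
        //    agree with the in-memory change.
        System.Collections.IList matches =
            session.CreateCriteria(typeof(Customer)).List();

        // Identity map: the same session hands back the very same
        // instance, pending changes and all.
        Customer again = session.Get<Customer>(42);
        bool sameInstance = object.ReferenceEquals(customer, again);

        // Opting out: with this setting, NHibernate no longer flushes
        // automatically before queries, and reconciling dirty objects
        // with fresh queries becomes your code's job.
        session.FlushMode = FlushMode.Never;
    }
}
```

Watching SQL Profiler while stepping through code like this is the easiest way to see the automatic UPDATE slip in ahead of the SELECT.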
Imagine two scenarios:
- You build a system where one class is responsible for coordinating actions of many. This one class may observe many conditions or events and act appropriately.
- Each small event or condition is encapsulated by an object. You have many classes, but the responsibility of each class is small. No one class has too much to do.
I call the first scenario “god code”. This code rules from on high. It may have to track multiple conditions in class-level or global variables to know what is going on. It is very busy all the time and has its hand in everything.
God code leads to overly complex applications. A remedy for god code is to push behavior down into the smaller classes being coordinated. Empower the smaller classes to take some responsibility for themselves. They are quite capable.
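One shape this remedy often takes (a sketch with hypothetical names, not code from any particular system): each condition the god class used to track becomes a small class that knows when it applies and what to do, and the coordinator shrinks to a thin loop.

```csharp
using System;
using System.Collections.Generic;

public class Order
{
    public decimal Total;
    public bool IsRush;
    public decimal ShippingCost;
    public decimal Discount;
}

// Each small class owns one condition and one behavior.
public interface IOrderRule
{
    bool Applies(Order order);
    void Apply(Order order);
}

public class RushShippingRule : IOrderRule
{
    public bool Applies(Order order) { return order.IsRush; }
    public void Apply(Order order) { order.ShippingCost += 10m; }
}

public class LargeOrderDiscountRule : IOrderRule
{
    public bool Applies(Order order) { return order.Total > 1000m; }
    public void Apply(Order order) { order.Discount = order.Total * 0.05m; }
}

// What's left of the "god" class: no condition tracking, just a loop.
public class OrderRules
{
    private readonly List<IOrderRule> _rules = new List<IOrderRule>
    {
        new RushShippingRule(),
        new LargeOrderDiscountRule()
    };

    public void ApplyAll(Order order)
    {
        foreach (IOrderRule rule in _rules)
            if (rule.Applies(order))
                rule.Apply(order);
    }
}
```

Adding the next condition means adding one small class to the list, not threading another flag through the coordinator.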
I’ve been with my current company for over two months now, so I’m fully ramped up and integrated with my new team. Since I arrived, we’ve hired one more developer and a tester to round out the team. Last week we finalized some plans to prepare the application for a huge influx of new features. The application does about 1/5 of what it needs to do long-term, so we need to be able to add these new features quickly now that we have formed a complete team. My challenge was to find a way to facilitate that.
The hump to get over was some of the assumptions the current application made (which were perfectly valid at the time). The business had changed direction, but the software hadn’t. Consequently, new features had to work around the (now false) assumptions. Adding features became slow, and debugging even slower.
The solution: rematch the software with the business. Another way to put it would be that the application was full, and we need to make room in the codebase for new features. This means a lot of new code. It might sound alarming at first, but typing the code isn’t what takes so long. What takes so long is hashing out the design. Yes, the code IS the design, and the code is what is important, but I would argue that the code is a mere textual representation of the design. You can’t have code without design. Typing characters in the IDE is very fast when the design is known (because one can see the text in one’s head before typing the words). The design has been hashed, changed, and rehashed over time. It’s been refined and tested, so now that it is known, the code representing it can be typed very quickly, in a manner that leaves ample room (i.e., loosely coupled) for new features and changes down the road.
We can’t throw away code! We spent a lot of money on that code!!!
The above statement is a fallacy. The company spent money designing, not typing. The communication required with business partners made up the majority of the cost. Technical design and whiteboarding carry some cost. The typing was cheap!