Challenging non-local session scope (session-per-request)

I posted the following on the nhusers mailing list, and Steve Bohlen was nice enough to weigh in.  I was hoping to get broader feedback on this, so I’m posting it here. 


I want to challenge a presumed best practice with NHibernate.  I'm challenging it honestly given my experience using it and helping many clients use it as well.

On the official NHibernate community website, there is an article on session-per-request.  I have used this technique for most of the years I've used NHibernate, and I have been able to use it effectively.  But I couldn't get over the fact that all of our clients have problems with this and end up in situations where lazy-load queries happen at all layers of the application.

My problem, then, is that this pattern, while technically sound and useful, introduces complexity that requires a deeper understanding of NHibernate to manage.  Because of this, we have begun disposing of the session after each and every transaction.  See here:

public Employee GetByUserName(string userName)
{
    using (ISession session = DataContext.GetSession())
    {
        IQuery query = session.CreateQuery(
            "from Employee emp left join fetch emp.Roles where emp.UserName = :username");
        query.SetParameter("username", userName);
        var match = query.UniqueResult<Employee>();
        return match;
    }
}
Here, I get the employee, dispose of the session, and pass the object back.  For each variation of loading, we have a different method that eager fetches the right level of association or collection.  This way, the application has the data it needs, and there aren't any lazy-load exceptions.

This approach simplified the code in a few ways:

  • Each query to the database is known, and there is a method for each
  • All queries are executed while the call stack is in the data layer (no UI-level lazy loading)
  • It forces the developer to think about each data scenario
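To make "a method for each" concrete, every additional loading level gets its own explicit eager-fetch method alongside GetByUserName. A sketch (the Profile association and the method name are hypothetical, for illustration only):

```csharp
// A second repository method for a deeper loading scenario: callers that
// also need the profile get their own explicit eager fetch, and the
// session is still disposed before the object leaves the data layer.
public Employee GetWithRolesAndProfile(string userName)
{
    using (ISession session = DataContext.GetSession())
    {
        IQuery query = session.CreateQuery(
            "from Employee emp " +
            "left join fetch emp.Roles " +
            "left join fetch emp.Profile " +
            "where emp.UserName = :username");
        query.SetParameter("username", userName);
        return query.UniqueResult<Employee>();
    }
}
```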

In coaching and training other developers (we actually teach NHibernate in our Agile Boot Camp training class), we have come up with a simple message for how to think about NHibernate and other ORMs.

  1. When moving from stored procedures to an ORM, don't forget what was good about sprocs (you had a list of every query that would be run against your database)
  2. Use an ORM for just Object to Relational Mapping.  That is the core strength.  The other features, like lazy-loading, can be useful, but they also add complexity
  3. Keep all your relationships and collections mapped as lazy, but don't allow them to lazily load
  4. Dispose of a session immediately after using it.
  5. Keep the NHibernate reference out of your core library and behind data access interfaces.
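Point 5 might look like this in practice: the core library sees only an interface, and the NHibernate reference lives in the data access assembly alone. A minimal sketch (the interface and class names are assumptions):

```csharp
// Core library: no NHibernate reference, just a data access interface.
public interface IEmployeeRepository
{
    Employee GetByUserName(string userName);
}

// Data access assembly: the only place that references NHibernate.
public class NHibernateEmployeeRepository : IEmployeeRepository
{
    public Employee GetByUserName(string userName)
    {
        // Rule 4 in action: the session is disposed immediately after use.
        using (ISession session = DataContext.GetSession())
        {
            return session
                .CreateQuery("from Employee emp left join fetch emp.Roles where emp.UserName = :username")
                .SetParameter("username", userName)
                .UniqueResult<Employee>();
        }
    }
}
```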

I'll pause here because there is nothing wrong with session-per-request technically.  I have used it successfully on many projects.  The problem is that it is complicated.  I understand it just fine, but I'm also an NHibernate expert.  Lots of other developers just want to use it and be done with it.  They don't want surprises.  Elevating the session past local scope up to a global variable brings unintended consequences.  When developing, I might dereference another collection on an object, and NHibernate happily runs another select statement against my database.  As that developer, I don't have a profiler perpetually open to see this happen.  There is absolutely no signal that I just put something undesirable in my application.

Developers need fast feedback on what they are doing.  If there were no session hanging around, the lazy-load invocation would throw an exception right then.  The developer would then go back and modify the query or decide that we need a new one.  We have used session-per-transaction on two projects now, and it hasn't led to any other difficulties, so I view it very favorably.
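The fast-feedback point can be made concrete. With the session already disposed, touching an association that wasn't eagerly fetched fails immediately instead of silently issuing another select. A sketch, assuming the GetByUserName query above (the Orders association is hypothetical):

```csharp
Employee emp = repository.GetByUserName("jsmith");

// Roles was eagerly fetched by the query, so this is fine.
foreach (var role in emp.Roles)
    Console.WriteLine(role.Name);

// Orders was NOT fetched, and the session is gone. Instead of quietly
// running another select, NHibernate throws LazyInitializationException
// here, telling the developer to modify the query or add a new one.
var count = emp.Orders.Count;
```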

A "best practice" should generally lead a developer to a desirable result.  I think that here, the best practice needs to lead to simplicity, even while the "advanced" practice may still be session-per-request.

I have not drawn a final conclusion, but given the bad effects I've seen from this, wouldn't it be a simpler (better) practice to just dispose of the session immediately and create a few more explicit repository methods for different loading levels? 

I appreciate any discussion that arises from this.  The best place would be on the NHibernate Users mailing list.


The Morning Brew - Chris Alcock » The Morning Brew #961 Posted on 10.18.2011 at 2:53 AM



Chris Marisic said on 10.18.2011 at 8:25 AM

NHibernate session management is the entire reason I /gquit NHibernate. The session is a total nightmare. I spent a couple of years using it and approached what I feel is near-expert level, if not expert level, and would constantly encounter things that just made no sense, like eager-loaded queries still invoking multiple queries to the database.

You also didn't bring up the entire range of caching layers that are built into NH, and how easily they can become inconsistent.

Pretty much I've given up on ORMs. I have applications that use EntityFramework code first, and pretty much every problem I've ever seen with NH has manifested in those applications also.

I have no idea if NH 3 fixed any or all of the problems I saw. Also, if you wanted to do something "radical" like implement the long-conversation pattern so you could hold an entire wizard session transiently until it was completed, it was basically impossible with the session.

Chris Vann said on 10.18.2011 at 9:31 AM

I've used this method in the past, but ran into other complexity problems that were just as ugly, if not more so, than the initial problems you mention. Primarily, when you use a session scope on a per-method basis as illustrated, you may (depending on the code) run into problems where one method that has its own transaction is called from inside another method that also has a transaction running. The obvious pitfall here becomes deadlocks. Even if you don't think you're accessing the same data within nested transactions, the loading of child objects often presents this possibility. Unless you have a deep understanding of DB locking mechanisms and can properly debug the call stack, it can be very difficult to track down the source of these deadlocks which can absolutely cripple the application. For this reason, I would argue that it's better to have a deeper understanding of the inner workings of NHibernate's lazy loading and continue with the session-per-request technique.
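The nested-transaction hazard can be sketched as follows: with session-per-method, a method that opens its own session and transaction and then calls another repository method ends up holding two independent transactions against the same database (the method and field names here are hypothetical):

```csharp
public void PromoteEmployee(string userName)
{
    using (ISession outer = DataContext.GetSession())
    using (ITransaction outerTx = outer.BeginTransaction())
    {
        var emp = outer
            .CreateQuery("from Employee emp where emp.UserName = :username")
            .SetParameter("username", userName)
            .UniqueResult<Employee>();
        emp.Promote();

        // This call opens a SECOND session and transaction while the
        // outer transaction still holds locks. If the two touch
        // overlapping rows (often via loaded child objects), the
        // database can deadlock.
        _auditRepository.RecordPromotion(userName);

        outerTx.Commit();
    }
}
```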

mick delaney said on 10.18.2011 at 9:37 AM

have to agree.

the important point here is making the developer think about data again. i've got a large MVC app (4 sites, 100s of controllers) and i wish i'd done that from the beginning.

Each and every query should be taken seriously, and keeping it in a data layer assembly is definitely the way to go, especially when you consider that people are doing more polyglot data now; it's not just relational anymore...

zvolkov said on 10.18.2011 at 11:36 AM

It's all about basic assumptions. Does your business strategy assume your devs to be idiots (aka The Microsoft Dev-Div Way)? If yes, then what you said here is right. If you, on the other hand, bet on neck-beards (The Unix Way and The Microsoft Win-Div Way) then you don't want to protect people from themselves.

Torkel said on 10.19.2011 at 3:15 AM

I agree that session management and the identity map (L1 cache) are the root cause of a lot of issues. But you lose a lot of the power of NHibernate if you close the session after each repository method: dirty tracking will not be as efficient, and you might need to use the Merge method (to update a detached instance), which has a lot of peculiar behavior.

What I have gravitated towards is using simple hand-coded SQL for read scenarios (simply read data and map it to a DTO that corresponds to the data that needs to be shown to a user); only in commands (that modify state) do I use session-per-request / unit of work and read up aggregate roots and modify state.
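Such a read-side query might look like this: plain ADO.NET mapping straight to a DTO, with no session and no entities involved (the DTO shape, table, and column names are assumptions for illustration):

```csharp
// Read side: a query shaped for the screen, mapped directly to a DTO.
public class EmployeeSummaryDto
{
    public string UserName { get; set; }
    public int RoleCount { get; set; }
}

public EmployeeSummaryDto GetSummary(string userName, IDbConnection connection)
{
    using (var command = connection.CreateCommand())
    {
        command.CommandText =
            @"select e.UserName, count(r.Id) as RoleCount
              from Employee e
              left join Role r on r.EmployeeId = e.Id
              where e.UserName = @userName
              group by e.UserName";

        var parameter = command.CreateParameter();
        parameter.ParameterName = "@userName";
        parameter.Value = userName;
        command.Parameters.Add(parameter);

        using (var reader = command.ExecuteReader())
        {
            if (!reader.Read()) return null;
            return new EmployeeSummaryDto
            {
                UserName = reader.GetString(0),
                RoleCount = reader.GetInt32(1)
            };
        }
    }
}
```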

For more on separating read models from write models:

Gian Maria said on 10.19.2011 at 7:18 AM

Actually I mostly work with web applications (where session-per-request is reasonably good) or desktop applications that are based on WCF services. In that scenario, using DTOs is a cool approach, because it permits you to use a similar session-per-service-call pattern.

In this scenario, a single session is valid for the entire call, lazy loading works great (but I always use NHProf to avoid N+1 selects), and the identity map prevents you from having aliases.

But after years of NH, I'm starting to think that Lazy load is an antipattern, and I try to avoid it as much as possible.

Immediately disposing the session leads to lots of problems: first of all, aliases (I thank the identity map each day :) ), and it makes using transactions really difficult (or forces you to use TransactionScope). If I find myself in a specific situation where I need to immediately dispose the session, I usually resort to issuing a standard SQL query against the database (using the same session connection).


Jimmy Bogard said on 10.19.2011 at 9:09 PM

I know of at least two projects we've worked on where this approach didn't work and we had to replace it with session-per-request ;)

And good grief was that a lot of work!

I like this approach when you have visibility into what is actually going on, but it's tough when everything is hidden behind repositories...

Mike Brown said on 10.25.2011 at 2:38 PM

I would argue that the problem is not with keeping the session open but rather in not making your transactions explicit. Creating an application service layer that forms a boundary around your data layer (and domain model too) does this. The application service layer takes ViewModels or DTOs or even value objects as parameters to its methods, delegates to the domain model and projects the results to a corresponding view model.

Internal to the service layer, there may be multiple calls to the database but it's all within the scope of one operation or transaction. Once we exit the service layer, that transaction is closed off and the object returned is specific to the client.

James Nail said on 11.04.2011 at 9:38 AM

Mike, regarding the approach you're proposing, I think a service layer composed of coarse grained commands (basically encapsulating our entire logical unit of work) would be ideal. Obviously it's overkill for a lot of projects, but not a bad idea at all for a larger application requiring that style of architecture.