Successfully using Server 2k3 on my development laptop – level 100

Two months ago, I blogged about installing Server 2k3 on my development laptop.  I’m following up on that post to say that everything has gone just fine.  Every XP driver worked perfectly on my Dell Latitude D600.  The system has been very stable and very fast, and I can profile my applications with confidence that the results will be more representative of a server than if I profiled them on an XP box.  Besides, the v1.1 CLR on XP has a bug in the Large Object Heap that throws the results off. 


I can now recommend using Server 2k3 as a development platform for web applications.

Obvious compatibility bug in .Net 2.0 Beta 1 – your v1.1 code won’t work – level 100

This post is to share a backwards compatibility bug in v2.0 Beta 1.  I ran into this bug when porting a project of mine.  This wasn’t the only issue, but here it is.  Run this page under v1.1 and click the button to your heart’s content:

<%@ Page Language="C#" %>

<HTML>
<HEAD>
    <title>WebForm1</title>
</HEAD>
<body>
    <form id="Form1" method="post" runat="server">
        <asp:Button id="Button1" runat="server" Text="Button" OnClick="Button1_Click"></asp:Button>
    </form>
</body>
</HTML>

<script runat="server">
    private void Button1_Click(object sender, System.EventArgs e)
    {
        Server.Transfer(Request.Path, true);
    }
</script>

Now, run this page under v2.0.  Click the button, and the page will never come back.  It looks like the server has locked up, but it really hasn’t: the request is stuck in an infinite loop.  This is because the behavior of the second parameter of Server.Transfer has changed. 



  • When set to true, in v1.1, the querystring would be preserved.
  • In v2.0, the form values are preserved as well.
  • Because the button element is a form element, the button’s submit value is transferred as well, so the page thinks every request is a postback, and it keeps Server.Transfer()ing back to itself until you kill it.

It’s nice that Microsoft added functionality, but what we really need is another overload.  Leave the two-parameter method behaving like v1.1 (this broke my project, so I’m sure it will break other projects), and add an overload that accepts a third boolean parameter dictating whether to preserve the form values:


Server.Transfer(Request.Path, true, false);


I would use the above so that my querystring would be preserved but the form values would not.
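
In the meantime, a workaround that seems to restore the v1.1 behavior is to append the querystring to the path yourself and pass false for the second parameter, so the form values (including the button’s submit value) are dropped.  A minimal sketch, which I haven’t verified against Beta 1, so treat it as an assumption:

    private void Button1_Click(object sender, System.EventArgs e)
    {
        // Rebuild the target URL with the querystring attached, then
        // transfer with preserveForm=false so the form values are not
        // carried over and the postback loop is broken.
        string path = Request.Path;
        if (Request.QueryString.Count > 0)
        {
            path += "?" + Request.QueryString.ToString();
        }
        Server.Transfer(path, false);
    }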


Using VS 2005 Beta 1 and Yukon Beta on the same machine – level 100

Tim Sneath posted a great piece of information today.  I ran into exactly this issue:


I installed Whidbey and then Yukon.  Yukon setup kept complaining that there was already a conflicting version of Yukon on the box (I had MSDE Express), but that wasn’t it.  Tim points out that Yukon carries a slightly newer version of the .Net Framework, so what I needed to do was uninstall v2.0 (not VS itself), and then Yukon installed fine.  Great job, Tim.  I was stumped.


By the way, I saw a demo of stored procedures in .Net languages, and I’d like to clear up something:


In Yukon, there is no way to get or modify data without using TSQL.  Yes, stored procedures can be written in any .Net language, but stored procedures are just executable bits of logic.  Manipulating rowsets in the database still requires TSQL calls, but any calculations, validation, etc. can be coded in the .Net language.  That being said, is there a reason to code stored procedures in .Net?  Not for me (yet).  I make it a habit NOT to write logic in my sprocs.  I manipulate data rowsets, and that’s it, so if I still have to use TSQL to manipulate rowsets, then I’m right where I am now.
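
To illustrate, here is roughly what a .Net stored procedure looks like.  This is a hedged sketch based on the SQLCLR API as it eventually shipped (SqlProcedure, SqlContext.Pipe, and the context connection), which may differ from the Yukon beta bits, and the Orders table and column names are invented.  Note that the rowset work is still plain TSQL handed to a SqlCommand:

    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public class StoredProcedures
    {
        [SqlProcedure]
        public static void GetRecentOrders(int customerId)
        {
            // The .Net language hosts the logic, but the rowset access
            // below is still just TSQL.
            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                conn.Open();
                SqlCommand cmd = new SqlCommand(
                    "SELECT OrderID, OrderDate FROM Orders WHERE CustomerID = @id",
                    conn);
                cmd.Parameters.AddWithValue("@id", customerId);

                // Stream the results straight back to the caller.
                SqlContext.Pipe.ExecuteAndSend(cmd);
            }
        }
    }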


Also, what about the issue of database migration, or switching databases?  If you write your sprocs in .Net, you can never switch databases.  What if you develop a product with Yukon?  You’ll never be able to implement it on SQL Server 2000, and 2000 will be around for a long, long time because it suits the needs of so many businesses.


I’m probably coming across as a pessimist, but that’s not my intent.  I just know it’s inevitable that somebody will develop an entire application on Yukon’s CLR and then wonder why their database box is so damn slow.  At least it’ll be good material for The Daily WTF.

Attending the InfoPath/ASP.NET 2.0 MSDN event in Austin, TX – level 100

David Waddleton, an MSDN representative, is giving an MSDN event on InfoPath and ASP.NET 2.0 at the Gateway theater in Austin, TX.  InfoPath is quite interesting as an application client.  Currently my application client is a browser, since I write ASP.NET apps, but if all the users have InfoPath on their computers, that can be the client.  It’s a no-brainer for web apps because they can use any browser as the client, but I’m not sure InfoPath would be viable for my use even though I work for a very large enterprise company.  I can see that a consulting company might be able to use it by building it into the minimum system requirements of their solution, but at my large company, I would have to sell the top IT big dogs on the idea of installing InfoPath on every workstation.  That’s a lot of licensing cost that isn’t there with web apps or even a click-once app.  Maybe in time it will look like a better option for me.


Just an overview of what I’m doing:


I have my laptop here running on battery power, and I’m connected to the Internet through a USB cable attached to my Sprint phone.  Very nice setup.  I can get on the Net anywhere I can get a signal, and I really don’t go out into west Texas much.  I live out in the country, but Sprint has pretty good coverage even outside the suburbs.

Yes, sometimes: sharing a good article – The Magical Power of Naps – level 100

In response to the blog post The Magical Power of Naps (written in Chinese), I agree with some of what was said: if you are feeling tired, sometimes you have to take a break and get some coffee or tea.  I got a Chinese coworker to translate for me :^), but I’m curious as to why http://weblogs.asp.net has Chinese posts on it.  If you post in a different language, maybe include a link to a site where that post can be translated into English.

ASP.NET v2.0 will have a dynamically compiled /Code directory *or will it?* – level 200

I’ve been having frustrations with the v2.0 /Code directory.  Code files in it are dynamically compiled, and that’s great, but I have a /UserControls folder, and some of my user controls have tightly-bound base classes that also live in /UserControls.  In v2.0, to use these class files, I have to move them to the /Code directory, or they won’t be compiled at all.  And in Whidbey, there is no way to build a web app into a DLL like in VS 2003, so either I move them to /Code or I’m out of luck.  I want to be able to determine my own file hierarchy in my web project.  I want my class file right next to the user control that depends on it, and I need that class to be compiled to use it.  The solution I would recommend is to make dynamically compiled folders configurable.  I’d want to specify multiple folders for dynamic compilation, and I know what I would do: I’d set the root to be dynamically compiled so that everything would be compiled, and I wouldn’t have to worry about it.  As it stands now, I prefer the VS 2003 method of building code files into a DLL.  Dynamically compiled code isn’t a step forward if it requires taking a leap back by throwing away other functionality.


For those working with Whidbey, don’t rely on the /Code directory staying that way.  It will probably change.  The current thought is to have /Application_Code folders be dynamically compiled instead.  See Van’s comment about that.

Adding webparts programmatically – the REAL way :^) – level 300

Johan made a great post about adding web parts in code instead of having to declare them in the markup at design time.  This is very powerful, and I’m sure it’s the preferred method for any system with roles: you can programmatically decide whether a particular webpart should be available to a particular user, and if so, go ahead and add it.  I will definitely have to incorporate this feature into EZWeb when I release my 2.0 version.
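
As a rough sketch of the idea (based on the web part API as it shipped in ASP.NET 2.0, which may differ from the beta Johan wrote against; the control names, role, and user control path are mine):

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls.WebParts;

    public partial class Portal : Page
    {
        protected WebPartManager WebPartManager1; // declared in the markup
        protected WebPartZone MainZone;           // declared in the markup

        protected void Page_Load(object sender, EventArgs e)
        {
            // Programmatically decide whether this user gets the part at all.
            if (User.IsInRole("Administrators"))
            {
                Control adminPanel = LoadControl("~/UserControls/AdminPanel.ascx");
                adminPanel.ID = "AdminPanel1";

                // Wrap the user control so it picks up web part chrome,
                // then add it to the zone at position 0.
                GenericWebPart part = WebPartManager1.CreateWebPart(adminPanel);
                WebPartManager1.AddWebPart(part, MainZone, 0);
            }
        }
    }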

Code-behind versus Design-behind versus Code-beside – level 200

Anders Norås posted an interesting article about the intent of code-behind.  He says doing this:

<TITLE><%# GetResourceString("PageTitle") %></TITLE>


is better than doing this:

public class MyPage : System.Web.UI.Page
{
    protected HtmlGenericControl Title;

    private void Page_Load(object sender, System.EventArgs e)
    {
        Title.InnerText = GetResourceString("PageTitle");
    }
}


On my team, there is much discussion about whether to put any function calls in the aspx file at all, so this strikes me as an interesting take.  Honestly, I do a mixture of both, but in the above case, I do exactly what the second code snippet does.  I do all programmatic manipulation in my code-behind file and try to have only markup in my aspx and ascx files. 


In v2.0, I imagine I would do the following:

public partial class MyPage
{
    private void Page_Load(object sender, System.EventArgs e)
    {
        Header.Title = GetResourceString("PageTitle");
    }
}


And I really don’t agree that setting content in code is a bad thing to do.  If I were forbidden to do it, I would not be a very effective programmer at all.  For instance, I use a base page class to set the title of every page in my application (a sketch is below).  There is no way I am going to put a function call in the markup of every aspx just to set the title when I can do it with one call in my base page.  That just doesn’t make sense. 
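
For what it’s worth, a minimal sketch of that base-page approach in v2.0 terms (BasePage and the GetResourceString lookup are placeholders of mine, not the actual code):

    using System;
    using System.Web.UI;

    // Every page inherits from BasePage instead of Page directly,
    // so the title logic lives in exactly one place.
    public class BasePage : Page
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            // Page.Header requires <head runat="server"> in the markup.
            Header.Title = GetResourceString("PageTitle");
        }

        protected virtual string GetResourceString(string key)
        {
            // Placeholder: a real implementation would pull from resources.
            return key;
        }
    }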


Anders, your method works well for mostly declarative pages, but I disagree with you.  I prefer the code method.

Find the index of a particular object when foreaching on a custom collection derived from CollectionBase – level 200

I labeled this post as 200 instead of 100 because I imagine that beginners in .Net won’t be creating their own custom collection classes.  I think once someone can create their own custom collections, they should be considered at least intermediate.


Today a colleague approached me with a problem.  He was foreaching through a custom collection in C#, and when he found the object he wanted, he needed its index within the collection.  He was approaching the problem with calls to GetEnumerator(), etc., and I shared with him what I do in this situation:



  • When you inherit from CollectionBase, your underlying data structure is an ArrayList: the protected InnerList member is of type ArrayList, and ArrayList has an IndexOf(object) method that returns the index of the object within the list.

  • To make the index available to callers of your custom collection, define a method like so:

        public int IndexOf(object value)
        {
            return InnerList.IndexOf(value);
        }


  • So now, when you have an object, you can pass it to this method to get its index within your collection instance; a complete sketch follows below.
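
Here’s a self-contained sketch of the whole pattern (Widget and WidgetCollection are names I made up for illustration):

    using System;
    using System.Collections;

    public class Widget
    {
        public string Name;
        public Widget(string name) { Name = name; }
    }

    public class WidgetCollection : CollectionBase
    {
        public void Add(Widget value)
        {
            InnerList.Add(value);
        }

        // Expose ArrayList.IndexOf through the strongly typed collection.
        public int IndexOf(Widget value)
        {
            return InnerList.IndexOf(value);
        }
    }

    public class Demo
    {
        public static void Main()
        {
            WidgetCollection widgets = new WidgetCollection();
            widgets.Add(new Widget("alpha"));
            widgets.Add(new Widget("beta"));

            foreach (Widget w in widgets)
            {
                if (w.Name == "beta")
                {
                    // No GetEnumerator() gymnastics needed.
                    Console.WriteLine(widgets.IndexOf(w)); // prints 1
                }
            }
        }
    }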

When I create my own collection types, I normally expose all the helper methods of the InnerList so that my collection is as functional as possible.  Inheriting from CollectionBase just strongly types the ArrayList for you.  With generics in v2.0, if you don’t need any extra functionality from your collections, you’ll just want to go with List<T>, or you can use the new Collection<T> class:

    public class Junk<T> : Collection<T>
    {
        private int m_myExtraMember;

        public Junk(int myExtraMember)
        {
            m_myExtraMember = myExtraMember;
        }

        public new int IndexOf(T value)
        {
            return base.IndexOf(value);
        }
    }

With v2.0 and generics, you may still want to implement your own custom collection if you need to track a member that isn’t included, but even then, the work is simplified.  I have hidden the IndexOf() method with new unnecessarily, just to show some syntax; with 2.0, all you would really need is the m_myExtraMember field, since IndexOf is inherited.


Another tip about collections: know what data structure the collection uses internally.  Collection<T> uses List<T> internally, which is similar to ArrayList in that it keeps objects in the order they were inserted.  With Hashtable-based collections, objects are retrieved by key, and the order is not guaranteed, so if you foreach over a Hashtable, you can never know in what order the objects will come back (a tiny demo is below).  I hope this post helps someone.
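
A tiny demo of that ordering difference, for the curious:

    using System;
    using System.Collections;

    public class OrderDemo
    {
        public static void Main()
        {
            ArrayList list = new ArrayList();
            Hashtable table = new Hashtable();
            foreach (string s in new string[] { "first", "second", "third" })
            {
                list.Add(s);
                table[s] = s;
            }

            foreach (string s in list)
                Console.WriteLine(s);       // always first, second, third

            foreach (DictionaryEntry e in table)
                Console.WriteLine(e.Key);   // order is an implementation detail
        }
    }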

Here’s how I track down memory leaks in my .Net apps – level 400

I was planning on posting this anyway, but Steve’s post convinced me to do it sooner.  Here are the methods and tools I use to track down memory leaks in my applications. 



  • First, I profile my app.  I do web applications, so I’ll use Application Center Test, which comes with VS 2003 EA, but you can also use the free Microsoft Web Application Stress Tool.

  • I run perfmon while the stress tool is running.  The stress tool will push my app to the limit and simulate heavy traffic.  Here are counters I ALWAYS look at:


    • # of Exceps Thrown / sec (0 is the number you are shooting for).

    • Gen 0 heap size

    • Gen 1 heap size

    • Gen 2 heap size (this is where your memory will likely accumulate.  If you have objects that won’t go away, they will end up here and take up ALL your memory)

    • Large Object Heap size

    • Application Restarts (if your memory usage gets too high, ASP.NET might automatically restart the AppDomain, and you’ll be able to see it here).

    • Any others you feel are appropriate.

  • Run the stress tool while watching the performance counters.  Your memory usage should go up and down but overall remain consistent.  If your gen 2 keeps getting bigger and bigger and bigger, then you probably have a memory leak.

  • Here is how I go about finding out exactly which objects are taking up all my gen 2 memory:


    • Bookmark the C# Team’s Tools page.  There are several memory profilers to choose from, including commercial products, but I chose to try the free one: Allocation Profiler.  I also tried NProf, but it seemed to profile processor performance rather than memory (unless I used it wrong).

    • Note: for Allocation Profiler or NProf to work, you have to modify your machine.config file’s <processModel/> element to have userName="SYSTEM" instead of "machine".

    • Open Allocation Profiler and have it start profiling your app.

    • Run the stress tool against your web app to generate a lot of traffic, and when you’re done, stop profiling.  Allocation Profiler generates some good graphs from its log file.  I look at where the memory is allocated and see that generation 2 has a bunch of stuff, and I can see which types make up the most memory.  You can also look at the contents of the managed heap with other views and drill down to determine what is taking up all your memory.  Once you have found the offending type, you can dig into your code and look for no-nos.

  • Read my previous post about my nasty memory leak.

  • It would also do you good to read Jeffrey Richter’s book Applied Microsoft .NET Framework Programming.  It has a great section about the managed heap and the garbage collector.

  • Once you know how the managed heap and garbage collector work in .Net (and no sooner), you can evaluate your code for scenarios where long-lived objects keep references to other objects and thereby prohibit the garbage collector from reclaiming their memory.

  • Example: if you have a long-lived object that holds long strings as members, the memory those strings consume cannot be collected as long as the long-lived object is alive (sketched below). 
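
A minimal sketch of that pattern (the class and names are invented for illustration):

    using System.Collections;

    // Static members live for the life of the AppDomain, so nothing
    // they reference can ever be collected.
    public sealed class RequestLog
    {
        private static readonly ArrayList s_entries = new ArrayList();

        public static void Record(string largePayload)
        {
            // Every call pins another big string in memory.  Over time these
            // survive collections, get promoted to gen 2, and never go away.
            s_entries.Add(largePayload);
        }
    }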

I hope this helps someone find memory leaks in their application.  Practice makes perfect, and I had to read books and articles to understand the inner workings of the CLR before I could effectively analyze my code for leaks.