Archive for the ‘.NET’ Category

Windows Client for Subversion – TortoiseSVN

The client was the biggest adjustment I needed to make when switching from Visual SourceSafe (VSS) to Subversion. The VSS client was integrated with Visual Studio, which meant you could do most things related to source control from within Visual Studio. So when I started looking at Subversion, the first client I tried was one that integrated into Visual Studio, but I couldn’t get it working properly.

I ended up installing TortoiseSVN, which adds source control functionality to Windows Explorer. I was skeptical because I thought it would be a pain to go to Windows Explorer whenever I wanted to interact with Subversion, but I was wrong. I quickly got used to working with Subversion in this manner.

Probably the biggest advantage of working with TortoiseSVN is that I can have multiple working copies of a project. For example, I can have a copy on my development machine, on our test server, and on our production server. When I want to see what has changed between the repository and a working copy, I simply use the “Check for modifications” command in TortoiseSVN. This tells you which files have changed, and includes a pretty good diff tool to show how those files changed.
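
For anyone who prefers a console, the command-line svn client can do the same checks TortoiseSVN does through its GUI; a minimal sketch (the working-copy path here is just an example):

```shell
# Show locally modified files; -u also shows changes pending in the repository
svn status -u C:\work\myproject

# Show line-by-line differences between the working copy and its base revision
svn diff C:\work\myproject
```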

Because TortoiseSVN integrates with Windows Explorer, rather than Visual Studio, it can be used by people who don’t use Visual Studio. This is important to us because some design changes are made directly on a test server, not on a developer’s computer. TortoiseSVN will save a ton of time in keeping these changes in sync.

All of this works because by default, Subversion uses an edit-merge-commit model, rather than the checkout-edit-checkin model used by VSS. The differences are explained in Chapter 2: Checkins of Eric Sink’s series on source control. At first this change of models sounded strange to me, but I found that the edit-merge-commit model really does work better for some of the things we’ve been doing with projects.

I’ve been running Subversion and TortoiseSVN in a test environment for about 6 weeks. So far, I’ve been extremely pleased with how it’s working.

Installing Subversion on Windows

At first I was apprehensive about installing Subversion on Windows. I was concerned because the web site mentions Apache, and Apache isn’t on my Windows virtual machine. Did I really want another web server running on this machine?

Then, about a month ago, I decided to give it a try, so I installed the Tigris version for Windows. To my surprise, it was pretty easy to install, and Apache was included in the install package. The main problem I had was figuring out how to run Subversion as a Windows service rather than as a plain application.
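
As a sketch of one way to do the service part: svnserve supports a --service switch, so it can be registered with the standard Windows sc tool. Paths here are examples, and note that sc requires a space after each `option=`:

```shell
# Register svnserve as a Windows service serving the repository root C:\svn_repository
sc create svnserve binpath= "\"C:\Program Files\Subversion\bin\svnserve.exe\" --service -r C:\svn_repository" displayname= "Subversion Server" depend= Tcpip start= auto
```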

Then, about a week ago, I decided to change the virtual machine Subversion was installed on. This time, I installed the CollabNet package of Subversion. Wow! This was even easier. It took care of installing everything I needed. Here are the steps I took, loosely following this page on Setting up Subversion on Windows:

  1. Downloaded and installed the package from CollabNet
  2. Added an environment variable called SVN_EDITOR pointing to c:\windows\notepad.exe
  3. Modified the file C:\Program Files\CollabNet Subversion Server\httpd\conf\httpd.conf so Apache would listen on a port other than 80, as 80 interfered with Microsoft Content Management Server and IIS already on the machine
  4. Created a Subversion repository
  5. Because Subversion was already installed on another virtual server, and I wanted to keep that repository, I simply copied the entire repository folder from one server to the other.
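
Steps 3 and 4 above can be sketched from the command line (the port number and repository path are examples, not the exact values I used):

```shell
# Step 3: in httpd.conf, change the Listen directive so Apache
# uses a free port, e.g.:  Listen 8080
# Step 4: create the repository (svnadmin ships with the CollabNet package)
svnadmin create C:\svn_repository
```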

Copying the repository from one server to the other was almost trivial. I just copied it in, and it worked. I highly recommend the package from CollabNet.

The hardest part of changing servers was that I had to use the Relocate… command to point all my working directories to the new repository. Not a hard thing to do, but it took a few minutes to do this for every working directory.
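
The command-line equivalent of TortoiseSVN’s Relocate… is svn switch --relocate (newer svn clients also have a dedicated svn relocate command); the URLs below are examples:

```shell
# Point the working copy in the current directory at the new repository URL
svn switch --relocate http://oldserver:8080/svn/repo http://newserver:8080/svn/repo
```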

So far, I’ve been very pleased with the way Subversion has been working for me. I’m very close to expanding it to include the other people on our development staff. Unfortunately, this will mean yet another server install, moving the repository, and relocating all the working directories. At least when this time comes, I’m confident that the change will be easy.

Looking at Source Code Control Solutions

Lately, I’ve been looking at options for doing source code control within our group. What really got me started rethinking source control was a series of articles titled Source Control HOWTO by Eric Sink.

This series of articles does a great job of describing what source code control is all about. It covers a lot of different concepts within source control, and how different products handle these concepts. Best of all, the series is a pretty easy read. I learned a lot from these articles and appreciate that Eric made them available. I highly recommend them.

We have been using an older version of Microsoft Visual SourceSafe. Although we have not run into the corruption issues that some people have experienced, I’d like to find a better way of handling source code. The choices I see include:

  1. Continue running the old version of Visual SourceSafe
  2. Upgrade to the newest version of it
  3. Use another commercial product like SourceGear Vault
  4. Use an open source product like Subversion

A few weeks ago, I decided to try out Subversion. It was easy to install on one of my test servers, and I really like the way it works. It has already solved a problem I was having keeping a project current between our production server, test server, and my development platform. I’m thinking it might be time to put Subversion on a real server, and increase our use of it.

It might seem a little strange that a developer working mostly in .NET would use an open source product for source control. As the results of this survey question from CodeProject show, I’m not alone. When asked what source code control people use, Subversion got 33% while Visual SourceSafe got 27%.

SQL Calls with Returned Value

In my last post, SQL Calls from ASP.NET, I talked about how to make a call to a stored procedure to query a database. But what do you do when the stored procedure uses an output parameter, such as the one in SQL Stored Procedure: Save Data? Here is a sample method for making this database call:

using System;
using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

const string DBConn = "MyConnectionString";

public static int User_SaveProfile(int userID, string userFirstName, string userLastName)
{
    // Create the Database object and the stored procedure command
    Database db = DatabaseFactory.CreateDatabase(DBConn);
    DbCommand cmd = db.GetStoredProcCommand("UserSave");

    // Output parameter that will hold the new (or existing) record ID
    db.AddOutParameter(cmd, "@NewID", DbType.Int32, sizeof(Int32));
    db.AddInParameter(cmd, "@UserID", DbType.Int32, userID);
    db.AddInParameter(cmd, "@UserFirstName", DbType.String, userFirstName);
    db.AddInParameter(cmd, "@UserLastName", DbType.String, userLastName);

    // Execute the stored procedure; no result set is returned
    db.ExecuteNonQuery(cmd);

    // Read the output parameter and return it as an integer
    return Convert.ToInt32(db.GetParameterValue(cmd, "@NewID"));
}


Again, this is fairly straightforward code. It creates a Database object using the Database Factory, then builds a database command with the appropriate parameters.

Once this is done, it makes the call to the database. This time, however, it uses the method ExecuteNonQuery, which doesn’t return anything from the database other than the output parameter. Lastly, it converts the output parameter into an integer and returns it.

With this, the calling method can handle the returned value. If the stored procedure does error checking, it would be reflected in the @NewID that is returned, and the calling method would then handle the error. Also, if zero (0) was passed in as the userID and the returned @NewID is a positive number, a new record was added to the database. And if the @NewID is equal to the userID, then an existing record was modified.
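
As a sketch of what a caller might look like (the error convention shown here is an assumption for illustration, not something defined in the stored procedure above):

using System;

// Hypothetical caller: a userID of 0 means "insert a new record"
int newID = User_SaveProfile(0, "Jane", "Doe");

if (newID <= 0)
{
    // Assumed convention: a non-positive @NewID signals an error
    // from the stored procedure; handle or log it here
}
else
{
    // A positive @NewID for a userID of 0 means a new record was created;
    // keep newID for later updates to this profile
}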

SQL Calls from ASP.NET

When I started working in .NET, one of the things that confused me was the process to make SQL calls from ASP.NET. The basic process was:

  1. Create a SqlConnection object
  2. Give the SqlConnection a connection string
  3. Create a SqlCommand
  4. Give the SqlCommand the actual SQL code
  5. Bind the SqlConnection to the SqlCommand
  6. Open the SqlConnection
  7. Execute the SqlCommand
  8. Close the SqlConnection

An example of how to do this can be found at ASP.NET Tutorials. While this works, it seemed like a lot of work just to make a call to a database.
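
The eight steps above can be sketched like this (the connection string, query, and parameter value are placeholders):

using System.Data;
using System.Data.SqlClient;

// Steps 1-2: create the connection and give it a connection string
using (SqlConnection conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
{
    // Steps 3-5: create the command, give it the SQL, and bind it to the connection
    SqlCommand cmd = new SqlCommand("SELECT RoomName FROM Rooms WHERE RoomID = @RoomID", conn);
    cmd.Parameters.AddWithValue("@RoomID", 42);

    // Steps 6-8: open the connection, execute the command, and
    // (via the using blocks) close everything back down
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            string roomName = reader.GetString(0);
        }
    }
}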

Then I heard about the Database Factory that is part of the Microsoft Enterprise Library 2.0. (yes, I know that there are newer versions, but this is the version I’ve used.) I’m only using a very small part of this library, but it sure makes database calls easy.

Here is some sample code that would call the stored procedure from Case Statement in SQL Query:

using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

const string DBConn = "MyConnectionString";

public static DataTable GetRooms(int roomID)
{
    // Create the Database object and the stored procedure command
    Database db = DatabaseFactory.CreateDatabase(DBConn);
    DbCommand cmd = db.GetStoredProcCommand("GetRooms");
    db.AddInParameter(cmd, "@RoomID", DbType.Int32, roomID);

    // Execute the query and return the first table of the result set
    DataSet ds = db.ExecuteDataSet(cmd);
    return ds.Tables[0];
}

Much simpler code. What’s happening here is that we have a connection string called MyConnectionString, and we use it to create a Database object. We then give the Database object the name of the stored procedure and the parameters we want to pass, then we execute it. This returns a DataSet to us. From this DataSet, we grab the first table (ds.Tables[0]) and return it.

To me, this is a much easier way to make the database call, much easier than building the SqlConnection and SqlCommand, and doing all that in the correct order.

Also note that because we’re using an integer parameter to pass the roomID into the stored procedure, we’ve pretty much eliminated the possibility of this SQL command being successfully targeted by a SQL injection attack. See SQL Injection Prevention: Stored Procedures.

Server Side vs Client Side Comments in ASP.NET

Last week, I was asked to make a change to the name of a channel (folder) in an application using our Content Management Server (CMS). After I did this, I went to their home page and received the dreaded yellow error screen of death.

After making this change, the application was throwing a null reference exception from a user control that was within comments in the .aspx file. This user control was trying to do something with the old channel name, but was no longer able to find it.

But why was this an issue at all? After all, the code was commented. Yes it was; however, it was using a client side (HTML) comment “<!-- HTML comment -->”, instead of an ASP.NET comment “<%-- ASP.NET comment --%>”.

Because “<!-- -->” is a client side comment, the browser does not display whatever is in the comment; however, the server still processes this code and sends it to the browser. If you have things like server controls, user controls, etc. in this type of comment, the server still runs these controls.

As an example, consider this simple aspx file:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<title>Test Page</title>
  <!-- HTML Comment -->
  <%-- ASP.NET Comment --%>

When you do a view source in your browser on this page, you get:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
    <title>Test Page</title>
  <!-- HTML Comment -->

Notice that the HTML comment does get sent to the browser, while the ASP.NET comment does not. The server ignores everything embedded in the ASP.NET comment.

On the page I was looking at last week, this caused two major problems. First, we were sending code to the client that wasn’t really necessary. What was commented out was HTML code that the browser would not display, therefore it added a little to the download time for the page.

A bigger problem was this section of code had a couple of user controls in it, and these user controls took a few seconds to run each time this home page was hit. Therefore this was a very slow page to load, and the slowness was caused by the server generating an HTML comment which the user never saw. When I changed this, the load time for the page was greatly improved.

The moral of the story is simple. If you are writing a comment that should be seen in the HTML code, then go ahead and use the client side HTML comment code. However, in many cases, you don’t need or want the user to be able to see the comments. For these cases, you should use the ASP.NET comment code “<%-- --%>”, which will cause the server to skip this code. In certain cases, this small change can have a dramatic effect on the performance of your web page.

For more information on this, see Scott Guthrie’s post: Tip/Trick: Use Server Side Comments with ASP.NET 2.0

HtmlEncode in ASP.NET

I keep forgetting where to find HtmlEncode when working in .NET. I end up going to Google and doing a couple of searches before I find the right namespace to include. Hopefully now I’ll remember to look at my blog rather than Google. :-)

HtmlEncode does exactly what it sounds like it should. It takes a string value as a parameter, and returns an escaped string.

using System.Web;

private static string ReturnEncoded()
{
    // HtmlEncode is a static method on HttpUtility, in the System.Web namespace
    return HttpUtility.HtmlEncode("<br />");
}

This function would return “&lt;br /&gt;”. Not a real useful function, but it shows how to use HtmlEncode.

The HttpUtility class also has HtmlDecode, UrlEncode, and UrlDecode.

Web Forms Should Inherit a Custom Class

When doing a web application in .NET, I like to create a custom class that inherits from System.Web.UI.Page. I usually call this class “MyWebPage”. In some (most) cases, this class doesn’t add any functionality; it’s just a wrapper. However, when you need to do certain things in your web app, this wrapper is invaluable.

When creating web forms, the first thing I do is change the code behind file, so it inherits from “MyWebPage” rather than the default System.Web.UI.Page. This gives the new web form whatever functionality is built into “MyWebPage”.

When is this useful? Previously, I talked about dynamic master pages in a CMS application. By overriding the PreInit method in “MyWebPage” to include the functionality for dynamic master pages, any new template that inherits “MyWebPage” will automatically receive this functionality. The alternative would be to override the PreInit method in every new template.

I first heard about making this page wrapper when Paul Sheriff gave a talk at an Iowa .Net User Group meeting. The example he gave was that you have a large .NET application. On Friday morning, your boss tells you that by Monday, the web application should log every time a page is hit, this log would be something above and beyond what the web server can log. Without the wrapper, you would have to edit every web form in the application, possibly hundreds of files. With the web forms inheriting from your wrapper class, you simply change this one class, and you enjoy the weekend.
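
A minimal sketch of such a wrapper class (the logging call is hypothetical; the point is that it lives in exactly one place):

using System;
using System.Web.UI;

// All web forms in the application inherit from this class instead of Page
public class MyWebPage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // Hypothetical per-hit logging, added once here rather than
        // in hundreds of individual web forms
        // Logger.LogPageHit(Request.Url.ToString());
    }
}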

Since then, I’ve heard this talked about on .NET Rocks, and also in this blog post by Brian Mishler.

Dynamic Master Pages

An important thing to remember about master pages is that you can have more than one in any given web application. For example, if some pages in your web application will be two columns, and some will be three, you might build two master files. Then, when you create the web forms, you select which master file to use.

A powerful use for multiple master pages in an application is to dynamically assign them when the page loads. Here is an example of code to do dynamic master pages. The example on that page assigns a different master page if the client is using a browser on Windows CE. The thing to remember about dynamically assigning master pages is that it needs to be done very early in the page life cycle, in the PreInit method.
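
A sketch of how that assignment looks in code (the class name and master file names are examples): MasterPageFile can only be set this early in the life cycle, which is why the override goes in PreInit.

using System;
using System.Web.UI;

public class TemplatePage : Page
{
    protected override void OnPreInit(EventArgs e)
    {
        base.OnPreInit(e);

        // Pick a master page before the page tree is built;
        // here the choice is based on the client browser
        if (Request.UserAgent != null && Request.UserAgent.Contains("Windows CE"))
        {
            MasterPageFile = "~/Mobile.master";
        }
        else
        {
            MasterPageFile = "~/Default.master";
        }
    }
}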

Imagine a situation where you want to host a web application for many different entities, for example, I recently watched a demonstration of a hosted 4-H enrollment application. Every state wants to have their own look to the application. An easy way to do this is by using dynamic master pages, and assigning them based on the username of the person that is logged in.

A place where we are using dynamic master pages extensively is in our Content Management Server (CMS). We have one set of templates that is used for 18 different sites. Each of these sites has its own branding and navigation, although many of them are similar. When we set up the sites in CMS, we set a custom property that tells it which master file to load. This allows us to provide a huge amount of customization by modifying only the two files that make up the master page. Our Midwest Grape and Wine Industry Institute site and our Bioeconomy site are using this CMS application, and have quite different looks.

We are currently working on a project for community foundations that will have a number of sites for individual community foundations. These sites will use the same CMS application as the ISU Extension sites referenced above. However, they will have a completely different look and feel. This project was possible because we are dynamically assigning the master pages as each page loads.

[Thumbnails: wine2.png, bioeconomy2.png, hardin2.png]

Here are thumbnails of the three web sites referenced above that are using this single CMS application. Their different looks are a result of the dynamic master pages mentioned above.

Master Pages in .NET 2.0

One of the big improvements with .NET 2.0 is master pages. A master page is a template that you use to build the structure of your web pages. You use it to do things like branding, top navigation, side navigation, footer, etc. Anything that you want repeated on every page can be placed in a master page.

Before .NET 2.0, we would build things in user controls, and add the user controls to our pages. This worked reasonably well, until you wanted to add a new feature, such as a search box to all your pages. If you didn’t have a suitable user control on your pages, you’d create a new user control, then add that to every page in your web application.

With .NET 2.0 or later, you can add the search box to the master page, and every page using that master page will automagically get the search box. This can save a huge amount of time. I learned this a few weeks ago, when I had to modify a project that was created in .NET 1.1.

On master pages, in addition to the branding and navigation elements, you provide one or more content areas. These content areas are where the individual pages are responsible for populating the content. However, the content page does not have to populate every content area; you can pick and choose which content areas to use. For example, if your master page has an area for a sidebar, but your page doesn’t have content for the sidebar, you simply leave that area off your content page. It is an error, however, for your page to implement a content area that is not on the master page.
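
A sketch of how the two sides line up (file names and IDs are examples): the master page declares the placeholders, and the content page fills only the ones it needs.

<%-- Site.master: two content areas, alongside the shared branding and navigation --%>
<asp:ContentPlaceHolder ID="MainContent" runat="server" />
<asp:ContentPlaceHolder ID="Sidebar" runat="server" />

<%-- Default.aspx: fills MainContent, and simply omits Sidebar --%>
<asp:Content ContentPlaceHolderID="MainContent" runat="server">
    <p>Page content goes here.</p>
</asp:Content>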

SharePoint makes extensive use of master pages, so this is a concept I will keep working with and expanding my knowledge of.