InfiniTec - Henning Krauses Blog

Don't adjust your mind - it's reality that is malfunctioning

Developing Windows Services with .NET

Developing Windows services with .NET is not always easy – sure, one can attach a debugger once the service is started (at least when developing on Windows XP / 2003). But this is not exactly the ‘F5’ experience one has with normal programs or even websites. Another drawback of the “Attach to process” method is that the startup of the service cannot be debugged. A common way to circumvent this is to place a System.Diagnostics.Debugger.Break() statement within the first lines. This brings up the Just-In-Time Debugger window that lets you choose a debugger. Sadly, this no longer works on Windows Vista / 2008, because services run on an isolated window station there.

Luckily, there is a solution: a Windows service is nothing more than a console application that is started slightly differently. One can take advantage of this by running the service as a stand-alone program when it is started normally (via F5 or from the command line), and as a service when the program detects that it was started by the Service Control Manager. So, how can a program check whether it was started by the Service Control Manager? It depends on the user account the program runs under. If it’s running as “LocalSystem”, one can safely assume it’s running as a service. In any other case, the user account token has a special group membership: System.Security.Principal.WellKnownSidType.ServiceSid.

The default body of a program.cs file of blank service solution looks like this:

using System.ServiceProcess;

namespace WindowsService1
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        static void Main()
        {
            var servicesToRun = new ServiceBase[]
                                    {
                                        new Service1()
                                    };
            ServiceBase.Run(servicesToRun);
        }
    }
}

With a few additional lines, a comfortable F5 experience can be gained:

using System;
using System.Linq;
using System.Security.Principal;
using System.ServiceProcess;

namespace WindowsService1
{
    static class Program
    {
        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        static void Main()
        {
            var identity = WindowsIdentity.GetCurrent();
            var principal = new WindowsPrincipal(identity);

            // Check whether the current user account is the LocalSystem account
            var isLocalSystemUser =
                identity.User == new SecurityIdentifier(WellKnownSidType.LocalSystemSid, null);

            if (isLocalSystemUser || principal.IsInRole(new SecurityIdentifier(WellKnownSidType.ServiceSid, null)))
            {
                var servicesToRun = new ServiceBase[]
                                        {
                                            new Service1()
                                        };
                ServiceBase.Run(servicesToRun);
            }
            else
            {
                var service = new Service1();
                service.StartAsApplication();
            }
        }
    }
}

I have added a StartAsApplication() method to the Service1 class, because the OnStart() method of the default service template is protected and therefore not accessible. And that is all there is. The service can now be started by just pressing F5.
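For reference, a minimal StartAsApplication() implementation might look like this – the body shown here is only a sketch of the idea, not necessarily the exact code; yours may differ, for example if you need clean shutdown handling:

```csharp
using System;
using System.ServiceProcess;

public partial class Service1 : ServiceBase
{
    // Sketch of the StartAsApplication() helper mentioned above. It simply
    // exposes the protected OnStart/OnStop pair for interactive use.
    public void StartAsApplication()
    {
        OnStart(new string[0]);

        Console.WriteLine("Service running as a console application. Press Enter to stop.");
        Console.ReadLine();

        OnStop();
    }
}
```

Since Service1 is generated as a partial class, this method can live in a separate file without touching the designer-generated code.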


The gain of the F5 experience does not come without a cost – normally, a Windows service runs under a special account (which is hopefully not LocalSystem): NetworkService, LocalService or a dedicated service account. With this approach, the service runs during development with the permissions of the currently logged-on user, who normally has far more permissions than the service account would have. For example, the current user might have the permission to host WCF services on arbitrary ports – something only administrators are allowed to do; a service account would need explicit permission to host such a service. Another example are private keys of certificates in the LocalComputer store – not accessible to the NetworkService account by default. If a service needs to access those keys, it will work fine during development but fail once it runs under its real service account. This should be kept in mind.

Posted by Henning Krause on Saturday, June 13, 2009 12:26 PM, last modified on Monday, November 29, 2010 7:30 PM

Getting the fully qualified DNS name of the current computer

Under certain circumstances a program needs to determine the name of the computer it’s running on. The first approach is to use the System.Environment.MachineName property. However, this name only reflects the NETBIOS name of the current machine. In larger environments, a fully qualified name including the DNS domain the computer belongs to is often required – something like computername.contoso.local. One example where this fully qualified name might be needed are Exchange push notifications. I’ve published a component on CodePlex that makes it really easy to incorporate them into an application. However, for the notifications to reach the client, the component needs to tell the Exchange server a correct callback address. In a very simple network environment, it is sufficient to specify the NETBIOS hostname. But in more complex environments, Exchange might not be able to send a notification because it cannot correctly resolve the unqualified hostname to an IP address.

The fully qualified domain name of the current host can be retrieved with a call to the System.Net.NetworkInformation.IPGlobalProperties.GetIPGlobalProperties method. This method returns, among other things, the required information:

var ipGlobalProperties = IPGlobalProperties.GetIPGlobalProperties();
string fullQualifiedDomainName;

if (!string.IsNullOrEmpty(ipGlobalProperties.DomainName))
{
    fullQualifiedDomainName = string.Format("{0}.{1}", ipGlobalProperties.HostName, ipGlobalProperties.DomainName);
}
else
{
    fullQualifiedDomainName = ipGlobalProperties.HostName;
}

I have updated the PushNotification component to reflect this new behavior.

Posted by Henning Krause on Saturday, June 6, 2009 6:23 PM, last modified on Saturday, June 6, 2009 11:39 PM

X509Certificate2 Constructor creates two empty files in the temporary files directory [Update]

2009-04-18: Microsoft has indeed a knowledge base article on this topic, and there is a hotfix available.

The .NET Framework has a nice class called X509Certificate2 that simplifies the handling of X.509 certificates. Specifically, it has a constructor that takes a byte array, which allows a developer to load a certificate from arbitrary storage such as a database. The class uses some Win32 functions to parse the byte array, and one of these functions contains a bug: during the load process, two empty temporary files are created in the temporary files directory of the user running the code. Unfortunately, these files are never cleaned up, so they accumulate over time. What’s worse, they seem to be created by the GetTempFileName function, which creates an empty file in a specific directory with a name guaranteed to be unique. These files have a prefix (up to three letters) and a number, and the number is generated by GetTempFileName in a sequential manner. Since the two files are never deleted, each call to GetTempFileName takes longer and longer. In my case, I had over 65,000 files in the temporary files directory, and the X509Certificate2 constructor took several seconds to complete.

Microsoft knows about this bug, but they don’t seem eager to fix it. More details can be found on this connect link:

Since there will be no fix in the near future, you can use this workaround: Instead of loading the certificate from a byte array, dump it to a file and read it from there. The following snippet demonstrates this workaround:

public static X509Certificate2 LoadCertificate(byte[] buffer)
{
    if (buffer == null) throw new ArgumentNullException("buffer");

    string filename = Path.GetTempFileName();
    try
    {
        File.WriteAllBytes(filename, buffer);
        return new X509Certificate2(filename);
    }
    finally
    {
        try
        {
            File.Delete(filename);
        }
        catch
        {
            // This is ok - the file is in the temporary files directory. No harm done.
        }
    }
}


The knowledge base article 931908 deals with this problem and offers a hotfix, which can be requested directly from the article.

Posted by Henning Krause on Sunday, March 29, 2009 3:13 PM, last modified on Saturday, April 18, 2009 11:02 PM

InvalidCastException: Unable to cast object of Type 'X' to 'X'

In a recent project, the above exception was thrown when returning from a web service call. Interestingly, the exception was only thrown if the code was running within a web application; it would run fine when executed in an executable or even in the Visual Studio development web server. At first I suspected some sort of permission problem, but that couldn't really explain that particular error message. The next suspect was the shadow-copy feature of ASP.NET: the runtime does not actually run the assemblies from the path the IIS virtual directory points to. Instead, it copies all assemblies to the Temporary ASP.NET Files directory (C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files on x86 machines or C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files on x64 machines) and executes them from there. So I added two lines of code right before the point where the exception would be thrown:

Trace.WriteLine(instance.GetType().Assembly.Location);
Trace.WriteLine(typeof(MyType).Assembly.Location);

Upon execution, the trace output was the following:

c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c88143cc\98574940\assembly\dl3\98dad73b\f3cf1948_ddb4c801\MyAssembly.DLL
C:\Development\TestApplication\MyAssembly.dll

This was the problem: one assembly was loaded from two different locations. What was the cause? It turns out that the first instance was loaded because of an assembly reference and was therefore correctly placed in the Temporary ASP.NET Files directory. The second instance, however, was loaded via reflection with a call to System.Reflection.Assembly.LoadFile. The solution is to use System.Reflection.Assembly.LoadFrom instead of the LoadFile method.

Why is this a solution? The main difference between LoadFrom and LoadFile is that LoadFrom goes through the normal Fusion bind process, thus allowing assembly redirection to happen. LoadFile, on the other hand, just loads the assembly from the specified location.
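The difference is easy to demonstrate with a small sketch. It loads the currently executing assembly by path with both methods (any managed assembly path would do) and shows that both handles carry the same assembly identity, even though LoadFile bypasses the binding process:

```csharp
using System;
using System.Reflection;

class LoadContextDemo
{
    static void Main()
    {
        string path = Assembly.GetExecutingAssembly().Location;

        // LoadFrom goes through the normal binding process, so an already
        // loaded assembly with the same identity is reused.
        Assembly viaLoadFrom = Assembly.LoadFrom(path);

        // LoadFile maps exactly the file at the given path and bypasses
        // binding policy - this is how one assembly ends up loaded twice.
        Assembly viaLoadFile = Assembly.LoadFile(path);

        // Same identity, but not necessarily the same loaded instance:
        Console.WriteLine(viaLoadFrom.FullName == viaLoadFile.FullName);
    }
}
```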

For more information on this, see this posting on Suzanne Cook's blog.

Posted by Henning Krause on Tuesday, May 13, 2008 10:23 PM, last modified on Tuesday, November 30, 2010 12:19 AM

Windows Installer Xml 3.0 Extension for managed installers

The Windows Installer technology unfortunately lacks support for managed custom actions or the System.Configuration.Install.Installer class. Rob Mensching posted an article on his blog a while back explaining why Microsoft considers custom actions in general, and managed custom actions in particular, a bad idea. While he makes some valid points (some technical and some strategic), I think managed custom installers are not a bad thing:

  • While they add a dependency on the .NET Framework at setup time, this dependency already exists for applications containing a Windows service written in managed code: the Framework must be installed on the target computer at setup time anyway, because Windows Installer will start the service during setup. And for all other managed applications, you'll need the Framework right after the application has been installed (to run it). Since the Framework cannot be installed using a merge module, it must be installed before the actual setup of the application.
  • I consider myself a fairly good software developer when it comes to managed code, but my C or C++ knowledge is minimal at best. So I can either write solid managed custom actions (where I have a well-tested BCL at hand) or create spooky and unreliable custom actions in C. I prefer the former option.
  • He mentions a problem with managed custom actions using different versions of the CLR. While this may be a problem, you'll mostly write custom actions using .NET 2.0 these days. And .NET 3.0 and 3.5 both use the same CLR as 2.0.
  • What remains of his technical problems is the fact that Windows Installer on Windows 2003 will try to load the .NET Framework 1.1 into the deferred-custom action server when it tries to register assemblies into the Global Assembly Cache, which will fail if you force the .NET 2.0 runtime into the process with a managed custom action.
  • All those strategic reasons might be ok for the Windows Installer team, but I can't wait a few years until the Windows Installer team bakes all the actions I need into the core of the product. And when they do, you'll need Windows 2015 at the very least…. Not an option.

Apparently, the Visual Studio team doesn't consider managed custom actions to be harmful - otherwise they wouldn't give you the option to run managed installers in Visual Studio deployment projects. But these installers do lack a serious feature: the Windows Installer context. It's not that the installer context isn't propagated to the runtime (you may have wondered what the IManagedInstaller interface is meant for :-) ) - it just isn't exposed to your installer class.

Windows Installer Xml also doesn't support managed custom actions out of the box; you have to do this yourself. One option is to decompile a Visual Studio deployment project and see what Visual Studio does to call a managed custom action. While this will certainly work, you'll end up with a managed installer that has the same limitations as the Visual Studio deployment project: no access to the installer context. Additionally, these installer classes are always called as deferred custom actions, which means that they work neither in the immediate phase of the InstallExecuteSequence nor in the InstallUISequence.

Reinventing the wheel…

To call managed code in-process from the Windows Installer process, an intermediate unmanaged DLL must be called which in turn loads the .NET Framework into the process, spawns an AppDomain and finally runs the managed code inside this AppDomain. This is actually what the Visual Studio Deployment Project does.

The approach I'm using here is based on the article "WiX - Managed Custom Actions" by ForestWalk, which in turn is based on two other articles: "Wrapping the Windows Installer 2.0 API" by Ian Marino and “Hosting the CLR within a custom action” by Pablo M. Cibraro (aka Cibrax). The code in the article makes it possible to call managed code in every part of the sequence. But it is not very intuitive to use, especially for managed installer classes.

Since deferred custom actions can only access one property (CustomActionData), all information needed by the managed installer must be placed in this property. And since CustomActionData is only a simple, unstructured string property, some form of serialization is needed to put multiple properties into it.
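To illustrate the idea, here is a minimal sketch of packing several values into one string and unpacking them again. The key=value pairs joined with ';' are my illustration only - the extension described below may well use a different serialization format:

```csharp
using System;
using System.Collections.Generic;

// Sketch: serialize multiple properties into a single string (such as
// CustomActionData) and deserialize them again.
static class CustomActionDataCodec
{
    public static string Pack(IDictionary<string, string> values)
    {
        var parts = new List<string>();
        foreach (var pair in values)
        {
            parts.Add(Escape(pair.Key) + "=" + Escape(pair.Value));
        }
        return string.Join(";", parts.ToArray());
    }

    public static Dictionary<string, string> Unpack(string data)
    {
        var result = new Dictionary<string, string>();
        foreach (var pair in data.Split(';'))
        {
            var parts = pair.Split(new[] { '=' }, 2);
            result[Unescape(parts[0])] = Unescape(parts[1]);
        }
        return result;
    }

    // Escape '%' first so that escaped ';' and '=' survive the round trip.
    private static string Escape(string s)
    {
        return s.Replace("%", "%25").Replace(";", "%3B").Replace("=", "%3D");
    }

    private static string Unescape(string s)
    {
        return s.Replace("%3D", "=").Replace("%3B", ";").Replace("%25", "%");
    }
}
```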

To support all four methods of the managed installer class, you'll have to create and sequence eight custom actions: For each of the four methods (Install, Commit, Rollback, Uninstall) one action for the parameters (CustomActionData) and one action to run it.

Multiple managed installers will seriously degrade the readability of your Windows Installer XML file. That's why I took the code from the article and put it into a Windows Installer Xml extension. I also created a small framework to simplify the development of managed installers.

Here is a simple example setup file:

    1 <?xml version="1.0" encoding="UTF-8"?>
    2 <Wix xmlns=""
    3     >
    4   <Product Id="4ed3ff4f-7b33-4915-9801-a0fdd5515647"
    5     UpgradeCode="d4bacea3-a59a-4d44-b95b-1e144edfb88b"
    6     Name="Acme Sample Application" Language="1033" Version=""
    7     Manufacturer="Acme Software Ltd."
    8   >
    9     <Package InstallerVersion="300" Compressed="yes" />
   10     <Media Id="1" Cabinet="" EmbedCab="yes" />
   11     <Directory Id="TARGETDIR" Name="SourceDir" FileSource=".\">
   12       <Component Id="ProductComponent" Guid="865018ca-dc6f-4987-9766-cffe792cb937">
   13         <File Id="f1" Name="ManagedCustomAction.dll" Source="Include\ManagedCustomAction.dll">
   14           <ManagedInstaller xmlns="">
   15             <Parameter Name="TargetDir">[TARGETDIR]</Parameter>
   16             <Parameter Name="AssemblyFile">Assembly is run from [#f1]</Parameter>
   17           </ManagedInstaller>
   18         </File>
   19       </Component>
   20     </Directory>
   21     <Feature Id="ProductFeature" Title="Main Feature" Level="1">
   22       <ComponentRef Id="ProductComponent" />
   23     </Feature>
   24     <UIRef Id="WixUI_Minimal" />
   25     <Binary Id="ManagedCustomAction" SourceFile="Include\ManagedCustomAction.dll" />
   26     <ManagedCustomAction Id="test" BinaryKey="ManagedCustomAction" Type="ManagedCustomAction.CustomAction" Execute="immediate" xmlns="" />
   27     <ManagedActionSequence xmlns="">
   28       <Managed Action="test" After="CostFinalize" SequenceTable="InstallUISequence" />
   29     </ManagedActionSequence>
   30   </Product>
   31 </Wix>

Managed Installers

The extension makes it very easy to call managed installers or managed custom actions.

Just put a ManagedInstaller tag inside a File tag, and the installer will be called during setup. If you need context information stored in other MSI properties, add a Parameter tag inside the ManagedInstaller tag with an appropriate name and value. From your managed installer, you can access these values through the Parameters dictionary of the InstallContext class. Here is a sample implementation of the Install method of a System.Configuration.Install.Installer class:

    1 public override void Install(IDictionary stateSaver)
    2 {
    3     string targetDir = Context.Parameters["TargetDir"];

    5     for (int i = 3 - 1; i >= 0; i--)
    6     {
    7         InstallerContext.Current.StartAction("ManagedCustomAction", string.Format("Install: Waiting {0} seconds...", i), "");
    8         Thread.Sleep(1000);
    9     }
   10     base.Install(stateSaver);
   11 }

In line 3, the property TargetDir is accessed. This property contains the value specified in line 15 of the Windows Installer XML file above. But far more interesting is line 7: this line calls back into the Windows Installer process and reports details about what the custom action is doing. The StartAction and ReportDetails methods wrap two flavors of the MsiProcessRecord function. StartAction reports the start of a major action (such as "Copying files" or "Creating registry values") and can additionally specify a format string for the details, for example "Waiting [1] seconds...". ReportDetails then just takes the replacement values for that format string, in this case the number of seconds remaining.

Another important method of the InstallerContext class is the LogMessage method, which writes directly to the Windows Installer log. Note that you don't have to use this method to log data: you can also use InstallContext.LogMessage, Trace.WriteLine or Console.WriteLine. The output of all these methods is captured and written to the log.

All unhandled exceptions thrown from an Installer class are caught by the framework and cause an error message to be displayed. Unhandled exceptions in the Install, Commit and Rollback methods cause the installation to be aborted. If an exception occurs in the Uninstall method, an error dialog is displayed, but the uninstall will continue.

The four methods are sequenced in the InstallExecuteSequence at the following positions:

  • Install, Commit, Rollback: Before InstallServices

  • Uninstall: Before UnpublishComponents

The installer will only be invoked if the component the file is associated with is installed.

Managed custom actions

To run a managed custom action, two things have to be done: Create a ManagedCustomAction tag under the Product tag and fill in the blanks:

  • Id: The name of the custom action.
  • BinaryKey: If you want to run the custom action in the immediate sequence or in the InstallUISequence table, add the assembly to the binary table (via the Binary tag) and enter its key here.
  • FileKey: If you want to run the custom action in the deferred sequence, add the assembly to the file table (via the File tag) and enter its key here.
  • Type: The fully qualified name of the type you want to run (namespace + type name).
  • Execute: Either commit, deferred, firstSequence, immediate, oncePerProcess, rollback or secondSequence. These are the same options you have with normal custom actions (Custom tag).
  • Impersonate: yes to run the custom action in the security context of the logged-on user, no otherwise. The default is yes. Only valid for deferred custom actions.
  • Return: asyncNoWait, asyncWait, check or ignore. These are the same options you have with normal custom actions (Custom tag).

Unfortunately, Windows Installer XML does not allow extensions in the sequence tables, so I had to create my own: ManagedActionSequence. Add a Managed tag for each custom action you want to schedule. The Managed tag has these attributes:

  • Action: The name of the managed custom action to run.
  • After: The name of the action the managed custom action should be executed after.
  • Before: The name of the action the managed custom action should be executed before.
  • Sequence: The absolute sequence where the managed custom action should run.
  • SequenceTable: The name of the sequence table where the managed custom action should be scheduled: either InstallUISequence, InstallExecuteSequence, AdminUISequence or AdminExecuteSequence.

The managed custom action must be a class which implements the InfiniTec.Configuration.Install.ICustomAction interface, like in this example:

    1 public class CustomAction : ICustomAction
    2 {
    3     public void Execute()
    4     {
    5         string targetDir = InstallerContext.Current.GetProperty("TARGETDIR");

    7         InstallerContext.Current.StartAction("ManagedCustomAction", "Running custom action...", "Waiting [1] seconds...");
    8         for (int i = 3 - 1; i >= 0; i--)
    9         {
   10             InstallerContext.Current.ReportDetails(i.ToString());
   11             Thread.Sleep(1000);
   12         }

   15     }
   16 }

This implementation has full access to the MSI properties (see line 5, if scheduled as immediate action) and of course access to the Installer log via InstallerContext.Current.LogMessage.

Other useful classes

Since a custom action can be executed in the immediate sequence of the install process, it has full access to all properties and tables of the installer. The rows of a view can be accessed via the View class:

using (View view = InstallerContext.Current.OpenView("SELECT * FROM Binary"))
using (RecordCollection records = view.Execute())
{
    foreach (Record record in records)
    {
        string name = record[1].GetString();
    }
}

The view returns a RecordCollection, which in turn provides access to its Record instances. Each record consists of one or more fields. Note that if you create a record with the Record.Create(int columnCount) method, the resulting record will have columnCount + 1 fields, indexed from 0 to the specified value.

Modifications to the original source code

Apart from the newly added code, I made significant changes to the existing code:

To load the .NET runtime into the process I use the CLRHosting project from the article mentioned above. I have replaced all dangerous API calls (strcat, sprintf) with secure ones. But my C and C++ knowledge is VERY limited, so I would appreciate it if someone with more knowledge could take a look at the code…

I have also made significant changes to the managed part of the solution. Mainly, I have encapsulated all unmanaged MSI handles in a custom SafeHandle class.

Open issues

  • Deferred custom actions with assemblies in the Binary table are not yet supported
  • Immediate custom actions with assemblies in the File table are not supported (And I don't see how this could work)
  • Managed installers do not have an immediate part
  • Managed custom actions and managed installer classes don't add ticks to the progress bar.
  • A much cleaner approach is to call the custom action in a separate process and provide access to the Windows Installer context via remoting. This approach is discussed in more detail in the article A New Approach to Managed Custom Actions by Christopher Painter. Unfortunately, he didn't release any source code, and I'm lacking the necessary C and C++ skills right now.


Just decompress the zip file into a directory. In your WiX project, add a reference to ManagedInstallerWixExtension.dll. In your setup file, add the namespace to the list of namespace definitions.


The extension is compiled against WIX 3.0.361 (build from December 21, 2007) using .NET 2.0.


The authors of the original articles haven't said anything about licensing, so I assume their code is freely available. To keep this work freely available, I publish it under the Common Public License, the same license Windows Installer XML is published under.

Downloads (637,678 Bytes)
Source files and binaries


Posted by Henning Krause on Sunday, December 30, 2007 12:00 AM, last modified on Monday, December 31, 2007 12:00 PM

Visual Studio 2008 WCF proxy generation tool and the ObservableCollection

Visual Studio 2008 has a nice WSDL generator for WCF services. Among other things, one nice feature is its ability to generate strongly typed lists (List<T> in C#) where the Visual Studio 2005 service proxy generator would have generated arrays. But there's more: the configuration dialog actually lets you choose the type of list to use:

The WCF service configuration editor (click to enlarge)

Since I'm using WPF here, I badly missed one list type in the available dropdown: the ObservableCollection<T>.

But it turns out that you can force the configuration editor to generate a service proxy using the ObservableCollection<T> for arrays: just hit the "Show all files" button in the Solution Explorer, navigate to your WCF reference, expand the node and open the file called Reference.svcmap. It's an XML file containing the settings for the proxy generation tool, and it contains a tag called CollectionMappings. If you have previously changed the collection mapping with the editor, you'll find your settings there. For the above selection of the standard generic List, you'll find an entry like this:

<CollectionMappings>
  <CollectionMapping TypeName="System.Collections.Generic.List`1" Category="List" />
</CollectionMappings>

If you change the TypeName attribute from System.Collections.Generic.List`1 to System.Collections.ObjectModel.ObservableCollection`1, the service proxy generation tool will use ObservableCollection<T> on the next update. The configuration editor will just show "(Custom)" in the drop-down, and you can still use it to modify the other settings.
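After the edit described above, the entry in Reference.svcmap reads:

```xml
<CollectionMappings>
  <CollectionMapping TypeName="System.Collections.ObjectModel.ObservableCollection`1" Category="List" />
</CollectionMappings>
```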


Posted by Henning Krause on Monday, August 13, 2007 12:00 AM, last modified on Tuesday, August 14, 2007 12:00 PM

Building SQL Server 2000 compatible scripts with the SQL Server Management Studio

A very nice feature was added to the SQL Management Studio with SQL Server 2005 Service Pack 2: The ability to generate scripts which are compatible with SQL Server 2000!

To enable this option, open the options dialog (Tools --> Options) and select the Scripting node on the left. Click on Script for server version and select either SQL Server 2000 or 2005.

Options dialog from the SQL Server Management Studio (click to enlarge)



Posted by Henning Krause on Monday, July 23, 2007 12:00 AM, last modified on Monday, July 23, 2007 12:00 PM

Displaying progress updates when hashing large files

To hash a file (using MD5 or SHA-1), you can use the corresponding classes in the .NET Framework (MD5 or SHA1) with only a few lines of code:

using (Stream stream = File.OpenRead(filename))
using (MD5 md5 = MD5.Create())
{
    md5.ComputeHash(stream);
    // The hash can be accessed through md5.Hash
}

This computes the hash of the specified file. But if the file is very large, it would be nice to have a progress bar – and ComputeHash does not offer any sort of callback mechanism to report progress.

But the HashAlgorithm class (from which all hash algorithms in the .NET Framework derive) has two other methods: TransformBlock and TransformFinalBlock. These methods make it possible to compute the hash from discrete blocks of data: for every block other than the last, call TransformBlock; for the last block, call TransformFinalBlock. The resulting hash is then accessible via the Hash property.

Here is a sample implementation using a background worker:

private void BackgroundWorker_DoWork(object sender, DoWorkEventArgs e)
{
    byte[] buffer;
    byte[] oldBuffer;
    int bytesRead;
    int oldBytesRead;
    long size;
    long totalBytesRead = 0;

    using (Stream stream = File.OpenRead((string) e.Argument))
    using (HashAlgorithm hashAlgorithm = MD5.Create())
    {
        size = stream.Length;

        buffer = new byte[4096];

        bytesRead = stream.Read(buffer, 0, buffer.Length);
        totalBytesRead += bytesRead;

        do
        {
            oldBytesRead = bytesRead;
            oldBuffer = buffer;

            buffer = new byte[4096];
            bytesRead = stream.Read(buffer, 0, buffer.Length);

            totalBytesRead += bytesRead;

            if (bytesRead == 0)
            {
                hashAlgorithm.TransformFinalBlock(oldBuffer, 0, oldBytesRead);
            }
            else
            {
                hashAlgorithm.TransformBlock(oldBuffer, 0, oldBytesRead, oldBuffer, 0);
            }

            BackgroundWorker.ReportProgress((int) ((double) totalBytesRead * 100 / size));
        } while (bytesRead != 0);

        e.Result = hashAlgorithm.Hash;
    }
}

The full source code is attached to this article.

Downloads (11,121 Bytes)
MD5 Sample application source code


Posted by Henning Krause on Saturday, June 9, 2007 12:00 AM, last modified on Saturday, June 9, 2007 12:00 PM

Antivirus software breaks .NET application [Updated]

For a few days now, I have been getting an exception saying

Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

I got this exception while debugging my InfiniTec.Exchange library, which does a lot of web requests using the HttpWebRequest class. I thought it was some sort of multi-threading issue on my side, since I use my InfiniTec.Threading library in that project. I also thought it could have something to do with the fact that I now have a dual-core processor...

Well, it turned out that these theories were all wrong. It was my virus scanner... Which is rather curious, because all I had activated was the file system scanner - network scanning was deactivated... But when I uninstalled it, the symptoms disappeared instantly.

Surprisingly, this issue has been around for a while now - I found a posting in a Microsoft forum dated February 28, 2006, almost a year old (the forum post can be found here).

Anyway - now I have to find a new virus scanner which doesn't break .NET applications.


Wow - that was fast. I opened a support incident at Eset, the creators of NOD32, and within one hour, they responded to my question.

The solution is to reinstall NOD32 but disable the IMON feature altogether. The support technician told me that this feature will be removed completely in the next version for this very reason. So there is no need to get another virus scanner after all.


Posted by Henning Krause on Tuesday, February 13, 2007 12:00 AM, last modified on Tuesday, February 13, 2007 12:00 PM

Two-way databinding with ASP.NET struggles with DateTime conversion

When you create a website with ASP.NET 2.0 and use two-way databinding with an ObjectDataSource, you might have come across this error:

Cannot convert value of parameter 'Created' from 'System.String' to 'System.DateTime'

This happens when you bind a data field with a DateTime datatype to a textbox, either with a BoundField or with the <%# Bind("...") %> expression.

One additional requirement for this error to occur is that the user locale is set to something non-English (or, more precisely, anything that differs from the invariant culture).

Digging deeper into the ObjectDataSourceView with Reflector, I found the cause of this strange behavior: the Bind keyword or the BoundField class formats the value according to a format string. By default, a DateTime is therefore displayed in German as follows:

29.11.2006 09:12:24 (i.e. dd.MM.yyyy HH:mm:ss)

When ASP.NET attempts to write back the changes made by the user, it reads the value from the textbox and tries to convert it from a string to a DateTime object. It does this by retrieving the TypeConverter responsible for the DateTime class and calling typeConverter.ConvertFromInvariantString(). This obviously fails with a non-invariant string and produces the error seen above.
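The effect can be reproduced outside of ASP.NET with a few lines – a minimal sketch using the German-formatted value from above:

```csharp
using System;
using System.Globalization;

class CultureParsingDemo
{
    static void Main()
    {
        string german = "29.11.2006 09:12:24";

        // Parsing with the matching culture works fine...
        DateTime parsed = DateTime.Parse(german, CultureInfo.GetCultureInfo("de-DE"));
        Console.WriteLine(parsed.Day);   // 29

        // ...but the invariant culture cannot make sense of dd.MM.yyyy,
        // which is exactly what happens inside ConvertFromInvariantString.
        DateTime ignored;
        bool invariantOk = DateTime.TryParse(german, CultureInfo.InvariantCulture,
                                             DateTimeStyles.None, out ignored);
        Console.WriteLine(invariantOk);  // False
    }
}
```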

The default.aspx file in the attached Visual Studio Solution shows this error.


The workaround I'm using for this issue is to attach an ItemUpdating event handler to the data control (DetailsView, FormView, etc.). This handler converts the value back to an invariant-culture date string:

protected void DetailsView1_ItemUpdating(object sender, DetailsViewUpdateEventArgs e)
{
    e.NewValues["Created"] = DateTime.Parse((string) e.NewValues["Created"]).ToString(CultureInfo.InvariantCulture);
}

The solution.aspx file implements this workaround.

Downloads (18,427 Bytes)
Example application which demonstrates the described problem and workaround


Posted by Henning Krause on Wednesday, November 29, 2006 12:00 AM, last modified on Wednesday, November 29, 2006 12:00 PM