Category: Dot Net

Visual Studio Solutions, Projects, and shared code

I have been having numerous discussions with a variety of people about shared code in .NET code bases and I decided to blog my thoughts on the topic here – partly to reduce repetition, partly to help me distill the concepts in my own mind.

To clarify, these are my guidelines or rules of thumb. It is where I start when investigating options to improve handling shared code but I will bend these rules when required and I reserve the right to change my mind based on my future experiences.

To begin, there seem to be two basic perspectives on the purpose of a Visual Studio Solution.

  1. A Solution is a container, a boundary. It includes everything required for a software system to be built and tested. The only dependencies external to the Solution are third party dependencies or internal dependencies from a package management system like NuGet.
  2. A Solution is a view, a window. It includes only the necessary items to work with a particular aspect of a software system. Projects within a Solution will routinely be dependent on other Projects not in the Solution. There will often be multiple Solutions that overlap with, or completely encompass, other Solutions.

I subscribe to the first group. I believe this is the model that Visual Studio guides developers toward through its default behaviours and through the challenges that arise when veering away from this model. I believe that a new team member should be able to clone a clean working set of source from version control and build the Solution and have all they need within the IDE. I like that a successful build of the open Solution (mostly) indicates that I haven’t accidentally changed or removed code used elsewhere.

To follow, given a common scenario of two mostly discrete Solutions that currently share a common Project between them, I start asking:

  • Can the Project be moved into a new, third Solution and packaged as a NuGet package? The original Solutions then reference this shared Project by its package from a (private) NuGet repository. This can lengthen the feedback cycle when debugging, so if it leads to a poor experience because the shared Project is a common source of issues, a better suite of Integration Tests in the third Solution may help. If the shared Project changes often to implement features rather than to fix bugs, this may not be a good option.
  • Can the two Solutions be combined into one all-inclusive Solution? Would the new Solution then have too many Projects, making the build and/or test experience too slow or resource intensive? If the Project count is too high and code has been separated into Projects simply to enforce layer separation, perhaps some Projects can be consolidated and a tool like NDepend used to enforce the separation instead.
  • Do the two Solutions together represent too large a system? Is the coupling to the shared Project an indication of a design that would benefit from significant refactoring, for example favouring composition over inheritance, as sketched below?
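To illustrate that last point, here is a minimal sketch of what favouring composition over inheritance can look like. The SalesReport, IReportFormatter and ReportData names are hypothetical, not from any real code base; the idea is that the dependency on formerly shared code shrinks to a single, locally-owned abstraction:

    // Before: SalesReport inherits from a base class that lives in the
    // shared Project, coupling this Solution to its implementation details.
    //
    //   public class SalesReport : SharedReportBase { ... }

    // After: SalesReport composes a small abstraction this Solution owns.
    public class ReportData
    {
        public string Title { get; set; }
    }

    public interface IReportFormatter
    {
        string Format(ReportData data);
    }

    public class SalesReport
    {
        private readonly IReportFormatter formatter;

        public SalesReport(IReportFormatter formatter)
        {
            this.formatter = formatter;
        }

        public string Render(ReportData data)
        {
            // Whatever shared behaviour is still needed is confined to the
            // IReportFormatter implementation supplied by the caller.
            return formatter.Format(data);
        }
    }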

Finally, what is the value of sharing the common Project at all? In my experience, increased code reuse is associated with higher coupling. Duplicating the shared code instead may prove beneficial in other stages of the delivery cycle and reduce each Solution's influence on the other.

I am also reminded of Paul Stovell’s short series of useful articles about Integration. The Shared Database solution is an example where a Data Access Layer Project might be shared between two Solutions but the Messaging approach is an example where the two Solutions could be much more independent.

Binding… and IList&lt;T&gt; does not inherit from IList

A note to myself, and to others who may find it useful:

Use System.Collections.ObjectModel.Collection<T> (or a subclass) as the return type for properties exposing collections of related items on your presentation model classes if you intend to bind to the relationships with Windows Forms BindingSources.

Collection&lt;T&gt; appears to be the lowest-level type in Framework 3.5 that provides both the non-generic IList implementation required by the binding system and strongly-typed programmatic access to the elements of the collection. It is the base for BindingList&lt;T&gt;, ObservableCollection&lt;T&gt; and many others, and you can easily inherit from it yourself.

Using a return type of IList&lt;T&gt; or ICollection&lt;T&gt; instead, which may be preferred for being less concrete, is not sufficient: neither of these interfaces inherits from the non-generic IList, so the binding will fail (sometimes throwing exceptions) at run-time when the parent list of the bound relationship is empty.
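As a minimal sketch, assuming hypothetical Order and OrderLine presentation model classes, with the parent/child relationship wired up through two BindingSources:

    using System.Collections.ObjectModel;
    using System.Windows.Forms;

    public class OrderLine
    {
        public string Product { get; set; }
        public int Quantity { get; set; }
    }

    public class Order
    {
        private readonly Collection<OrderLine> lines = new Collection<OrderLine>();

        public string Customer { get; set; }

        // Collection<OrderLine> provides the non-generic IList the binding
        // system requires while keeping element access strongly typed.
        public Collection<OrderLine> Lines
        {
            get { return lines; }
        }
    }

    public class OrdersForm : Form
    {
        public OrdersForm(Collection<Order> orders)
        {
            var ordersSource = new BindingSource { DataSource = orders };

            // DataMember names the collection property on the current Order,
            // binding this source to the parent/child relationship.
            var linesSource = new BindingSource(ordersSource, "Lines");

            var linesGrid = new DataGridView
            {
                Dock = DockStyle.Fill,
                DataSource = linesSource
            };
            Controls.Add(linesGrid);
        }
    }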

Streaming Large Objects with ADO.NET, Properly

It has been a while since I last worked with storing files in a SQL database and I decided to Google around to remind myself of the best way to do it. I was very disappointed with most of the approaches I found. Unfortunately, my Google-Fu didn’t return the MSDN articles I’ve linked to below, and I had to find out the hard way.

To begin, all the solutions I found dealt only with reading a BLOB from a SQL Server image or varbinary(max) column, with nothing on writing. Worst of all, very few actually understood what streaming should do: namely, not load the entire object into an array in memory.

My whinging aside, streaming a file out of a SQL table is easy. You start by creating a DataReader, passing CommandBehavior.SequentialAccess to the DbCommand’s ExecuteReader method. I also find it most effective to select only the blob column and only the desired row(s) from the table, since SequentialAccess requires columns to be read in the order they were selected.

When you have the DataReader positioned on the appropriate record, you repeatedly call its GetBytes method in a loop, retrieving a small chunk each time and writing it to the output stream. The output can be any System.IO.Stream, such as a file or even your ASP.NET response. This MSDN article has a good description of the SequentialAccess behaviour and some sample code.
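Here is a sketch of the read side, assuming a hypothetical Documents table with an int Id key and a varbinary(max) Content column:

    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    static void ReadBlobTo(Stream output, string connectionString, int documentId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT Content FROM Documents WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", documentId);
            connection.Open();

            // SequentialAccess is the key: without it the reader buffers the
            // entire row, blob included, as soon as Read() is called.
            using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
            {
                if (!reader.Read())
                    return;

                var buffer = new byte[8192]; // chunk size worth measuring and tuning
                long offset = 0;
                long bytesRead;

                // GetBytes copies the next chunk into the buffer and returns the
                // number of bytes copied; zero means the end of the blob.
                while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    output.Write(buffer, 0, (int)bytesRead);
                    offset += bytesRead;
                }
            }
        }
    }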

Writing a stream of data into a SQL table turned out to be slightly less obvious. I’m only working with SQL Server 2005 so I didn’t consider supporting older versions, but the approach is similar. SQL 2005 provides a .WRITE clause for the large value data types in the UPDATE statement.

My solution was to first insert the new row into the table, providing values for all columns except the blob (which should be initialised to an empty value rather than NULL, because the .WRITE clause cannot be applied to a NULL value). Then a stored procedure took the row’s primary key values, an offset, and a chunk of the data to insert, and used the UPDATE .WRITE syntax to update the row.

Similar to the reading code, my writing code would read a small chunk from the incoming IO.Stream and pass it to the stored procedure, incrementing the offset each time. Once again, there is another MSDN article that describes the process well but their code looks like it will also work with SQL versions prior to 2005.
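Here is a sketch of the write side against the same hypothetical Documents table. The AppendDocumentChunk stored procedure is an assumed name:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    // Assumed stored procedure wrapping the SQL 2005 UPDATE .WRITE syntax,
    // inserting @chunk at @offset (replacing zero existing bytes):
    //
    //   CREATE PROCEDURE AppendDocumentChunk
    //       @id int, @offset bigint, @chunk varbinary(max)
    //   AS
    //       UPDATE Documents SET Content.WRITE(@chunk, @offset, 0)
    //       WHERE Id = @id;

    static void WriteBlobFrom(Stream input, string connectionString, int documentId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("AppendDocumentChunk", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add("@id", SqlDbType.Int).Value = documentId;
            var offsetParam = command.Parameters.Add("@offset", SqlDbType.BigInt);
            var chunkParam = command.Parameters.Add("@chunk", SqlDbType.VarBinary, -1);

            connection.Open();

            var buffer = new byte[8192];
            long offset = 0;
            int bytesRead;

            // Read a chunk from the incoming stream and append it to the row,
            // advancing the offset each time, until the stream is exhausted.
            while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                var chunk = buffer;
                if (bytesRead < buffer.Length)
                {
                    // Send only the bytes actually read on a short final chunk.
                    chunk = new byte[bytesRead];
                    Array.Copy(buffer, chunk, bytesRead);
                }

                offsetParam.Value = offset;
                chunkParam.Value = chunk;
                command.ExecuteNonQuery();
                offset += bytesRead;
            }
        }
    }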

In both cases, tweaking the size of the chunk used in each iteration of the loop will require some testing and measuring to find the best performance, but now you can read and write files of almost 2GB into SQL Server without first trying to allocate a similarly sized array in memory.

ClickOnce and Terminal Services

Under just the right conditions that I have been lucky enough to meet, .NET applications deployed by ClickOnce will not start under a Terminal Services session. At the heart of the problem is this error message:

Shortcut activation from http/https or UNC sources is not allowed.

I could not find anything on Google, on the MSDN forums or in the Microsoft Support Knowledge Base. With some trial and error and a little reflection I’ve tracked it down to this very special combination:

  • ClickOnce offline install mode is enabled
  • and the application is started via the Start Menu shortcut
  • and the user is logged into a Terminal Services session
  • and Terminal Services is redirecting the Start Menu to a UNC.

Changing any one of these conditions is enough to avoid the problem, which probably explains why it isn’t documented anywhere. The error message is triggered by code in the System.Deployment assembly and it cannot be overridden by a system setting either.

Redirecting the Terminal Services Start Menu to a UNC path is also a common practice when running multiple terminal servers or when sharing a Start Menu between all users (as was my case).

I will be recommending that users simply start the apps via the same ClickOnce deployment URL that was used for the initial install, and I will try to disable TS folder redirection where possible.

It seems ClickOnce uses the shortcut location later in the bootstrap process, and the problem exists in Orcas too, so I don’t think Microsoft will be fixing this one.

Deblector arrives

Felice Pollano has just released the first version of the new Deblector addin for debugging within Reflector. It’s only alpha at the moment and has some obvious bugs, but it is very promising. In the past I have looked at having a Virtual PC with a hacked rebuild of the core framework with debugging enabled. Now I can step into framework code just as easily within the familiar Reflector interface. I don’t have much experience with IL, so I was a little confused at first when multiple lines of IL seemed to execute at once, but I quickly realised they all applied to setting up a method call. I suspect, if it is possible, we may even eventually be able to step through the framework viewed in our chosen language. Keep an eye on this one.

Deblector

Lutz Roeder… you’re the bomb. I’ve been using .NET Reflector for quite some time now and I love every bit of it. If you aren’t using it yet, go get it. However, there is one thing I’ve always wanted to do with Reflector: use it to debug code. As I recently discovered on Mike Stall’s excellent debugging blog, someone is working on integrating debugging into Reflector. That someone is Felice Pollano, and hopefully Deblector will soon become one of my primary development tools too.