Category: Visual Studio

Visual Studio Solutions, Projects, and shared code

I have been having numerous discussions with a variety of people about shared code in .NET code bases and I decided to blog my thoughts on the topic here – partly to reduce repetition, partly to help me distill the concepts in my own mind.

To clarify, these are my guidelines, or rules of thumb. They are where I start when investigating options to improve the handling of shared code, but I will bend these rules when required, and I reserve the right to change my mind based on future experiences.

To begin, there seem to be two basic perspectives on the purpose of a Visual Studio Solution.

  1. A Solution is a container, a boundary. It includes everything required for a software system to be built and tested. The only dependencies external to the Solution are third party dependencies or internal dependencies from a package management system like NuGet.
  2. A Solution is a view, a window. It includes only the necessary items to work with a particular aspect of a software system. Projects within a Solution will routinely be dependent on other Projects not in the Solution. There will often be multiple Solutions that overlap with, or completely encompass, other Solutions.

I subscribe to the first perspective. I believe this is the model that Visual Studio guides developers toward through its default behaviours and through the challenges that arise when veering away from it. I believe that a new team member should be able to clone a clean working copy of the source from version control, build the Solution, and have everything they need within the IDE. I like that a successful build of the open Solution (mostly) indicates that I haven’t accidentally changed or removed code used elsewhere.

Following on from this, given the common scenario of two mostly discrete Solutions that currently share a common Project, I start asking:

  • Can the Project be moved into a new, third Solution and packaged as a NuGet package? The original Solutions then reference this shared Project via its package from a (private) NuGet repository (a rough sketch of this follows the list). This can lengthen the feedback cycle when debugging, so if this leads to a poor experience because the shared Project is a common source of issues, a better suite of Integration Tests in the third Solution may help. If the shared Project changes often to implement features rather than to fix bugs, this may not be a good option.
  • Can the two Solutions be combined into one all-inclusive Solution? Would the new Solution then have too many Projects, making the build and/or test experience too slow or resource intensive? If the Project count is too high and code has been separated into Projects simply to enforce layer separation, perhaps some Projects can be consolidated and a tool like NDepend used to enforce the separation instead.
  • Do the two Solutions together represent too large a system? Is the coupling to the shared Project an indication of a design that would benefit from significant refactoring – for example, favouring composition over inheritance?
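As a rough sketch of the first option, the shared Project’s new Solution could pack and push the Project with nuget.exe as part of its build. The project name, version, feed URL, and credential handling below are all placeholders:

$ApiKey = $env:NUGET_API_KEY  # however you manage feed credentials
# -Build compiles the project before packing it into a .nupkg
& nuget pack SharedLibrary\SharedLibrary.csproj -Build -Properties Configuration=Release
# Publish to the private repository that the original Solutions restore from
& nuget push SharedLibrary.1.0.0.nupkg -Source http://nuget.example.local/ -ApiKey $ApiKey

The two original Solutions then depend on a published package version rather than on the Project itself.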

Finally, what is the value of sharing the common Project? In my experience, increased code reuse is associated with higher coupling. Duplicating the shared code instead may prove beneficial in other stages of the delivery cycle and reduce each Solution’s influence on the other.

I am also reminded of Paul Stovell’s short series of useful articles about Integration. The Shared Database solution is an example where a Data Access Layer Project might be shared between two Solutions but the Messaging approach is an example where the two Solutions could be much more independent.

Rules for Customising a .NET Build

It doesn’t take long before any reasonable software project requires a build process that does more than the IDE’s default Build command. When developing software based on the .NET platform there are several different ways to extend the build process beyond the standard Visual Studio compilation steps.

Here is a list of the places to insert custom build steps into the process, with the simplest first and the least desirable last:

  1. Project pre- and post-build events: There is a GUI, you have access to a handful of project macros, you can perform the same tasks as from a Windows batch script, and it still works with Visual Studio’s F5 build-and-run experience. Unfortunately, only a failure of the last command in the batch will fail the build, and the pre-build event happens before project references are resolved, so avoid using it to copy dependencies. One way to work around the failure-reporting limitation is sketched below.
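The idea is to make the event a single command that delegates to a PowerShell script and returns an explicit exit code. A minimal sketch, where PostBuild.ps1 and the copy destination are hypothetical:

# Post-build event command line in the project properties:
#   powershell -NoProfile -ExecutionPolicy Bypass -File "$(ProjectDir)PostBuild.ps1" "$(TargetDir)"
param([string]$TargetDir)
try {
  # -ErrorAction Stop turns a failed copy into a terminating error we can catch
  Copy-Item -Path (Join-Path $TargetDir '*.dll') -Destination 'C:\SharedBin' -ErrorAction Stop
  exit 0
}
catch {
  Write-Error $_
  exit 1  # a non-zero exit code fails the build event, and therefore the build
}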

Query all file references in a Visual Studio solution with PowerShell

Today I was working on introducing Continuous Integration to a legacy code base and was discovering the hard way that the solution of about 20 projects had many conflicting references to external assemblies. Some assemblies were referenced at different versions, others at the same version but from different paths, and others were missing altogether. Needless to say, this wasn’t going to build cleanly on a build server.

Rather than manually checking the path and version of every assembly referenced by every project in the solution, I wrote a PowerShell script to parse a Visual Studio 2010 solution file, identify all the projects, then parse the project files for the reference information. The resulting Get-VSSolutionReferences.ps1 script is available on GitHub.
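The core of the approach is simple enough to sketch here. The following is illustrative only – the published script is more thorough, and the property names are my own choices:

$SolutionPath = Resolve-Path MySolution.sln
$SolutionDir = Split-Path $SolutionPath

# Solution files list projects as: Project("{GUID}") = "Name", "relative\path.csproj", "{GUID}"
Select-String -Path $SolutionPath -Pattern '= ".+", "(.+\.csproj)"' |
  ForEach-Object { $_.Matches[0].Groups[1].Value } |
  ForEach-Object {
    $ProjectPath = Join-Path $SolutionDir $_
    $Project = [xml](Get-Content $ProjectPath)
    # Each <Reference Include="Name, Version=..."> element may carry a <HintPath> child
    $Project.Project.ItemGroup | ForEach-Object { $_.Reference } | Where-Object { $_ } |
      ForEach-Object {
        New-Object PSObject -Property @{
          Project  = Split-Path $ProjectPath -Leaf
          Name     = ($_.Include -split ',')[0]
          HintPath = $_.HintPath
        }
      }
  }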

Once I had a collection of objects representing all the assembly references I could perform some interesting analysis. Here is a really basic example of how to list all the assemblies referenced by two or more projects:

$Refs = & .\Get-VSSolutionReferences.ps1 MySolution.sln
$Refs | group Name | ? { $_.Count -ge 2 }

It doesn’t take much more to find version mismatches or files that don’t exist at the specified paths, but I’ll leave that as an exercise for you, the reader.
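As a starting point for that exercise – assuming the reference objects expose Name, Version, and HintPath properties (the script’s output defines the exact names):

# Assemblies referenced with more than one distinct version across projects
$Refs | group Name | ? { @($_.Group | group Version).Count -gt 1 }

# References whose assembly is missing from the path recorded in the project file
$Refs | ? { $_.HintPath -and -not (Test-Path $_.HintPath) }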

Analyse Code Coverage with PowerShell

Visual Studio 2008 Team System’s built-in Code Coverage is nice, but the standard results window only allows you to drill down through each assembly, then namespace, class, and finally method. You can’t easily find the class with the fewest covered blocks, something I needed to do the other day.

I found John Cunningham’s blog about “off-road” code coverage and was pleased to see that Microsoft had provided an assembly in Visual Studio that can be used to parse the *.coverage file output by a test run. I followed his example to write a PowerShell script to provide basic access to the data.

You can download my script here.

Then you can use it like this:

$CoverageDS = ./Get-CodeCoverageDataSet.ps1 "data.coverage"
$CoverageDS.Class `
  | Sort-Object -Property BlocksNotCovered -Descending `
  | Select-Object `
    -First 25 `
    -Property `
      BlocksNotCovered, `
      @{
        Name = "Namespace";
        Expression = {
          $CoverageDS.NamespaceTable.FindByNamespaceKeyName($_.NamespaceKeyName).NamespaceName
        }
      }, `
      ClassName

The coverage file is typically found in the TestResults\[TestRunName]\In\[ComputerName]\ folder. You can perform queries over methods or lines rather than classes by using the other tables in the returned dataset, and you can use the ConvertTo-Html cmdlet to create a quick report for your team, as shown below.
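For example, a minimal report built from the same columns used above:

$CoverageDS.Class `
  | Sort-Object -Property BlocksNotCovered -Descending `
  | Select-Object -First 25 -Property ClassName, BlocksNotCovered `
  | ConvertTo-Html -Title "Classes with the most uncovered blocks" `
  | Out-File CoverageReport.html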

Report Services Automation With PowerShell

In late September Paul Stovell wrote about a set of VB.NET scripts he prepared to help deploy reports to SQL Server Reporting Services. If you’ve ever had the displeasure of deploying SSRS reports without Visual Studio then you’ll understand how much it sucks.

Paul went to the effort of writing individual scripts for creating folders and data sources on the server, uploading report definitions, and configuring permissions. With Paul’s work, simple command scripts can then be used to deploy reports.

However, these command scripts still need to be written, and they end up containing much of the same information as can be found in the .rptproj project file and the .rds data source files. I despise the idea of maintaining any sort of configuration information in more than one place, so adding to the deploy command script whenever I add a report to the project in Visual Studio just makes me cringe.

Additionally, as Paul briefly mentions, MSBuild (and therefore Team Build) does not support Report Services projects, so, once again, deploying your reports as part of Continuous Integration requires separate tools.

Today I constructed a lengthy PowerShell script that takes a Report Services .rptproj project file and outputs a command script that utilises Paul’s VB.NET scripts to deploy the reports as per the project settings. Rather than publishing it inline, due to its size, you can download it here.

The script accepts three parameters:

  • ProjectFile: the path to the .rptproj file for the reports you want to deploy. If omitted, the script uses the first report project file it finds in the current directory.
  • ConfigurationName: the project configuration to use for the target server URL and destination folders. If omitted, the script uses the first configuration defined in the project.
  • SearchPaths: a list of paths for the script to search when locating both rs.exe and Paul’s .rss files. This parameter is automatically combined with the environment PATH variable and may be omitted.

Here is an example usage:

PS C:\Users\Jason\Dev\MyReports> .\Deploy-SqlReports.ps1 `
    -ProjectFile MyReports.rptproj `
    -ConfigurationName Release `
    -SearchPaths "C:\Tools\Report Services\" `
    | Out-File deploy.cmd -Encoding ASCII;

As always, my PowerShell skills are slowly improving, and this script is not necessarily perfect in either robustness or efficient use of PowerShell. Hopefully it will be as useful to you as it has been to me, and any changes you need should be easy to make. Please leave a comment with your thoughts and suggestions.