Team Foundation Server 2010 introduced Team Project Collections for organising Team Projects into groups. Collections also provide a self-contained unit for moving Team Projects between servers, and this process is well documented and supported.
However, if you’ve ever tried moving a Team Project Collection you’ll find the documentation is a long list of manual steps, and one of the more tedious is Saving Reports. This step basically tells you to use the Report Manager web interface to manually save every report for every Team Project in the collection as an .RDL file. A single project based on the MSF for Agile template contains 15 reports across 5 folders, so you can easily spend a long while clicking away in your browser.
To alleviate the pain, I’ve written a PowerShell script which accepts two parameters. The first is the URL for the Team Project Collection, and the second is the destination path to save the .RDL files to. The script will query the Team Project Collection for its Report Server URL and list of Team Projects via the TFS API, then it will use the Report Server web services to download the report definitions to the destination, maintaining the folder hierarchy and time stamps. You can access this script, called Export-TfsCollectionReports, on Gist.
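To give you a sense of the download side, here is a minimal sketch against the SSRS SOAP endpoint. The server URL, destination folder, and starting at the root folder are placeholder assumptions for illustration; the actual Export-TfsCollectionReports script resolves the Report Server URL and project list from the collection via the TFS API.

```powershell
# Sketch only: download every report under a folder as an .rdl file,
# preserving the folder hierarchy and modification times.
# $reportServerUrl and $destination are placeholders.
$reportServerUrl = "http://myserver/ReportServer/ReportService2005.asmx"
$destination = "C:\Backup\Reports"

$rs = New-WebServiceProxy -Uri $reportServerUrl -UseDefaultCredential
# ListChildren with $true recurses the whole folder hierarchy
foreach ($item in $rs.ListChildren("/", $true)) {
    if ($item.Type -eq "Report") {
        $bytes = $rs.GetReportDefinition($item.Path)
        $target = Join-Path $destination ($item.Path.TrimStart('/') + ".rdl")
        New-Item -ItemType Directory -Path (Split-Path $target) -Force | Out-Null
        [IO.File]::WriteAllBytes($target, $bytes)
        # keep the server-side modification time on the exported file
        (Get-Item $target).LastWriteTime = $item.ModifiedDate
    }
}
```

New-WebServiceProxy generates the SOAP proxy for you at runtime, so there is no need to pre-generate one with wsdl.exe.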
Obviously, when you reach the step to import the report definitions on the new server, you’ll want a similar script to help. Unfortunately, I haven’t written that one yet, but I will post it to my blog when I do. In the meantime you could follow the same concepts used in the export script to write one yourself.
In late September Paul Stovell wrote about a set of VB.NET scripts he prepared to help deploy reports to SQL Server Reporting Services. If you’ve ever had the displeasure of deploying SSRS reports without Visual Studio then you’ll understand how much it sucks.
Paul went to the effort of writing individual scripts for creating folders and data sources on the server, uploading report definitions, and configuring permissions. With Paul’s work, simple command scripts can then be used to deploy reports.
However these command scripts still need to be written and they end up containing much of the same information as can be found in the .rptproj project file and the .rds data source files. I despise the idea of maintaining any sort of configuration information in more than one place so adding to the deploy command script whenever I add a report to the project in Visual Studio just makes me cringe.
Additionally, as Paul briefly mentions, MSBuild (and therefore Team Build) does not support Reporting Services projects so, once again, to deploy your reports as part of Continuous Integration you need separate tools.
Today I constructed a lengthy PowerShell script that takes a Reporting Services .rptproj project file and outputs a command script that utilises Paul’s VB.NET scripts to deploy the reports as per the project settings. Due to the size of the script I haven’t published it inline; you can download it here.
The script accepts three parameters. The first, ProjectFile, is the path to the .rptproj file for the reports you want to deploy; if you omit it, the script uses the first report project file it finds in the current directory. The second, ConfigurationName, tells the script which project configuration to use for the target server URL and destination folders; if omitted, the script uses the first configuration defined in the project. The last, SearchPaths, is a list of paths for the script to search when locating both rs.exe and Paul’s .rss files; it is automatically combined with the environment PATH variable and may be omitted.
Here is an example usage:
PS C:\Users\Jason\Dev\MyReports> .\Deploy-SqlReports.ps1 `
-ProjectFile MyReports.rptproj `
-ConfigurationName Release `
-SearchPaths "C:\Tools\Report Services\" `
| Out-File deploy.cmd -Encoding ASCII;
As always, my PowerShell skills are slowly improving and this script is not necessarily perfect in either robustness or efficient use of PowerShell. Hopefully it will be as useful to you as it has been to me and any changes you need should be easily made. Please leave a comment with your thoughts and suggestions.
It has been a while since I last worked with storing files in a SQL database and I decided to Google around to remind myself of the best way to do it. I was very disappointed with most of the approaches I found. Unfortunately, my Google-Fu didn’t return the MSDN articles I’ve linked to below, and I had to find out the hard way.
To begin with, all the solutions I found dealt only with reading a BLOB from a SQL Server image or varbinary(max) column; none covered writing one. Worst of all, very few actually understood what streaming should do, which is to avoid loading the entire object into an array in memory.
My whinging aside, streaming a file out of a SQL table is easy. You start with a DataReader created by passing CommandBehavior.SequentialAccess to a DbCommand’s ExecuteReader method. I also find it most effective to select only the blob column and only the desired row(s) from the table.
When you have the DataReader positioned on the appropriate record you repeatedly call the GetBytes method in a loop, retrieving a small chunk each time and writing it to the output stream. The output can be any IO.Stream like a file or even your ASP.NET response. This MSDN article has a good description of the situation with the SequentialAccess enumeration and some sample code.
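The reading loop can be sketched as follows. The table and column names (Documents, Content, Id), the connection string, and the output path are all illustrative assumptions, not from any particular sample.

```powershell
# Sketch: stream a varbinary(max) column to a file without buffering the whole blob.
$conn = New-Object Data.SqlClient.SqlConnection "Server=.;Database=MyDb;Integrated Security=true"
$conn.Open()
$cmd = $conn.CreateCommand()
# select only the blob column and only the row we want
$cmd.CommandText = "SELECT Content FROM Documents WHERE Id = @Id"
$cmd.Parameters.AddWithValue("@Id", 1) | Out-Null

# SequentialAccess lets us read the column in chunks instead of buffering the entire row
$reader = $cmd.ExecuteReader([Data.CommandBehavior]::SequentialAccess)
if ($reader.Read()) {
    $output = [IO.File]::Create("C:\Temp\document.bin")   # any writable IO.Stream works here
    $buffer = New-Object byte[] 8192                      # chunk size is worth tuning
    $offset = [long]0
    while (($read = $reader.GetBytes(0, $offset, $buffer, 0, $buffer.Length)) -gt 0) {
        $output.Write($buffer, 0, [int]$read)
        $offset += $read
    }
    $output.Close()
}
$reader.Close()
$conn.Close()
```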
Writing a stream of data into a SQL table turned out to be slightly less obvious. I’m only working with SQL Server 2005 so I didn’t consider supporting older versions, but the approach is similar. SQL Server 2005 provides a .WRITE “method” on the large-value data types in the UPDATE statement.
My solution was to first insert the new row into the table providing values for all columns except the blob. Then I had a stored procedure that would take the row’s primary key values, an offset, and a chunk of the data to insert and use the UPDATE .Write method to update the row.
Similar to the reading code, my writing code would read a small chunk from the incoming IO.Stream and pass it to the stored procedure, incrementing the offset each time. Once again, there is another MSDN article that describes the process well, but their code looks like it will also work with SQL Server versions prior to 2005.
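For illustration, here is the writing side sketched with an inline UPDATE rather than a stored procedure; the table Documents(Id int, Content varbinary(max)), connection string, and file path are assumptions. Note that .WRITE cannot modify a NULL value, so the row is seeded with an empty blob (0x) first.

```powershell
# Sketch: stream a file into a varbinary(max) column in chunks via UPDATE ... .WRITE.
$conn = New-Object Data.SqlClient.SqlConnection "Server=.;Database=MyDb;Integrated Security=true"
$conn.Open()

# Insert the row first, seeding the blob column with an empty (non-NULL) value.
$seed = $conn.CreateCommand()
$seed.CommandText = "INSERT INTO Documents (Id, Content) VALUES (@Id, 0x)"
$seed.Parameters.AddWithValue("@Id", 1) | Out-Null
$seed.ExecuteNonQuery() | Out-Null

# .WRITE(@Chunk, @Offset, NULL) replaces everything from @Offset to the end with @Chunk.
$cmd = $conn.CreateCommand()
$cmd.CommandText = "UPDATE Documents SET Content.WRITE(@Chunk, @Offset, NULL) WHERE Id = @Id"
$cmd.Parameters.AddWithValue("@Id", 1) | Out-Null
$chunkParam  = $cmd.Parameters.Add("@Chunk",  [Data.SqlDbType]::VarBinary, -1)
$offsetParam = $cmd.Parameters.Add("@Offset", [Data.SqlDbType]::BigInt)

$input  = [IO.File]::OpenRead("C:\Temp\document.bin")
$buffer = New-Object byte[] 8192
$offset = [long]0
while (($read = $input.Read($buffer, 0, $buffer.Length)) -gt 0) {
    # pass exactly $read bytes; the final chunk is usually smaller than the buffer
    $chunkParam.Value  = if ($read -eq $buffer.Length) { $buffer } else { [byte[]]$buffer[0..($read - 1)] }
    $offsetParam.Value = $offset
    $cmd.ExecuteNonQuery() | Out-Null
    $offset += $read
}
$input.Close()
$conn.Close()
```

Wrapping the UPDATE in a stored procedure, as described above, keeps the table details out of the application code; the chunking loop is the same either way.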
In both cases, tweaking the size of the chunk used in each iteration of the loop will require some testing and measuring to find the best performance, but now you can read and write files of almost 2GB to and from SQL Server without first trying to allocate a similarly sized array in memory.