Category: PowerShell

Get Hyper-V guest data without XML parsing

I recently needed to query the Hyper-V KVP Exchange data for a guest VM to find the currently configured IPv4 address of the VM’s network adapter. A quick search of the Internet reveals that the Msvm_KvpExchangeComponent WMI class is the source of this information and there are at least two blog posts that cover it well:

However, in both of these blogs, the actual data comes back as XML which is then parsed using XPath. The original XML looks something like this:

<INSTANCE CLASSNAME="Msvm_KvpExchangeDataItem">
 <PROPERTY NAME="Data" TYPE="string">
  <VALUE>169.254.103.5</VALUE>
 </PROPERTY>
 <PROPERTY NAME="Name" TYPE="string">
  <VALUE>RDPAddressIPv4</VALUE>
 </PROPERTY>
 <PROPERTY NAME="Source" TYPE="uint16">
  <VALUE>2</VALUE>
 </PROPERTY>
</INSTANCE>

As soon as I saw this XML I recognised it as the DMTF CIM XML format – the same format that the new PowerShell v3 CIM Cmdlets use to transport CIM instances over HTTP (I believe). If this is the format used by PowerShell, it seemed a reasonable assumption that PowerShell or the .NET Framework must already have an implementation for deserializing this XML properly so I wouldn’t have to code it myself.

With Reflector in hand, I started my investigation at ManagementBaseObject.GetText, which converts an instance to XML, but I couldn’t find any complementary method to go in the other direction. I then looked at ManagementBaseObject’s implementation of the ISerializable interface and the corresponding constructor, but that appears to use binary serialization of COM types.

Finally, I turned to the CimInstance class and its implementation of ISerializable and discovered the CimDeserializer. Unfortunately the CimDeserializer methods take a byte array as input and I had strings. So, assuming round-tripping should work, I looked to the CimSerializer, tried passing it a CimInstance, and inspected the byte array that was returned – every second byte was zero and the rest fit within 7 bits… smells like Unicode.

Taking a small gamble, I took the strings from the Msvm_KvpExchangeComponent instance, used System.Text.Encoding.Unicode to convert them to byte arrays and passed them to CimDeserializer.DeserializeInstance. Huzzah! Properly deserialized Msvm_KvpExchangeDataItem instances.
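Putting it together, here is a minimal sketch of the approach. Two assumptions worth flagging: the root\virtualization\v2 namespace applies to Server 2012 hosts (older hosts use root\virtualization), and GuestIntrinsicExchangeItems is where I expect intrinsic items like RDPAddressIPv4 to live:

# find the VM and its KVP exchange component ($VMName holds the VM's name)
$vm  = Get-WmiObject -Namespace 'root\virtualization\v2' -Class Msvm_ComputerSystem -Filter "ElementName='$VMName'"
$kvp = $vm.GetRelated('Msvm_KvpExchangeComponent') | Select-Object -First 1

$deserializer = [Microsoft.Management.Infrastructure.Serialization.CimDeserializer]::Create()
foreach ($itemXml in $kvp.GuestIntrinsicExchangeItems) {
    # re-encode the CIM-XML string as the UTF-16 bytes the deserializer expects
    $bytes  = [System.Text.Encoding]::Unicode.GetBytes($itemXml)
    $offset = [uint32]0
    # each string round-trips into a proper Msvm_KvpExchangeDataItem instance
    $deserializer.DeserializeInstance($bytes, [ref]$offset)
}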

And here is the final PowerShell script to return the items for a given VM name: https://gist.github.com/jstangroome/6068782

PSClrMD – A PowerShell module for CLR Memory Diagnostics

Back in May, the .NET Framework team blogged about a new set of advanced APIs for programmatically inspecting a live process or crash dump of a .NET application. These APIs are called “CLR Memory Diagnostics” or “ClrMD” for short and are available as a pre-release NuGet package called “Microsoft.Diagnostics.Runtime” – I think there may be some naming issues yet to be resolved.

Based on some of the examples in their blog post demonstrating a LINQ-style approach, I thought this library would be very useful in a PowerShell pipeline scenario as well. Although there is already a PowerShell module for debugging with WinDbg (PowerDbg), I wanted the practice of building a PowerShell module and the opportunity to play with the ClrMD library.

Today I started building the first set of cmdlets based on the examples demonstrated in the blog’s code samples and have published the code on GitHub. The cmdlets so far are:

  • Connect-ClrMDTarget – establishes the underlying DataTarget object by attaching to a live process or loading a crash dump file.
  • Get-ClrMDClrVersion – lists the versions of the CLR loaded in the connected process. Typically just one.
  • Connect-ClrMDRuntime – establishes the underlying ClrRuntime object to query .NET-related information. Defaults to the first loaded CLR version in the process.
  • Get-ClrMDThread – lists the CLR threads of the connected CLR runtime.
  • Get-ClrMDHeapObject – lists all the objects in the heap of the connected CLR runtime.
  • Disconnect-ClrMDTarget – detaches from the connected process.

The ClrMD API is centered around having a DataTarget and ClrRuntime instance as context for performing all other operations. In PowerShell, it would be awkward to pass this context as a parameter to every cmdlet, so I wrote the Connect cmdlets to store the context in a module variable which all other cmdlets will naturally inherit. If desired, however, the Connect cmdlets accept a -PassThru switch which will output a context object that can then be passed explicitly to the -Target or -Runtime parameters of the other cmdlets. This would enable two or more processes to be inspected simultaneously, for example.
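For instance, something like this should work (the process IDs are placeholders):

# inspect two processes side by side using explicit contexts
$targetA = Connect-ClrMDTarget -ProcessId 1234 -PassThru
$targetB = Connect-ClrMDTarget -ProcessId 5678 -PassThru
# pass each context explicitly instead of relying on the module-level state
Get-ClrMDClrVersion -Target $targetA
Get-ClrMDClrVersion -Target $targetB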

Included in the source repository is a drive.ps1 script which I used during development to repeatedly try different scenarios and set some default formatting for threads and heap objects. One example in this script is finding the first 20 unique string values in the process, here is an excerpt:

Import-Module -Name PSClrMD
# attach to the current PowerShell process and its first loaded CLR
Connect-ClrMDTarget -ProcessId $PID
Connect-ClrMDRuntime
# find the first 20 unique string values on the managed heap
Get-ClrMDHeapObject |
    Where-Object { $_.IsString } |
    Select-Object -Unique -First 20 -Property SimpleValue
Disconnect-ClrMDTarget

From this you can hopefully see how easy it can be to connect to a running process (PowerShell itself in this case) and query interesting data. It also suggests that I should combine the two Connect cmdlets into one for the common scenario demonstrated here.

Another example to be found in the drive.ps1 script is listing the top memory-consuming objects by type which, when combined with scheduled tasks and Export-Csv, could provide a simple monitoring solution.
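That pipeline could look something like the sketch below. Note that the Type and Size property names are my assumptions about the objects Get-ClrMDHeapObject emits, so treat this as an outline rather than a drop-in script:

# total heap usage by type name, biggest first, exported for trending
Get-ClrMDHeapObject |
    Group-Object -Property Type |
    Select-Object -Property Name, Count,
        @{ Name = 'TotalBytes'; Expression = { ($_.Group | Measure-Object -Property Size -Sum).Sum } } |
    Sort-Object -Property TotalBytes -Descending |
    Select-Object -First 10 |
    Export-Csv -Path heap-usage.csv -NoTypeInformation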

You can download the compiled module here or feel free to get the source and submit a Pull Request for anything you add – I’ve only scratched the surface of what ClrMD exposes.

Update: there is also a scriptcs Script Pack for ClrMD if PowerShell is not your style.

PowerShell v3 for Developers

In November last year I presented a summary of the new features in PowerShell v3 to a small group of my colleagues one evening, covering why developers should care about PowerShell given that most PowerShell marketing targets IT Pros and their AD and Exchange management needs. I also spoke briefly about integrating PowerShell and C#.

The screencast of this presentation (~24 minutes) is available on the Readify in the Community website.

In February just past I presented a more detailed (with demos and code) and much more polished version of this presentation to the Sydney .NET User Group. Thanks to an excellent video recording and post-production setup at this user group, the full video of my presentation (~1 hour) is available to watch online on the SSW TV website.

Also, in the theme of PowerShell for Developers, Microsoft has just released the PowerShell 3 SDK Sample Pack containing many C# examples for extending PowerShell in numerous ways.

Upgrade Visual Studio Scrum 1.0 Team Projects to version 2.0 on TFS 2012

Team Foundation Server 2010 shipped with two default Process Templates, one for Agile and another for CMMI, but Microsoft also provided a third template for Scrum teams as a separate download. With the recent release of Team Foundation Server 2012, the latest version of this additional template (Microsoft Visual Studio Scrum 2.0) is not only included in-the-box but is also now the default template for new Team Projects.

If you move from TFS 2010 to TFS 2012 and upgrade your existing Team Projects in the process, your existing Microsoft Visual Studio Scrum 1.0 Team Projects will stay at version 1.0. The new and much-improved Web Access in TFS 2012 will give you the option to modify the process of an existing project slightly to enable some new TFS 2012 features, but your projects will still be mostly version 1.0. A feature-enabled Scrum 1.0 Team Project will still differ from a new Microsoft Visual Studio Scrum 2.0 Team Project in these aspects (of varying impact):

  • Using the Microsoft.VSTS.Common.DescriptionHtml field for HTML work item descriptions instead of the new HTML-enabled System.Description field.
  • Missing ClosedDate field on Tasks, Bugs and Product Backlog Items.
  • Missing extra reasons on Bug transitions.
  • Sprint start and finish dates still in Sprint work items, instead of on Iterations.
  • Queries still based on work item type instead of work item category.
  • Old Sprint and Product Backlog queries.
  • Missing the new “Feedback” work item query.
  • Missing extra browsers and operating systems in Test Management configuration values.
  • Old reports.

While these differences may be subtle now, they could become critical when adopting third-party tooling designed only to work with the latest TFS process templates, or when trying to upgrade to the next revision of TFS and make the most of its features. For me though, having existing projects stay consistent with new Team Projects I create is probably the most important factor. As such, I’ve scripted most of the process for applying these changes to existing projects because doing it by hand can be rather tedious, especially when you have many Team Projects.

The script and related files are available on GitHub.

To use the script, open a PowerShell v3 session on a machine with Team Explorer 2012 installed. The user account should be a Collection Administrator. The upgrade process may run quicker if run from the TFS Application Tier, in which case the PowerShell session should also be elevated. Ensure your PowerShell execution policy allows scripts. Run the following command:

<path>\UpgradeScrum.ps1 -CollectionUri http://tfsserver:8080/tfs/SomeCollection -ProjectName SomeProject

Swap the placeholder values to suit your environment. The ProjectName parameter can be omitted to process all Team Projects, or you can specify a value containing wildcards (using * or ?), as shown below.
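For example (server and project names are placeholders):

# upgrade every Team Project in the collection
<path>\UpgradeScrum.ps1 -CollectionUri http://tfsserver:8080/tfs/SomeCollection
# upgrade only the Team Projects whose names start with "Web"
<path>\UpgradeScrum.ps1 -CollectionUri http://tfsserver:8080/tfs/SomeCollection -ProjectName 'Web*'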

Aside from fixing most of the differences listed above, the script will copy Sprint dates to their corresponding Iterations and also copy the description HTML from the old field to the new System.Description field. The script will also map the default Team to the existing areas and iterations. The Sprint work item type will remain in case you have saved important Retrospective notes, as the new TFS 2012 template doesn’t have a corresponding field for this information.

One step my upgrade script doesn’t do yet is upload the new Reports but that can be achieved just as easily with the “addprojectreports” capability of the TFS Power Tools (the Beta version works with RTM).

Also, for anyone who used TFS 11 Beta or TFS 2012 RC and created Team Projects based on the Preview 3 or Preview 4 versions of the Scrum 2.0 template, my script will also upgrade those projects to the RTM version of the template. Later I plan to implement a similar script to upgrade existing Agile 5.0 Team Projects to the new Agile 6.0 process template.

Warning: if you have customized your work item type definitions from their original state (eg added extra fields) there is potential for the upgrade to fail or even for data to be lost. However, I have upgraded at least 10 Team Projects so far, all successfully. As the script is written in PowerShell, the implementation is easily accessible for verification or modification to suit your needs.

PowerShell Remoting, User Profiles, Delegation and DPAPI

I’ve been working with a PowerShell script to automatically deploy an application to an environment. The script is initiated on one machine and uses PowerShell Remoting to perform the install on one or more target machines. On the target machines the install process needs the username and password of the service account that the application will be configured to run as.

I despise handling passwords in my scripts and applications so I avoid it wherever possible, and where I can’t avoid it, I make sure I handle them securely. And this is where the fun starts.

By default, PowerShell Remoting suffers from the same multi-hop authentication problem as any other system using Windows security, i.e. the credentials used to connect to the target machine cannot be used to connect from the target machine to another resource requiring authentication. The most promising solution to this in PowerShell Remoting is CredSSP which enables credentials to be delegated but it has some challenges:

  1. It is not enabled by default; you need to execute Enable-WSManCredSSP (see the sketch after this list) or use Group Policy to configure the involved machines to support CredSSP.
  2. It is not available on Windows XP or Server 2003, a minor concern given that these OSes should be dead by now, but worth mentioning.
  3. PS Remoting requires CredSSP to be used with “fresh” credentials even though CredSSP as a technology supports delegating default credentials (this is how RDP uses CredSSP).
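For reference, the commands from point 1 look like this (run from elevated sessions; the machine name is a placeholder):

# on the machine initiating the connection
Enable-WSManCredSSP -Role Client -DelegateComputer 'target.example.com'
# on the target machine receiving delegated credentials
Enable-WSManCredSSP -Role Server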

It is the last point about fresh credentials that kills CredSSP for me; I don’t want to persist another password for my non-interactive script to use when establishing a remoting connection. There is a bug on Microsoft Connect about this that you can vote up.

Instead I need to revert to the pre-CredSSP (and poorly documented) way of supporting multi-hop authentication with PS Remoting: Delegation. At least two things must be configured correctly in Active Directory for this to work.

  1. The user account that is being used to authenticate the PS Remoting session must have its AD attribute “Account is sensitive and cannot be delegated” unchecked.
  2. The computer account of the machine PS Remoting is connecting to must have either the “Trust this computer for delegation to any service” option enabled or the “Trust this computer for delegation to specified services only” option enabled with a list of which services on which machines can be passed delegated credentials. The latter is more secure if you know which services you’ll need.

These changes took a while to start working and I wasn’t initially sure which caches needed to expire. It turned out to be Kerberos: after the PS Remoting target computer refreshed its Kerberos ticket (every 10 hours by default) I could use PS Remoting with the default authentication method (Kerberos) and could authenticate to resources beyond the connected machine. Rebooting the target computer or using the “klist purge” command against the target computer’s system account will force the Kerberos ticket to be refreshed.

With that hurdle overcome, the fun continues with the handling of the application’s service account credentials. PowerShell’s ConvertTo- and ConvertFrom-SecureString cmdlets enable me to encrypt the service account password using a Windows-managed encryption key specific to the current user, in my case this is the user performing the deployment and authenticating the PS Remoting session. As a one-time operation I ran an interactive PowerShell session as the deployment user and used `Read-Host -AsSecureString | ConvertFrom-SecureString` to encrypt the application service account password and I stored the result in a file alongside the deployment script.
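That one-time step, roughly (the output file name is illustrative):

# run interactively as the deployment user; encrypts with that user's DPAPI key
Read-Host -Prompt 'Service account password' -AsSecureString |
    ConvertFrom-SecureString |
    Set-Content -Path .\service-account.pwd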

At deployment time, the script uses ConvertTo-SecureString to retrieve the password from the encrypted file and configure the application’s service account. At least, that was the plan: it worked in my interactive proof-of-concept testing but failed to decrypt the password at actual deployment time with the error message “Key not valid for use in specified state”. After quite some digging I found the culprit.
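The deployment-time counterpart looks roughly like this (the account name and file name are illustrative); the ConvertTo-SecureString call is the one that failed:

# rehydrate the SecureString from the encrypted file and build a credential
$securePassword = Get-Content -Path .\service-account.pwd | ConvertTo-SecureString
$credential = New-Object -TypeName System.Management.Automation.PSCredential `
    -ArgumentList 'DOMAIN\svc-app', $securePassword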

The Convert*-SecureString cmdlets use the DPAPI with the current user’s key when a key isn’t specified explicitly. DPAPI depends on the current user’s Windows profile being loaded to obtain that key. When using Enter-PSSession for my interactive testing, the user profile is loaded for me, but when using Invoke-Command inside my deployment script, the user profile is not loaded by default, so DPAPI can’t access the key to decrypt the password. Apparently this is a known issue with impersonation and the DPAPI. It’s also worth noting that when using the DPAPI with a user key for a domain user account with a roaming profile, I found it needs to authenticate to the domain controller – a nice surprise for someone trying to configure delegation to specific services only.

My options now appear to be one of the following:

  1. Use the OpenProcessToken and LoadUserProfile win32 API functions to load the user profile before making DPAPI calls.
  2. Ignore the Convert*-SecureString cmdlets and call the DPAPI via the .NET ProtectedData class so I can use the local machine key for encryption instead of the current user’s key.
  3. Decrypt the password outside the PS Remoting session and pass the unencrypted password into the Remoting session ready to be used. I don’t like the security implications of this.

I’ll likely go with option (2) to get something working as soon as possible and look into safely implementing option (1) when I have more time.
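As a rough sketch of option (2), assuming the LocalMachine scope is acceptable (note the trade-off: any process on the machine can decrypt):

# call the DPAPI directly via .NET, avoiding the user-profile dependency
Add-Type -AssemblyName System.Security
$scope      = [System.Security.Cryptography.DataProtectionScope]::LocalMachine
$plainBytes = [System.Text.Encoding]::UTF8.GetBytes('the-service-password')
$protected  = [System.Security.Cryptography.ProtectedData]::Protect($plainBytes, $null, $scope)
# later, on the same machine, inside the remoting session:
$recovered = [System.Security.Cryptography.ProtectedData]::Unprotect($protected, $null, $scope)
[System.Text.Encoding]::UTF8.GetString($recovered)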

Adopting PsGet as my preferred PowerShell module manager

I recently blogged about what features I think a PowerShell module manager should have and briefly mentioned a few existing solutions and my current thoughts about them. A few people left comments mentioning some other options (which I’ve now looked into). I finished the post suggesting that it would be better for me to contribute to one of the existing solutions than to introduce my own new alternative to the mix. So I did.

I chose Mike Chaliy’s PsGet as my preferred solution (not to be confused with Andrew Nurse’s PS-Get) because I liked its current feature set, implementation, and design goals most. I have been committing to PsGet regularly, implementing the missing features I blogged about previously and anything else I’ve discovered as part of using it for my day job. My favourite contributions that I have made to PsGet so far include:

Since I adopted PsGet, Ethan Brown also contributed some important changes to support modules hosted on CodePlex, the PowerShell Community Extensions being a popular example. I’ve also been experimenting with integrating PsGet with the same web service that Microsoft’s new Script Explorer uses to download the wide selection of scripts and modules available from the TechNet Script Repository. PsGet also handles the scripts hosted on PoshCode well but I think we can improve the search capability.

Looking back at the list of requirements I made for a PowerShell module manager, this is what I think remains for PsGet, most important first:

  1. Side-by-side versioning. While already achievable by overriding the module install destination, getting the concept of a version into PsGet is important, and will improve NuGet integration too.
  2. There is still a lot of work to be done to enable authors to more easily publish their modules. I’ve found that publishing my own modules on GitHub and letting PsGet use the zipballs works well, but it doesn’t handle modules that require compilation (eg implemented in C#).
  3. I still want to add opt-in support for marking downloaded modules as originating from the Internet Zone for those who want to use the features of Execution Policy. I believe the intent of Execution Policy is misunderstood by many and it serves a useful purpose. I might blog about this one day. This is quite a low priority feature though, especially as I recently discovered PSv2 ignores the zone on binary files.

I hope you’ll take a look at PsGet for managing the installation of your PowerShell modules and provide your feedback to Mike, me, and the other contributors via the PsGet Issues list on GitHub.

Beware of unqualified PowerShell type literals

In PowerShell we can refer to a type using a type literal, eg:

[System.DateTime]

Type literals are used when casting one type to another, eg:

[System.DateTime]'2012-04-11'

Or when accessing a static member, eg:

[System.DateTime]::UtcNow

Or declaring a parameter type in a function, eg:

function Add-OneWeek ([System.DateTime]$StartDate) {
    $StartDate.AddDays(7)
}

PowerShell also provides a handful of type accelerators so we don’t have to use the full name of the type, eg:

[datetime] # accelerator for [System.DateTime]
[wmi] # accelerator for [System.Management.ManagementObject]

However, unlike a C# project in Visual Studio, PowerShell will let you load two identically named types from two different assemblies into the session:

Add-Type -AssemblyName 'Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
Add-Type -AssemblyName 'Microsoft.TeamFoundation.Client, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'

Both of the assemblies in my example contain a class named TfsConnection, so which version of the type is referenced by this type literal?

[Microsoft.TeamFoundation.Client.TfsConnection]

From my testing it appears to resolve to the type found in whichever assembly was loaded first; in my example above that would be version 10. However, a single script or module would be unlikely to load both versions of the assembly itself, so you would more likely encounter this situation when two different scripts or modules load conflicting versions of the same assembly, in which case you won’t control the order in which each assembly is loaded and can’t be sure which is first.

It is possible for a script to detect whether a conflicting assembly version is loaded if it is expecting this scenario, but the CLR won’t allow an assembly to be unloaded, so all the script could do is inform the user and abort, or maybe spawn a child PowerShell process in which to execute.
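For example, a guard like this (the expected version is an assumption) could run before any type literals are used:

# abort early if another version of the assembly beat us into the session
$conflict = [AppDomain]::CurrentDomain.GetAssemblies() |
    Where-Object {
        $_.GetName().Name -eq 'Microsoft.TeamFoundation.Client' -and
        $_.GetName().Version -ne [Version]'11.0.0.0'
    }
if ($conflict) {
    throw 'A conflicting Microsoft.TeamFoundation.Client version is already loaded; start a new PowerShell session.'
}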

It is also possible to have two identically named types loaded in PowerShell via another less obvious scenario. If you use the New-WebServiceProxy cmdlet against two different endpoints implementing the same web service interface, PowerShell generates and loads two different dynamic assemblies with identical type names (assuming you specify the same Namespace parameter to the cmdlet). I’ve run into this issue with the SQL Server Reporting Services web service.
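A sketch of that scenario (the endpoint URLs are placeholders):

# same -Namespace, two endpoints: two generated assemblies, identical type names
$proxyA = New-WebServiceProxy -Uri 'http://serverA/ReportServer/ReportService2010.asmx?WSDL' -Namespace SSRS
$proxyB = New-WebServiceProxy -Uri 'http://serverB/ReportServer/ReportService2010.asmx?WSDL' -Namespace SSRS
$proxyA.GetType().FullName -eq $proxyB.GetType().FullName # True
$proxyA.GetType() -eq $proxyB.GetType()                   # False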

To address this issue, referring to my first example, you can use assembly-qualified type literals, eg:

[Microsoft.TeamFoundation.Client.TfsConnection, Microsoft.TeamFoundation.Client, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]

However, these quickly make scripts harder to read and maintain. For accessing static members you can assign the type literal to a variable, eg:

$TfsConnection11 = [Microsoft.TeamFoundation.Client.TfsConnection, Microsoft.TeamFoundation.Client, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a]
$TfsConnection11::ApplicationName

But for casting and declaring parameter types I don’t have a better solution. There are ways to add to PowerShell’s built-in type accelerator list, but they involve manipulating non-public types, which I wouldn’t feel comfortable doing in a script or module I intend to publish for others to use. For the curious, the sketch below shows the mechanism I mean.
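A sketch, assuming the version 11 assembly from the earlier example is already loaded (unsupported, and liable to change between PowerShell versions):

# grab the non-public accelerator cache via reflection
$accelerators = [psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators')
# register a short alias for the assembly-qualified type literal
$accelerators::Add('TfsConnection11',
    [Microsoft.TeamFoundation.Client.TfsConnection, Microsoft.TeamFoundation.Client, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a])
# [TfsConnection11] now works as a type literal for casts and parameter types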

For the New-WebServiceProxy situation, I have created a wrapper function which will reuse the existing PowerShell generated assembly if it exists.