My first anniversary of working at Squixa passed recently and I began to reflect on how different working with an entirely unfamiliar technology stack has been from working with the Microsoft platform, and how it has compared with my initial expectations.
In the early weeks of the new job I began writing down the names of the tools and technologies I was learning each day, but the list quickly grew past 50 items and I stopped updating it. Looking back at that list now, it has become a list of things I use every day: most are now muscle memory, many I have read the source code for, and a number I have submitted patches to for bug fixes or enhancements.
On the Microsoft platform I was a regular user of, and contributor to, open-source projects on CodePlex and GitHub, and more often than not I trusted .NET Reflector over documentation to understand how a component was supposed to work. Over in *nix land though, source code is unavoidable: sadly sometimes as a substitute for documentation, but more often simply as the preferred distribution method. I’ve certainly read a lot more source code each week than I had previously, and in a wider variety of languages, and although it is sometimes tedious it has also taught me a lot. It’s not just developers who need a compiler installed, but any user who looks beyond what their favourite *nix flavour packages for them.
I was never particularly bothered by the lack of a system package manager on Windows, even though I’d heard its absence oft-maligned by *nix folk. Having now used a package manager in anger, I can appreciate just how much effort it saves when automating machine provisioning, but I have also found real challenges with version pinning and with a chosen distribution that does not stay current with new software releases. I’m sure the new Windows 10 PackageManagement (formerly OneGet) will be an awesome step forward for Microsoft in this space.
I wrote a lot of PowerShell before I changed jobs and I revelled in the language’s ability to work with objects and APIs. The typical shell in *nix lacks this, but I’ve rarely had to deal with objects or APIs in my new job. Here everything is a file, normally a plain-text file at that, and so languages focused on text manipulation suffice. Configuration management systems end up spending most of their time overwriting files generated from templates instead of trying to interact with an API in some idempotent manner. Personally though, dealing with pattern matching and character or field offsets still feels brittle to me, and harder to re-comprehend later.
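As a sketch of that brittleness, consider pulling one field out of a passwd-style line (the account below is made up): the extraction works, but only for as long as the field layout never changes, and nothing warns you when it does.

```shell
# Pull the home directory (field 6) from a passwd-style line.
line='deploy:x:1001:1001:Deploy User:/home/deploy:/bin/bash'
echo "$line" | awk -F: '{ print $6 }'
# Prints: /home/deploy
# If the format ever gains or loses a field, $6 silently points at the
# wrong data -- there is no object model or schema to catch the mistake.
```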
There are some popular applications in Linux doing some really awesome tricks. One favourite example is the nginx web server, which can upgrade its own binary on the fly: it launches a new version of itself, hands over the existing connections and listening sockets, and never drops a packet. It’s not that things like this are unachievable on the Windows platform; it’s just that, for whatever reason, nobody is doing them. While Microsoft is still fighting hard against a “just restart it” culture to avoid unnecessary down-time, Torvalds recently merged live kernel patching into Linux.
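For the curious, that nginx upgrade is driven by signals to the master process. The sketch below follows the sequence described in the nginx documentation; it assumes the conventional pid-file location (yours may differ) and simply reports when nginx isn’t running:

```shell
#!/bin/sh
# On-the-fly binary upgrade via signals to the nginx master process.
# Assumes the conventional pid-file path; adjust for your install.
PID_FILE=/var/run/nginx.pid

if [ -f "$PID_FILE" ]; then
  # 1. USR2: the old master starts the new binary alongside itself and
  #    renames its pid file to nginx.pid.oldbin.
  kill -USR2 "$(cat "$PID_FILE")"
  sleep 1
  # 2. WINCH: the old master gracefully shuts down its worker processes;
  #    in-flight requests are allowed to finish.
  kill -WINCH "$(cat "$PID_FILE.oldbin")"
  # 3. QUIT: once the new version looks healthy, retire the old master.
  kill -QUIT "$(cat "$PID_FILE.oldbin")"
else
  echo "nginx is not running; nothing to upgrade"
fi
```

If anything goes wrong after USR2, sending HUP to the old master revives its workers, so there is a clean rollback path too.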
Ultimately though, all the problems are the same across both platforms. You need to understand exactly what each application needs access to so you can constrain it to the least possible privileges – but not everyone does. You hit resource limits on process counts, file handles, network connections, etc., but at different thresholds. You’re susceptible to the same failure conditions, but they often have different failure modes – and rarely the one you would have preferred.
For every difference, there are double the similarities. The platforms have different driving principles guiding which solution to prefer for a given problem, but neither is necessarily better, simply idiomatic. At this point I’m expecting that I’ll continue to use whichever platform my current project requires without any favouritism and hopefully be switching back and forth enough to stay abreast of the latest developments on each.
As part of my new job with Squixa I have been working with Varnish Cache every day. Varnish, together with its very capable Varnish Configuration Language (VCL), is a great piece of software for delivering the best experience for websites that weren’t necessarily built with cacheability or high-volume traffic in mind.
At the same time though, getting the VCL just right to achieve the desired caching outcome for particular resources can be an exercise in reliably reproducing the expected requests and carefully analysing the varnishlog output. It isn’t always possible to find an environment where this can be done with minimal distraction and impact on others.
VclFiddle enables you to specify a set of Varnish Configuration Language statements (including defining the backend origin server), and a set of HTTP requests and have them executed in a new, isolated Varnish Cache instance. In return you get the raw varnishlog output (including tracing) and all the response headers for each request, including a quick summary of which requests resulted in a cache hit or miss.
Each time a Fiddle is executed, a new Fiddle-specific URL is produced and displayed in the browser address bar and this URL can then be shared with anyone. So, much like JSFiddle, you can use VclFiddle to reproduce a difficult problem you might be having with Varnish and then post the Fiddle URL to your colleagues, or to Twitter, or to an online forum to seek assistance. Or you could share a Fiddle URL to demonstrate some cool behaviour you’ve achieved with Varnish.
VclFiddle is built with Sails.js (a Node.js MVC framework) and Docker. It is the power of Docker that makes it fast for the tool to spawn as many instances and versions of Varnish as needed for each Fiddle to execute and easy for people to add support for different Varnish versions. For example, it takes an average of 709 milliseconds to execute a Fiddle and it took my colleague Glenn less than an hour to add a new Docker image to provide Varnish 2.1 support.
The README in the VclFiddle repository has much more detail on how it works and how to use it. There is also a video demo, and a few example walk-throughs on the left-hand pane of the VclFiddle site. I hope that, if you’re a Varnish user, you’ll find VclFiddle useful and it will become a regular tool in your belt. If you’re not familiar with Varnish Cache, perhaps VclFiddle will provide a good introduction to its capabilities so you can adopt it to optimize your web application. In any case, your feedback is welcome via me directly, the @vclfiddle Twitter account, or GitHub issues.
After about five and a half years I have resigned from my job with Readify. I have had a great time working for Readify as a software developer, a consultant, an ALM specialist, and an infrastructure coder. Had a new opportunity not presented itself I could have easily continued working for Readify for years to come. The decision to leave was definitely not easy.
Over the last 16 years working as an IT professional I’ve had the opportunity to gain experience with almost all aspects of software development, system administration, networking, and security – but all of it on the Microsoft platform. I did do some work with Perl and PHP on Apache and MySQL back in the late 90s (as everyone did, I’m sure) but I haven’t spent any quality time with Linux or Mac OS X since.
Starting on June 10th this year (2014) I will begin a new job with Squixa. Squixa provide a set of services for improving the end-user performance of existing web sites and exposing analytics to the web sites’ owners. Squixa’s implementation currently involves very few Microsoft technologies, if any. Consequently, my future includes the exciting experience of learning a new set of operating systems, development languages, web servers, database systems, build tools, and so on.
I still have a passion for PowerShell, and I feel that the direction Microsoft is heading with Azure, Visual Studio Online, and Project K is exciting and promises to become a much better platform than it is today, so I will continue to stay informed of new developments. However, aside from small hobby projects, most of my time, effort, and daily challenges will come from the *nix world, and future blog posts will likely reflect this.
When using Forefront Threat Management Gateway 2010 (TMG) to expose an internal web server to the public Internet, beware of the unexpected side-effects of the “Forward the original host header instead of the actual one (specified in the Internal site name field)” check box, found on the “To” tab of a Web Publishing Rule’s properties dialog.
If the internal web server responds to an incoming HTTP request with a 3xx redirect to a new HTTPS location (for all URLs or only specific ones), and the “Forward the original host header” option in TMG is checked, then TMG interferes: the Location header returned to the original client does not include the HTTPS scheme, and the client’s browser or user-agent gets caught in an infinite redirect loop.
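To see the symptom from a client, one request is enough: fetch the redirecting URL (with something like `curl -sI`) and check whether the Location header ever switches to https. The snippet below simulates the broken response (the hostname is a placeholder, not a capture from a real TMG) so the check itself is clear:

```shell
# Broken case: the redirect keeps the http:// scheme, so the user-agent
# re-requests the same URL forever. Against a live server you would pipe
#   curl -sI http://www.example.com/secure/
# into the same grep instead of the printf.
printf 'HTTP/1.1 302 Found\nLocation: http://www.example.com/secure/\n' |
  grep -i '^location:'
# A healthy redirect would show: Location: https://www.example.com/secure/
```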
Unchecking the “Forward the original host header” option for the rule and applying the changes fixes the issue and the redirection works correctly.
I have been unable to find any official documentation on this behaviour and only two references to anything related. I found one post on the isaserver.org forums (with no responses) suggesting that this behaviour was introduced in ISA 2004 SP2 (ISA being the original name for TMG). The other reference is a Microsoft Support KB article describing that the request Host header passed from TMG to the internal web server can include the port number with the host name – if the port number is embedded I can imagine this impacting the scheme, but it’s a stretch.
Hopefully this post will help the next person hitting the same issue.
sqlcmd.exe -S MySqlServer -E -e -i "%~f0" -o "%~f0.log"