Category: Web Performance
In January I presented at the Sydney ALT.NET user group about HTTPS, covering the new advancements in this space and addressing some long-held misconceptions. It was well received, so I re-presented it at the Port80 Sydney meetup in March.
I met Steve Cassidy from Macquarie University, who was also presenting at the same Port80 meetup, and I was invited to present the talk a third time as a guest lecture to second-year Macquarie University Computer Science students on May 4th. The lecture was filmed but is only available to those with a student login. My slide deck from Port80 is available on SlideShare though.
On May 19th I delivered a breakfast talk about my experience deploying some of section.io’s infrastructure into Azure. The video of this talk is publicly available and so are the slides.
This year my friend Aaron led the organising of the return of the DDD conference in Sydney. I submitted a talk proposal and was fortunate to receive enough votes to earn a speaking slot. So, on Saturday May 28th I presented “Web Performance Lessons”, which covered a variety of scenarios I had encountered while improving the performance of other people’s websites as part of my job at section.io. The talk was recorded by the conference sponsor SSW and is available to watch here. My slides can also be viewed on SlideShare.
At the Port80 meetup in March I also met Mo Badran, who organises the Operational Intelligence Sydney meetup. Mo asked if I could do a presentation on how section.io handles operations, so on Tuesday May 31st I presented “Monitoring at section.io”, where I shared a bunch of detail about our tools and processes for operational visibility at section.io, both for the platform itself and for users of our CDN. Those slides are published on SlideShare too.
I’ll take a break from speaking in June and instead absorb what other people have to share at the Velocity conference in Santa Clara and take the opportunity to also check out the new section.io office in Colorado.
I know this blog has been quiet for a while. I have been posting most of my written content over at the section.io blog lately and will probably continue to blog there more often than here in the near future. Some of my recent posts include:
Command line parsing in Windows and Linux
I have been working almost completely on the Linux platform for the last six months as part of my new job. While so much is new and different from the Windows view of the world, there is also a significant amount that is the same, not surprisingly given the hardware underneath is common to both.
Just recently, while working on a new open source project, I discovered a particular nuance in a behavioural difference at the core of the two platforms. This difference is in how a new process is started.
When one process wants to launch another process, no matter which language you’re developing with, ultimately this task is performed by an operating system API. On Windows it is CreateProcess in kernel32.dll and on Linux it is execve (and friends), typically combined with fork.
The Windows API call expects a single string parameter containing all the command-line arguments to pass to the new process, whereas the Linux API call expects an array of strings, with one command-line argument in each element. The key difference here is where the responsibility lies for tokenising a string of arguments into the array ultimately consumed at the new process’s entry point, commonly the “argv” array of the “main” function found in some form in almost every language.
On Windows it is the new process, or callee, that needs to tokenise the arguments, but the standard C library will normally handle that, and for other scenarios the OS provides CommandLineToArgvW in shell32.dll to do the same thing.
On Linux, though, it is the original process, or caller, that needs to tokenise the arguments first. Often on Linux it is the interactive shell (e.g. bash, ksh, zsh) that tokenises a command-line into individual arguments, applying its own semantics for quoting, variable expansion, and other features along the way. However, at least from my research, if you are developing a program on Linux which accepts a command-line from user input, or parses one out of an audit log, there is no OS function to help with tokenisation – you need to write it yourself.
Obviously, the Linux model allows greater choice in the kinds of advanced command-line interpretation features a shell can offer whereas Windows provides a fixed but consistent model to rely upon. This trade-off embodies the fundamental mindset differences between the two platforms, at least that is how it seems from my relatively limited experience.
PowerShell starts to blur the lines somewhat on the Windows platform, as it has yet another set of parsing semantics, but these apply mostly to calling Cmdlets, which have a very different contract from the single entry point of processes. PowerShell also provides a Parser API for use in your own code.