Category: Uncategorized

minikube with WSL kubectl

Update: minikube 0.29.0 has been released and includes my merged PR so you can enable embedded certificates with minikube config set embed-certs true once and then just symlink your .kube/config file from your WSL home directory to the same file in your Windows home directory.


I recently blogged about how I work with minikube from the Windows Subsystem for Linux (WSL), describing some of the friction points and workarounds.

At the time I recommended using the Windows version of kubectl to avoid needing to translate the Windows file paths found in the .kube/config file to be usable with the Linux version of kubectl. Also, I hadn’t at that time encountered any use cases where using the Linux kubectl inside WSL would work better than the Windows version.

The first scenario most people would likely encounter with different or breaking behaviour would be passing absolute file paths, e.g. kubectl apply -f /home/jason/my.yaml would usually fail to locate the file with the Windows version. This is worked around most often by using paths relative to the working directory.

Another scenario where the Linux version of kubectl is preferred is the TTY support when running kubectl exec --tty mypod. This was the reason I personally decided to get the Linux version of kubectl working with minikube in my WSL environment.

My first approach was to copy the .kube/config file that is created in my Windows user profile directory during minikube start, modify the three certificate paths to be WSL-compatible paths, and save the result in my WSL home directory.

Later I realised (from the files generated by kubeadm init on production Kubernetes clusters) that the certificate entries in the config file don’t need to be paths, but can have the certificate content embedded as base64 blobs. Naturally I wrote a bash script that I could run from WSL to perform these steps for me each time my minikube IP address or certificates changed (which to be fair, isn’t often). The script will use the translated paths approach by default but if executed with the --embed parameter it will use the embedded certificates alternative.

After using this solution for a while I began wondering why minikube didn’t just generate a .kube/config file with embedded certificates so WSL support could be solved with a simple symlink instead of copying and rewriting the file each time it changed.

So I dived into the minikube source and raised a pull request; as of the time of writing this post, the PR has been merged into master and is awaiting an official release. Once the new version of minikube is published (or if you’re keen to build it from source yourself), you will be able to execute minikube config set embed-certs true once and then minikube will always generate a .kube/config file with the certificates embedded as base64 blobs.

Then you can symlink your WSL ~/.kube/config file to your Windows %USERPROFILE%/.kube/config file and use either version of kubectl with no ongoing management.
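As a sketch of that one-time setup (the /mnt/c/Users/jason profile path is a placeholder for your own Windows user profile as seen from WSL, and the symlink replaces any existing ~/.kube/config):

```shell
# Enable embedded certificates (requires minikube >= 0.29.0);
# guarded so this is a no-op if minikube isn't on the PATH.
if command -v minikube >/dev/null; then
  minikube config set embed-certs true
fi

# Placeholder: your Windows profile directory as seen from WSL.
WINHOME=/mnt/c/Users/jason

# Point the WSL kubeconfig at the one minikube maintains on Windows.
mkdir -p "$HOME/.kube"
ln -sf "$WINHOME/.kube/config" "$HOME/.kube/config"
```

After the next minikube start, both the Windows and Linux kubectl binaries read the same file.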

PS: hat tip to Nuno do Carmo who also found a solution to embedding the certificates in the .kube/config file by using a pair of kubectl config ... --embed-certs commands. See the “Bonus 3: Do the same with Minikube” section of his extensive blog post “WSLinux+K8S: The Interop Way”.

Sennheiser Presence headset review

Update: I’ve been using the Sennheiser Presence for several hours every weekday for a year now and I’m still very happy with it and recommend it to others.

I work from home nearly full-time, and the rest of my team works remotely too, so I spend a decent amount of time on VoIP calls for scheduled meetings, paired debugging sessions, and general chit-chat.

For at least the last four years I’ve been using a Plantronics .Audio 615M USB headset that I received free at a conference back around 2009 and it has been working very well. Built-in Windows drivers recognise it as a USB communications headset so it becomes the default microphone and speaker for Slack Calls, Zoom, Google Hangouts, etc. I also love that it is a single-ear headset so I can still be aware of my surroundings while I’m wearing it, and the over-ear design (as opposed to in-ear) means it is comfortable to wear for extended periods.

What frustrated me about this headset was that I was tethered to my desk during a call, and I couldn’t use the same headset with my phone whilst on the go. The long cable also made it awkward to use when working from a café. So I started looking for an alternative.

I found TechRadar’s best Bluetooth headsets 2018 article and was drawn to the Sennheiser Presence UC at position 5 primarily because it “can connect to phone and laptop at the same time for easy switching”. The same article also called this headset “not the most comfortable” but I’ve had good experiences with other Sennheiser products, so I checked out the official product page where I discovered that an over-ear headband and charging stand were also available for this device. So I took a chance, and ordered the whole set.

I’ve been using the new Presence headset for over a week now and I am very happy with the product. It paired to my Android phone trivially, and the USB dongle for the laptop required no special drivers. Like the old Plantronics, the new Sennheiser headset also became the default device for my VoIP applications.

The USB dongle also has a light indicating that the headset is paired (dull blue), active (bright blue), or even disabled (red) because I toggled the “microphone mute” key on my laptop keyboard.

Battery life is supposedly 10 hours of talk time but with the charging stand in easy reach, I leave the headset there when I’m not on a call and I don’t need to think about the battery. The lack of cable also means I’m not getting it tangled in my chair wheels, and I can step away from my desk to refill my coffee while I’m on a call.

So far I have only two complaints with the Presence headset: firstly, the slider switch to power the device on and off is a little too easy to slide accidentally when picking up the headset from the stand to put it on for a call. I need to be mindful of how I pick it up to avoid waiting for a power-off, power-on, pair cycle.

Secondly, it is not simple to remove the over-ear headband and revert back to the in-ear configuration. There are a couple of small pieces that need to be found and reconnected, and later removed again to re-attach the headband, and it would be easy to lose them. As such I think I’ll just be using the headband accessory while I’m travelling.

minikube and WSL

I develop services that run on Kubernetes. During development minikube provides a convenient way to run a local Kubernetes “cluster” regardless of whether you use Windows, OS X, or a Linux distribution as your host OS.

Day-to-day I use minikube on Windows 10 and I prefer to use the Windows Subsystem for Linux (WSL) bash shell to have a scripting environment consistent with my colleagues, some of whom do not use Windows, and consistent with the CI system.

The Linux binary of minikube isn’t very useful in WSL since it doesn’t support the Hyper-V driver, and the VirtualBox driver cannot deal with the path differences it sees within WSL compared to those reported by VBoxManage.exe.

However, when running the Windows minikube.exe binary, many of the commands (e.g. start, stop, ip, dashboard) just work without any special configuration. Furthermore, creating a symlink so minikube can be executed on the PATH without the .exe extension easily improves the default experience. Beyond these initial commands though, some extra effort is required.
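The symlinking step can be wrapped in a small helper; this is a sketch, and the function name link_windows_exe is my own invention. It assumes the .exe is already resolvable on the PATH via WSL interop:

```shell
# Link a Windows .exe found on the PATH so it can be invoked without
# the extension, e.g. `link_windows_exe minikube.exe /usr/local/bin`
# (prefix with sudo when the destination needs root).
link_windows_exe() {
  local name="$1" destdir="$2" exe
  exe="$(command -v "$name")" || return 1   # not found on the PATH
  ln -sf "$exe" "$destdir/${name%.exe}"     # minikube.exe -> minikube
}
```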

SSH can be a little flaky with minikube ssh, so I find it better to create an alias that uses WSL’s ssh client:

alias minikube-ssh='ssh -a -i "$(wslpath -u "$(minikube ssh-key)")" -l docker "$(minikube ip)"'

You may get an error from this SSH command that the permissions of the identity file are too open. This is fixed in two steps. Firstly ensure you have added metadata to the automount options in your /etc/wsl.conf and restart your WSL session.
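For reference, the relevant /etc/wsl.conf section looks something like this (a sketch; your file may already contain other options to merge with):

```ini
[automount]
options = "metadata"
```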

Secondly, change WSL’s view of the permissions of the key file:

chmod 0600 "$(wslpath -u "$(minikube ssh-key)")"

The minikube docker-env command doesn’t recognise the WSL environment and outputs the PowerShell environment commands instead. This can be worked around by passing the --shell bash argument, but the DOCKER_CERT_PATH environment variable value won’t work with the docker Linux binary as-is, and the Windows binary needs the WSLENV environment variable set appropriately. These extra steps are enough to justify a helper script which I’ve published as a GitHub Gist. With this script dot-sourced, both the Windows and Linux binaries for the Docker client will work with Minikube’s Docker daemon.

Lastly, use the Windows binary for kubectl. The paths for Kubernetes certificates in the .kube/config file make it difficult to use the kubectl Linux binary and so far I haven’t found a problem with the Windows version.

wslpath and mktemp

The Windows Subsystem for Linux (aka WSL or Bash on Ubuntu on Windows) provides a fantastic reproduction of a local Linux environment without needing a virtual machine.

Even better than a virtual machine, WSL includes a lot of conveniences for interoperating with the host Windows file system and processes. That is, I can access my C: drive via /mnt/c/ and I can pop calc via calc.exe.

File paths in Linux and Windows are, of course, quite different, so WSL performs some translations where it can (e.g. for the current working directory) and provides the wslpath utility for explicit conversions where necessary.

Recently I discovered that even though the root file system of my particular WSL installation is accessible from Windows (via %LocalAppData%/lxss/rootfs in my case), WSL will not translate arbitrary Linux paths to paths within this rootfs directory. This is because WSL is designed around the idea that Windows processes should not modify WSL files.

However, I work with various version-controlled scripts shared amongst developers on Mac, Linux, and Windows (via Cygwin mostly) that use /tmp/ as a staging area (via mktemp), and when using WSL, Windows processes cannot see this directory. If the current working directory is in /tmp/, the working directory of the Windows process will become the Windows user profile directory instead, and running wslpath -w /tmp/ just returns Result not representable.

To avoid modifying the shared scripts to be WSL-aware, I instead converted my WSL tmp directory to be mounted from the Windows host file system via the following set of commands.

First, create the directory to use as WSL’s /tmp/. I chose C:\wsltmp out of convenience, but it could be any path you prefer.

$ mkdir -p /mnt/c/wsltmp
$ sudo chown root: /mnt/c/wsltmp
$ sudo chmod 1777 /mnt/c/wsltmp # tmp dir should have the sticky bit set

Also, to ensure Linux’s case-sensitivity is honoured for this directory, run the following from an elevated PowerShell:

> fsutil.exe file setCaseSensitiveInfo c:\wsltmp enable

While NTFS has supported opt-in case-sensitivity for a very long time, it has only recently supported setting it per directory.

Finally, define the mount in WSL and mount it:

printf '\n/mnt/c/wsltmp\t/tmp\tnone\tbind\t0\t0\n' | sudo tee -a /etc/fstab
sudo mount -a

Now your WSL session, and future sessions (assuming you haven’t disabled mountFsTab), will have a /tmp/ directory which will be correctly translated for Windows processes.

Warning: if you use ssh-agent in WSL, mounting /tmp/ to a DrvFS volume instead of LxFS means the ssh-agent socket (in /tmp/ssh-*/agent.*) will not be available for WSL processes to connect to; it will only be accessible by Win32 processes, and is therefore not useful for typical scenarios.

Inspecting Docker container processes from the host

While I favour a containerize-all-the-things approach to new projects, I still need to maintain systems that were designed several years ago around a combination of containers and host-based applications working together.

In these situations it is common enough to execute ps or iotop on the host and see all the host and container processes together with no obvious indication of which processes belong to which containers.

Here I will share some simple commands to help map the host-view of a containerised process to its container.

First, given a host PID, how do I know which container it belongs to?

$ cat "/proc/${host_pid}/cgroup"

The procfs cgroup file will show the full Docker container ID which you can then use with docker inspect to get more container details.
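Extracting the ID can be scripted; this is a sketch, and the helper name container_id_from_cgroup is my own invention (it reads the cgroup file on stdin):

```shell
# Extract the 64-hex-character Docker container ID from cgroup data
# supplied on stdin; prints nothing for a non-containerised process.
container_id_from_cgroup() {
  grep -o -m1 '[0-9a-f]\{64\}'
}
```

For example: cid="$(container_id_from_cgroup < "/proc/${host_pid}/cgroup")" followed by docker inspect "$cid".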

Vice-versa if you have the container ID and want to locate the host process(es) you can use:

$ sudo ps -e -o pid,comm,cgroup | grep "/docker/${cid}"

Lastly, if you’re trying to debug a container process from the host and need the host-path to the process’ binary, I have found a method that has been working reliably.

Unfortunately, because the procfs exe file is a symbolic link, not a hard link, it won’t resolve to the file within the container’s layered file system, so a few extra steps are required.

First, read the symlink to get the fully-qualified container-path to the binary:

$ exe=$(readlink "/proc/${host_pid}/exe")

Next, parse the process’ memory-mapped files to locate the first memory region referencing this file path:

$ map=$(grep -m1 -F "${exe}" "/proc/${host_pid}/maps" | cut -d' ' -f1)

Lastly read the symlink for this memory map from procfs’ map_files directory:

$ readlink "/proc/${host_pid}/map_files/${map}"

The final output will be a path into the container’s layered file system on the host, for example under /var/lib/docker/overlay2/. Note that the long identifier in that path is not the container ID, nor is it available via docker inspect, although I’m sure someone else has posted online how to locate this path via other means.
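The three steps can be combined into a small helper; a sketch (the function names are my own, and reading map_files may require root for processes you don’t own):

```shell
# Print the first address range in maps data (stdin) that maps the
# given path, e.g. "55d2c3a00000-55d2c3b00000".
first_map_of() {
  grep -m1 -F "$1" | cut -d' ' -f1
}

# Print the host-side path to the binary of a (possibly containerised)
# process, given its host PID.
host_path_of_exe() {
  local pid="$1" exe map
  exe="$(readlink "/proc/${pid}/exe")" || return 1
  map="$(first_map_of "$exe" < "/proc/${pid}/maps")"
  [ -n "$map" ] || return 1
  readlink "/proc/${pid}/map_files/${map}"
}
```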

Lessons from DigitalOcean Networking

Update: On 2017-DEC-13, DigitalOcean announced that private networking will be isolated to each account beginning February 2018.

If you’ve come from running virtual machines on AWS, Azure, or Google Cloud, you will be familiar with the idea that the VMs can have a public Internet-facing IP address and a private IP address, or some combination or multiple of the two options.

DigitalOcean offers something similar, but just different enough to throw you when you’re accustomed to the networking models of the other cloud providers. When you create a DigitalOcean Droplet via their Control Panel, or via their API, you have the option to enable “Private networking” but when you read the official documentation, this feature is actually called “Shared private networking” and it is a very important distinction.

Where private networking in AWS, Azure, or Google Cloud gives your VM a private interface to a network shared only with your VMs, the shared private networking in DigitalOcean is, according to this DigitalOcean tutorial, “accessible to other VPSs in the same datacenter–which includes the VPSs of other customers in the same datacenter”. And I have verified that statement is true.

To clarify, if you enable private networking on a DigitalOcean VM in their SFO2 region, every other VM in the SFO2 region from every other DigitalOcean customer can route packets to your VM’s private network interface. While I advocate the use of strict firewall configurations in any cloud hosting environment, the importance of doing so correctly is much higher on DigitalOcean, even for non-Production environments where firewalls have a history of being more relaxed.

The bright side of all this is that DigitalOcean’s tag-based Cloud Firewall applies to both the public and private network interfaces and implements a deny-by-default behaviour. By using tags to restrict which other droplets are permitted to communicate on specific ports and protocols you can achieve a very similar level of isolation as offered by other cloud providers.

There is another caveat though: to improve the security of this shared private networking environment, DigitalOcean do not allow VMs to send packets with a source IP address that does not match their assigned private IP address. This prevents you, for example, from operating one DigitalOcean VM as a Virtual Private Network gateway for your other DigitalOcean VMs to connect through to another non-DigitalOcean private network.

In summary, while DigitalOcean provides a great service, and adds new features seemingly every quarter, its conceptual model is slightly out of sync with the big-name cloud companies and you need to be mindful of this; although I guess the same would be true for people experienced with DigitalOcean moving to AWS or Azure.

Finding deleted code in git

Recently, Matt Hilton blogged about Source Control Antipatterns which included the practice of commenting code instead of deleting the code.

As wholeheartedly as I agree with deleting code, I know that a popular objection is that deleted code is harder to find. While it might be harder than your favourite editor’s Find In Files feature, it is important to know how to use the tools central to your development workflow.

For my work, and seemingly the majority of projects today, git is the version control tool of choice. So I’m sharing some git commands here that I have found useful for locating deleted code. I’m using the Varnish Cache repository for my examples if you want to try them yourself.

If you know some text from the code that was deleted, you can find the commit where it was deleted. In this example I’m looking for when the C structure named smu was deleted.

$ git log -G "struct +smu" --oneline

766dee0 Drop long broken umem code

If you know the name of a file that was deleted, but aren’t sure which directory the file was in, you can find the commit when the file was deleted with:

$ git log --oneline -- '**/storage_umem.c'

766dee0 Drop long broken umem code
b07c34f include cleanup - found by FlexeLint
75615a6 When I grow up, I want to learn to program in C


If you’re not even sure what the deleted file was named but just want to see recent commits with deleted files, you can use:

$ git log --diff-filter=D --summary

commit f4faa6e3c431d6ccf581f5683af56008e4d4be10
Author: Federico G. Schwindt <>
Date: Fri Mar 10 18:59:14 2017 +0000

Fold r00936.vtc into vcc_action.c tests

delete mode 100644 bin/varnishtest/tests/r00936.vtc

There is a lot more you can do with git log than just find deleted code but hopefully these examples are a useful start.
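If you’d like to experiment without cloning Varnish, the same commands can be tried end-to-end in a throwaway repository; a minimal sketch (the file path and commit messages here are invented for the demo):

```shell
# Create a scratch repo, commit a file, then delete it in a second commit.
repo="$(mktemp -d)"
git -C "$repo" init -q
mkdir -p "$repo/lib"
printf 'struct smu { int x; };\n' > "$repo/lib/storage_umem.c"
git -C "$repo" add .
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'Add umem code'
git -C "$repo" rm -q lib/storage_umem.c
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'Drop umem code'

# Find the deleting commit by content, by file name, and by deletion:
git -C "$repo" log -G 'struct +smu' --oneline
git -C "$repo" log --oneline -- '**/storage_umem.c'
git -C "$repo" log --diff-filter=D --summary
```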