Shared Drives with Docker for Windows

I’ve mentioned in a previous post that Docker recommends avoiding volume mounts from the Windows host, but sometimes you just have to have them. To get that going, you’ll want to set up the Shared Drives feature in your Docker for Windows settings. You’ll only need this feature if you need to share files from your Windows host to Linux containers. If you are working with Windows containers, it shouldn’t be necessary, per the Docker documentation.

Simply select the checkbox for the drive letter you want to share and you will be prompted for credentials. After that, the drive letter should remain checked and you’ll be able to mount volumes under your user’s home directory.
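For example, once the C drive is shared, a bind mount from your profile should just work. A minimal sketch, with a hypothetical path:

docker run --rm -v C:/Users/yourname/demo:/data alpine ls /data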

In my case, the only account on my machine with Administrator privileges was my “domain” account… aka DOMAIN\username, which was prefilled in the credentials box. Upon entering my password, Docker for Windows thought a bit, reported that it was updating the settings, cleared the checkbox and then declared itself finished, leaving my C drive unshared. Grrr.

It occurred to me that maybe Docker for Windows didn’t like the DOMAIN\username format, so I tried the UPN format instead – username@domain.com. That immediately failed as an invalid account, even though when I checked my account settings on my PC, the account is clearly listed as DOMAIN\username. I call this a “domain account”, however the machine is not domain joined, so it’s authenticating via Azure AD. I did some hunting around and found related issues going back to 2016 that don’t seem to have a clear resolution – https://github.com/docker/for-win/issues/132 and https://github.com/docker/for-win/issues/303

In addition to my work account, I also have my personal MSA (Microsoft Account) as an alternate account on this machine. It didn’t have Administrator rights, but I figured it was worth a shot. I entered my MSA email address and password and lo and behold… it worked! The C drive checkbox stayed checked and Docker was able to mount some local volumes. Due to the non-administrative nature of that account, I did find I had to grant some additional file sharing on a subfolder needed during a Docker build, but otherwise I was good to go.

The end result: if you are having problems turning on the Docker for Windows Shared Drives feature, you may need to use or create an alternative local account.


Azure Containers, SSH Keys and Windows

When working with containers on Azure, there are a couple of things to keep in mind around key management. I’ll use Azure Container Service (AKS) for context here, but in the end, keys are keys.

You have two options when creating a cluster on AKS:

az aks create --resource-group YourRG --name YourCluster --generate-ssh-keys

az aks create --resource-group YourRG --name YourCluster --ssh-key-value \PATH\TO\PUBLIC\KEY

With --generate-ssh-keys, Azure will automatically create the necessary key pair for you, named id_rsa and id_rsa.pub, in the $HOME\.ssh folder of the machine that created the cluster. If keys with those names already exist there, it will re-use them.
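If you want to see whether a default key pair already exists (and would therefore be re-used), a quick check from the Windows command prompt:

dir %USERPROFILE%\.ssh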

Once your cluster is created, you’ll use

az aks get-credentials --resource-group YourRG --name YourCluster

to download an access token that sets the current context for your session, so you can manage the cluster and deploy containers.
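Once that’s done, a quick kubectl sanity check confirms you’re pointed at the right cluster (assuming kubectl is already installed):

kubectl config current-context
kubectl get nodes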

If you happen to work from more than one machine, or expect other people to access this cluster or make other clusters using the same keys, you need to share these auto-created keys appropriately. I work from two different machines, wasn’t paying attention and ended up with two different “default” sets of keys. I awkwardly discovered this when I created a cluster on my home machine, traveled with my laptop and then found myself unable to access the cluster while out of town. Joys.

Using “--generate-ssh-keys” shall henceforth be known as “the lazy way” of key management.

To do this better, create your keys manually, put them in a secure location accessible by those who matter, and then make your clusters using “--ssh-key-value” instead. (Let’s call this the “thoughtful way.”) You will also need to provide the path to the key when requesting the access token. For example:

az aks get-credentials --resource-group YourRG --name YourCluster --ssh-key-value \PATH\TO\PUBLIC\KEY
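For the record, you can also generate the key pair with ssh-keygen from WSL or Git Bash and feed the public half straight to the create command. A sketch, with a hypothetical file name:

ssh-keygen -t rsa -b 2048 -f ~/keys/mycluster_rsa

az aks create --resource-group YourRG --name YourCluster --ssh-key-value ~/keys/mycluster_rsa.pub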

As I’m a Windows user, I use PuTTYgen for my key creation. I will refrain from reinventing the wheel on how to do this, as there are already some pretty comprehensive posts, either in Microsoft Docs or this one by Pascal Naber.

A Note about AKS vs ACS: As of this writing, you have two different ways of creating container clusters in Azure. ACS allows you to create clusters orchestrated with Kubernetes, Docker Swarm or DC/OS. Due to the way these clusters are created, you have full access to the master node VM. If you’ll be using PuTTY to connect to the master node of your ACS cluster directly, you’ll need a PuTTY-specific PPK file for your private key and to specify it in your PuTTY session settings. If you create a Kubernetes cluster using AKS (as I did in my examples above), you won’t have SSH access to the master node.

A Note About Service Principals: In addition to automatically generating keys, AKS/ACS will automatically generate the necessary service principals. However, it won’t generate a new SP for each cluster; if there is already a suitable SP in your subscription, it will re-use that one. Keep that in mind for your production clusters – you may want to provide different service principals for different clusters. You can read more about setting up Azure AD SPs for AKS if you so desire.
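If you’d rather be explicit about it, here’s a hedged sketch of creating and passing your own SP (the SP name is hypothetical, and the appId/password come back from the first command):

az ad sp create-for-rbac --name MyAksClusterSp

az aks create --resource-group YourRG --name YourCluster --service-principal <appId> --client-secret <password>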

Working with Containers while working on Windows

With all the rage about containers these days, you may be wondering how to get started and how to be successful if Windows is your preferred client OS. One of the cool things about working with containers from a Windows machine is that you can work with both Linux and Windows containers. This post will focus on working with Linux containers, but you’ll need all these tools for working with Windows containers too.

For building containers and working with images locally, you’ll need Docker for Windows. Just go with the default installer options and you should be ready to go in short order. You will need a machine that supports virtualization and has those features enabled. When you work with Windows containers, they run directly on your OS. When you work with Linux containers, they run on a Hyper-V VM you can find if you run Hyper-V Manager on your machine.
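Once the install finishes, a quick sanity check confirms everything is wired up:

docker version

docker run --rm hello-world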

It’s worth noting that if you are going to be working with persistent or shared volumes, they work a little differently on your Windows machine. Docker recommends that you use the --mount flag with volumes, and when using them with Linux containers, it’s better to share from the Linux Moby VM and avoid using the Windows host directly. However, if you need to use the host directly, you can, by sharing the required drive via the Shared Drives feature under the Docker for Windows settings.
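To make the distinction concrete, here’s a minimal sketch of both styles (volume name and paths are hypothetical):

docker run --rm --mount type=volume,source=appdata,target=/data alpine ls /data

docker run --rm --mount type=bind,source=C:/Users/yourname/project,target=/app alpine ls /app

The first keeps the data inside the Moby VM; the second bind-mounts from the Windows host and requires the drive to be shared.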

For deploying containers to Azure, you will want the latest version of the Azure CLI 2.0. You DO NOT want anything less than version 2.0.21, trust me. You will use the Azure CLI to do things like create and manage container services (either ACS or AKS), push images to Azure Container Registry, deploy containers to Azure Container Instances and get the credentials to connect to those resources.
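A few representative commands from that workflow, with hypothetical resource names:

az --version

az acr create --resource-group YourRG --name yourregistry --sku Basic

az container create --resource-group YourRG --name demo --image nginx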

Once you are connected to those resources (particularly if they are going to be used for Linux containers), you’ll be using the same tools as anyone working from a Linux client, such as kubectl for deploying containers to a Kubernetes cluster.

For sanity-checking purposes, I also make sure I have Windows Subsystem for Linux (aka “Bash on Windows”) installed, with the latest version of Azure CLI 2.0 installed in that environment too. I can usually do everything I need from CMD, but sometimes a strange error has me double-checking my work in “Linux-land”. 🙂 Speaking of WSL, if you really want to trick out your WSL setup, read this.
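Installing the CLI inside WSL follows the standard Linux instructions; the documented curl-to-bash installer looks like this (double-check the current docs before piping anything to bash):

curl -L https://aka.ms/InstallAzureCli | bash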

Once you have Linux container hosts deployed in Azure, you may want to connect to one directly using SSH – perhaps your Kubernetes master node. I use PuTTY for this, because I like being able to save my connection settings in the application to use again when I’m working on a project over several days. You will need to convert your SSH keys to the PPK file type with PuTTYgen before using them to connect to a Linux container host. (More to come on key management later, I promise.)

So to sum up… To get started with containers on a Windows machine, you need:

  • Docker for Windows
  • Azure CLI 2.0
  • PuTTY and PuTTYgen

Happy Containerizing… and if you run into some “beyond the basics” challenges, let me know in the comments.

Keeping Up with the Releases

There are a lot of great things to say about the faster release cycles we see with software these days. Bugs are fixed and features become available to us sooner, and security issues are resolved more quickly too. In a lot of cases, our operating systems and software packages are smart enough to check themselves and let us know updates are available, or to install them automatically.

I work between two different machines regularly and, depending on my schedule, sometimes favor one machine over the other for several weeks at a time. For better or for worse (mostly for the better), Windows 10 takes care of itself for me, as do Visual Studio Code and Docker for Windows. This means I often find myself sitting down at the “other” machine and once again waiting for those updates to install. While I admit to sometimes rolling my eyes in frustration when I get an update alert, I do appreciate that I don’t otherwise have to think about those updates.

But for software that doesn’t automatically update, I sometimes find myself wondering why demo notes I drafted on one machine suddenly aren’t working when I try them on the other machine – or worse, blaming the documentation for being incorrect when the commands don’t work as instructed.

When it comes to documentation freshness vs. software freshness… let’s not go there today. I generally start with docs.microsoft.com when I’m looking for information about Azure and other Microsoft products. While nothing is immune to errors or going out of date, more often than not my problems exist between my keyboard and monitor – in the form of some piece of software needing an update.

The top two things on my machines that I have to manually update regularly are:

  • Azure CLI 2.0 – Instructions for Installing or Updating Azure CLI 2.0
    • Type “az --version” at your command line to see what version you are running. As of this writing (10/17/17) the current version is 2.0.19.
    • If you aren’t a regular Azure CLI user and just want to try it out via the Azure Portal, check out the Cloud Shell.
  • Azure PowerShell – Instructions for Installing or Updating Azure PowerShell 4.4.1
    • I recommend the command line installer for this one, but if you want to do something other than that (like install within a Docker container) you can find those instructions here.
    • You can check your version of Azure PowerShell by typing “Get-Module AzureRM -list | Select-Object Name,Version,Path” at the PowerShell command line; if you don’t get any response back, you don’t have Azure PowerShell installed at all.
    • Also, don’t confuse the Azure PowerShell modules with the PowerShell that comes on your Windows machine itself. That’s at version 5.1 right now if you have Windows 10 with updates turned on; you can check by typing “$PSVersionTable” at your PowerShell command line. If you want instructions for running the beta version 6, you can find all that information here with the general installation instructions. (All three version checks are collected in the snippet after this list.)
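For quick reference, here are those version checks in one place (the first works at CMD or PowerShell, the last two are PowerShell):

az --version

Get-Module AzureRM -ListAvailable | Select-Object Name,Version,Path

$PSVersionTable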


Windows containers, dockerfiles and npm

As part of my adventure with the IoTPlantWatering project, I ran into the issue of not being able to automatically launch “npm start” from within a Windows container using this command in my dockerfile, which would work just fine if this were a Linux container.

CMD [ "npm", "start"]

If I built the container without this command, connected to it interactively and typed “npm start”, it worked fine. What gives? For Windows, you need to use:

CMD [ "npm.cmd", "start"]

Here are a couple of links that give you a little more context as to why, but if nothing else, just remember – npm.CMD!
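For the curious, a minimal sketch of how that CMD line fits into a full Windows dockerfile – assuming a base image that already has Node.js installed, like the windowswithnodejs:v1 image built elsewhere on this blog:

FROM windowswithnodejs:v1
WORKDIR C:/app
COPY . .
RUN npm.cmd install
CMD ["npm.cmd", "start"]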

EPS Files and Office: Not Quite Better Together

Today I learned that the EPS file format has been unsupported in Office applications since April 11, 2017. I don’t often have a need to embed EPS files in my documents, but I do live with a designer, and he loves to send EPS files for various family-related side projects. Since he’s a Mac user, it’s not an issue for him, as Office 2011 and Office 2016 for Mac are not affected by this change.

If you’d like to read more about it, the details are here – https://support.office.com/en-us/article/Support-for-EPS-images-has-been-turned-off-in-Office-a069d664-4bcf-415e-a1b5-cbb0c334a840.

Meanwhile, I’ll go ask for some JPEGs.

Containers with Windows and Node.js

Recently, I’ve been working on a project with one of my TE colleagues, who hails from the “developer” side of the house. One of the challenges of being interested in infrastructure and less interested in writing applications is that I often lack something to build infrastructure for. So this has been a great opportunity to have something to focus my Azure credits on.

So for this project, we agreed to combine some of the things she wanted to do (Internet of Things, Power BI, bots, etc.) with some of the things I wanted to learn more about, like containers and Service Fabric. The result was an idea for a sensor that would detect soil humidity and air temperature (IoT) for plants, report that data to the cloud for collection (via IoT Hub and Cosmos DB) and make it available via Power BI for review. Ideally, having a bot that lets me know when my plants need watering would really help with my lack of a green thumb. 🙂

As part of this, we needed to deploy an API that took the data from the IoT Hub and moved it to the database. We also needed a front-end web application to show the collected information. Both of these applications were going to be written in Node.js.

Now, before you start tearing apart what is clearly going to be overkill for a project this size, keep in mind we know we could do all of this with PaaS offerings. But that would be less “fun”! You can check out the project at https://github.com/jcocchi/IoTPlantWatering and see that we’ve listed out many of the possible architecture scenarios. However, this post is about putting one of those Node.js applications in a container.

Step 1: Get Node.js onto a Windows Server Core container

Now, you’ll find plenty of information on the web about creating a Docker container with Node.js, particularly if you’d like to run it on Linux. Combine that with the fact that Node.js is most easily installed on Windows with the MSI, and you’ll find a lot less documentation about getting it onto a Windows container. However, I came across this somewhat dated documentation and sample, which got me started. It’s circa November 2016, which is a lifetime ago at this point, and it references Server 2016 TP3, when Microsoft still offered a choice between managing Windows containers with PowerShell or Docker. I edited the HybridInstaller.ps1 script to download the latest version of Node.js and then followed the rest of the instructions in the “docker-managed” section.

The key bits are to download the HybridInstaller.ps1 and dockerfile to a new folder, then run:

docker build -t windowswithnodejs:v1 C:\YOUR\FOLDER

You’ll end up with an image tagged “windowswithnodejs:v1” that you can then use as a base for the next steps.

Step 2: Make Sure Node Is Actually Installed

At this point I had a local image available to run, and I wanted to make sure that I really had installed Node.js correctly. For that, you can find some handy instructions here for connecting interactively to a Windows container. The whole wiki is actually very informative if you are new to Windows containers.
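The check itself is quick. Open an interactive session in a new container and ask Node directly (the version commands run inside the container):

docker run -it windowswithnodejs:v1 cmd

node --version
npm --version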

Step 3: Install A Node.js Application

We have two Node.js applications in our project, but I started with the simpler of the two – the RecieveHubMessages app. My project partner had nicely detailed the installation process and dependencies, so I was able to clone the application code to my desktop, create the necessary .env file (because you don’t want your secrets in GitHub!) and put together a dockerfile to build a fresh image based on my image with Node.js already installed. The process is exactly the same as Step 1 above, just using docker build with a different dockerfile and a folder with the right application code in it.
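In other words, something along these lines, with a hypothetical image tag:

docker build -t receivehubmessages:v1 C:\YOUR\APP\FOLDER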

After this was complete, I ran a container with the new image, connected to it and confirmed that the application was running. Since our goal was to deploy this application in Azure, I also created an Azure Container Registry to host the image. From there, I was able to deploy it to Azure Container Services (using Kubernetes) and Azure Service Fabric. (More later on the differences between ACS and Service Fabric.)
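Getting the image into the registry follows the usual tag-and-push pattern; a sketch with hypothetical names:

az acr login --name yourregistry

docker tag receivehubmessages:v1 yourregistry.azurecr.io/receivehubmessages:v1

docker push yourregistry.azurecr.io/receivehubmessages:v1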