Working with Containers while working on Windows

With containers being all the rage these days, you may be wondering how to get started and make sure you can be successful if Windows is your preferred client OS. One of the cool things about working with containers from a Windows machine is that you can work with both Linux and Windows containers. This post will focus on working with Linux containers, but you’ll need all these tools for working with Windows containers too.

For building containers and working with images locally, you’ll need Docker for Windows. Just go with the default installer options and you should be ready to go in short order. You will need a machine that supports virtualization, with those features turned on. When you work with Windows containers, they run directly on your OS. When you work with Linux containers, they run on a Hyper-V VM, which you can see if you run Hyper-V Manager on your machine.
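
Docker for Windows will offer to enable Hyper-V for you if needed, but if you want to verify the feature yourself, a quick check from an elevated PowerShell prompt looks like this (just a sanity check, not a required install step):

Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All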

It’s worth noting that if you are going to be working with persistent or shared volumes on your containers, they work a little differently on your Windows machine. Docker recommends that you use the --mount flag with volumes, and when using them with Linux containers, it’s better to share from the Linux MobyVM and avoid using the Windows host directly. However, if you need to use the host directly, you can, by sharing the required drive via the Shared Drives feature under the Docker for Windows Settings.
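
For example, a named volume created and mounted with --mount lives inside the MobyVM, so you sidestep the Windows host path sharing entirely (the volume and image names here are just placeholders):

docker volume create appdata
docker run -d --mount source=appdata,target=/data nginx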

For deploying containers to Azure, you will want the latest version of the Azure CLI 2.0. You DO NOT want anything less than version 2.0.21, trust me. You will use the Azure CLI to do things like create and manage container services (either ACS or AKS), push images to Azure Container Registry, deploy containers to Azure Container Instances, and get the credentials to connect to those resources.
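
To give you a flavor of what that looks like, here are the sort of commands involved; the resource group, registry and cluster names are made up, and the exact syntax can shift between CLI versions, so check az --help if something complains:

az acr create --resource-group myRG --name myregistry --sku Basic
az acs create --resource-group myRG --name mycluster --orchestrator-type Kubernetes
az acs kubernetes get-credentials --resource-group myRG --name mycluster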

Once you are connected to those resources (particularly if they are going to be used for Linux containers), you’ll be using the same tools as anyone working from a Linux client, such as kubectl for deploying containers to a Kubernetes cluster.
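
For example, once kubectl has your cluster credentials, deploying an image and exposing it publicly is the same few commands whether you type them from CMD, PowerShell or a Linux shell (the image and deployment names below are placeholders):

kubectl run plantapi --image=myregistry.azurecr.io/plantapi:v1
kubectl expose deployment plantapi --port=80 --type=LoadBalancer
kubectl get pods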

For sanity checking purposes, I also make sure I have the Windows Subsystem for Linux (aka “Bash on Windows”) installed, with the latest version of Azure CLI 2.0 in that environment too. I usually can do everything I need from CMD, but sometimes a strange error has me double checking my work in “Linux-land”. 🙂 Speaking of WSL, if you really want to trick out your WSL setup, read this.
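
Installing the CLI inside WSL just follows the regular Linux instructions; on an Ubuntu-based WSL environment, Microsoft’s one-line install script (double-check the current docs in case this has changed) plus a quick version check looks like:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az --version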

Once you have Linux container hosts deployed in Azure, you may want to connect to one directly using SSH, perhaps to your Kubernetes master. I use Putty for this, because I like being able to save my connection settings in the application to use again when I’m working on a project over several days. You will need to convert your SSH keys to a PPK file with PuttyGen before using them to connect to a Linux container host. (More to come on key management later, I promise.)
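
As an aside, if you’d rather do that key conversion from the command line instead of the PuttyGen GUI, the putty-tools package (available in WSL via apt) can do it; the key file names here are just examples:

sudo apt-get install putty-tools
puttygen ~/.ssh/id_rsa -o azurekey.ppk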

So to sum up… To get started with containers on a Windows machine, you need:

  • Docker for Windows
  • Azure CLI 2.0
  • Putty and PuttyGen

Happy Containerizing… and if you run into some “beyond the basics” challenges, let me know in the comments.

Containers with Windows and Node.js

Recently, I’ve been working on a project with one of my TE colleagues, who hails from the “developer” side of the house. One of the challenges of being interested in infrastructure and less interested in writing applications is that I’m often lacking something to build infrastructure for. So this has been a great opportunity to have something to focus my Azure credits on.

So for this project, we agreed to combine some of the things she wanted to do (Internet of Things, PowerBI, Bots, etc.) with some of the things I wanted to learn more about, like containers and Service Fabric. The result was an idea for a sensor that would detect soil humidity and air temperature (IoT) for plants, report that data to the cloud for collection (via IoT Hub and CosmosDB) and make that data available via PowerBI for review. Ideally, having a Bot that lets me know when my plants need watering would really help with my lack of a green thumb. 🙂

As part of this, we needed to be able to deploy an API that took the data from the IoT Hub and moved it to the database. We also needed a front-end web application to show the collected information. Both of these applications were going to be written in Node.js.

Now, before you start tearing apart what is clearly overkill for a project this size, keep in mind we know we can do all of this with PaaS offerings. But that would be less “fun”! You can check out the project at https://github.com/jcocchi/IoTPlantWatering and see that we’ve listed out many of the possible architecture scenarios. However, this post is about putting one of those Node.js applications in a container.

Step 1: Get Node.js onto a Windows Server Core container

Now, you’ll find plenty of information on the web about creating a Docker container with Node.js, particularly if you’d like to run it on Linux. Combine that with the fact that Node.js is most easily installed on Windows with the MSI, and you’ll find a lot less documentation about getting it onto a Windows container. However, I came across this somewhat dated documentation and sample, which got me started. It’s circa November 2016, which is a lifetime ago at this point, and references Server 2016 TP 3, when Microsoft offered a choice between managing Windows containers with PowerShell or Docker. I edited the HybridInstaller.ps1 script to download the latest version of Node.js and then followed the rest of the instructions in the “docker-managed” section.

The key bits are to download HybridInstaller.ps1 and the Dockerfile to a new folder, then run:

docker build -t windowswithnodejs:v1 C:\YOUR\FOLDER

You’ll end up with an image tagged “windowswithnodejs:v1” that you can then use as a base for the next steps.
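
For reference, the Dockerfile in that sample boils down to something roughly like the sketch below; this is from memory rather than a copy of the sample, so grab the real files from the repo:

FROM microsoft/windowsservercore
COPY HybridInstaller.ps1 installer/HybridInstaller.ps1
RUN powershell -ExecutionPolicy Bypass -File C:\installer\HybridInstaller.ps1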

Step 2: Make Sure Node.js Is Actually Installed

At this point I had a local image of my container available to run, and I wanted to make sure that I really had installed Node.js correctly. For that, you can find some handy instructions here for connecting interactively to a Windows container. The whole wiki is actually very informative if you are new to Windows containers.
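
The short version of that check: run a container from your new image with an interactive PowerShell session and ask Node.js for its version (the container name is just an example):

docker run -it --name nodetest windowswithnodejs:v1 powershell
node --version
npm --version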

Step 3: Install a Node.js Application

We have two Node.js applications in our project, but I started with the simpler of the two, the RecieveHubMessages app. My project partner had nicely detailed the installation process and dependencies, so I was able to clone the application code to my desktop, create the necessary .ENV file (because you don’t want your secrets in GitHub!) and put together a Dockerfile to build a fresh image based off my image with Node.js already installed. The process is exactly the same as Step 1 (above), just using docker build with a different Dockerfile and a folder with the right application code in it.
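
That application Dockerfile ends up being short; mine looked roughly like the following, where the entry point file name is a placeholder you’d swap for your app’s actual start script:

FROM windowswithnodejs:v1
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]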

After this was complete, I ran a container with this new image, connected to it and confirmed that the application was running. Since our goal was to be able to deploy this application in Azure, I also created an Azure Container Registry to host the image. From there, I was able to deploy it to Azure Container Service (using Kubernetes) and Azure Service Fabric. (More later on the differences between ACS and Service Fabric.)
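
Getting the image into the registry is the usual tag-and-push routine; the registry and image names below are placeholders for your own:

az acr login --name myregistry
docker tag receivehubmessages:v1 myregistry.azurecr.io/receivehubmessages:v1
docker push myregistry.azurecr.io/receivehubmessages:v1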