Service Fabric, Containers and Open Networking Mode

In case you haven’t noticed, deploying applications in containers is the way of the future for a lot of workloads.  Containers can potentially solve a lot of problems that have plagued developers and operations teams for decades, but the extra layer of abstraction can also bring new challenges.

I often deploy Windows containers to Service Fabric, not only because it’s a nifty orchestrator, but also because it provides a greater array of options for modernizing Windows workloads, since you can run Service Fabric on-prem as well as in Azure to support hybrid networking and other business requirements.

You can quickly create a Service Fabric cluster in Azure with the portal, and Visual Studio’s project wizard can get you started deploying existing containers to a Service Fabric cluster pretty quickly. But as with anything in the technology space, what comes out of the box might not do exactly what you need.

In the case of a recent project, I wanted to be able to deploy more instances of a container than I had nodes in my cluster.  By default, Service Fabric will deploy one instance of an application to each node until you’ve placed that application on all nodes.  However, depending on what your container does, you might want to double or triple up.  This is accomplished with two things: open networking and partitions.

You can get the majority of the way there with this documentation about container networking modes on Service Fabric – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-networking-modes.  You’ll need to make some changes to your Service Fabric deployment template, including parts to issue each node in your VM Scale Set additional IP addresses on your subnet.  Each container deployed will get one of these IP addresses. Then you will need to make some changes to your application and service manifest files, including setting the networking mode to “Open” and adjusting how you handle port bindings.
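
To give a flavor of the manifest side, here’s roughly what the relevant piece of my ApplicationManifest.xml looked like. Treat this as a sketch – the package, endpoint and port names are placeholders, and the linked doc has the matching service manifest endpoint and deployment template changes:

<ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="MyContainerTypePkg" ServiceManifestVersion="1.0.0" />
    <Policies>
        <ContainerHostPolicies CodePackageRef="Code">
            <!-- "Open" gives each container its own IP from the scale set's pool -->
            <NetworkConfig NetworkType="Open" />
            <!-- With open networking, the container port is exposed directly on that IP -->
            <PortBinding ContainerPort="80" EndpointRef="MyContainerTypeEndpoint" />
        </ContainerHostPolicies>
    </Policies>
</ServiceManifestImport>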

Because your application is really a container, it’s deployed as a stateless service.  Most of the Service Fabric documentation discusses partitions in relation to stateful services, and it’s a bit unclear how to apply the concept to stateless ones.

Within your application manifest, you’ll need to edit your service instance to use either the named or ranged partition type instead of “SingletonPartition”, which is the default.  I prefer the ranged version, as it makes adjusting the partition count much easier, though I admittedly don’t have a good understanding of how the low and high keys apply to the containers when they aren’t actually using those ranges to distribute data. (As far as I can tell, for a stateless container the keys only serve to carve the range into the requested number of partitions – nothing routes by key unless you make it.)

Named Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <NamedPartition>
            <Partition Name="one" />
            <Partition Name="two" />
            <Partition Name="three" />
            <Partition Name="four" />
        </NamedPartition>
    </StatelessService>
</Service>

Ranged Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <UniformInt64Partition PartitionCount="4" LowKey="1" HighKey="10" />
    </StatelessService>
</Service>

Once you’ve made all these changes, Service Fabric will deploy containers equal to the number of instances multiplied by the partition count, up to the available number of IP addresses.  So two instances of four partitions will yield eight containers and eight IP addresses.  Keep in mind that if a deployment exceeds the number of IP addresses you have available, you will get errors.  Based on my testing so far, I don’t recommend trying to max out your available IP addresses; there seems to be a need for a little wiggle room for scaling operations.

Microsoft OpenHack on Containers comes to San Francisco – May 15-17

Who?

OpenHack brings together groups of diverse developers to learn how to implement a given scenario on Azure through three days of immersive, structured, hands-on, challenge-based hacking. This scenario is focused on implementing container solutions and moving them to the cloud.

What!

Join us for three days of fun-filled, hands-on hacking where you will team up with community peers and learn how to containerize Linux- and Windows-based workloads and move them to the cloud. During OpenHack you will:

  • Choose your desired tooling and technology based on Kubernetes or Azure Service Fabric.
  • Hack on challenges structured to leave you with the skills and expertise needed to deploy containers and clusters in the workplace.
  • Network with fellow community members and other professional developers from startups to large enterprises, as well as Microsoft developers.
  • Get answers to your technology and workplace project questions from Microsoft and community experts.

Bonus

In addition to the challenge-based learning paths, a limited number of 1-hour envisioning slots will be made available on a first come, first served basis to work side-by-side with Microsoft experts on your own workplace projects.

OpenHack is FREE for registered attendees!

Food, refreshments, prizes and fun will be provided. If travelling, attendees are responsible for their own travel expenses and evening meals.

What you need:

To be successful and maximize value from the event, participants should have a basic understanding of the following concepts and technologies. You are not required to be an expert or authority, but a familiarity with each will be advantageous:

  • Docker containers
  • Cloud hosted services
  • REST Services
  • DevOps
  • IP Networking & Routing

Click here to register!

OpenHacks are invite only and space is limited. You may be put on a waitlist. When your registration is confirmed, we will follow up with additional details.

Shared Drives with Docker for Windows

I’ve mentioned in a previous post that Docker recommends avoiding volume mounts from the Windows host, but sometimes you just have to have them. You’ll want to set up the Shared Drives feature in your Docker for Windows settings to get that going.  You’ll only need this feature to share files from your Windows host to Linux containers.  If you are working with Windows containers, it shouldn’t be necessary, per the Docker documentation.

Simply select the checkbox for the drive letter you want to share and you will be prompted for credentials. After that, the drive letter should remain checked and you’ll be able to mount volumes under your user’s home directory.
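
Once the drive shows as shared, a quick sanity check from a command prompt looks something like this (the host path here is just an example):

docker run --rm -v C:\Users\yourname\project:/data alpine ls /data

If the share is working, you’ll see the folder’s contents listed from inside the Linux container; if it isn’t, the mount usually just comes up empty.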

In my case, the only account on my machine with Administrator privileges was my “domain” account… aka DOMAIN\username, which was prefilled in the credentials box. Upon entering my password, Docker for Windows thought a bit, reported that it was updating the settings, cleared the checkbox and then declared itself finished, leaving my C drive unshared. Grrr.

It occurred to me that maybe Docker for Windows didn’t like the DOMAIN\username format, so I tried my UPN format instead – username@domain.com. That immediately failed as an invalid account; however, when I checked my account settings on my PC, the account is clearly listed as DOMAIN\username. I call this a “domain account”, but the machine is not domain joined, so it’s actually authenticating via Azure AD. I did some hunting around and found related issues going back to 2016 that don’t seem to have a clear resolution – https://github.com/docker/for-win/issues/132 and https://github.com/docker/for-win/issues/303

In addition to my work account, I also have my personal MSA (Microsoft Account) as an alternate account on this machine. It didn’t have Administrator rights, but I figured it was worth a shot. I entered my MSA email address and password and lo and behold… it worked! The C drive checkbox stayed checked and Docker was able to mount some local volumes. Because that account is non-administrative, I did find I had to grant some additional file sharing on a subfolder needed during a Docker build, but otherwise I was good to go.

The end result: if you are having problems turning on the Docker for Windows Shared Drives feature, you may need to use or create an alternative local account.

Azure Containers, SSH Keys and Windows

When working with containers on Azure, there are a couple of things to keep in mind around key management. I’ll use Azure Container Service (AKS) as the context here, but in the end, keys are keys.

You have two options when creating a cluster on AKS:

az aks create --resource-group YourRG --name YourCluster --generate-ssh-keys

az aks create --resource-group YourRG --name YourCluster --ssh-key-value \PATH\TO\PUBLIC\KEY

With --generate-ssh-keys, Azure will automatically create the necessary key pair for you, named id_rsa and id_rsa.pub, in the $HOME\.ssh folder of the machine that created the cluster. If there are already keys with those names there, it will re-use them.

Once your cluster is created, you’ll use

az aks get-credentials --resource-group YourRG --name YourCluster

to download an access token to set the current context for your session, manage the cluster and deploy containers.

If you happen to work from more than one machine, or expect other people to also access this cluster or make other clusters using the same keys, you need to share these auto-created keys appropriately. I work from two different machines, wasn’t paying attention and ended up with two different “default” sets of keys. I awkwardly discovered this when creating a cluster with my home machine, traveling with my laptop and then finding myself unable to access the cluster while out of town. Joys.

Using “--generate-ssh-keys” shall henceforth be known as “the lazy way” of key management.

To do this better, create your keys manually, put them in a secure location accessible by those who matter and then make your clusters using “--ssh-key-value” instead.  (Let’s call this the “thoughtful way.”) One wrinkle: with ACS clusters, you also provide the key when requesting the access token. For example:

az acs kubernetes get-credentials --resource-group YourRG --name YourCluster --ssh-key-file \PATH\TO\PRIVATE\KEY

(AKS doesn’t need the key for az aks get-credentials, since the token comes through the Azure API; the SSH keys come into play when you connect to the nodes themselves.)

As I’m a Windows user, I use PuttyGen for my key creation. I will refrain from reinventing the wheel on how to do this, as there are already some pretty comprehensive posts, either in Microsoft Docs or this one by Pascal Naber.
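
If you’d rather skip the GUI and you have an OpenSSH client available (newer Windows 10 builds include one, and WSL certainly does), something like this works too – the file name is just my own convention:

ssh-keygen -t rsa -b 2048 -f %USERPROFILE%\.ssh\aks_cluster_key

That leaves you with aks_cluster_key (private) and aks_cluster_key.pub (public), and you can point --ssh-key-value at the .pub file when creating clusters.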

A Note about AKS vs ACS: As of this writing, you have two different ways of creating container clusters in Azure. ACS allows you to create clusters orchestrated with Kubernetes, Docker Swarm or DC/OS. Due to the nature of the way these are created, you have full access to the master node VM. If you’ll be using Putty to connect to the master node of your ACS cluster directly, you’ll need to use a Putty-specific PPK file for your private key and specify it in your Putty session settings. If you create a Kubernetes cluster using AKS (as I did in my examples above) you won’t have SSH access to the master node.

A Note About Service Principals: In addition to automatically generating keys, AKS/ACS will automatically generate the necessary service principals needed. However, it won’t generate a new SP for each cluster. If you have a suitable SP already in your subscription, it will re-use that one. So just keep that in mind for your production clusters. You may want to provide different service principals for various clusters, etc. You can read more about setting up Azure AD SPs for AKS if you so desire.

Working with Containers while working on Windows

With all the rage around containers these days, you may be wondering how to get started and make sure you can be successful if you use Windows on your preferred client device. One of the cool things about working with containers from a Windows machine is that you can work with both Linux and Windows containers. This post will focus on working with Linux containers, but you’ll need all these tools for working with Windows containers too.

For building containers and working with images locally, you’ll need Docker for Windows. Just go with the default installer options and you should be ready to go in short order. You will need a machine that supports virtualization and has those features turned on. When you work with Windows containers, they will run on your OS. When you work with Linux containers, they will run on a Hyper-V VM, which you can find if you run Hyper-V Manager on your machine.

It’s worth noting that if you are going to be working with persistent or shared volumes in your containers, they work a little differently on your Windows machine. Docker recommends that you use the --mount flag with volumes, and when using them with Linux containers, it’s better to share from the Linux MobyVM and avoid using the Windows host directly.  However, if you need to use the host directly, you can, by sharing the required drive via the Shared Drives feature under the Docker for Windows settings.
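
As a sketch of the recommended pattern, a named volume lives inside the MobyVM and avoids the host-sharing machinery entirely (the volume name and paths are placeholders):

docker volume create mydata
docker run --rm --mount type=volume,source=mydata,target=/data alpine sh -c "echo hello > /data/hello.txt"
docker run --rm --mount type=volume,source=mydata,target=/data alpine cat /data/hello.txt

The second container sees the file the first one wrote, with no Windows drive sharing involved.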

For deploying containers to Azure, you will want the latest version of the Azure CLI 2.0.  You DO NOT want anything less than version 2.0.21, trust me. You will use the Azure CLI to do things like create and manage container services (either ACS or AKS), push images to Azure Container Registry, deploy containers to Azure Container Instances and get the credentials to connect to those resources.

Once you are connected to those resources (particularly if they are going to be used for Linux containers), you’ll be using the same tools as anyone working from a Linux client, such as Kubectl for deploying containers to a Kubernetes cluster.
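
For example, once az aks get-credentials has set your context, the same kubectl commands you’d run from a Linux client work just as well from CMD or PowerShell (az aks install-cli will fetch kubectl for you if you don’t already have it):

az aks install-cli
kubectl get nodes
kubectl get pods --all-namespaces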

For sanity checking purposes, I also make sure I have Windows Subsystem for Linux (aka “Bash on Windows”) installed and the latest version of Azure CLI 2.0 installed in that environment too. I usually can do everything I need from CMD, but sometimes a strange error has me double checking my work in “Linux-land”. 🙂 Speaking of WSL, if you really want to trick out your WSL setup, read this.

Once you have Linux container hosts deployed in Azure, you may want to connect to one directly using SSH – perhaps your Kubernetes master node. I use Putty for this, because I like being able to save my connection settings in the application to use again when I’m working on a project over several days. You will need to convert your SSH keys to a PPK file with PuttyGen before using them to connect to a Linux container host.  (More to come on key management later, I promise.)

So to sum up… To get started with containers on a Windows machine, you need:

  • Docker for Windows
  • Azure CLI 2.0
  • Putty and PuttyGen

Happy Containerizing… and if you run into some “beyond the basics” challenges, let me know in the comments.

Windows containers, dockerfiles and npm

As part of my adventure with the IoTPlantWatering project, I ran into the issue of not being able to automatically launch “npm start” from within a Windows container using this command in my dockerfile, which would work just fine if this was a Linux container.

CMD [ "npm", "start"]

If I built the container without this command, connected to it interactively and typed “npm start” it worked fine. What gives? For Windows you need to use:

CMD [ "npm.cmd", "start"]

Here are a couple of links that give a little more context as to why, but if nothing else, just remember – npm.CMD!

Containers with Windows and Node.js

Recently, I’ve been working on a project with one of my TE colleagues, who hails from the “developer” side of the house. One of the challenges I have, being interested in infrastructure and less interested in writing applications, is that I’m often lacking something to build infrastructure for. So this has been a great opportunity to have something to focus my Azure credits on.

So for this project, we agreed to combine some of the things she wanted to do (Internet of Things, PowerBI, Bots, etc.) with some of the things I wanted to learn more about, like containers and Service Fabric. The result was an idea for a sensor that would detect soil humidity and air temperature (IoT) for plants, report that data to the cloud for collection (via IoT Hub and CosmosDB) and make that data available via PowerBI for review. Ideally, having a Bot that lets me know when my plants need watering would really help with my lack of a green thumb. 🙂

As part of this, we needed to be able to deploy an API that took the data from the IoT Hub and moved it to the database. We also needed a front-end web application to show the collected information. Both of these applications were going to be written in Node.js.

Now before you start tearing apart what is clearly going to be overkill for a project this size, keep in mind we know we can do all of this with PaaS offerings. But that would be less “fun”! You can check out the project at https://github.com/jcocchi/IoTPlantWatering and see that we’ve listed out many of the possible architecture scenarios. However, this post is about putting one of those Node.js applications in a container.

Step 1: Get Node.js onto a Windows Server Core container

Now, you’ll find plenty of information on the web about creating a Docker container with Node.js, particularly if you’d like to run it on Linux. Combine that with the fact that Node.js is most easily installed on Windows with the MSI, and you’ll find a lot less documentation about getting it onto a Windows container. However, I came across this somewhat dated documentation and sample, which got me started. It’s circa November 2016, which is a lifetime ago at this point, and references Server 2016 TP3, from when Microsoft offered a choice between managing Windows containers with PowerShell or Docker. I edited the HybridInstaller.ps1 script to download the latest version of Node.js and then followed the rest of the instructions in the “docker-managed” section.

The key bits are to download the HybridInstaller.ps1 and dockerfile to a new folder, then run:

docker build -t windowswithnodejs:v1 C:\YOUR\FOLDER

You’ll end up with an image tagged “windowswithnodejs:v1” that you can then use as a base for the next steps.

Step 2: Make Sure Node IS actually installed

At this point, I had a local image of my container available to run and I wanted to make sure that I really had installed Node.js correctly.  For that, you can find some handy instructions here for connecting interactively to a Windows container. The whole wiki is actually very informative if you are new to Windows containers.
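
In short, you can start an interactive container from the image built in Step 1 and poke around:

docker run -it windowswithnodejs:v1 cmd

Then, inside the container, node --version and npm --version should both come back with the versions the installer script pulled down.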

Step 3: Install A Node.js Application

We have two Node.js applications in our project, but I started with the simpler of the two – the RecieveHubMessages app. My project partner had nicely detailed the installation process and dependencies, so I was able to clone the application code to my desktop, create the necessary .ENV file (because you don’t want your secrets in GitHub!) and put together a dockerfile to build a fresh image based on my image with Node.js already installed.  The process is exactly the same as Step #1 (above), just using DOCKER BUILD with a different dockerfile and a folder with the right application code in it.
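
For the curious, the dockerfile for this step was only a few lines – roughly the sketch below, with the working folder and port as placeholders (and note the npm.cmd wrinkle covered in the npm post above):

# Start from the base image built in Step 1
FROM windowswithnodejs:v1
# Copy the application code (including the locally created .ENV file) into the image
WORKDIR /app
COPY . .
# Restore dependencies at build time (shell form, so cmd resolves npm.cmd for us)
RUN npm install
# Example port; match whatever the app actually listens on
EXPOSE 3000
# Exec form on Windows needs npm.cmd, not npm
CMD ["npm.cmd", "start"]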

After this was complete, I ran a container with this new image, connected to it and confirmed that the application was running. Since our goal was to be able to deploy this application in Azure, I also created an Azure Container Registry to host the image. From there, I was able to deploy it to Azure Container Services (using Kubernetes) and Azure Service Fabric.  (More later on the differences between ACS and Service Fabric.)
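
Getting the image into the registry was the standard Docker round-trip (the registry and tag names here are placeholders):

az acr login --name myregistry
docker tag receivehubmessages:v1 myregistry.azurecr.io/receivehubmessages:v1
docker push myregistry.azurecr.io/receivehubmessages:v1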

Learn Containers AND party with Docker!

Have you ever met a 3 year old who didn’t like a birthday party? I didn’t think so. Like any respectable 3 year old, the fine folks over at Docker are going all out to celebrate the Docker project’s big “3”.

To celebrate this milestone, the Docker community is hosting a worldwide series of events to help beginners learn how to build, ship, and run distributed apps with ease thanks to the Docker toolbox. Experienced Docker users will serve as mentors to help all beginners during this training. And of course, there will be birthday cake at the end!

For those of us who have a Microsoft focus and might just be starting out in the world of containers, it’s exciting to have these chances to learn more about these tools. If you’ve taken a look at Server 2016 TP4 (or even as early as TP3), you might already know that you can use either Docker or PowerShell to deploy and manage Windows Server containers. And let’s not forget the recently launched Azure Container Service, which also features Docker support!

If you are interested in learning more about containers (Windows or otherwise) or how we partner with Docker, check out these resources:

If you are in the Bay Area and don’t want to miss out on that cake while deploying an app with Docker, the SF event will be on 3/23 and the Silicon Valley event on 3/24. More information about the Docker Birthday #3 is available at docker.party.

Server 2016 TP3, Containers and Azure – All Together

Sometimes I think I’ll never get caught up. Every day there are new, interesting announcements from the technology companies we rely on, plus we have to juggle the tasks, fires and projects we have at work.  It’s really hard to keep up.  I’ll bet you are feeling that way right now.
This week, it’s possible for you to check a few new things off your list – ALL AT ONCE!  (And it’s already Friday!)
  1. Try out Azure
  2. Check out Server 2016
  3. Learn about Containers

 Ready?

First make sure you have an Azure subscription or trial.  If your company has an enterprise agreement with Microsoft, you might have credits to use in Azure and not even know it.  So check there first.  If not – go to http://aka.ms/NewAzureTrial to sign up for $200 you can use for the next 30 days.
Once you’ve got access to Azure, you’ll find we have two web portals for accessing it.  The “classic” or current portal at http://manage.windowsazure.com and the preview portal at http://portal.azure.com. Depending on what you need to do, the feature set varies between portals.  But for this, it doesn’t matter.
Whichever portal you pick, you’ve opened the door to the easiest way to test out new versions of Windows Server.  No hunting around for free hardware, no downloading ISO images and practically no wait. Just take advantage of your own personal datacenter in the “cloud”. 
  
Next, look for the Server 2016 versions – there are two of them. One is the Full GUI version, listed as Windows Server 2016 Technical Preview 3.  (In the new portal, the Full GUI version can be found in the Marketplace.)  The other one is listed as “Windows Server Container Preview”.
If poking around with the new full version is your goal, spin that up and get started.  RDP to it and you are good to go.  If you need a walk-thru on how to set up a VM on either portal, you can find it here : https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-tutorial/
If your company develops software, is thinking about micro-services, and “containers” are the new buzzword in the office, you’ll want to spin up the Container Preview.  And even if your company doesn’t fit that description and you just want to see what this container/Docker thing is all about, spin up the Container Preview too.
Once that machine is up and running, you’ll log in to find yourself at a command prompt window and nothing else.  Containers are supported only on the Windows Core (and eventually Nano) versions. To get you started, take some time to review this documentation (http://aka.ms/windowscontainers) and dust off your command line skills.
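
Once you’ve skimmed the docs, a first smoke test from that command prompt looks something like this (image names in the preview are subject to change, so trust what docker images reports over anything written here):

docker images
docker run -it windowsservercore cmd

That drops you into a cmd shell running inside a fresh Windows Server Core container; type exit to leave it.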
Voila!  Now go check off that list. 🙂
Note: With the preview, there is A LOT of work to be done still, so don’t be surprised when things aren’t super polished and feature-rich yet.  And seriously, don’t try to use any of this for production.  This is just the tip of the iceberg to come.

Summer Reads!

Ah, summertime…. Vacations, relaxing on the patio, fruit salads, sparkly drinks and learning. Right? I spent some time by the beach and the pool recently and then came back to a pile of interesting things I wanted to read or try out.

There are also two new video blogs available on Channel 9 that will keep adding new content you might want to check out.