Microsoft OpenHack on Containers comes to San Francisco – May 15-17

Who?

OpenHack brings together groups of diverse developers to learn how to implement a given scenario on Azure through three days of immersive, structured, hands-on, challenge-based hacking. This scenario is focused on implementing container solutions and moving them to the cloud.

What!

Join us for three days of fun-filled, hands-on hacking where you will team up with community peers and learn how to containerize Linux and Windows based workloads and move them to the cloud. During OpenHack you will:

  • Choose your desired tooling and technology based on Kubernetes or Azure Service Fabric.
  • Hack on challenges structured to leave you with the skills and expertise needed to deploy containers and clusters in the workplace.
  • Network with fellow community members and other professional developers from startups to large enterprises, as well as Microsoft developers.
  • Get answers to your technology and workplace project questions from Microsoft and community experts.

Bonus

In addition to the challenge-based learning paths, a limited number of 1-hour envisioning slots will be made available on a first come, first served basis to work side-by-side with Microsoft experts on your own workplace projects.

OpenHack is FREE for registered attendees!

Food, refreshments, prizes and fun will be provided. Attendees travelling to the event are responsible for their own travel expenses and evening meals.

What you need:

To be successful and maximize value from the event, participants should have a basic understanding of the following concepts and technologies. You are not required to be an expert or authority, but a familiarity with each will be advantageous:

  • Docker containers
  • Cloud hosted services
  • REST Services
  • DevOps
  • IP Networking & Routing

Click here to register!

OpenHacks are invite-only and space is limited. You may be put on a waitlist. When your registration is confirmed, we will follow up with additional details.

Shared Drives with Docker for Windows

I’ve mentioned in a previous post that Docker recommends that you avoid volume mounts from the Windows host, but sometimes you just need one. You’ll want to set up the Shared Drives feature in your Docker for Windows settings to get that going. You’ll only need this feature if you need to share files from your Windows host to Linux containers; if you are working with Windows containers, it shouldn’t be necessary, per the Docker documentation.

Simply select the checkbox for the drive letter you want to share and you will be prompted for credentials. After that, the drive letter should remain checked and you’ll be able to mount volumes under your user’s home directory.
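
Once the drive is shared, a quick sanity check looks something like this (the folder path is just a placeholder for one of your own):

    # List a host folder from inside a Linux container to confirm the share works
    docker run --rm -v C:\Users\<username>\project:/data alpine ls /data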

In my case, the only account on my machine with Administrator privileges was my “domain” account… aka DOMAIN\username, which was prefilled in the credentials box. Upon entering my password, Docker for Windows thought a bit, reported that it was updating the settings, cleared the checkbox, and then declared itself finished, leaving my C drive unshared. Grrr.

It occurred to me that maybe Docker for Windows didn’t like the DOMAIN\username format, so I tried my UPN format instead – username@domain.com. That immediately failed as an invalid account; however, when I checked my account settings on my PC, the account is clearly listed as DOMAIN\username. I call this a “domain account”, but the machine is not domain joined, so it’s authenticating via Azure AD. I did some hunting around and found related issues going back to 2016 that don’t seem to have a clear resolution: https://github.com/docker/for-win/issues/132 and https://github.com/docker/for-win/issues/303

In addition to my work account, I also have my personal MSA (Microsoft Account) as an alternate account on this machine. It didn’t have Administrator rights, but I figured it was worth a shot. I entered my MSA email address and password and lo and behold… it worked! The C drive checkbox stayed checked and Docker was able to mount some local volumes. Due to the non-administrative nature of that account, I did find I had to grant some additional file sharing on a subfolder needed during a Docker build, but otherwise I was good to go.

The end result: if you are having problems turning on the Shared Drives feature in Docker for Windows, you may need to use or create an alternative local account.

Working with Containers while working on Windows

With all the rage about containers these days, you may be wondering how to get started and make sure you can be successful if Windows is your preferred client OS. One of the cool things about working with containers from a Windows machine is that you can work with both Linux and Windows containers. This post will focus on working with Linux containers, but you’ll need all these tools for working with Windows containers too.

For building containers and working with images locally, you’ll need Docker for Windows. Just go with the default installer options and you should be ready to go in short order. You will need a machine that supports virtualization, with those features enabled. Windows containers will run directly on your OS; Linux containers will run in a Hyper-V VM, which you can see if you open Hyper-V Manager on your machine.
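
Once it’s installed, a quick smoke test (these are standard Docker commands, nothing specific to my setup) confirms everything is wired up:

    # Show client and daemon versions, and whether you're in Linux or Windows container mode
    docker version
    # Pull and run a tiny test image
    docker run --rm hello-world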

It’s worth noting that if you are going to be working with persistent or shared volumes on your containers, they work a little bit differently on your Windows machine. Docker recommends that you use the --mount flag with volumes, and when using them for Linux containers, it’s better to share from the Linux MobyVM and avoid using the Windows host directly. However, if you need to use the host directly, you can, by sharing the required drive via the Shared Drives feature under the Docker for Windows settings.
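
As a rough sketch of the difference (the volume name and folder path are placeholders):

    # Preferred: a named volume that lives inside the MobyVM
    docker volume create mydata
    docker run --rm --mount source=mydata,target=/data alpine touch /data/hello.txt

    # Host bind mount: requires the Shared Drives feature to be enabled
    docker run --rm --mount type=bind,source=C:\Users\<username>\project,target=/src alpine ls /src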

For deploying containers to Azure, you will want the latest version of the Azure CLI 2.0. You DO NOT want anything less than version 2.0.21, trust me. You will use the Azure CLI to do things like create and manage container services (either ACS or AKS), push images to Azure Container Registry, deploy containers to Azure Container Instances and get the credentials to connect to those resources.
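
A typical flow looks something like the following; the resource names (myRG, myregistry, myapp, myCluster) are made up for illustration:

    az --version                                    # confirm you're on 2.0.21 or later
    az login
    az group create --name myRG --location eastus
    # Push a local image to Azure Container Registry
    az acr create --resource-group myRG --name myregistry --sku Basic
    az acr login --name myregistry
    docker tag myapp myregistry.azurecr.io/myapp:v1
    docker push myregistry.azurecr.io/myapp:v1
    # Create an AKS cluster and fetch credentials for kubectl
    az aks create --resource-group myRG --name myCluster --node-count 1
    az aks get-credentials --resource-group myRG --name myCluster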

Once you are connected to those resources (particularly if they are going to be used for Linux containers), you’ll be using the same tools as anyone working from a Linux client, such as kubectl for deploying containers to a Kubernetes cluster.
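
For example, with the cluster credentials from above in place, a first deployment might look like this (the image name is again a placeholder):

    kubectl get nodes
    kubectl run myapp --image=myregistry.azurecr.io/myapp:v1 --port=80
    kubectl get pods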

For sanity-checking purposes, I also make sure I have Windows Subsystem for Linux (aka “Bash on Windows”) installed and the latest version of the Azure CLI 2.0 installed in that environment too. I usually can do everything I need from CMD, but sometimes a strange error has me double-checking my work in “Linux-land”. 🙂 Speaking of WSL, if you really want to trick out your WSL setup, read this.
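
If you need the Azure CLI inside WSL, Microsoft publishes an install script for Debian/Ubuntu-based distros; assuming your WSL distro is Ubuntu, it looks like this (double-check the current install docs if this has changed):

    # Inside WSL (Ubuntu): install the Azure CLI, then verify
    curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
    az --version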

Once you have Linux container hosts deployed in Azure, you may want to connect to one directly using SSH – perhaps your Kubernetes master node. I use PuTTY for this, because I like being able to save my connection settings in the application to use again when I’m working on a project over several days. You will need to convert your SSH keys to the PPK file type with PuTTYgen before using them to connect to a Linux container host. (More to come on key management later, I promise.)
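
If you’d rather script the conversion than click through the PuTTYgen GUI, the Linux putty-tools package (convenient from WSL) includes a command-line puttygen; the key path here is a placeholder:

    # In WSL/Linux: convert an OpenSSH private key to PuTTY's PPK format
    sudo apt-get install putty-tools
    puttygen ~/.ssh/id_rsa -o ~/.ssh/id_rsa.ppk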

So to sum up… To get started with containers on a Windows machine, you need:

  • Docker for Windows
  • Azure CLI 2.0
  • PuTTY and PuTTYgen

Happy Containerizing… and if you run into some “beyond the basics” challenges, let me know in the comments.