Keeping Up with the Releases

There are a lot of great things to say about the faster release cycles we see with software these days. Bugs are fixed and features become available to us sooner, and security issues are resolved more quickly. In many cases, our operating systems and software packages are smart enough to check for updates themselves and either let us know updates are available or install them automatically.

I work between two different machines regularly and, depending on my schedule, sometimes favor one machine over the other for several weeks at a time. For better or for worse (mostly for the better), Windows 10 takes care of itself for me, as do Visual Studio Code and Docker for Windows. This means I often find myself sitting down at the “other” machine and once again waiting for those updates to install. While I admit to rolling my eyes in frustration when I get an update alert, I do appreciate that I don’t have to think about those updates otherwise.

But for software that doesn’t automatically update, I sometimes find myself wondering why demo notes I’ve drafted on one machine suddenly aren’t working when I try them on the other machine – or, worse, blaming documentation for being incorrect when the commands don’t work as instructed.

When it comes to documentation freshness vs. software freshness… let’s not go there today. I generally start with docs.microsoft.com when I’m looking for information about Azure and other Microsoft products. While no documentation is entirely free of errors or immune to going out of date, more often than not my problem exists between my keyboard and monitor – in the form of some piece of software needing an update.

The top two things on my machines that I have to manually update regularly are:

  • Azure CLI 2.0 – Instructions for Installing or Updating Azure CLI 2.0
    • Type “az --version” at your command line to see what version you are running. As of this writing (10/17/17), the current version is 2.0.19.
    • If you aren’t a regular Azure CLI user and just want to try it out via the Azure Portal, check out the Cloud Shell.
  • Azure PowerShell – Instructions for Installing or Updating Azure PowerShell 4.4.1
    • I recommend the command line installer for this one, but if you want to do something other than that (like install within a Docker container) you can find those instructions here.
    • You can check your version of Azure PowerShell by typing “Get-Module AzureRM -List | Select-Object Name,Version,Path” at the PowerShell command line. If you don’t get any response back, you don’t have the Azure PowerShell modules installed at all.
    • Also, don’t confuse the Azure PowerShell modules with the PowerShell that comes on your Windows machine itself. That’s at version 5.1 right now if you have Windows 10 with your updates turned on; you can check by typing “$PSVersionTable” at your PowerShell command line. If you want instructions for running the beta version 6, you can find all that information here with the general installation instructions.
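Pulled together, those version checks look like this at a PowerShell prompt (the az command works from cmd or PowerShell too):

```powershell
# Check your Azure CLI 2.0 version (2.0.19 as of this writing)
az --version

# Check your Azure PowerShell module version; no output at all means it isn't installed
Get-Module AzureRM -ListAvailable | Select-Object Name,Version,Path

# Check the version of PowerShell itself (5.1 on an up-to-date Windows 10 machine)
$PSVersionTable
```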

 


Windows containers, dockerfiles and npm

As part of my adventure with the IoTPlantWatering project, I ran into the issue of not being able to automatically launch “npm start” from within a Windows container using this command in my dockerfile, which would work just fine in a Linux container.

CMD ["npm", "start"]

If I built the container without this command, connected to it interactively and typed “npm start” it worked fine. What gives? For Windows you need to use:

CMD ["npm.cmd", "start"]

Here are a couple of links that give a little more context as to why, but if nothing else, just remember – npm.CMD!
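To see it in context, here’s a minimal sketch of a dockerfile for a Windows container – the base image name, file layout and port are placeholders, not from the project. The interesting wrinkle: RUN uses the cmd shell, which resolves npm to npm.cmd for you; it’s the exec form of CMD that bypasses the shell and needs the extension spelled out.

```dockerfile
# Hypothetical sketch: base image, paths and port are placeholders
FROM windowswithnodejs:v1
WORKDIR C:\\app
COPY package.json .
# RUN goes through the cmd shell, so plain "npm" resolves to npm.cmd automatically
RUN npm install
COPY . .
EXPOSE 3000
# The exec form bypasses the shell, so on Windows the .cmd extension is required
CMD ["npm.cmd", "start"]
```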

EPS Files and Office: Not Quite Better Together

Today I learned that EPS files have been unsupported in Office applications as of April 11, 2017. I don’t often need to embed EPS files in my documents, but I do live with a designer, and he loves to send EPS files for various family-related side projects. As a Mac user, it’s not an issue for him, as Office 2011 and Office 2016 for Mac are not affected by this change.

If you’d like to read more about it, you can find out more details here – https://support.office.com/en-us/article/Support-for-EPS-images-has-been-turned-off-in-Office-a069d664-4bcf-415e-a1b5-cbb0c334a840.

Meanwhile, I’ll go ask for some JPEGs.

Containers with Windows and Node.js

Recently, I’ve been working on a project with one of my TE colleagues, who hails from the “developer” side of the house. One of the challenges I have being interested in infrastructure and less interested in writing applications is that I’m often lacking something to build infrastructure for. So this has been a great opportunity to have something to spend my Azure credits on.

So for this project, we agreed to combine some of the things she wanted to do (Internet of Things, PowerBI, Bots, etc.) with some of the things I wanted to learn more about, like Containers and Service Fabric. The result was an idea for a sensor that would detect soil humidity and air temperature (IoT) for plants, report that data to the cloud for collection (via IoT Hub and CosmosDB) and make that data available via PowerBI for review. Ideally, having a Bot that lets me know when my plants need watering would really help with my lack of a green thumb. 🙂

As part of this we needed to be able to deploy an API that took the data from the IoT Hub and moved it to the database. We also needed a front-end web application to show the collection of information. Both of these applications were going to be written in Node.js.

Now before you start tearing apart what is clearly going to be overkill for this size of a project, keep in mind we know we can do all of this with PaaS offerings. But that would be less “fun”! You can check out the project at https://github.com/jcocchi/IoTPlantWatering and see that we’ve listed out many of the possible architecture scenarios. However, this post is about putting one of those Node.js applications in a container.

Step 1: Get Node.js onto a Windows Server Core container

Now, you’ll find plenty of information on the Web about creating a Docker container with Node.js, particularly if you’d like to run it on Linux. Since Node.js is most easily installed on Windows with the MSI, there’s a lot less documentation about getting it onto a Windows container. However, I came across this somewhat dated documentation and sample, which got me started. It’s circa November 2016, which is a lifetime ago at this point, and references Server 2016 TP3, back when Microsoft offered a choice between managing Windows containers with PowerShell or Docker. I edited the HybridInstaller.ps1 script to download the latest version of Node.js and then followed the rest of the instructions in the “docker-managed” section.

The key bits are to download the HybridInstaller.ps1 and dockerfile to a new folder, then run:

docker build -t windowswithnodejs:v1 C:\YOUR\FOLDER

You’ll end up with an image tagged “windowswithnodejs:v1” that you can then use as a base for the next steps.
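For reference, the dockerfile from that sample boils down to something like this – my sketch, so the exact script parameters and base image tag may differ from the sample:

```dockerfile
# Sketch of the sample's approach; the script name comes from the sample repo
FROM microsoft/windowsservercore
COPY HybridInstaller.ps1 C:\\install\\HybridInstaller.ps1
# The script downloads and silently installs the Node.js MSI inside the image
RUN powershell -ExecutionPolicy Bypass -File C:\install\HybridInstaller.ps1
```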

Step 2: Make Sure Node Is Actually Installed

At this point I had a local image of my container available to run, and I wanted to make sure that I really did install Node.js correctly. For that, you can find some handy instructions here for connecting interactively to a Windows container. The whole wiki is actually very informative if you are new to Windows containers.
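If you just want the quick version, you can start a container from the new image with an interactive session and poke at it directly (the image tag is from Step 1; version output will vary):

```shell
# Launch an interactive cmd session in a container based on our image
docker run -it windowswithnodejs:v1 cmd

# Then, inside the container, confirm Node.js and npm respond
node --version
npm --version
```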

Step 3: Install A Node.js Application

We have two Node.js applications in our project, but I started with the simpler of the two – the RecieveHubMessages app. My project partner had nicely detailed the installation process and dependencies, so I was able to clone the application code to my desktop, create the necessary .ENV file (because you don’t want your secrets in GitHub!) and put together a dockerfile to build a fresh image based on my image with Node.js already installed. The process is exactly the same as Step 1 above, just using docker build with a different dockerfile and a folder with the right application code in it.

After this was complete, I ran a container with this new image, connected to it and confirmed that the application was running. Since our goal was to be able to deploy this application in Azure, I also created an Azure Container Registry to host the image. From there, I was able to deploy it to Azure Container Services (using Kubernetes) and Azure Service Fabric. (More later on the differences between ACS and Service Fabric.)
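The registry steps looked roughly like this – the resource group, registry name and image tag below are made-up placeholders:

```shell
# Create a container registry and log in to it (names are hypothetical)
az acr create --resource-group plantwatering-rg --name plantwateringacr --sku Basic
az acr login --name plantwateringacr

# Tag the local image with the registry's login server, then push it
docker tag receivehubmessages:v1 plantwateringacr.azurecr.io/receivehubmessages:v1
docker push plantwateringacr.azurecr.io/receivehubmessages:v1
```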

The Network “Hack” that Wasn’t To Be

Sometimes an idea looks great on paper but doesn’t really work out when you try to configure it. And often, the only way to be sure is to break out the good old scientific method and try. So I tried. And it didn’t work, so I’m putting it here in case you get a similar wild idea in the near future.

The goal was to start with a primary VNET in Azure for some VMs. This network was going to act as a collection point for data coming in from a number of remote physical sites all over the world. In addition, some machines on the primary network would need to send configuration data to the remote sites. Ultimately, we were looking at a classic hub and spoke network design, with an Azure VNET in the center.

There are several ways you can do this using Azure networking: VNET peering between Azure VNETs, Site-to-Site (S2S) VPNs, and even ExpressRoute. ExpressRoute was off the table for this proof of concept, and since the remote sites were not Azure VNETs, that left Site-to-Site VPN.

The features you have available to you for Site-to-Site VPN depend on the type of gateway devices you use on each end for routing purposes. For multi-site connections, route-based (aka dynamic) routing is required. However, the remote sites were connected to the internet using Cisco ASA devices. The Cisco ASA is a very popular Firewall/VPN that’s been around since about 2005, but it only uses policy-based (aka static) routing.

So while we could easily use a static route to connect our primary site to any SINGLE remote network using the S2S VPN, we couldn’t connect to them all simultaneously. And since we couldn’t call this a “hack” without trying to get around that very specific limitation, we tried to figure out a way to mask the static route requirement from the primary network. So how about VNET peering?

VNET peering became generally available in Azure in late 2016. Prior to its debut, connecting any network (VNET or physical) required the use of VPN gateways. With peering, Azure VNETs in the same region can be connected over the Azure backbone network. While there are limits to the number of peers a single network can have (the default is 10, the maximum is 50), you can create a pretty complex mesh of networks across different resource groups as long as they are in the same region.

So our theory to test was… what if we created a series of “proxy” VNETs to connect to the ASA devices using static routing, but then used the VNET peering feature to connect all those networks back to the primary network?

We started out by creating several “proxy” VNETs with a Gateway Subnet and an attached Virtual Network Gateway. For each corresponding physical network, we created a Local Network Gateway. (The word “local” is used here to mean “physical” or on-prem if you were sitting in your DC!) The Local Network Gateway is the Azure representation of your physical VPN device, and in this case was configured with the external IP address of the Cisco ASA.
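In Azure CLI terms, the S2S pieces for each proxy network looked roughly like this – all the names, addresses and shared key below are placeholders:

```shell
# Represent the remote site's Cisco ASA as a Local Network Gateway (placeholder values)
az network local-gateway create --resource-group hub-rg --name remote-site-1 \
  --gateway-ip-address 203.0.113.10 --local-address-prefixes 192.168.1.0/24

# Connect the proxy VNET's virtual network gateway to it with an S2S connection
az network vpn-connection create --resource-group hub-rg --name proxy1-to-site1 \
  --vnet-gateway1 proxy1-gateway --local-gateway2 remote-site-1 \
  --shared-key "PlaceholderSharedKey"
```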

Then we switched over to the VNET peering configuration. It was simple enough to create multiple peering agreements from the main VNET to the proxy ones. However, the basic setup does not account for traffic actually passing through the proxy network to the remote networks beyond. There are a couple of notable configuration options that are worth understanding and are not enabled by default.

  • Allow forwarded traffic
  • Allow gateway transit
  • Use remote gateways

The first one, allow forwarded traffic, was critical. We wanted to accept traffic from a peered VNET in order to allow traffic to pass through the proxy networks to and from the remote networks. We enabled this on both sides of the peering agreement.

The second one, allow gateway transit, allows the peer VNET to use the attached VNET gateway. We enabled this on the first proxy network agreement to allow the main VNET to direct traffic to that remote subnet beyond the proxy network.

The third one, use remote gateways, was enabled only on the main VNET agreement. This indicates to that VNET that it should use the remote gateway configured for transit.
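If you prefer the CLI to the portal, those three options map directly to flags on the peering commands. A sketch with placeholder names (newer CLI versions use --remote-vnet; older ones used --remote-vnet-id):

```shell
# Main VNET side: forward traffic and use the proxy network's gateway (placeholder names)
az network vnet peering create --resource-group hub-rg --name main-to-proxy1 \
  --vnet-name main-vnet --remote-vnet proxy1-vnet \
  --allow-forwarded-traffic --use-remote-gateways

# Proxy VNET side: forward traffic and offer its own gateway for transit
az network vnet peering create --resource-group hub-rg --name proxy1-to-main \
  --vnet-name proxy1-vnet --remote-vnet main-vnet \
  --allow-forwarded-traffic --allow-gateway-transit
```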

PeeringNet1

Once this was all set up on our first proxy network, it worked! We were able to pass traffic all the way through as expected. However, connecting to just one network with a static route was doable without all the extra pieces. We needed to get a second proxy and remote network online!

We flipped over to the configuration for the peering agreement to the second remote network. There we found we COULDN’T enable “Use Remote Gateways” because we already had a remote gateway configured with the first peering agreement. Foiled! 😦

PeeringNet2

Using a remote gateway basically overrides all the cool dynamic-ness (not an official technical term) that comes with VNET peering. It’s not possible with the current feature set of VNET peering to mask the static S2S VPNs we were trying to work around. It might be possible if we explored using a third-party VPN device in Azure or considered ExpressRoute, but that was outside the scope of the project.

Still, it was fun to try to get it to work, and I learned a bunch about some new Azure networking features. Sometimes, the learning is worth losing the battle.

Cognitive Services, IoT Hubs and Azure Functions… on Mars?!?

Are you interested in getting your feet wet with Azure IoT Hubs, Cognitive Services or Azure Functions? If so, don’t miss this chance to get hands-on with some Mars-themed challenges in a city near you!

MISSION BRIEF
At 05:14 GMT, the Joint Space Operations Network lost all contact with the Mission Mars: Fourth Horizon team as they were conducting routine sample collections on the Martian surface. The cause of the interruption is still unknown.

Your mission is to join us in reestablishing communications between the Earth and Mars.

In this free hands-on event you’ll learn the full capabilities of the Microsoft development platform while sharpening your skills in a fun, fast-paced environment. Meet our experts, develop your skills, and get a chance to put your development abilities to the test.

Microsoft experts will be on hand to take you through the following topics during the event:

  • Azure IoT Hubs – Learn how to establish bi-directional communications with billions of IoT devices.
  • Azure Functions – Dive into the event driven, compute-on-demand experience that extends the existing Azure application platform.
  • Cognitive Services – Build multi-platform apps with powerful algorithms using just a few lines of code.

Sound good? Find a city near you and accept the mission at http://missionmars.microsoft.com

Azure VM Deployments with DSC and Chocolatey

I kinda love deploying servers. Really I do. It’s been one of the consistent parts of my job as a sysadmin over the years and generally it has resulted in great amounts of satisfaction. As a technical evangelist, I still get to deploy them all the time in Azure for various tests and projects. Of course, one of the duller parts of the process is software installation. No one really enjoys watching progress bars advance when you want to get to the more useful “configuration” part of whatever you are planning.

Not that long ago, sysadmins utilized a not-quite-magical process of imaging machines to speed this up. The process still required a lot of waiting. If one was doing desktop deployments, the process was only made slightly more bearable by looking at the family photos and other trinkets left on people’s desks. Depending on the year, this imaging process was also known by the brand name of a popular piece of software – “ghosting.” If you looked up the definition of imaging or ghosting in the dictionary, you’d find that it basically meant spending hours installing and capturing the perfect combination of software, only to find one or more packages out of date the next time the image was used.

At any rate, fast forward to now and for the most part, we still have to install software on our servers to make them useful. And without a doubt, there will always be another software update. But at least we have a few ways of attempting to make the software installation part of the process a little less tedious.

I’ve been working on a project where I’ve been tackling that problem with a combination of tools for deployment of a “mostly ready to go” server in Azure. The goal was to provide a single server to improve the deployment process for small gaming shops – in particular, allow for the building of a game to be triggered from a commit on GitHub. Once built, Jenkins can be configured to copy the build artifacts to a storage location for access.  For our project, we worked with the following software requirements to support a Windows game, but there is nothing stopping you from taking this project and customizing it to your own needs.

  • Windows Server with Visual Studio
  • Jenkins
  • Unity
  • Git

I’m a big fan of ARM template deployments into Azure, since they can be easily kicked off using the Azure CLI or PowerShell. So I created a basic template that would deploy a simple network with the default Jenkins port open, a storage account and a VM. The VM uses an Azure-supplied image that already includes the current version of Visual Studio Community. (Gotcha: before deploying the ARM template, confirm that the Azure image specified in the template is still available. As new versions of Visual Studio are released, the image names can change.)

The template also takes advantage of the DSC extension to call a DSC configuration file that installs the additional software and makes some basic OS configuration changes. The DSC extension pulls the package from our GitHub repo, so if you plan to customize this deployment for yourself, you may want to clone our repo as a starting point.

You can find our working repo here; the documentation is a work in progress at the moment. The key files for this deployment are:

  • BuildServerDSCconfig.ps1.zip
  • StartHere.ps1
  • buildserverdeploy.json

Use the StartHere.ps1 PowerShell file to connect to your Azure account, set your subscription details, create a destination resource group and deploy the template. If you are more of an Azure CLI person, there are equivalent commands for that as well.
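For the curious, the heart of StartHere.ps1 is the standard AzureRM deployment flow – something like this, with placeholder names (check the script itself for the real parameters):

```powershell
# Sign in and pick the subscription to deploy into
Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId "<your-subscription-id>"

# Create a destination resource group and deploy the template (names are placeholders)
New-AzureRmResourceGroup -Name buildserver-rg -Location "West US 2"
New-AzureRmResourceGroupDeployment -ResourceGroupName buildserver-rg `
  -TemplateFile .\buildserverdeploy.json
```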

Once you deploy the buildserverdeploy.json template, BuildServerDSCconfig.ps1.zip is automatically called to do the additional software installations. Because the additional software packages come from a variety of vendors, the DSC configuration first installs Chocolatey and then installs the community-maintained versions of Jenkins, Unity and Git. (Creating the DSC configuration package with BuildServerDSCconfig.ps1 is another topic – stay tuned.)
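The Chocolatey portion of a DSC configuration like this typically follows the pattern below, using the community cChoco DSC resources. This is a hedged sketch of the pattern, not the exact contents of BuildServerDSCconfig.ps1, and the package names are illustrative:

```powershell
Configuration BuildServerConfig {
    # Community module providing Chocolatey DSC resources
    Import-DscResource -ModuleName cChoco

    Node "localhost" {
        # Install Chocolatey itself first
        cChocoInstaller InstallChoco {
            InstallDir = "C:\choco"
        }
        # Then install the community-maintained packages
        cChocoPackageInstaller InstallJenkins {
            Name      = "jenkins"
            DependsOn = "[cChocoInstaller]InstallChoco"
        }
        cChocoPackageInstaller InstallGit {
            Name      = "git"
            DependsOn = "[cChocoInstaller]InstallChoco"
        }
    }
}
```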

Once the deployment is complete, all that remains is the final configuration to meet the needs of the developers. This includes connecting to the proper GitHub repo, providing the necessary Unity credentials and licensing, and creating the deployment steps in Jenkins.

Congrats!  You’ve now created an automated CI/CD server to improve your development process.