EPS Files and Office: Not Quite Better Together

Today I learned that support for EPS files was turned off in Office applications as of April 11, 2017.  I don’t often need to embed EPS files in my documents, but I do live with a designer, and he loves to send EPS files for various family-related side projects.  As a Mac user, it’s not an issue for him, as Office 2011 and Office 2016 for Mac are not affected by this change.

If you’d like to read more about it, you can find the details here – https://support.office.com/en-us/article/Support-for-EPS-images-has-been-turned-off-in-Office-a069d664-4bcf-415e-a1b5-cbb0c334a840.

Meanwhile, I’ll go ask for some JPEGs.


Containers with Windows and Node.js

Recently, I’ve been working on a project with one of my TE colleagues, who hails from the “developer” side of the house. One of the challenges of being interested in infrastructure but less interested in writing applications is that I’m often lacking something to build infrastructure for. So this has been a great opportunity to have something to focus my Azure credits on.

So for this project, we agreed to combine some of the things she wanted to do (Internet of Things, PowerBI, Bots, etc.) with some of the things I wanted to learn more about, like Containers and Service Fabric. The result was an idea for a sensor that would detect soil humidity and air temperature (IoT) for plants, report that data to the cloud for collection (via IoT Hub and CosmosDB) and make that data available via PowerBI for review. Ideally, having a Bot that lets me know when my plants need watering would really help with my lack of a green thumb. 🙂

As part of this, we needed to deploy an API that took the data from the IoT Hub and moved it to the database. We also needed a front-end web application to show the collected information. Both of these applications were going to be written in Node.js.

Now before you start tearing apart what is clearly going to be overkill for a project this size, keep in mind we know we can do all of this with PaaS offerings. But that would be less “fun”! You can check out the project at https://github.com/jcocchi/IoTPlantWatering and see that we’ve listed out many of the possible architecture scenarios. However, this post is about putting one of those Node.js applications in a container.

Step 1: Get Node.js onto a Windows Server Core container

Now, you’ll find plenty of information on the Web about creating a Docker container with Node.js, particularly if you’d like to run it on Linux. Combine that with the fact that Node.js is most easily installed on Windows with the MSI, and you’ll find a lot less documentation about getting it into a Windows container. However, I came across this somewhat dated documentation and sample, which got me started. It’s circa November 2016, which is a lifetime ago at this point, and references Server 2016 TP3, back when Microsoft offered a choice between managing Windows containers with PowerShell or Docker. I edited the HybridInstaller.ps1 script to download the latest version of Node.js and then followed the rest of the instructions in the “docker-managed” section.

The key bits are to download the HybridInstaller.ps1 and dockerfile to a new folder, then run:

docker build -t windowswithnodejs:v1 C:\YOUR\FOLDER

You’ll end up with an image tagged “windowswithnodejs:v1” that you can then use as a base for the next steps.
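For reference, here’s a minimal sketch of what that dockerfile pattern looks like – treat it as illustrative rather than a copy of the real file from the sample (the base image name is the Server Core image current at the time, and HybridInstaller.ps1 is the script from the sample):

# Sketch of a dockerfile that bakes Node.js into Windows Server Core.
# Assumes HybridInstaller.ps1 sits in the same folder as this dockerfile.
FROM microsoft/windowsservercore

# Copy the installer script into the image and run it to download and install Node.js
COPY HybridInstaller.ps1 /HybridInstaller.ps1
RUN powershell -ExecutionPolicy Bypass -File C:\HybridInstaller.ps1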

Step 2: Make Sure Node Is Actually Installed

At this point I had a local image available to run, and I wanted to make sure that I really had installed Node.js correctly.  For that, you can find some handy instructions here for connecting interactively to a Windows container. The whole wiki is actually very informative if you are new to Windows containers.
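The quick version of that sanity check looks something like this (one of several ways to do it):

docker run -it windowswithnodejs:v1 cmd
node -v

The first command drops you into a cmd prompt inside a new container; the second, run at that prompt, should report the Node.js version you baked into the image.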

Step 3: Install A Node.js Application

We have two Node.js applications in our project, but I started with the simpler of the two – the RecieveHubMessages app. My project partner had nicely detailed the installation process and dependencies, so I was able to clone the application code to my desktop, create the necessary .env file (because you don’t want your secrets in GitHub!) and put together a dockerfile to build a fresh image based on my image with Node.js already installed – a sketch follows.  The process is exactly the same as Step 1 (above), just using docker build with a different dockerfile and a folder with the right application code in it.
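This step is where the windowswithnodejs:v1 base image pays off. A hypothetical sketch of the app dockerfile – the folder layout, port and entry point below are placeholders, since the real dockerfile depends on the app:

# Build the app image on top of the Node.js base image from Step 1
FROM windowswithnodejs:v1

# Copy the application code (including its .env file) into the image
COPY . /app
WORKDIR /app

# Install dependencies and define how the app starts
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]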

After this was complete, I ran a container with this new image, connected to it and confirmed that the application was running. Since our goal was to be able to deploy this application in Azure, I also created an Azure Container Registry to host the image. From there, I was able to deploy it to Azure Container Services (using Kubernetes) and Azure Service Fabric.  (More later on the differences between ACS and Service Fabric.)
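If you haven’t pushed to an Azure Container Registry before, the flow from the Docker CLI looks roughly like this – the registry and image names here are made up, so substitute your own:

docker login myregistry.azurecr.io -u <username> -p <password>
docker tag receivehubmessages:v1 myregistry.azurecr.io/receivehubmessages:v1
docker push myregistry.azurecr.io/receivehubmessages:v1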

The Network “Hack” that Wasn’t To Be

Sometimes an idea looks great on paper but doesn’t really work out when you try to configure it. And often, the only way to be sure is to break out the good old scientific method and try. So I tried. And it didn’t work, so I’m putting it here in case you get a similar wild idea in the near future.

The goal was to start with a primary VNET in Azure for some VMs. This network was going to act as a collection point for data coming in from a number of remote physical sites all over the world. In addition, some machines on the primary network would need to send configuration data to the remote sites. Ultimately, we were looking at a classic hub and spoke network design, with an Azure VNET in the center.

[Diagram: basic hub-and-spoke network]

There are several ways you can do this with Azure networking: VNET peering between Azure VNETs, Site-to-Site (S2S) VPNs, and even ExpressRoute. ExpressRoute was off the table for this proof of concept, and since the remote sites were not Azure VNETs, that left Site-to-Site VPN.

The features available to you for Site-to-Site VPN depend on the type of gateway device you use on each end for routing purposes. For multi-site connections, route-based (aka dynamic) routing is required. However, the remote sites were connected to the internet using Cisco ASA devices. The Cisco ASA is a very popular firewall/VPN appliance that’s been around since about 2005, but it only supports policy-based (aka static) routing.

So while we could easily use a static route to connect our primary site to any SINGLE remote network over a S2S VPN, we couldn’t connect to them all simultaneously. And since we couldn’t call this a “hack” without trying to get around that very specific limitation, we tried to figure out a way to mask the static route requirement from the primary network. So how about VNET peering?

VNET peering became generally available in Azure in late 2016. Prior to its debut, connecting any networks (VNET or physical) required the use of VPN gateways. With peering, Azure VNETs in the same region can be connected over the Azure backbone network. While there are limits on the number of peers a single network can have (the default is 10 and the max is 50), you can create a pretty complex mesh of networks across different resource groups as long as they are in the same region.

So our theory to test was… what if we created a series of “proxy” VNETs to connect to the ASA devices using static routing, but then used the VNET peering feature to connect all those networks back to the primary network?

[Diagram: proxy VNETs peered to the primary VNET]

We started out by creating several “proxy” VNETs, each with a Gateway Subnet and an attached Virtual Network Gateway. For each corresponding physical network, we created a Local Network Gateway. (The word “local” is used here to mean “physical,” or on-prem if you were sitting in your DC!) The Local Network Gateway is the Azure representation of your physical VPN device, and in this case it was configured with the external IP address of the Cisco ASA.

Then we switched over to the VNET peering configuration. It was simple enough to create multiple peering agreements from the main VNET to the proxy ones. However, the basic setup does not account for traffic actually passing through the proxy network to the remote networks beyond. There are a couple of notable configuration options that are worth understanding and are not enabled by default:

  • Allow forwarded traffic
  • Allow gateway transit
  • Use remote gateways

The first one, allow forwarded traffic, was critical. We wanted to accept traffic from a peered VNET in order to allow traffic to pass through the proxy networks to and from the remote networks. We enabled this on both sides of the peering agreement.

The second one, allow gateway transit, allows the peer VNET to use the attached VNET gateway. We enabled this on the first proxy network agreement to allow the main VNET to direct traffic to that remote subnet beyond the proxy network.

The third one, use remote gateways, was enabled only on the main VNET agreement. This indicates to that VNET that it should use the remote gateway configured for transit.
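If you’d rather script this than click through the portal, all three options map to switches on the Add-AzureRmVirtualNetworkPeering cmdlet. A sketch of both sides of the first agreement – the VNET and resource group names below are hypothetical:

# Main VNET side: accept forwarded traffic and use the proxy network's VPN gateway
$mainVnet  = Get-AzureRmVirtualNetwork -Name "MainVnet" -ResourceGroupName "HubSpokeRG"
$proxyVnet = Get-AzureRmVirtualNetwork -Name "ProxyVnet1" -ResourceGroupName "HubSpokeRG"

Add-AzureRmVirtualNetworkPeering -Name "MainToProxy1" -VirtualNetwork $mainVnet `
  -RemoteVirtualNetworkId $proxyVnet.Id -AllowForwardedTraffic -UseRemoteGateways

# Proxy VNET side: accept forwarded traffic and let the peer transit this gateway
Add-AzureRmVirtualNetworkPeering -Name "Proxy1ToMain" -VirtualNetwork $proxyVnet `
  -RemoteVirtualNetworkId $mainVnet.Id -AllowForwardedTraffic -AllowGatewayTransit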

[Diagram: peering through the first proxy network]

Once this was all set up on our first proxy network, it worked! We were able to pass traffic all the way through as expected. However, connecting to just one network with a static route was doable without all the extra pieces. We needed to get a second proxy and remote network online!

We flipped over to the configuration for the peering agreement to the second remote network. There we found we COULDN’T enable “Use Remote Gateways” because we already had a remote gateway configured with the first peering agreement. Foiled! 😦

[Diagram: second peering agreement – Use Remote Gateways blocked]

Using a remote gateway basically overrides all the cool dynamic-ness (not an official technical term) that comes with VNET peering. It’s not possible with the current feature set of VNET peering to mask the static S2S VPNs we were trying to work around. It may be possible if we wanted to explore using a 3rd party VPN device in Azure or consider ExpressRoute, but that was outside of the scope of the project.

Still, it was fun to try to get it to work, and I learned a bunch about some new Azure networking features.  Sometimes, the learning is worth losing the battle.

Cognitive Services, IoT Hubs and Azure Functions… on Mars?!?

Are you interested in getting your feet wet with Azure IoT Hubs, Cognitive Services or Azure Functions?  If so, don’t miss this chance to get hands-on with some Mars-themed challenges in a city near you!

MISSION BRIEF
At 05:14 GMT, the Joint Space Operations Network lost all contact with the Mission Mars: Fourth Horizon team as they were conducting routine sample collections on the Martian surface. The cause of the interruption is still unknown.

Your mission is to join us in reestablishing communications between the Earth and Mars.

In this free hands-on event you’ll learn the full capabilities of the Microsoft development platform while sharpening your skills in a fun, fast-paced environment. Meet our experts, develop your skills, and get a chance to put your development abilities to the test.

Microsoft experts will be on hand to take you through the following topics during the event:

  • Azure IoT Hubs – Learn how to establish bi-directional communications with billions of IoT devices.
  • Azure Functions – Dive into the event driven, compute-on-demand experience that extends the existing Azure application platform.
  • Cognitive Services – Build multi-platform apps with powerful algorithms using just a few lines of code.

Sound good? Find a city near you and accept the mission at http://missionmars.microsoft.com

Azure VM Deployments with DSC and Chocolatey

I kinda love deploying servers. Really, I do. It’s been one of the consistent parts of my job as a Sysadmin over the years, and generally it has resulted in great amounts of satisfaction. As a technical evangelist, I still get to deploy them all the time in Azure for various tests and projects.  Of course, one of the duller parts of the process is software installation.  No one really enjoys watching progress bars advance when you want to get to the more useful “configuration” part of whatever you are planning.

Not that long ago, Sysadmins utilized a not-quite-magical process of imaging machines to speed this up.  The process still required a lot of waiting.  If one was doing desktop deployments, the process was only made slightly more bearable by looking at the family photos and other trinkets left on people’s desks. Depending on the year one was working, this imaging process was also known by the brand name of a popular software package – “ghosting.” If you looked up the definition of imaging or ghosting in the dictionary, you’d find that it basically meant spending hours installing and capturing the perfect combination of software, only to find one or more packages out of date the next time the image was used.

At any rate, fast forward to now and for the most part, we still have to install software on our servers to make them useful. And without a doubt, there will always be another software update. But at least we have a few ways of attempting to make the software installation part of the process a little less tedious.

I’ve been working on a project where I’ve been tackling that problem with a combination of tools for deployment of a “mostly ready to go” server in Azure. The goal was to provide a single server to improve the deployment process for small gaming shops – in particular, allow for the building of a game to be triggered from a commit on GitHub. Once built, Jenkins can be configured to copy the build artifacts to a storage location for access.  For our project, we worked with the following software requirements to support a Windows game, but there is nothing stopping you from taking this project and customizing it to your own needs.

  • Windows Server with Visual Studio
  • Jenkins
  • Unity
  • Git

I’m a big fan of ARM Template deployments into Azure, since they can be easily kicked off using the Azure CLI or PowerShell. So I created a basic template that would deploy a simple network with the default Jenkins port open, a storage account and a VM. The VM would use an Azure-supplied image that already includes the current version of Visual Studio Community. (Gotcha: Before deploying the ARM template, confirm that the Azure image specified in the template is still available. As new versions of Visual Studio are released, the image names can change.)
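You can check what’s currently published with a quick image query – this assumes the Visual Studio images still live under the MicrosoftVisualStudio publisher, which was the case when I wrote this, but verify for yourself:

# List the Visual Studio image SKUs available in a region
Get-AzureRmVMImageSku -Location "West US" -PublisherName MicrosoftVisualStudio -Offer VisualStudio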

The template also takes advantage of the DSC extension to call a DSC configuration file that installs the additional software and makes some basic OS configuration changes. The DSC extension pulls the package from our GitHub repo, so if you plan to customize this deployment for yourself, you may want to clone our repo as a starting point.

You can find our working repo here and the documentation is a work in progress at the moment.   The key files for this deployment are:

  • BuildServerDSCconfig.ps1.zip
  • StartHere.ps1
  • buildserverdeploy.json

Use the StartHere.ps1 PowerShell file to connect to your Azure account, set your subscription details, create a destination resource group and deploy the template.  If you are more of an Azure CLI type of person, there are equivalent commands for that as well.
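If you just want the gist, StartHere.ps1 boils down to the usual template deployment pattern – a condensed sketch, with the real details in the script in the repo (the resource group name below is just an example):

Login-AzureRmAccount
Set-AzureRmContext -SubscriptionId "<your-subscription-id>"

$RGName = "GameBuildRG"   # hypothetical resource group name
New-AzureRmResourceGroup -Name $RGName -Location "West US"
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile .\buildserverdeploy.json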

Once you deploy the buildserverdeploy.json template, the BuildServerDSCconfig.ps1.zip is automatically called to do the additional software installations.  Because the additional software packages come from a variety of vendors, the DSC configuration first installs Chocolatey and then installs the community-maintained versions of Jenkins, Unity and Git, as sketched below. (Creating the DSC configuration package with BuildServerDSCconfig.ps1 is another topic – stay tuned.)
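To give you a feel for it, here’s a minimal sketch of a Chocolatey-driven DSC configuration using the community cChoco resource module – illustrative only, and the actual BuildServerDSCconfig.ps1 in the repo is the source of truth:

Configuration BuildServerSketch {
    Import-DscResource -ModuleName cChoco   # community Chocolatey DSC resources

    Node "localhost" {
        # Bootstrap Chocolatey itself first
        cChocoInstaller InstallChoco {
            InstallDir = "C:\choco"
        }
        # Then install the community-maintained packages (Unity omitted for brevity)
        cChocoPackageInstaller InstallJenkins {
            Name      = "jenkins"
            DependsOn = "[cChocoInstaller]InstallChoco"
        }
        cChocoPackageInstaller InstallGit {
            Name      = "git"
            DependsOn = "[cChocoInstaller]InstallChoco"
        }
    }
}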

Once the deployment is complete, all that remains is for the final configuration to be set up to meet the needs of the developers.  This includes connecting to the proper GitHub repo, providing the necessary Unity credentials and licensing and creating the deployment steps in Jenkins.

Congrats!  You’ve now created an automated CI/CD server to improve your development process.

DevCamps for Azure and O365 for public sector companies

Do you work in the health, government, public safety or education industries, and are you looking to learn more about Azure, Office 365 and DevOps?  This free training from Microsoft is 300-level and geared toward partners who want to learn more about coding on these Microsoft platforms using .NET, Node.js, and Java.

The course is crafted as a combination of lecture and lab that puts you on track to explore new Microsoft integrations, Modern Cloud Apps, Infrastructure as Code, Azure Active Directory, and much more!

You can register here for the upcoming one in Silicon Valley on January 24th and 25th.


Hack Day at the SF Reactor – 1/10/17

Start the New Year off right by learning how you can improve your company’s workflow. Sit down with Microsoft technical experts who will help you automate your most repetitive tasks through open source technology. Spend the afternoon with us coding side by side where we will show you how you could automate your build process, create a chat bot to answer frequently asked questions, or scale your existing app through Azure.

As part of the hackfest, you will hear from the Microsoft Technical Evangelism team, who will show you how you could:

    • Build great bots that converse where your users are
    • Build an automated, continuous integration Jenkins pipeline to help get your applications to market faster
    • Build Docker containers and scale through Azure app services

1:00 PM-5:00 PM, Microsoft Reactor, San Francisco, CA 94107 — A detailed agenda will be shared prior to the event.

You *must* RSVP by Friday, January 6th to dxhacksfest@microsoft.com.

We’ll help implement your existing project or create a new one. Our goal is to help you with your project so you feel empowered to keep working to refine your scenarios at work.

REMINDER: Bring your laptop and create a production-ready prototype or proof of concept by the end of the day. An Azure subscription is required; we can help you get started with a free trial if needed.


PacITPros November Meeting Resources

This month at the PacITPros meeting, I covered several of the ways you can create virtual machines in Azure and get started with Infrastructure as Code. For a quick summary of the various approaches, check out this bit of Azure documentation.

If you need to get started with Azure and don’t have a subscription yet, break out your Microsoft Account (aka LiveID, Hotmail account, etc.) and sign up for one of the free trial options.

Since I dove deeper into doing these deployments with ARM templates and JSON, here are some of the resources you might want to revisit (or check out if you missed the meeting).

Here is an example of the PowerShell command for getting an updated list of images for Windows Server or Ubuntu Server in a particular region:

Get-AzureRmVMImageSku -Location "West US" -PublisherName MicrosoftWindowsServer -Offer WindowsServer
Get-AzureRmVMImageSku -Location "West US" -PublisherName Canonical -Offer UbuntuServer

Finally, here is my “cheat sheet” of PowerShell commands to connect to your Azure account and deploy a template.

Login-AzureRmAccount
Get-AzureRmSubscription

$VSSubID = "7ebd5b2e-7d4b-49xe-bab6-4d593ed76x41" # copy the sub-id from the previous command
Set-AzureRmContext -SubscriptionID $VSSubID

# get the "raw" URL from GitHub for the template you want to deploy and break it up as follows:

$assetLocation = "https://raw.githubusercontent.com/techbunny/Templates/master/multiple_nano_server/" 
$templateFileURI  = $assetLocation + "nanoslabdeploy.json"  # deployment template
$parameterFileURI = $assetLocation + "SVCC.parameters.json" # parameters file

$RGName = "PacITPros"  # set your resource group name
New-AzureRmResourceGroup -Name $RGName -Location "West US"  # create your resource group

# deploy the template!
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateParameterUri $parameterFileURI -TemplateUri $templateFileURI -verbose

Deconstructing JSON: Tale of Two VNETs (Linked templates with VNET peering!)

The last month or so has been packed with announcements and training! I’ve been to Ignite in Atlanta, as well as some internal training and some fun community events.  Finally, I’ve had some time to sit down and work on trying out a few things.  If you missed the announcement around Ignite time, Azure VNET Peering is now generally available.   With peering, you can now link virtual networks together without having to set up multiple VNET gateways.

This peering feature can be set up using the Azure Portal, but what fun is that, right?  Let’s do it with some ARM templates instead.  My goal was to create two VNETs with different address spaces (you can’t peer networks with an overlapping address space) then peer them together.  I could do this with one big template, but I wanted to also take some time to try out linking templates together – where one parent template calls others.  I also wanted to take advantage of parameters files to create the different VNETs.

For this example, I ended up with five JSON files:

  • azuredeploy.json – The deployment template for one VNET
  • vnet1.parameters.json – The parameters file for VNET1
  • vnet2.parameters.json – The parameters file for VNET2
  • peeringdeploy.json – Template to peer together the networks once created
  • parentdeploy.json – The template used to manage the complete deployment

Within the parentdeploy.json file, we only need to define the schema and resources sections, and the only resource I’m calling is the “Microsoft.Resources/deployments” type.  Within that, you’ll need to define a name, mode, template link (located in a repo or blob storage) and an optional parameters link.  For this deployment, I’m calling the deployment resource three times – once for each VNET, plus a final time to peer them. In the snippet below, you can see that I called the azuredeploy.json file and the vnet1.parameters.json file.

 { 
     "apiVersion": "2015-01-01", 
     "type": "Microsoft.Resources/deployments", 
     "name": "linkedTemplateA", 
     "properties": { 
       "mode": "Incremental", 
       "templateLink": {
          "uri":"https://raw.githubusercontent.com/techbunny/Templates/master/two_vnets_same_region/azuredeploy.json",
          "contentVersion":"1.0.0.0"
       }, 
       "parametersLink": { 
          "uri":"https://raw.githubusercontent.com/techbunny/Templates/master/two_vnets_same_region/vnet1.parameters.json",
          "contentVersion":"1.0.0.0"
       } 
     } 
  },

The second resource uses the same template link with the vnet2.parameters.json file.  Finally, the last one only calls the peeringdeploy.json template, with no parameters file needed. Each deployment resource needs its own unique name, and you can’t reference anything directly that isn’t included in the template itself.   There are also ways to share state between linked templates to be even more “elegant” in your template creation.

Within the peeringdeploy.json template, we only need to define the resources that link the newly created VNETs together. In the snippet below, you can see BigNetA (created with the vnet1 parameters) being connected to BigNetB (created with the vnet2 parameters).

 {
    "apiVersion": "2016-06-01",
    "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
    "name": "BigNetA/LinkToBigNetB",
    "location": "[resourceGroup().location]",
    "properties": {
       "allowVirtualNetworkAccess": true,
       "allowForwardedTraffic": false,
       "allowGatewayTransit": false,
       "useRemoteGateways": false,
       "remoteVirtualNetwork": {
          "id": "[resourceId('Microsoft.Network/virtualNetworks', 'BigNetB')]"
       }
    }
 }
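Peerings are directional, so the template also needs the mirror-image resource pointing back from BigNetB to BigNetA – roughly:

 {
    "apiVersion": "2016-06-01",
    "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
    "name": "BigNetB/LinkToBigNetA",
    "location": "[resourceGroup().location]",
    "properties": {
       "allowVirtualNetworkAccess": true,
       "allowForwardedTraffic": false,
       "allowGatewayTransit": false,
       "useRemoteGateways": false,
       "remoteVirtualNetwork": {
          "id": "[resourceId('Microsoft.Network/virtualNetworks', 'BigNetA')]"
       }
    }
 }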

Finally, as with all my previous templates, I can deploy the whole thing with just one line of PowerShell!

 New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateUri $templateFileURI -verbose
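Here, $templateFileURI points at the parent template. A minimal setup, reusing the repo path from the snippets above (the resource group name is whatever you choose):

$RGName = "TwoVnetDemo"
New-AzureRmResourceGroup -Name $RGName -Location "West US"
$templateFileURI = "https://raw.githubusercontent.com/techbunny/Templates/master/two_vnets_same_region/parentdeploy.json"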