Service Fabric, Containers and Open Networking Mode

In case you haven’t noticed, deploying applications in containers is the way of the future for a lot of workloads.  Containers can potentially solve a lot of problems that have plagued developers and operations teams for decades, but the extra layer of abstraction can also bring new challenges.

I often deploy Windows containers to Service Fabric, not only because it’s a nifty orchestrator, but also because it provides a greater array of options for modernizing Windows workloads: you can run Service Fabric on-prem as well as in Azure to support hybrid networking and other business requirements.

You can quickly create a Service Fabric cluster in Azure with the portal, and Visual Studio’s project wizard can get you started deploying existing containers to a cluster pretty quickly. But as with anything in the technology space, what comes out of the box might not do exactly what you need.

In the case of a recent project, I wanted to be able to deploy more instances of a container than I had nodes in my cluster.  By default, Service Fabric will deploy one instance of an application to each node until you’ve placed that application on all nodes.  However, depending on what your container does, you might want to double or triple up.  This is accomplished with two things: open networking and partitions.

You can get the majority of the way with the documentation about container networking modes on Service Fabric.  You’ll need to make some changes to your Service Fabric deployment template, including parts that issue each node in your VM Scale Set additional IP addresses on your subnet.  Each container deployed will get one of these IP addresses. Then you will need to make some changes to your application and service manifest files, including setting the networking mode to “open” and adjusting how you handle port bindings.
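As a rough sketch of the manifest side of that change (the element names follow the Service Fabric schema, but the package name and version here are placeholders, not my actual project’s):

```xml
<!-- ApplicationManifest.xml (sketch): switch the container to open networking.
     "MyContainerTypePkg" is a placeholder for your service manifest name. -->
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="MyContainerTypePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <NetworkConfig NetworkType="Open" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>
```

With open networking, each container gets its own IP address, so the endpoint in the service manifest maps to the container’s port directly rather than through a host port binding.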

Because your application is really a container, it’s deployed as a stateless service.  Most of the Service Fabric documentation talks about partitions in relation to stateful services, and it’s a bit unclear how to apply that to stateless ones.

Within your application manifest, you’ll need to edit your service instance to use either the named or ranged partition type, instead of the default “SingletonPartition”.  I prefer the ranged version as it’s much easier to adjust the partition count, but I admittedly don’t have a good understanding of how the low and high keys apply to containers when they aren’t actually using those ranges to distribute data.

Named Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
  <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
    <NamedPartition>
      <Partition Name="one" />
      <Partition Name="two" />
      <Partition Name="three" />
      <Partition Name="four" />
    </NamedPartition>
  </StatelessService>
</Service>

Ranged Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
  <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
    <UniformInt64Partition PartitionCount="4" LowKey="1" HighKey="10" />
  </StatelessService>
</Service>

Once you’ve made all these changes, Service Fabric will deploy containers equal to the number of instances multiplied by the partition count, up to the available number of IP addresses.  So two instances of four partitions will be eight containers and eight IP addresses.  Keep in mind that if a deployment exceeds the number of IP addresses you have available, you will have errors.  Based on my testing so far, I don’t recommend trying to max out your available IP addresses; there seems to be a need for a little wiggle room for scaling operations.
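That arithmetic is worth sanity-checking before you deploy; a trivial sketch, using the example numbers from this post (the IP count is hypothetical):

```python
# One container per (instance, partition) pair, and one IP per container.
instance_count = 2
partition_count = 4
available_ips = 10   # hypothetical: extra IPs issued across the VM Scale Set

containers = instance_count * partition_count
print(containers)    # -> 8

# Leave some wiggle room for scaling operations rather than maxing out.
assert containers <= available_ips, "deployment would exceed available IP addresses"
```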


The Network “Hack” that Wasn’t To Be

Sometimes the idea looks great on paper but doesn’t really work out when you try to configure it. And often, the only way to be sure is to break out the good old scientific method and try. So I tried. And it didn’t work, so I’m putting it here in case you get a similar wild idea in the near future.

The goal was to start with a primary VNET in Azure for some VMs. This network was going to act as a collection point for data coming in from a number of remote physical sites all over the world. In addition, some machines on the primary network would need to send configuration data to the remote sites. Ultimately, we were looking at a classic hub and spoke network design, with an Azure VNET in the center.

There are several ways you can do this using Azure networking, VNET peering between Azure VNETs, Site-to-Site (S2S) VPNs, and even ExpressRoute. ExpressRoute was off the table for this proof of concept, and since the remote sites were not Azure VNETs, that left Site-to-Site VPN.

The features you have available to you for Site-to-Site VPN depend on the type of gateway devices you use on each end for routing purposes. For multi-site connections, route-based (aka dynamic) routing is required. However, the remote sites were connected to the internet using Cisco ASA devices. The Cisco ASA is a very popular Firewall/VPN that’s been around since about 2005, but it only uses policy-based (aka static) routing.

So while we could easily use a static route to connect our primary site to any SINGLE remote network using the S2S VPN, we couldn’t connect to them all simultaneously. And since we couldn’t call this a “hack” without trying to get around that very specific limitation, we tried to figure out a way to mask the static route requirement from the primary network. So how about VNET Peering?

VNET Peering became generally available in Azure in late 2016. Prior to its debut, the ability to connect any network (VNET or physical) required the use of the VPN gateways. With peering, Azure VNETs in the same region can be connected using the Azure backbone network. While there are limits to the number of peers a single network can have (default is 10, max limit is 50) you can create a pretty complex mesh of networks in different resource groups as long as they are in the same region.

So our theory to test was: what if we created a series of “proxy” VNETs to connect to the ASA devices using static routing, then used the VNET Peering feature to connect all those networks back to the primary network?

We started out by creating several “proxy” VNETs with a Gateway Subnet and an attached Virtual Network Gateway. For each corresponding physical network, we created a Local Network Gateway. (The word “local” is used here to mean “physical” or on-prem if you were sitting in your DC!) The Local Network Gateway is the Azure representation of your physical VPN device, and in this case was configured with the external IP address of the Cisco ASA.

Then we switched over to the VNET Peering configuration. It was simple enough to create multiple peering agreements from the main VNET to the proxy ones. However, the basic setup does not account for wanting to have traffic actually pass through the proxy network to the remote networks beyond. There are a couple of notable configuration options that are worth understanding and are not enabled by default.

  • Allow forwarded traffic
  • Allow gateway transit
  • Use remote gateways

The first one, allow forwarded traffic, was critical. We wanted to accept traffic from a peered VNET in order to allow traffic to pass through the proxy networks to and from the remote networks. We enabled this on both sides of the peering agreement.

The second one, allow gateway transit, allows the peer VNET to use the attached VNET gateway. We enabled this on the first proxy network agreement to allow the main VNET to direct traffic to that remote subnet beyond the proxy network.

The third one, use remote gateways, was enabled only on the main VNET agreement. This indicates to that VNET that it should use the remote gateway configured for transit.
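Taken together, the main VNET’s half of a peering agreement ended up looking roughly like this ARM fragment (the VNET names here are hypothetical; the proxy side flips the two gateway settings):

```json
{
  "apiVersion": "2016-06-01",
  "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
  "name": "MainNet/LinkToProxyNet1",
  "properties": {
    "allowVirtualNetworkAccess": true,
    "allowForwardedTraffic": true,
    "allowGatewayTransit": false,
    "useRemoteGateways": true,
    "remoteVirtualNetwork": {
      "id": "[resourceId('Microsoft.Network/virtualNetworks', 'ProxyNet1')]"
    }
  }
}
```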


Once this was all set up on our first proxy network, it worked! We were able to pass traffic all the way through as expected. However, connecting to just one network with a static route was doable without all the extra configuration. We needed to get a second proxy and remote network online!

We flipped over to the configuration for the peer agreement to the second remote network. There we found we COULDN’T enable “Use Remote Gateways” because we already had a remote gateway configured with the first peering agreement. Foiled! 😦


Using a remote gateway basically overrides all the cool dynamic-ness (not an official technical term) that comes with VNET peering. It’s not possible with the current feature set of VNET peering to mask the static S2S VPNs we were trying to work around. It may be possible if we wanted to explore using a 3rd party VPN device in Azure or consider ExpressRoute, but that was outside of the scope of the project.

Still, it was fun to try to get it to work, and I learned a bunch about some new Azure networking features.  Sometimes, the learning is worth losing the battle.

Deconstructing JSON: Tale of Two VNETs (Linked templates with VNET peering!)

The last month or so has been packed with announcements and training! I’ve been to Ignite in Atlanta, as well as some internal training and some fun community events.  Finally, I’ve had some time to sit down and work on trying out a few things.  If you missed the announcement around Ignite time, Azure VNET Peering is now generally available.   With peering, you can now link virtual networks together without having to set up multiple VNET gateways.

This peering feature can be set up using the Azure Portal, but what fun is that, right?  Let’s do it with some ARM templates instead.  My goal was to create two VNETs with different address spaces (you can’t peer networks with an overlapping address space) then peer them together.  I could do this with one big template, but I wanted to also take some time to try out linking templates together – where one parent template calls others.  I also wanted to take advantage of parameters files to create the different VNETs.

For this example, I ended up with five JSON files:

  • azuredeploy.json – The deployment template for one VNET
  • vnet1.parameters.json – The parameters file for VNET1
  • vnet2.parameters.json – The parameters file for VNET2
  • peeringdeploy.json – Template to peer together the networks once created
  • parentdeploy.json – The template used to manage the complete deployment

Within the parentdeploy.json file, we only need to define the schema and resources sections, and the only resource I’m calling is the “Microsoft.Resources/deployments” type.  Within that, you’ll need to define a name, mode, template link (located in a repo or blob storage) and an optional parameters link.  For this deployment, I’m calling the deployment resource three times – once for each VNET, plus a final time to peer them. In the snippet below, you can see that I called the azuredeploy.json file and the vnet1.parameters.json file.

     "apiVersion": "2015-01-01",
     "type": "Microsoft.Resources/deployments",
     "name": "linkedTemplateA",
     "properties": {
       "mode": "Incremental",
       "templateLink": {
         "uri": "<URL to azuredeploy.json in your repo or blob storage>"
       },
       "parametersLink": {
         "uri": "<URL to vnet1.parameters.json>"
       }
     }
The second resource uses the same template link with the vnet2.parameters.json file.  Finally, the last one calls only the peeringdeploy.json template, with no parameters file needed. Each deployment resource needs its own unique name, and you can’t reference anything directly that isn’t included in the template itself.   There are also ways to share state between linked templates to be even more “elegant” in your template creation.
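As a sketch of that state-sharing pattern (the output name here is hypothetical): a linked template declares an output, and the parent reads it once that deployment resource completes.

```json
"outputs": {
  "vnetId": {
    "type": "string",
    "value": "[resourceId('Microsoft.Network/virtualNetworks', parameters('vnetName'))]"
  }
}
```

The parent template could then pass "[reference('linkedTemplateA').outputs.vnetId.value]" as a parameter to a later deployment resource.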

Within the peeringdeploy.json template we also only need to define resources which link the newly created VNETs together. In the snippet below, you can see BigNetA (created with vnet1 parameters) being connected to BigNetB (created with vnet2 parameters).

    "apiVersion": "2016-06-01",
    "type": "Microsoft.Network/virtualNetworks/virtualNetworkPeerings",
    "name": "BigNetA/LinkToBigNetB",
    "location": "[resourceGroup().location]",
    "properties": {
      "allowVirtualNetworkAccess": true,
      "allowForwardedTraffic": false,
      "allowGatewayTransit": false,
      "useRemoteGateways": false,
      "remoteVirtualNetwork": {
        "id": "[resourceId('Microsoft.Network/virtualNetworks', 'BigNetB')]"
      }
    }

Finally, as with all my previous templates, I can deploy the whole thing with just one line of PowerShell!

 New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateUri $templateFileURI -verbose


Microsoft Cloud Networking – The Poster!

If there’s one thing I’ve learned while doing IT, it’s that nothing is really simple.  And with the addition of the “cloud” when it comes to providing services for your business or your customers, things just keep getting more complicated.  Remember the days when you had one nice T1 coming into your office and all was right in the world?  (Ah… the good ol’ days….)

Anyway, sometimes when you are looking at changing the networking for your company to support more cloud offerings like Office365 or moving workloads into Azure, you could really use a cheat sheet to get you pointed in the right direction and give you some insight into what you might need to research further as you work on your design.  Enter the “Microsoft Cloud Networking for Enterprise Architects” poster!

This printable, 8-page poster (it fits on legal paper) gives you the skinny on the design components and considerations for networking in the cloud age.  Topics covered are:

  • Evolving your network for cloud connectivity – Cloud migration requires changes to the volume and nature of traffic flows within and outside a corporate network.
  • Common elements of Microsoft cloud connectivity – Integrating your networking with the Microsoft cloud provides optimal access to a broad range of services.
  • Designing networking for Microsoft SaaS (Office 365, Microsoft Intune, and Dynamics CRM Online) – Optimizing your network for Microsoft SaaS services requires careful analysis of your Internet edge, your client devices, and typical IT operations.
  • Designing networking for Azure PaaS – Optimizing networking for Azure PaaS apps requires adequate Internet bandwidth and can require the distribution of network traffic across multiple sites or apps.
  • Designing networking for Azure IaaS – Optimizing networking for IT workloads hosted in Azure IaaS requires an understanding of Azure virtual networks (VNets), address spaces, routing, DNS, and load balancing.

There are several other posters that might be of interest as well, focusing on identity, security and storage. Find them with some other handy resources for IT architecture.

The Imperfect Lab: Azure Networking – Two Ways

Around this time last year, I kicked off my “Imperfect Lab” and used it as a story to play around in Azure and get more comfortable with PowerShell. And then I got busy with some other work priorities (as we all do) and I shut down those VMs, with the hopes of dusting them off in the future to continue with more learning.

At any rate, with all the changes to Azure in the last year, it’s really time to reboot the Imperfect Lab and give it a new shine, using some of the fresh new tools – particularly the “new” Portal, Azure Resource Manager (ARM), Azure PowerShell 1.0 and Templates.

Let’s recap what I have to start with (all in “classic” Azure Service Manager):

  • A cloud service and related virtual network
  • Two domain controllers (one using the minimal interface and one running core)
  • One member server that runs the AD Sync service
  • Traditional AD synced to Azure AD

So now where to begin?

When using ARM, it’s no longer possible to create a VM resource without a virtual network, so it seemed fitting for me to start with the network.  It’s also not possible to mix ASM and ARM resources, so I’ll be using this network to deploy all the lab VMs I’ll be using in ARM going forward. For those of you who aren’t familiar with old-school Azure, the classic mode (aka Azure Service Manager or ASM) made it possible to create resources in a cloud service without a user-manageable virtual network.

One of the other tasks that was difficult using ASM was programmatically creating and updating networking. It required downloading and editing an XML file, and I found that generally distasteful. With ARM, you’ve got two options – straight up PowerShell or an ARM Template.

If you don’t know where to begin with an ARM Template, you can check out this repository of Azure Quickstart Templates. To create a basic network with two subnets, I used this one –

You can deploy this template using the Azure portal (which will allow you to adjust the parameters to your liking), you can edit the template to meet your needs, or you can deploy it as is via PowerShell. If you want more details on the ways you can deploy templates, I recommend reading this –

The other option is to use just vanilla PowerShell from the command line or via the ISE. I used the following, which uses PowerShell 1.0:

$vnetName = "ImperfectRMNet"
$RGroup = "ImperfectRG"
$Location = "West US"
New-AzureRmResourceGroup -Name $RGroup -Location $Location
# Example address ranges -- substitute your own
$subnet1 = New-AzureRmVirtualNetworkSubnetConfig -Name SubNet6 -AddressPrefix "192.168.6.0/24"
$subnet2 = New-AzureRmVirtualNetworkSubnetConfig -Name SubNet7 -AddressPrefix "192.168.7.0/24"
New-AzureRmVirtualNetwork `
 -Name $vnetName `
 -ResourceGroupName $RGroup `
 -Location $Location `
 -AddressPrefix "192.168.0.0/16" `
 -Subnet $subnet1, $subnet2

Take note that with PowerShell 1.0, there is no “Switch-AzureMode” cmdlet, and all of the “New” commands include “RM” in the cmdlet name to differentiate them from the cmdlets that create classic Azure resources.  There is nothing else to this basic network – no external IP address or load balancer that would normally come by default with a cloud service in ASM.

Config Manager and IPv6 (…not together!)

I got word of a couple new things floating around the web that you might want to catch…

First off, my good friend and all-around tech nerd, Ed Horley, just released his first Pluralsight course on… you guessed it… IPv6!  Read more about it on his blog and if IPv6 is on your radar, I can’t think of a better person to get you started.

Second, the fine folks at LearnIT! are teaming up with Wally Mead for a webinar on Wally’s Favorite Configuration Manager Features.  It’s on July 23rd at 9am Pacific Time, the full blurb is below!

You have a Configuration Manager 2012 environment installed and running. All is going good as things are working OK. However, have you really been able to gain the best you can from it? Do you just get by using the handful of features you need to use?

In this session, we’ll discuss the features that I love in Configuration Manager 2012 R2, and why. Hopefully this will give you some insight to some features that you may not be using, and why you should look into them further.

Instructor Bio: Wally Mead
Wally is a Principal Program Manager at Cireson. He currently focuses on the improvement and development of products, and drives further innovation between Configuration Manager and Cireson apps. He also consults on Configuration Manager as a key part of the Cireson team.
In addition, Wally continues to provide education for the community through presenting webinars, speaking at conferences (such as TechEd, System Center Universe, Midwest Management Summit), and conducting training courses, as well as assisting the community on the Microsoft TechNet forums.

The Imperfect Lab: Connecting a VNET to another VNET

As I mentioned yesterday, I’m struggling with setting up the “perfect” lab environment for myself. So instead of trying to make it perfect, I’m just going to start by simply getting started and letting it evolve.  Because starting is most of the battle, right?  Most environments grow and change and become a bit messy, so I am just going to embrace a little chaos!

My starting goal is to create two networks in Azure (in two different regions) and connect them.  To start, I’ll need two VNETs in Azure. I also created two corresponding storage accounts, one in each region, so that when I’m building my servers, everything is as neat and organized as I can make it.

In each of the networks, I carved out a few subnets, because I don’t know exactly what I’m doing with them yet. Keep in mind you will need to make at least a small Gateway subnet in each. Also, as soon as you put a VM in a subnet, you can no longer edit it.

  • ImperfectNet – (West region)
  • AnotherNet – (East region)
Because I want to connect them together with site-to-site networking, I have to create corresponding “local” networks in Azure to sort of trick each network into thinking it’s connecting to a physical network.  So under the “Local Networks” tab, I created “ImperfectLocal” and “AnotherLocal” with the same IP address ranges as the virtual networks. Be sure to put in a fake VPN Gateway Address as a placeholder here; you’ll update it later after Azure gives you a real gateway address.

In each network, I checked the ticky-box under Site-to-Site Connectivity, selected the correct “local” network and then created the Gateway subnet.  After everything finished configuring, when you return to the dashboard page of each network, you will see the remote network showing.  Azure will tell you that “the gateway was not created”.
Click “create gateway” at the bottom. For VNET-to-VNET connectivity, you have to go with Dynamic Routing.  Do this for each network and wait for it to complete.  (Creating gateways actually takes a while; this might be a good time to get lunch.)
Once your gateways are created, write down the IP addresses carefully and then update those “local networks”, replacing the fake VPN gateway addresses with the correct ones Azure just assigned you.
Finally, you have to connect the networks together with a shared key.  There isn’t any way to do this in the portal, so pop over to PowerShell and use the following code to hook them together.  You have to run the command twice with the corresponding network names and the SAME shared key. Please make your key longer than the sample I put in here.

Set-AzureVNetGatewayKey -VNetName YourVNETName -LocalNetworkSiteName TheOppositeLocalNet -SharedKey abc123xyz

Set-AzureVNetGatewayKey -VNetName TheOtherVNETName -LocalNetworkSiteName TheOtherLocalNet -SharedKey abc123xyz

So now I’ve got two connected networks in Azure, albeit empty of servers.  Next up… starting to build out my “imperfect” domain.

One more thing… if you want the official “Azure” instructions for this, complete with images, go to  

Modernizing Your Infrastructure with Hybrid Cloud – Week 3 Has Arrived!

This week’s focus is on networking. It starts out with Kevin Remde and Keith Mayer continuing the series on “Modernizing Your Infrastructure with Hybrid Cloud” and in today’s episode they discuss various options for networking. Tune in as they go in depth on what options are available for hybrid cloud networking as they explore network connectivity and address concerns about speed, reliability and security.

  • [2:46] What components are involved in Hybrid Cloud Networking?
  • [5:30] What are some of the technical capabilities of Hybrid Cloud networking?
  • [9:25]  Which VPN gateways are supported with Microsoft Azure?
  • [11:28]  What are some of the common scenarios that customers are implementing for Hybrid Cloud networking?
  • [15:40]  Besides Site-to-Site IPSec VPNs, are there any other connectivity options for Hybrid Cloud networking?
  • [20:10] DEMO: Can you walk us through the basic steps for setting up a Hybrid Cloud network?
Check back at as the week progresses for some related blog posts:
  • Tuesday: Step by Step: Setting a Static IP address on your Azure VM by Brian Lewis
  • Wednesday: Building Microsoft Azure Virtual Networks by Matt Hester
  • Thursday: Cross-premises connectivity with Site-to-Site VPN by Kevin Remde
  • Friday: Cross-premises connectivity with ExpressRoute by Keith Mayer

Making Sense of DNS Queries: Recursive or Iterative?

I’ve been dealing with DNS for a pretty long time.  It’s always been a key component of keeping Active Directory and all the clients on your network happy and connected.  But for the life of me I can never seem to remember which is which when it comes to recursive or iterative DNS queries.  It’s like a trivia question that has just gone wrong in my brain.

Today at work, I was asked one of those “Hey, do you know who would do X?” questions by a colleague, and as I was hunting down the answer, I realized it was just like DNS queries!  
Simply put, if you ask your manager a question and he/she comes back with the complete answer, you have just performed a RECURSIVE query.  You asked and someone else took on the responsibility of locating the correct answer.  
If you ask your manager a question and he/she comes back with a referral directing you to ask a different person, that is an ITERATIVE query.  You are responsible for walking the tree of your organization, theoretically getting closer and closer to the answer with each query you make.  Iterative starts with I and “I” do the work to find the answer. 
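The same analogy can be sketched in code. This toy resolver (entirely made-up zone data, not real DNS) shows the difference: in the iterative function, the caller follows each referral itself; in the recursive function, it simply delegates the whole job.

```python
# Toy name hierarchy: each "server" either answers a name outright
# or refers you to a server closer to the answer.
ZONES = {
    ".":               {"com.": "ns.com."},                    # root refers you onward
    "ns.com.":         {"example.com.": "ns.example.com."},
    "ns.example.com.": {"www.example.com.": "93.184.216.34"},
}

def iterative_resolve(name):
    """Iterative: the *client* walks the tree, following each referral itself."""
    server = "."
    while True:
        answer = ZONES[server].get(name)
        if answer:
            return answer  # got the final answer
        # No answer here -- follow the referral to a closer server.
        server = next(ns for zone, ns in ZONES[server].items() if name.endswith(zone))

def recursive_resolve(name):
    """Recursive: the *server* walks the tree for you and returns a complete answer."""
    return iterative_resolve(name)  # the resolver iterates on the client's behalf

print(iterative_resolve("www.example.com."))  # -> 93.184.216.34
```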
Sometimes it just takes a real life example to make concepts stick. However if you want technical details go here:

Reserve Public IPs in Azure? Maybe Not…

Recently Microsoft announced the general availability of VIP reservations in Azure. VIP reservation is now generally available; Virtual Machines instance-level public IPs are in preview.

“You can now reserve public IP addresses and use them as virtual IP (VIP) addresses for your applications. Reserve up to five addresses per subscription at no additional cost and assign them to the Azure Cloud Services of your choice. In addition, you can now assign public IP addresses to your virtual machines, so they become directly addressable without having to map an endpoint to access them directly.” 

When Azure IaaS was first introduced, you could not ensure that the public-facing IP address of your VM or cloud service would remain the same, particularly if you shut down all the machines within a cloud service. What Azure would retain for you was the DNS name you created within the domain. The recommended practice was to use DNS to locate your services, instead of relying on a specific IP address.

I know, we all love the comfort of knowing our IP address. Over the past decade or so, I lovingly handed out the easiest internal and external addresses we had to servers I accessed frequently. Stable IP addressing was a must – changes often meant re-configuring firewalls, routers and even some applications, which could lead to downtime and complaints. Even Azure’s long-term lease for IP addresses, granted if your cloud service was active, wasn’t comforting enough for many who had been burned in the past by a hard-coded application or some other IP address nightmare.

But it’s not 1998 anymore. The Internet isn’t a quaint little place you go to read text, and your “mobile” phone isn’t hard wired into your car. IPv4 addresses are exhausted at the top levels; it’s just a matter of time before your internet service provider won’t have anything to give you when you ask. For a while I firmly believed that IANA would open up that special “Class E” space to buy extra time, but nope, it didn’t happen.

So yes, if you have a legitimate business need for reserved public IPs, you can go reserve some public IP addresses in Azure to meet your needs. The first five are free if you are actively using them.  But think hard about what your business needs are. Do you have an application that needs a static public IP address? Maybe it’s time to address that requirement within the application itself.  Do you update applications by swapping IP addresses?  Maybe you should look more closely at the options within Azure to swap staging and production deployments.

But if you aren’t thinking about IPv6 and just want to buy some time in the IPv4 world, you might want to pause before hunting down the necessary PowerShell to get that done. This is why name services exist in the first place – so you don’t have to learn and remember IP addresses and don’t need to latch onto them for all time. Once IPv6 is fully deployed across all the major players (cloud providers, ISPs, etc.) you won’t even bother trying to remember a 128-bit address. Unless you are trying to impress people at bars.

No, I’m pretty sure there are better ways to impress people at bars.

So don’t bother with hoarding up IPv4 addresses, just embrace FQDNs, DNS, and start preparing for IPv6 so that when it comes to you, you’ll be ready. In the great words of my preschooler as she dances around singing Disney songs, “Let It Go”. FQDNs are the future and the exhaustion of the IPv4 address space will make that so.