Service Fabric, Containers and Open Networking Mode

In case you haven’t noticed, deploying applications in containers is the way of the future for a lot of workloads.  Containers can potentially solve a lot of problems that have plagued developers and operations teams for decades, but the extra layer of abstraction can also bring new challenges.

I often deploy Windows containers to Service Fabric, not only because it’s a nifty orchestrator, but also because it provides a greater array of options for modernizing Windows workloads: you can run Service Fabric on-premises as well as in Azure to support hybrid networking and other business requirements.

You can quickly create a Service Fabric cluster in Azure through the portal, and Visual Studio’s project wizard can get you started with deploying existing containers to that cluster pretty quickly. But as with anything in the technology space, what comes out of the box might not do exactly what you need.

In the case of a recent project, I wanted to be able to deploy more instances of a container than I had nodes in my cluster.  By default, Service Fabric will deploy one instance of the application to each node until the application has been placed on all nodes.  However, depending on what your container does, you might want to double or triple up.  This is accomplished with two things: open networking mode and partitions.

You can get the majority of the way there with the documentation on container networking modes in Service Fabric – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-networking-modes.  You’ll need to make some changes to your Service Fabric deployment template, including the parts that issue each node in your VM Scale Set additional IP addresses on your subnet.  Each container you deploy will get one of these IP addresses. Then you will need to make some changes to your application and service manifest files, which include setting the networking mode to “Open” and adjusting how you handle port bindings.
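As a rough sketch of those manifest changes (based on the documentation linked above – the package, endpoint, and port names here are placeholders, so adjust them to match your own manifests), the ContainerHostPolicies section of the application manifest is where the “Open” networking mode and the port binding go:

<ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="MyContainerTypePkg" ServiceManifestVersion="1.0.0" />
    <Policies>
        <ContainerHostPolicies CodePackageRef="Code">
            <!-- Each container instance gets its own IP address from the scale set's pool -->
            <NetworkConfig NetworkType="Open" />
            <!-- Map the container's port to the endpoint declared in the service manifest -->
            <PortBinding ContainerPort="80" EndpointRef="MyContainerTypeEndpoint" />
        </ContainerHostPolicies>
    </Policies>
</ServiceManifestImport>

And the service manifest declares the endpoint that the port binding references:

<Resources>
    <Endpoints>
        <!-- With open networking, this port is exposed on the container's own IP address -->
        <Endpoint Name="MyContainerTypeEndpoint" UriScheme="http" Port="80" Protocol="http" />
    </Endpoints>
</Resources>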

Because your application is really a container, it’s deployed as a stateless service.  Most of the Service Fabric documentation talks about partitions in relation to stateful services, and it’s a bit unclear how to apply that to stateless ones.

Within your application manifest, you’ll need to edit your service instance to use either the named or ranged partition type instead of the default, “SingletonPartition”.  I prefer the ranged version as it’s much easier to adjust the partition count, but I admittedly don’t have a good understanding of how the low and high keys apply to containers when they aren’t actually using those ranges to distribute data.

Named Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <NamedPartition>
            <Partition Name="one" />
            <Partition Name="two" />
            <Partition Name="three" />
            <Partition Name="four" />
        </NamedPartition>
    </StatelessService>
</Service>

Ranged Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <UniformInt64Partition PartitionCount="4" LowKey="1" HighKey="10" />
    </StatelessService>
</Service>

Once you’ve made all these changes, Service Fabric will deploy a number of containers equal to the instance count multiplied by the partition count, up to the available number of IP addresses.  So two instances of four partitions will be eight containers and eight IP addresses.  Keep in mind that if a deployment exceeds the number of IP addresses you have available, you will get errors.  Based on my testing so far, I don’t recommend trying to max out your available IP addresses; there seems to be a need for a little wiggle room for scaling operations.
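For completeness, the [InstanceCount] placeholder in the examples above resolves to an application parameter declared near the top of the application manifest; a minimal sketch (the default value of 2 here is just an example):

<Parameters>
    <!-- Resolves the [InstanceCount] placeholder used in the service definitions above -->
    <Parameter Name="InstanceCount" DefaultValue="2" />
</Parameters>

With DefaultValue="2" and PartitionCount="4", the deployment above would ask for eight container IP addresses.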

Microsoft OpenHack on Containers comes to San Francisco – May 15-17

Who?

OpenHack brings together groups of diverse developers to learn how to implement a given scenario on Azure through three days of immersive, structured, hands-on, challenge-based hacking. This scenario is focused on implementing container solutions and moving them to the cloud.

What!

Join us for three days of fun-filled, hands-on hacking where you will team up with community peers and learn how to containerize Linux and Windows-based workloads and move them to the cloud. During OpenHack you will:

  • Choose your desired tooling and technology based on Kubernetes or Azure Service Fabric.
  • Hack on challenges structured to leave you with the skills and expertise needed to deploy containers and clusters in the workplace.
  • Network with fellow community members and other professional developers from startups to large enterprises, as well as Microsoft developers.
  • Get answers to your technology and workplace project questions from Microsoft and community experts.

Bonus

In addition to the challenge-based learning paths, a limited number of 1-hour envisioning slots will be made available on a first come, first served basis to work side-by-side with Microsoft experts on your own workplace projects.

OpenHack is FREE for registered attendees!

Food, refreshments, prizes and fun will be provided. If travelling, attendees are responsible for their own travel expenses and evening meals.

What you need:

To be successful and maximize value from the event, participants should have a basic understanding of the following concepts and technologies. You are not required to be an expert or authority, but a familiarity with each will be advantageous:

  • Docker containers
  • Cloud hosted services
  • REST Services
  • DevOps
  • IP Networking & Routing

Click here to register!

OpenHacks are invite only and space is limited. You may be put on a waitlist. When your registration is confirmed, we will follow up with additional details.