Service Fabric, Containers and Open Networking Mode

In case you haven’t noticed, deploying applications in containers is the way of the future for a lot of workloads.  Containers can potentially solve a lot of problems that have plagued developers and operations teams for decades, but the extra layer of abstraction can also bring new challenges.

I often deploy Windows containers to Service Fabric, not only because it’s a nifty orchestrator, but also because it provides a greater array of options for modernizing Windows workloads, since you can run Service Fabric on-prem as well as in Azure to support hybrid networking and other business requirements.

You can quickly create a Service Fabric cluster in Azure with the portal, and Visual Studio’s project wizard can get you started deploying existing containers to that cluster, but as with anything in the technology space, what comes out of the box might not do exactly what you need.

In the case of a recent project, I wanted to be able to deploy more instances of a container than I had nodes in my cluster.  By default, Service Fabric will deploy one instance of the application to each node until you’ve placed that application on all nodes.  However, depending on what your container does, you might want to double or triple up.  This is accomplished with two things: open networking and partitions.

You can get the majority of the way there with this documentation about container networking modes on Service Fabric – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-networking-modes.  You’ll need to make some changes to your Service Fabric deployment template, including parts that issue each node in your VM Scale Set additional IP addresses on your subnet.  Each container deployed will get one of these IP addresses. Then you’ll need to make some changes to your application and service manifest files, which include setting the networking mode to “Open” and adjusting how you handle port bindings.
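
To give you an idea of the manifest side, here’s a rough sketch of the relevant piece of an application manifest. The package, code package, and endpoint names (MyContainerTypePkg, Code, MyContainerTypeEndpoint) are just placeholders for whatever your project generates – the important parts are the NetworkConfig element set to “Open” and the port binding that ties the container port to an endpoint:

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="MyContainerTypePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- With open networking, each container gets its own IP address, so the
           endpoint maps directly to the container port rather than a host port. -->
      <PortBinding ContainerPort="80" EndpointRef="MyContainerTypeEndpoint" />
      <NetworkConfig NetworkType="Open" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>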

Because your application is really a container, it’s deployed as a stateless service.  Most of the Service Fabric documentation talks about partitions in relation to stateful services, and it’s a bit unclear how to apply that to stateless ones.
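
For reference, the container shows up in the service manifest as a stateless service type with a ContainerHost entry point, roughly like this (the image name, endpoint name, and type names are placeholders):

<ServiceTypes>
  <!-- UseImplicitHost tells Service Fabric that the container itself is the service's code. -->
  <StatelessServiceType ServiceTypeName="MyContainerType" UseImplicitHost="true" />
</ServiceTypes>
<CodePackage Name="Code" Version="1.0.0">
  <EntryPoint>
    <ContainerHost>
      <ImageName>myregistry.azurecr.io/mycontainerapp:latest</ImageName>
    </ContainerHost>
  </EntryPoint>
</CodePackage>
<Resources>
  <Endpoints>
    <!-- Referenced by the PortBinding in the application manifest above. -->
    <Endpoint Name="MyContainerTypeEndpoint" Protocol="http" Port="80" />
  </Endpoints>
</Resources>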

Within your application manifest, you’ll need to edit your service instance to use either the named or ranged partition type instead of “SingletonPartition”, which is the default.  I prefer using the ranged version as it’s much easier to adjust the partition count, but I admittedly don’t have a good understanding of how the low and high keys apply to the containers when they aren’t actually using those ranges to distribute data.

Named Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
     <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
         <NamedPartition>
             <Partition Name="one" />
             <Partition Name="two" />
             <Partition Name="three" />
             <Partition Name="four" />
        </NamedPartition>
    </StatelessService>
</Service>

Ranged Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
     <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
           <UniformInt64Partition PartitionCount="4" LowKey="1" HighKey="10" />
     </StatelessService>
</Service>

Once you’ve made all these changes, Service Fabric will deploy containers equal to the number of instances multiplied by the partition count, up to the available number of IP addresses.  So two instances of four partitions will be eight containers and eight IP addresses.  Keep in mind that if a deployment exceeds the number of IP addresses you have available, you will get errors.  Based on my testing so far, I don’t recommend trying to max out your available IP addresses; there seems to be a need for a little wiggle room for scaling operations.
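
To make the math concrete with the ranged example above, setting the InstanceCount parameter to 2 gives you 2 x 4 = 8 containers, so the cluster needs at least eight spare IP addresses plus a little headroom:

<!-- Application manifest parameter (sketch): with PartitionCount="4" above,
     an InstanceCount of 2 results in 8 containers, each needing its own IP. -->
<Parameters>
  <Parameter Name="InstanceCount" DefaultValue="2" />
</Parameters>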
