Guidance on running AppFabric Cache in a Virtual Machine (VM)
One of the advantages of using AppFabric (AF) Cache is that the hardware requirements for the server nodes are modest: fundamentally, a node requires enough RAM to store the cached objects and enough processing power for the AppFabric Caching Service to make those objects available. The general advice is to have a dedicated server for each node (to avoid network issues and resource contention). Hence, a natural reaction from customers after putting these two facts together is to ask about the use of Virtual Machines, which is a great question. In fact, the use of Virtual Machines can allow a fast implementation of a scale-out architecture with many virtual AF Cache nodes instead of one or two powerful physical nodes.
In this post, I will address this idea and point out the advantages, the disadvantages, and the things to keep in consideration, since this is not as simple as moving your AF Cache nodes to Virtual Machines.
Points to keep in consideration
The points below start with the general resources available for Hyper-V and for AF Cache, then move to the specific needs of AF Cache as they relate to its available features, and finish with points on supportability.
- The MSDN article on optimizing performance on Hyper-V is a must-read, but since it is oriented toward BizTalk, I will list and annotate the sections from that article that need the most attention for our purpose.
- Allocate 110%–125% of CPU and Disk Resources to the Hyper-V Virtual Machines – this rule still applies if you are migrating from a physical to a virtual machine, or if you have sized for a specific type of machine. It also applies if you are using XML as your configuration store.
- Optimize Hyper-V Performance – this one applies completely.
- Optimize Performance of Disk, Memory, Network, and Processor in a Hyper-V Environment – this also applies if you are moving from a physical to a virtual environment, but since AF Cache is not necessarily CPU intensive, the guideline is to tune your CPU allocation to the point that gives the most optimal performance; stress testing should reveal this (I will leave the specifics of that topic for a different post).
- Optimize Disk Performance – Generally, the recommendation is to use fixed-size disks so as not to lose performance (in comparison to a physical machine) for those times when, due to memory pressure, paging may take place. However, since Windows Server 2008 R2 the performance difference may be smaller; this blog does a quick comparison of the two options.
- Optimize Memory Performance – AF Cache relies heavily on physical memory, so the advice given here is very applicable.
- Use the Synthetic Network adapter
- Enable offload capabilities for the physical network adapter driver in the root partition
- If possible, enable VLAN tagging for the Hyper-V synthetic network adapter
- Install a high-speed network adapter on the Hyper-V host computer and configure it for maximum performance
- This MSDN article goes over some general considerations on running AF Cache on VMs; they need to be followed in addition to the points made in this article.
- Two AF Cache hosts running on the same physical machine will use more network bandwidth than one host, so it is advisable to have a dedicated network card per VM. This can be achieved by creating multiple virtual networks, one for each physical NIC, and then configuring each AF node VM to use a different virtual network. This distributes the network traffic better and avoids the VMs competing for network resources.
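The NIC-per-VM setup above can be sketched with the Hyper-V PowerShell module (available in Windows Server 2012 and later; on Windows Server 2008 R2 the equivalent steps are done through Hyper-V Manager). The switch, adapter, and VM names here are hypothetical examples, not values from any real deployment:

```powershell
# Create one external virtual switch per physical NIC.
# -AllowManagementOS $false keeps the host OS off these NICs,
# reserving the full bandwidth for the cache VMs.
New-VMSwitch -Name "CacheNet1" -NetAdapterName "Ethernet 1" -AllowManagementOS $false
New-VMSwitch -Name "CacheNet2" -NetAdapterName "Ethernet 2" -AllowManagementOS $false

# Bind each AF Cache node VM to its own virtual switch, so the
# cache hosts do not compete for a single physical NIC.
Connect-VMNetworkAdapter -VMName "AFCacheNode1" -SwitchName "CacheNet1"
Connect-VMNetworkAdapter -VMName "AFCacheNode2" -SwitchName "CacheNet2"
```

These commands must run on the Hyper-V host with administrative rights.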
- If a machine with plenty of memory is available, running several Hyper-V instances on it may seem like a good idea, but not if the cluster is using the High Availability (HA) feature of AF Cache: if the physical machine goes down, all of its VMs, along with their cache hosts, go down with it. It is therefore advisable not to keep hosts belonging to the same cluster on the same physical machine. In the scenario below, each physical machine runs two cache hosts (each on its own VM). Note: for v1.0, the recommendation is to have a cluster of at least three nodes for a more reliable HA implementation. This is due to a product limitation: when only one node remains after the failure of one node in a two-node HA cluster, the remaining HA node will not allow writes to take place; it will need to be taken out of HA mode to allow writes again.
- One of the questions often asked by customers is how to reduce the physical memory available to AF Cache. The easiest way to restrict or increase the memory available to an AF Cache node is to use a VM. Other methods, such as bcdedit /set removememory, require rebooting the physical server and render the removed memory unusable by the server; that said, this approach is fine for testing. NOTE: the size attribute of the host element cannot achieve the same effect (this is often asked in the forums), as it merely caps the amount of memory the DistributedCacheService process uses for cached data.
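To illustrate the distinction, the cap on cached data (as opposed to the physical memory of the machine) is set per host with the AppFabric Caching administration cmdlets. This is a sketch; the host name and cache size are hypothetical, the port shown is the default cache port, and the host must be stopped before its configuration is changed:

```powershell
Import-Module DistributedCacheAdministration
Use-CacheCluster

# Stop the host before changing its configuration.
Stop-CacheHost -HostName "AFCacheNode1" -CachePort 22233

# Cap the memory (in MB) the DistributedCacheService process may use
# for cached data. This does NOT reduce the physical memory visible
# to the machine -- use the VM's memory setting for that.
Set-CacheHostConfig -HostName "AFCacheNode1" -CachePort 22233 -CacheSize 2048

Start-CacheHost -HostName "AFCacheNode1" -CachePort 22233
```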
- As with a physical machine, running two cache hosts on the same VM is not supported.
- Only Hyper-V is supported; for VMware or other virtualization platforms, please refer to the Server Virtualization Validation Program (SVVP): http://www.windowsservercatalog.com/svvp.aspx
The following is a sample of an existing AF Cache implementation using VMs:
- Each physical server runs 4 VMs, with only 1 of those VMs assigned as a cache server and the other 3 VMs assigned to lightweight app and web servers
- Each VM is bound to dedicated hardware (a fixed set of CPU and memory)
- Each physical server contains a set of network cards, with each network card bound to a single VM.
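The per-VM hardware binding described above can be sketched with the Hyper-V PowerShell module; the VM name and sizes below are hypothetical examples chosen for illustration:

```powershell
# Use static (non-dynamic) memory so the cache node's RAM allocation
# is fixed and predictable -- AF Cache relies heavily on physical memory,
# so ballooning it up and down under Dynamic Memory is undesirable.
Set-VMMemory -VMName "AFCacheNode1" -DynamicMemoryEnabled $false -StartupBytes 8GB

# Give the cache VM a fixed number of virtual processors. AF Cache is
# not especially CPU intensive; stress testing should confirm the count.
Set-VMProcessor -VMName "AFCacheNode1" -Count 2
```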
In general, the use of VMs to host AF Cache nodes is a good idea, as it allows great flexibility and extends the versatility of AF Cache. But it should not be taken lightly: there are several considerations to take into account, from how the network is being used to which features are being implemented in the AF Cache solution.
Reviewers: Rama Ramani, Paolo Salvatori, Mark Simms