International Man of Awesome's Blog – When Too Much Awesome Isn't Enough

August 5, 2011

Decreasing Exchange 2010 DAG Failover Sensitivity by Increasing Cluster Timeout Values.

Filed under: Backups, Disaster Recovery, ESX, Exchange, Microsoft, Veeam, Virtualisation, VMware, vSphere, Windows 2008 R2 — internationalmanofawesome @ 2:21 pm

When running an Exchange 2010 DAG over a WAN, you may run into some of the limitations of Microsoft FCS (Failover Cluster Service). This service defaults to fairly low timeouts for fast failover in LAN environments. In a WAN environment, where latency may be higher and some packet loss may occur, you may need to tweak the timeouts for FCS. I advise tweaking most settings via the FCS admin tool; however, the heartbeat settings are tweaked via the command line, and below are the maximum values you can configure to make the cluster “less sensitive”:

Exchange 2010 DAGs use Windows Failover Clustering. By default, FCS has fairly low timeouts that are ideal for use in fast, localised LAN environments.

If you operate your Exchange 2010 DAGs over a WAN where issues such as latency and packet loss can occur, you may find that your mailbox databases are failing over unnecessarily. By default, the heartbeat frequency (subnet delay) is 1000ms for both local and remote subnets, and when a node misses 5 heartbeats (subnet threshold), another node within your DAG cluster will initiate a failover. In other words, roughly five seconds of lost connectivity is enough to trigger a failover by default; the maximum values below stretch that out to twenty seconds on the same subnet (10 x 2000ms) and forty seconds across subnets (10 x 4000ms).

You can change these values to their maximums by issuing the commands below from a command prompt on a DAG mailbox server.

cluster /prop SameSubnetDelay=2000:DWORD

cluster /prop CrossSubnetDelay=4000:DWORD

cluster /prop CrossSubnetThreshold=10:DWORD

cluster /prop SameSubnetThreshold=10:DWORD

You can check that the properties have been applied by executing the following command from a command prompt on a DAG mailbox server.

cluster /prop
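The same four properties can also be read and set from PowerShell via the FailoverClusters module that ships with Windows Server 2008 R2. A minimal sketch, assuming the module is available on the DAG member (the values mirror the cluster.exe commands above):

```powershell
# Load the failover clustering cmdlets (Windows Server 2008 R2)
Import-Module FailoverClusters

# Review the current heartbeat settings
Get-Cluster | Format-List *SubnetDelay, *SubnetThreshold

# Relax the timeouts to match the cluster.exe commands above
$cluster = Get-Cluster
$cluster.SameSubnetDelay      = 2000   # ms between heartbeats, same subnet
$cluster.CrossSubnetDelay     = 4000   # ms between heartbeats, across subnets
$cluster.SameSubnetThreshold  = 10     # missed heartbeats before failover
$cluster.CrossSubnetThreshold = 10
```

Either method changes the same cluster-wide properties, so pick whichever suits your scripting habits.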

If you virtualise your Exchange 2010 mailbox servers, these changes may also help prevent failovers when backing up your VMs with backup products that take snapshots of your VMs, such as Veeam Backup and Replication. Note that backing up DAG members in this manner is NOT supported by Microsoft at this time.

Reference – Configure Heartbeat and DNS Settings in a Multi-Site Failover Cluster – http://technet.microsoft.com/en-us/library/dd197562(WS.10).aspx

November 5, 2010

vSphere PowerCLI multipath policy script examples

Filed under: ESX, iSCSI, Storage, Virtualisation, VMware — internationalmanofawesome @ 4:43 pm

Since PowerCLI is the way to go with VMware ESXi, I’ve been investigating converting my config scripts to PowerCLI. Here are some examples of setting multipath policies for iSCSI storage.

 

#Shows the multipath policy for all LUNs connected to all hosts in cluster CL01

Get-Cluster CL01 | Get-VMHost | Get-ScsiLun -LunType disk

#Shows the multipath policy for all LUNs connected to host host4

Get-VMHost host4 | Get-ScsiLun -LunType disk

#Sets the multipath policy for all HDS LUNs connected to all hosts in cluster CL01 to round robin

Get-Cluster CL01 | Get-VMHost | Get-ScsiLun -CanonicalName "naa.600*" | Set-ScsiLun -MultipathPolicy "RoundRobin"

#Sets the multipath policy for all EMC LUNs connected to host host4 to round robin

Get-VMHost host4 | Get-ScsiLun -CanonicalName "naa.600*" | Set-ScsiLun -MultipathPolicy "RoundRobin"

#Outputs the CanonicalName for all the devices on cluster Primary that are HDS type (naa.600*)

Get-Cluster Primary | Get-VMHost | Get-ScsiLun -CanonicalName "naa.600*" | Select-Object CanonicalName | Out-File c:\shared\naa-id.txt
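One addition of my own that I find handy: before and after flipping policies, you can summarise how many LUNs are on each policy. This uses the same cmdlets as above (the cluster name CL01 is just an example):

```powershell
# Count LUNs per multipath policy across all hosts in cluster CL01
Get-Cluster CL01 | Get-VMHost | Get-ScsiLun -LunType disk |
    Group-Object MultipathPolicy |
    Select-Object Name, Count
```

A quick way to confirm nothing was missed by a wildcard pattern.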

 

Awesome! But I can’t take the credit; I mangled these from here: http://runningvm.wordpress.com/2010/08/31/vsphere-powercli-multipath-policy-script-examples/

September 21, 2010

Moving Veeam Replication Job Data to New Storage without Affecting Changed Block Tracking for VMware vSphere.

Filed under: Backups, Disaster Recovery, ESX, Storage, Veeam, Virtualisation, VMware, vSphere — internationalmanofawesome @ 12:23 pm

We currently use Veeam Backup and Replication 4.1.2 to replicate our VMware vSphere 4.1 VMs to our second data centre. We then back up those replicas to another location so that they can be backed up to tape. The Veeam replication jobs take advantage of the Changed Block Tracking (CBT) available in VMware vSphere 4 and later, which copies only the blocks of the VM disks that have changed. This means we can back up a 500GB VM in around 15 minutes.

We also recently upgraded our production iSCSI SANs from our legacy Hitachi WMS100s to newer Hitachi AMS 2100s. Now rather than just throw out 2 x 14TB SANs (28 x 500GB SATA drives), I figured I could use them as the replication repositories, leaving the newer AMS 2100s to operate as the production VM storage. So I reconfigured the WMS100s with two 5.4TB RAID 6 (12 disk + 2 parity) arrays, each carved into 2 x 2TB (minus 64KB, because of VMware’s LUN size limitations) and 1 x 1.35TB LUNs. This gives us a total of 10.7TB of medium/low performing disk storage.

I had been replicating with Veeam to LUNs on the AMS 2100s, so I needed a way to move the replicated data to the new storage without affecting CBT. If you use Storage vMotion, it rolls up the replica snapshot information, so you lose the CBT data; the replication job will then fail and you will have to perform a new full replication of the VM.

Luckily, the Veeam tool gives us the answer, at least in part. Here is the process:

1. In your Veeam client, ensure that the replica job is not currently running. If it is not, disable the replica job so that one does not fire off whilst you are undertaking this procedure.

2. In vCenter, review the existing replica details: the VM’s name (including whatever additions you’ve used to identify it as a replica; by default Veeam tags _replica on the end of the VM name), the host it resides on, and the datastore it currently resides on. Note that I believe Veeam currently requires a replica to have all of the VM’s vmdks in the one directory, but I need to confirm this. Either way, in our setup each VM has all of its vmdks in a single directory.

3. Note the datastore that you want to move it to, ensuring that you have enough space for the replica, plus room for growth depending on your growth patterns and the number of replica rollback points you keep.

4. Using the VMware Datastore Browser tool, open the datastore that the replica is on.

5. Back in vCenter, right-click the VM replica and Remove it from Inventory.

6. Back to the Datastore Browser. If there is not already a VeeamBackup directory on your destination datastore, you can just move (or copy and paste) the whole VeeamBackup directory, as this will relocate all subdirectories below it. However, if a VeeamBackup directory already exists on your destination, you need to move the replica VM folders themselves.

7. Wait… This move/copy can take some time, depending on your SAN load/performance etc. I moved a 1TB VM and it took 24 hours, due to the LUN RAID being created and formatted at the same time. Best to have the storage online when first creating the replication job!

8. Once the copy/move is complete, within the Datastore Browser find Root\VeeamBackup\VMName(vm-number)\vmname.vmx, right-click it and select Add to Inventory.

9. Run through the Add to Inventory wizard, remembering to name the VM exactly the same as it was previously, e.g. VMName_replica.

10. Once it is in the inventory, note the host that the VM now resides on; if you added it to a cluster, it may not end up on the same host as it was previously.

11. Back in the Veeam Backup and Replication console, find the replication job, edit its properties, and change the target to the new host and datastore. Note that Veeam will warn you to be sure you have done the process above, albeit much more briefly.

12. Finish up the property changes, and that is it. The job will now replicate the VM to the new datastore, and it will maintain the Changed Block Tracking index, so a full replication is not required at the next run.

13. Close out the Datastore Browser windows if required.

14. Awesome!
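If you prefer to script the inventory steps rather than click through vCenter, PowerCLI can do them too. This is a rough sketch only, with placeholder names (VMName_replica, NewDatastore, host4) that you would substitute for your own; the move of the VeeamBackup folder itself still happens in the Datastore Browser:

```powershell
# Step 5: remove the replica from inventory WITHOUT deleting its files
Get-VM "VMName_replica" | Remove-VM -DeletePermanently:$false -Confirm:$false

# ...move the VeeamBackup folder to the new datastore (steps 6-7)...

# Steps 8-10: re-register the moved .vmx on the target host; the VM keeps
# the display name stored in the vmx, so it should come back as VMName_replica
New-VM -VMFilePath "[NewDatastore] VeeamBackup/VMName_replica/VMName_replica.vmx" -VMHost (Get-VMHost "host4")
```

Either way, the key point is the same: remove from inventory, move the files, re-register, then repoint the Veeam job.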

Blog at WordPress.com.