Posted on:
Categories: Hyper-V;SharePoint;SQL;Storage
Description:

The recommended practice for measuring disk performance issues in a guest VM is to monitor the activity on the host and isolate the traffic there. This post explains the reasoning behind this recommendation.
Disk performance issues on a Windows server are diagnosed using Performance Monitor (perfmon). The definitive measure is whether the server is experiencing read/write latencies; in a SharePoint database hosting environment the maximum tolerable latency is typically 20 ms for data files and 5-10 ms for log files (http://technet.microsoft.com/en-us/library/dd723635(v=office.12).aspx).
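As a rough illustration, here is a minimal Python sketch that flags samples breaching those thresholds. The sample values are hypothetical; the only assumption taken from perfmon is that its “Avg. Disk sec/Read” and “Avg. Disk sec/Write” counters report latency in seconds.

    DATA_THRESHOLD_S = 0.020   # 20 ms guidance for data volumes
    LOG_THRESHOLD_S = 0.010    # upper bound of the 5-10 ms guidance for log volumes

    def over_threshold(latency_s, volume_kind):
        # Return True when a latency sample breaches the threshold for its volume type.
        limit = DATA_THRESHOLD_S if volume_kind == "data" else LOG_THRESHOLD_S
        return latency_s > limit

    # Hypothetical samples, in seconds, as perfmon would report them.
    samples = [("data", 0.012), ("data", 0.055), ("log", 0.004)]
    for kind, latency_s in samples:
        status = "ALERT" if over_threshold(latency_s, kind) else "ok"
        print(kind, latency_s, status)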
On a well-equipped virtualized SQL server (8 vCPUs and 32 GB RAM) with dedicated disk aggregates for data, logs and tempdb, I was seeing between 5 and 7 disk read and write latency alerts, ranging from 50 ms to 200 ms – which is concerning. The SAN folks confirmed that the disks were idle (98 percent idle on average) and the network folks didn’t see any traffic latency between the hosts and the SAN, so the issue had to be the virtualization layer. To get a better understanding of perfmon and how it records events, I read the disk performance post from the Windows Performance Team titled Measuring Disk Latency with Windows Performance Monitor (http://blogs.technet.com/b/askcore/archive/2012/02/07/measuring-disk-latency-with-windows-performance-monitor-perfmon.aspx), which outlines that the perfmon provider time-stamps an I/O transaction as it leaves the partition manager and stamps it again when it surfaces at the top of the stack. I described my findings to a good friend (Kevin Gould) and he immediately pointed me in the direction of a time/tick synchronization event that is triggered at the host level and synthesizes the time on all the guest servers. Upon reviewing the VMware whitepaper titled “Timekeeping in VMware Virtual Machines” (http://www.vmware.com/files/pdf/techpaper/Timekeeping-In-VirtualMachines.pdf), it all made sense.
Basically, the VMware host server synthesizes the timekeeping function on all the guest servers. This is done because the guest servers don’t have access to a physical Time of Day (TOD) clock, and the virtualized tick device may therefore run faster or slower than real time. The synchronization event keeps everything aligned and working optimally. Any time-sensitive performance counters, such as disk latency alerts, need to be looked at very carefully. In my case, here’s the breakdown:

  • The I/O leaves the partition manager and is time-stamped
  • The host triggers a synchronization event; the guest clock is deemed to be running slow and is adjusted forward
  • The I/O arrives back up the stack and is time-stamped again
  • “Response duration” = end time – start time, which doesn’t compensate for the synchronization event and may therefore trigger a threshold alert (the sketch below illustrates the arithmetic)
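To make that breakdown concrete, here is a minimal Python sketch of the arithmetic. All of the numbers are made up for illustration; the point is only that a forward clock adjustment landing between the two time stamps inflates the measured response duration.

    true_latency_ms = 8.0        # hypothetical time the I/O actually spent in the stack
    clock_adjustment_ms = 45.0   # hypothetical forward correction applied by the host sync event mid-flight

    start_stamp_ms = 1000.0      # guest clock when the I/O leaves the partition manager
    end_stamp_ms = start_stamp_ms + true_latency_ms + clock_adjustment_ms   # guest clock after the adjustment

    measured_latency_ms = end_stamp_ms - start_stamp_ms
    print("measured:", measured_latency_ms, "ms vs actual:", true_latency_ms, "ms")
    # Prints 53.0 ms vs 8.0 ms: enough to breach a 20 ms threshold even though
    # the storage responded quickly. A backward adjustment would instead shrink
    # the result and could even make it negative.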

The moral of the story is: look at the storage metrics and the stuff in between, and take the disk performance threshold breaches/alerts with a grain of salt. Measure performance from the host, focus on the guest, and establish a baseline. Disk latency alerts are to be expected on a virtual server as long as there are fewer than 5-7 events per 10-minute interval – I can say this based on our infrastructure; your mileage will vary (establish a baseline!). On the same note, the fact that I’m writing this means we didn’t hit a negative-time I/O (a response duration less than 0, i.e. a negative number) – a potential wormhole opens up and all our infrastructure could have been sucked in, including this blog.
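For the baseline idea, here is a minimal Python sketch that buckets alert timestamps into 10-minute windows and only escalates when a window exceeds your own baseline. The timestamps and the 7-events-per-window figure are purely hypothetical stand-ins for whatever your environment establishes.

    from collections import Counter
    from datetime import datetime

    WINDOW_MINUTES = 10
    BASELINE_ALERTS_PER_WINDOW = 7   # hypothetical; derive this from your own environment

    def window_start(ts):
        # Truncate a timestamp to the start of its 10-minute window.
        return ts.replace(minute=ts.minute - ts.minute % WINDOW_MINUTES, second=0, microsecond=0)

    # Hypothetical alert timestamps pulled from perfmon logs.
    alerts = [datetime(2013, 5, 1, 9, m) for m in (1, 2, 4, 5, 6, 7, 8, 9, 12)]

    counts = Counter(window_start(ts) for ts in alerts)
    for window, count in sorted(counts.items()):
        status = "investigate" if count > BASELINE_ALERTS_PER_WINDOW else "within baseline"
        print(window, count, status)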

Happy performance hunting!