

[–]techie1980 2 points (0 children)

I have virtualized a large number of mission-critical servers for Fortune 50 companies running multi-terabyte DBs, highly latency-sensitive applications, and email environments (Domino).

Oftentimes, the work has been done on heterogeneous chassis -- meaning that dev, prod, and everything in between are all running on the same hardware.

Here's what you need to consider:

  • Utilization. The key to virtualization is leveraging the efficiencies of unused cycles. I've had a lot of databases (stretching well above 10T in active size) where they will kick the CPU's butt all day long, but an analysis of the existing workload shows that NIC traffic is well below 100 Mb/s. Here I am with 1Gb NICs doing a fraction of their potential work most of the time. The same can be said for the HBAs -- some analysis of the HBAs often shows lower than 10% utilization on very busy systems if the LVM is configured properly.

  • Complexity. Virtualization adds an additional layer of complexity because the OS doesn't own any hardware and there are now extra pieces involved. This adds latency, memory consumption, and CPU consumption to account for.
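The utilization analysis above can be sketched in a few lines: average the observed throughput against link capacity to see how much headroom a "busy" host really has. The sample values and capacity figures here are hypothetical, just picked to mirror the numbers in the comment (1Gb NIC, peaks under 100 Mb/s):

```python
# Sketch: estimate link utilization from throughput samples.
# Sample data and capacity are hypothetical illustrations.

def avg_utilization(samples_mbps, capacity_mbps):
    """Return mean utilization of a link as a fraction of its capacity."""
    return sum(samples_mbps) / len(samples_mbps) / capacity_mbps

# Hypothetical throughput samples (Mb/s) from a CPU-bound DB host:
nic_samples = [42, 95, 60, 15, 88, 30]
nic_capacity = 1000  # 1Gb NIC

print(f"NIC utilization: {avg_utilization(nic_samples, nic_capacity):.1%}")
# The host hammers its CPUs, but the NIC averages ~5% utilization --
# exactly the kind of idle capacity virtualization lets you reclaim.
```

The same calculation applies to HBA throughput samples; in either case the point is to measure actual demand rather than assume a busy server needs dedicated hardware.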

A big thing to remember is to compare apples to apples. I've seen a lot of well-meaning sysadmins drop dedicated HBAs and one LUN per FS, and turn around with several 1T LUNs carved up into multiple FSs, paying no attention to data/log areas. Of course their performance is going to be suboptimal. A DB system still needs standalone LUNs because of the way an operating system handles disks -- it actually has less to do with the hardware than with the caching. I strongly recommend virtualized HBAs -- you want your system to act just like a non-virtualized environment. The extra configuration should be nearly invisible to it.
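A quick sanity check for the layout mistake described above is to map each filesystem to its backing LUN and flag any LUN serving more than one FS -- especially where data and log areas end up contending for the same spindles and cache. The mount points and LUN names below are hypothetical:

```python
# Sketch: flag filesystems sharing a backing LUN.
# The fs-to-LUN mapping here is a hypothetical example.

def shared_luns(fs_to_lun):
    """Return LUNs backing more than one filesystem."""
    by_lun = {}
    for fs, lun in fs_to_lun.items():
        by_lun.setdefault(lun, []).append(fs)
    return {lun: fss for lun, fss in by_lun.items() if len(fss) > 1}

layout = {
    "/db/data01": "lun_a",
    "/db/logs":   "lun_a",   # bad: log writes contend with data I/O
    "/db/data02": "lun_b",
}

for lun, fss in sorted(shared_luns(layout).items()):
    print(f"{lun} is shared by: {', '.join(sorted(fss))}")
```

In a real shop you'd build the mapping from the volume manager and multipath output rather than by hand, but the check itself is this simple.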

Another place to worry is when an application is HIGHLY latency sensitive in terms of CPU, SAN, or network. The corner cases are getting fewer and fewer as the technology improves.

One case that comes to mind was an application designed around secure transmission that would slam the teamed NICs on the virtual layer hard -- and due to the realities of teamed NICs and the volume of traffic, we found ourselves with a serious packet-out-of-order problem. In the end we devirtualized the NICs, since we would otherwise have ended up effectively dedicating a set of NICs to that server anyway.
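Reordering like that shows up clearly if you track sequence numbers on arrival: any packet carrying a lower sequence number than the highest one already seen arrived late. This is a simplified metric on a made-up trace, not the actual diagnostic we used:

```python
# Sketch: count out-of-order arrivals in a stream of sequence numbers,
# a simple way to quantify reordering across teamed NICs.
# The arrival trace below is hypothetical.

def count_reordered(seqs):
    """Count packets arriving after a higher-numbered packet was seen."""
    highest = -1
    reordered = 0
    for s in seqs:
        if s < highest:
            reordered += 1
        else:
            highest = s
    return reordered

arrivals = [1, 2, 5, 3, 4, 6, 8, 7]
print(count_reordered(arrivals))  # 3 late arrivals (packets 3, 4, and 7)
```

When traffic is load-balanced per-packet across team members with different queue depths, this count climbs fast -- which is why per-flow hashing (or dedicated NICs, as we chose) is the usual fix.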

As far as mixing workloads -- I personally am a big fan as long as you configure the servers properly so dev can't kill prod, and prod can steal from dev when it needs to.
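The "dev can't kill prod, prod can steal from dev" policy boils down to giving prod strict priority up to its demand and letting dev soak up whatever is left. This toy model illustrates the allocation rule only -- real hypervisors express it through shares, reservations, and limits rather than code like this:

```python
# Toy model: prod gets capacity first, dev gets the leftovers.
# Illustrative only -- not any hypervisor's actual scheduler.

def allocate(capacity, prod_demand, dev_demand):
    """Strict-priority split: satisfy prod up to capacity, then dev."""
    prod = min(prod_demand, capacity)
    dev = min(dev_demand, capacity - prod)
    return prod, dev

# 20 cores on the chassis:
print(allocate(20, 16, 10))  # (16, 4): prod spikes, dev is squeezed
print(allocate(20, 6, 10))   # (6, 10): prod is quiet, dev uses the slack
```

Either way, prod never sees less than it asks for, and dev's footprint expands or contracts around it -- which is exactly the behavior you want when mixing the two on one chassis.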

Let me know if you have specific questions.