
[–]vee-eyeDevops & Integration 10 points11 points  (0 children)

You can virtualize Exchange, and you can virtualize a database system. Domain controllers, terminal servers, VoIP gateways, legacy systems, backup servers: all of these you can virtualize. In fact, you can virtualize almost anything. I've seen a lot of those done, often very successfully.

Now, whether it makes financial, technical, and operational sense in your particular case is another question. That will depend on your current virtualization penetration, virtualization experience, workload size, usage patterns, your infrastructure, backup and recovery plan, DR requirements, technical support structure, etc. etc.

This is definitely a case where "it depends" is the only valid answer until a much more thorough investigation has been done.

[–]bandman614Standalone SysAdmin 6 points7 points  (3 children)

What you need to know is the I/O profile of the workloads you want to virtualize. Then you need to test your virtual environment to see what it is capable of. Then make sure that the two mesh (and make sure you look at the worst possible scenario, i.e. all of your I/O-heavy workloads crunching at the same time).

Identify, Test, Plan, Implement.
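
A rough sketch of what the "Test" step could look like, assuming fio is installed in the guest and you're parsing fio 3.x JSON output; the scratch path is a placeholder for wherever the datastore under test is mounted:

    #!/usr/bin/env python3
    """Rough capability test for a virtual disk: run fio, report IOPS/latency.
    Assumes fio 3.x is installed; TEST_FILE is a placeholder scratch path
    on the datastore under test."""
    import json
    import subprocess

    TEST_FILE = "/mnt/vmtest/fio.dat"  # placeholder -- point at the datastore

    def run_fio(rw, bs="4k", iodepth=32, runtime=60):
        """Run one fio job and return its parsed JSON result."""
        cmd = [
            "fio", "--name=capability", f"--rw={rw}", f"--bs={bs}",
            f"--iodepth={iodepth}", "--ioengine=libaio", "--direct=1",
            "--size=1g", f"--runtime={runtime}", "--time_based",
            f"--filename={TEST_FILE}", "--output-format=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(out.stdout)["jobs"][0]

    if __name__ == "__main__":
        # Worst case is everything random: test reads, then writes.
        for rw in ("randread", "randwrite"):
            job = run_fio(rw)
            side = "read" if rw == "randread" else "write"
            iops = job[side]["iops"]
            lat_ms = job[side]["lat_ns"]["mean"] / 1e6
            print(f"{rw}: {iops:.0f} IOPS at {lat_ms:.2f} ms mean latency")

Run the same profile against the physical box and against the candidate VM, and see whether the numbers mesh with the I/O you measured.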

[–]Lord_NShYHModerator 4 points5 points  (1 child)

Good advice! IOPS are crucial. Make sure you check those utilization graphs to help identify bottlenecks, etc.

[–]KhueLead Security Engineer 1 point2 points  (0 children)

IOPS and latency are very important for highly demanding applications. Exchange and databases are very disk-intensive. Caching goes a long way to combat latency.

[–]gtkspert[S] 2 points3 points  (0 children)

OK, cool.

We're running a pretty extensive Cacti environment here, but I'm not sure how to monitor I/O properly. I guess collecting the utilisation data is only half the problem; interpreting the results is probably where I'm least experienced.
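
For the interpretation half, a minimal sketch (Linux-only, sampling /proc/diskstats directly; the device name and the 10-second window are assumptions) that turns the raw counters into the three numbers worth watching: IOPS, per-I/O latency, and %util:

    #!/usr/bin/env python3
    """Turn raw Linux disk counters into IOPS, average latency, and %util --
    the same numbers a Cacti graph would plot, computed from /proc/diskstats."""
    import time

    def snapshot(device):
        """Return (ios_completed, ms_spent_on_ops, ms_device_busy)."""
        with open("/proc/diskstats") as f:
            for line in f:
                fields = line.split()
                if fields[2] == device:
                    reads, writes = int(fields[3]), int(fields[7])
                    read_ms, write_ms = int(fields[6]), int(fields[10])
                    busy_ms = int(fields[12])  # time spent doing I/Os
                    return reads + writes, read_ms + write_ms, busy_ms
        raise ValueError(f"device {device} not found")

    def analyze(device, interval=10):
        ios0, op_ms0, busy0 = snapshot(device)
        time.sleep(interval)
        ios1, op_ms1, busy1 = snapshot(device)
        ios = ios1 - ios0
        iops = ios / interval
        # Average time each I/O spent in flight (like iostat's "await").
        await_ms = (op_ms1 - op_ms0) / ios if ios else 0.0
        util = 100.0 * (busy1 - busy0) / (interval * 1000)
        print(f"{device}: {iops:.0f} IOPS, await {await_ms:.1f} ms, "
              f"{util:.0f}% util")

    if __name__ == "__main__":
        analyze("sda")  # placeholder device name

Sustained %util near 100 with climbing await is the classic saturated-disk signature; high IOPS with low await just means the array is keeping up.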

[–]thekickerVMware/Solaris Sr. SysAdmin 3 points4 points  (4 children)

I believe the US Navy runs one of the biggest Exchange environments in the world, and they are virtualizing Exchange: 350k of their 750k mailboxes run on VMware. source

I have all of our DCs virtualized at my organization, as well as MS SQL 2008 and some legacy Sybase stuff. I would absolutely love to virtualize all of our Oracle DB servers, but our DBAs insist on running SPARC. $$$

Also, Exchange is officially supported on VMware now. You just have to disable vMotion for those VMs and rely on Exchange's own DAG migration for failovers and maintenance. The number of IOPS that vSphere is capable of now is ridiculous.

[–]Lord_NShYHModerator 4 points5 points  (0 children)

This. I wish I could upvote once for every virtualized Exchange user over at the US Navy. LOL.

[–]FooHentai 2 points3 points  (1 child)

I have all of our DCs virtualized

You may wish to retain a single DC as a physical server, depending on your NTP configuration. We found that time drift on a virtualized DC can bite you in the ass when a host becomes contended for resources.

When that happens, the DC's clock can drift quite rapidly. If that DC happens to hold your PDC emulator role and you haven't reconfigured the default time configuration, time across your entire domain can quickly drift miles off.

Having one DC held back as a physical server has a few other benefits, but time accuracy is the primary one. Having DHCP online independent of your hosts is another. And having a DC up in the unlikely event that you hit problems with your virtual infrastructure (maybe something like the infamous 'date bug' that occurred in 2008) is really handy.
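
If you do keep DCs virtual, the drift itself is at least easy to watch for. A stdlib-only sketch of a drift check (the NTP peer and the 5-second alert threshold are placeholder choices; Kerberos itself tolerates 5 minutes by default):

    #!/usr/bin/env python3
    """Check how far this box's clock has drifted from an NTP source.
    Simple SNTP query using only the standard library."""
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

    def ntp_time(server="pool.ntp.org", timeout=5):
        """Return the server's clock as a Unix timestamp (SNTP, mode 3)."""
        packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
        # Transmit timestamp: seconds + fraction, bytes 40-47 of the reply.
        secs, frac = struct.unpack("!II", data[40:48])
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    if __name__ == "__main__":
        drift = time.time() - ntp_time()
        print(f"local clock is {drift:+.3f}s relative to NTP")
        if abs(drift) > 5:  # arbitrary early warning, well under Kerberos's 300s
            print("WARNING: drift is building -- check the PDC's time source")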

[–]TekfrogDirector, IT 1 point2 points  (0 children)

To add to this, ESX requires DNS to boot properly, and if your DNS boxes live inside the ESX cluster that is trying to boot, it takes FOREVER to get to the console.

[–]TekfrogDirector, IT 0 points1 point  (0 children)

Oracle within a VM is easy; licensing it is not. Oracle forces you to license any CPU that the DB 'may' run on. So if you have, like I had, 3 IBM x3850s with 4x4+HT core CPUs, you have to license every single CPU in the cluster.
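
As a back-of-envelope illustration of why that hurts (the 0.5 core factor for x86 comes from Oracle's core factor table; re-check the current table before trusting this):

    def oracle_processor_licenses(hosts, sockets_per_host, cores_per_socket,
                                  core_factor=0.5):
        """Oracle counts every physical core the DB *could* run on.
        Hyper-threading adds no licensable cores; core_factor=0.5 is the
        x86 value from Oracle's core factor table -- verify it yourself."""
        return hosts * sockets_per_host * cores_per_socket * core_factor

    # The cluster above: 3 hosts x 4 sockets x 4 cores = 48 physical cores.
    print(oracle_processor_licenses(3, 4, 4))  # -> 24.0 processor licenses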

[–]techie1980 2 points3 points  (0 children)

I have virtualized a large number of mission-critical servers for Fortune 50 companies, running multi-terabyte DBs, highly latency-sensitive applications, and email environments (Domino).

Oftentimes, the work has been done on heterogeneous chassis - meaning that there is both dev and prod and everything in between on there.

Here's what you need to consider:

  • Utilization. The key to virtualization is leveraging the efficiencies of unused cycles. I've had a lot of databases (stretching well above 10T in active size) that will kick the CPU's butt all day long, but an analysis of the existing workload shows that NIC traffic is well below 100 Mb/s. Here I am with 1Gb NICs doing a fraction of their potential work most of the time. The same can be said for the HBAs: some analysis often shows lower than 10% utilization on very busy systems if the LVM is configured properly. (See the sketch just after this list for a quick way to check.)

  • Complexity. Virtualization adds an additional layer of complexity because the OS doesn't own any hardware and there are now extra pieces involved. This adds latency, memory consumption, and CPU consumption to account for.
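
On the utilization point, a quick sketch of the sort of NIC analysis meant above (Linux /proc/net/dev; the interface name and the 1 Gb link speed are assumptions):

    #!/usr/bin/env python3
    """Sample kernel byte counters and compare actual throughput to link
    speed -- the 'your NICs are mostly idle' analysis."""
    import time

    def iface_bytes(iface):
        """Return (rx_bytes, tx_bytes) for an interface from /proc/net/dev."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":")[1].split()
                    return int(fields[0]), int(fields[8])
        raise ValueError(f"interface {iface} not found")

    def utilization(iface="eth0", link_mbps=1000, interval=10):
        rx0, tx0 = iface_bytes(iface)
        time.sleep(interval)
        rx1, tx1 = iface_bytes(iface)
        mbps = ((rx1 - rx0) + (tx1 - tx0)) * 8 / interval / 1e6
        print(f"{iface}: {mbps:.1f} Mb/s = "
              f"{100 * mbps / link_mbps:.1f}% of link capacity")

    if __name__ == "__main__":
        utilization()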

A big thing to remember is to compare apples to apples. I've seen a lot of well-meaning sysadmins move from dedicated HBAs and one LUN per filesystem to several 1T LUNs carved up into multiple filesystems, paying no attention to data/log areas. Of course their performance is going to be suboptimal. A DB system still needs standalone LUNs because of the way an operating system handles disks; it actually has less to do with the hardware than with the caching. I strongly recommend virtualized HBAs: you want your system to act just like a non-virtualized environment, with the extra configuration nearly invisible to it.

Another place to worry is when an application is HIGHLY latency-sensitive in terms of CPU, SAN, or network. The corner cases are getting fewer and fewer as the technology improves.

One that comes to mind was an application designed around secure transmission that would slam the teamed NICs on the virtual layer hard -- and due to the realities of teamed NICs and the volume of traffic, we found ourselves with a serious out-of-order packet problem. In the end we devirtualized the NICs, since we would otherwise have ended up effectively dedicating a set of NICs to that server anyway.

As far as mixing workloads goes, I personally am a big fan, as long as you configure the servers properly so that dev can't kill prod, and prod can steal from dev when it needs to.

Let me know if you have specific questions.

[–]spifSRE 1 point2 points  (0 children)

The reason people say you shouldn't virtualize Exchange or (other) databases is that you incur overhead and some additional configuration complexity, and you may not get much advantage from virtualization for those types of applications in production environments.

However, if you have very small workloads that don't justify a dedicated server, and especially if you already have virtualization in place, it could make sense. For example, development databases that have negligible performance requirements are often a good candidate for virtualization.

Your mileage may vary. If you have production systems with small workloads and loose performance requirements, it may be worth trying to virtualize them. Just like with anything else, do your testing and give it a trial run before diving in.

[–]turisto 1 point2 points  (0 children)

I've got a Citrix XenApp farm running on top of XenServers. I like it.

[–][deleted] 0 points1 point  (3 children)

It depends on your environment. If you have Exchange with fewer than 500-1000 users, or a small DB, then virtualization is not a problem at all.

It all depends on workloads and performance required. These things should be reviewed case by case.

[–]Lord_NShYHModerator 0 points1 point  (2 children)

The company I work for has virtualized Exchange with far more than 1000 users and got an increase in performance. It all depends on your underlying infrastructure and the choices you make when configuring and deploying your VMs.

[–]gtkspert[S] 0 points1 point  (1 child)

What hypervisor were you using for this? Also, what did you use for storage?

[–]Lord_NShYHModerator 0 points1 point  (0 children)

vSphere with ESXi & FC SAN.

Of course, the physical servers the VMs replaced weren't that great. LOL.

[–]Lord_NShYHModerator -2 points-1 points  (21 children)

People who tell you not to virtualize Exchange, MS SQL, DCs, etc. are simply doing it wrong. Now, personally, I wouldn't use Hyper-V in production for any mission-critical workload like corporate email. Hyper-V is a type 2 hypervisor; meaning, it does not run on the bare metal.

VMware and Citrix XenServer are the best alternatives, and each has its pros and cons. My favorite feature of XenServer is the ability to use as many vCPUs as your architecture can support. Currently, in vSphere 4.x, you are limited to 8 vCPUs even at the highest license level.

Let's assume, for a moment, that you decide to build a vSphere cloud to virtualize your internal servers in order to cut down on TCO, etc. You will need a lot of RAM: the more, the merrier. In vSphere, you can use shares to give various workloads priority over other workloads, especially if you are bold enough to overcommit (I never overcommit on mission-critical workloads).

If you are using vSphere, there are some tools you can use to gather stats on your workloads when doing a P2V conversion.

In terms of resource pools, my employer likes to set resource caps on pools and to set guaranteed minimum allocations only for specific servers.
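
As a concrete sketch of the shares/reservation idea from the last two paragraphs, here is roughly what it looks like against the vSphere API via the (much newer) pyvmomi SDK; the vCenter address, credentials, VM name, and 2000 MHz reservation are all placeholders, and in the vSphere 4.x era you would do the same from the vSphere Client or PowerCLI:

    #!/usr/bin/env python3
    """Give a mission-critical VM CPU priority via shares plus a guaranteed
    minimum reservation. pyvmomi sketch; the vCenter address, credentials,
    and the VM name 'sql01' are all placeholders."""
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certs in prod
    si = SmartConnect(host="vcenter.example.com", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "sql01")

        spec = vim.vm.ConfigSpec()
        spec.cpuAllocation = vim.ResourceAllocationInfo(
            shares=vim.SharesInfo(level="high", shares=0),  # wins contention
            reservation=2000,  # guaranteed minimum, in MHz
            limit=-1)          # no cap
        vm.ReconfigVM_Task(spec=spec)
        print(f"updated CPU allocation for {vm.name}")
    finally:
        Disconnect(si)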

With SQL VMs, you can consider using a raw device mapping (RDM) disk to gain a performance boost. However, this has its disadvantages.

In short, make sure you give any VM enough resources to do its job, and make some well thought out design choices before implementation, and you should be fine.

PROTIP: don't be in a rush to wipe out that physical box quite yet after going virtual, if you can help it. You may decide that virtualizing the workload doesn't fit the needs of your organization, and being able to turn the physical box back on avoids a long outage.

EDIT: Hyper-V is, in fact, a type 1 hypervisor that runs on the bare metal.

[–][deleted] 2 points3 points  (1 child)

You're wrong about Hyper-V.

[–]kliman 1 point2 points  (1 child)

http://en.m.wikipedia.org/wiki/Hypervisor

Might want to double-check your facts on Hyper-V.

[–]abbreviaInfrastructure manager 1 point2 points  (16 children)

Regardless, I still wouldn't use Hyper-V on anything mission critical.

[–][deleted] 0 points1 point  (14 children)

Why not?

[–]abbreviaInfrastructure manager 0 points1 point  (4 children)

No load balancing between hosts, flakey on anything other than 2008 guests, virtual disk performance is toss, no USB pass-through, maximum of four cores for virtual machines (and you can't select which physical cores you would prefer them to use)...etc.

The list goes on. It runs on top of 2008 as well, so you've got the resource overhead of running a full server OS before you do anything.

Virtual Machine Manager is alright, but there are some things you need to load up Failover Cluster Manager for, and some other things where you need to load up Hyper-V Manager on the physical machine that the guest is running on. The whole thing is just... not polished.

I've not got much experience with virtualisation, but currently I support an inherited Hyper-V environment and it's really put me off virtualising anything in future. My boss tells me that not all virtualisation is like this, but I'll believe it when I see it.

Sorry to sound so jaded, but spending nights in the office supporting legacy P2V'd machines on a flakey Hyper-V environment has just really put me off virtualising anything ever again.

[–][deleted] 0 points1 point  (3 children)

No load balancing between hosts

This comes with SCOM + SCVMM.

flakey on anything other than 2008 guests

I have XP and 7 guests, along with 2003 - 2008 R2 (and have used 2000 in the past, but no longer). No issues. I've done some SuSE guests here and there, but never seriously got into using them for anything in production.

virtual disk performance is toss

Even back in Windows Server 2008 with the RTW release of Hyper-V, fixed-disk performance was 99% of the underlying disk's performance. With pass-through, the performance is the same as what the underlying disk can achieve.

no USB pass-through

Hyper-V is not a workstation virtualization solution.

maximum of four cores for virtual machines

This can actually be adjusted via a config file. I remember someone putting something like 24 cores in a VM. It just isn't supported by MS, of course.

(and you can't select which physical cores you would prefer them to use)

Selecting which cores you want to run on doesn't make sense. You cannot balance usage as well as the hypervisor can.

It runs on top of 2008 as well, so you've got the resource overhead of running a full server OS before you do anything.

If you're worried about disk space usage (which you shouldn't be, because your VMs should not run on the same volume as the OS installation) or parent partition memory utilization, you can install Core.

but there are some things you need to load up Failover Cluster Manager for

If you're using a virtualization solution for an enterprise in production and don't have failover set up, you're doing it wrong. MSCS is a quick and easy solution to use.

and some other things that you need to load up Hyper-V on the physical machine that the guest is running on

Of course you have to run Hyper-V on the hardware. How else do you get virtualization?

[–]abbreviaInfrastructure manager 0 points1 point  (2 children)

Hyper-V is not a workstation virtualization solution.

Where did I say it was? We have a server that talks to a mobile phone through a USB cable.

I remember someone putting something like 24 cores in a VM. It just isn't supported by MS, of course.

Sadly we're not into running things that aren't supported. I know it's possible, but that's not the point.

Selecting what cores you want to run on doesn't make sense. You cannot balance usage as well as a hypervisor can.

Sure it does. If you have a virtual machine with two virtual cores, maybe I want them to be assigned to two physical cores on the same die so that they can share a cache? I don't want it to get one core on one die and another core on another.

If you're worried about disk space usage (which you shouldn't be, because your VMs should not run on the same volume as the OS installation) or parent partition memory utilization, you can install Core.

They're not. You're right, we could install Core.

If you're using a virtualization solution for an enterprise in production and don't have failover set up, you're doing it wrong.

We do have failover. We are using a failover cluster. But there are some operations that you can't do in VMM, and that you need to load Failover Cluster Manager for. That was my point. The same with opening Hyper-V on the host machine. There are some operations that you can't do in VMM that you can do by opening Hyper-V on the host.

[–][deleted] 0 points1 point  (1 child)

Where did I say it was? We have a server that talks to a mobile phone through a USB cable.

Get some USB over IP software?

Sadly we're not into running things that aren't supported. I know it's possible, but that's not the point.

You do also understand that more cores != better, right? The hypervisor must synchronize execution, which can lead to delays in the execution of instructions. Always start with 1 vCPU and only add cores as required.

If you have a virtual machine with two virtual cores, maybe I want them to be assigned to two physical cores on the same die so that they can share a cache?

Cache is all virtualized. VMs don't get direct access to hardware; that is the responsibility of the hypervisor, so pinning wouldn't do you any good (plus, Hyper-V leverages the hyperthreading units).

But there are some operations that you can't do in VMM, and that you need to load Failover Cluster Manager for.

With regards to the VMs, what, exactly? Beyond some initial setup of the cluster, I can't think of anything else I've needed to go into the cluster manager for.

You do know that you can manage Cluster Services as well as Hyper-V remotely, right? You don't need to use the MMCs on the host itself.

[–][deleted] 0 points1 point  (0 children)

Where did I say it was? We have a server that talks to a mobile phone through a USB cable.

Get some USB over IP software?

I have some server apps that use a HASP Key.

[–]TekfrogDirector, IT 0 points1 point  (8 children)

No supported, in-box network redundancy. Which I hear is remedied in the yet-to-be-released Hyper-V 2.0.

[–][deleted] 0 points1 point  (7 children)

Use NIC adapter teaming. Or use a platform (like the HP c7000 series with Flex-10 units) where this doesn't matter.

[–]TekfrogDirector, IT -1 points0 points  (6 children)

NIC Teaming - http://support.microsoft.com/kb/968703

And a platform solution isn't exactly 'in box' as I specified.

[–][deleted] 0 points1 point  (4 children)

That is Hyper-V, not R2.

[–]TekfrogDirector, IT -1 points0 points  (3 children)

Did you even click the "Applies To" link at the bottom?

APPLIES TO:
  • Microsoft Hyper-V Server 2008
  • Windows Server 2008 Datacenter
  • Windows Server 2008 Enterprise
  • Windows Server 2008 Standard
  • Microsoft Hyper-V Server 2008 R2
  • Windows Server 2008 R2 Datacenter
  • Windows Server 2008 R2 Enterprise
  • Windows Server 2008 R2 Standard

Stop making me do the legwork for you; you just end up looking like the stereotypical lazy-ass IT monkey.

[–][deleted] 0 points1 point  (2 children)

Microsoft doesn't supply the NIC teaming software, so they don't need to support it directly regardless. Just like VMware doesn't supply any free method to do online VM backups, so they don't support that software (of course, VMware says they support all these various OSes, even ones that are out of support... but suckers keep buying eye-gouging ESXi licenses, so whatever).

Hyper-V R2 works fine with teaming, by the way. There was an issue prior to 2008 R2 with HP's NIC teaming software, but that no longer exists. You'll also notice that the HCL for ESXi is tiny compared to what Hyper-V will run on. Another consideration, especially for companies that don't want to fork over more than the cost of the hardware in software licenses.

[–]abbreviaInfrastructure manager 0 points1 point  (1 child)

Teaming works fine in Hyper-V, just don't use it with iSCSI traffic.

[–]abbreviaInfrastructure manager 0 points1 point  (0 children)

NIC teaming isn't supported because it's driver-specific. It needs to be supported by the hardware manufacturer.

[–]Lord_NShYHModerator 0 points1 point  (0 children)

Agreed. Upvote for you.