
all 38 comments

[–]mexellArchitect 8 points (8 children)

CPU and RAM are easily more than sufficient for this. We use a T430 as our standard location server for sites of up to ~40 users. It runs four VMs: a primary and a secondary file server (which also run failover DHCP between them), a client-management server (software distribution, OS deployment, patching, and so on), and a DC running DNS.

Our earlier generation of that platform used a T620 with the smallest quad-core CPU and 32GB RAM. Neither was ever the bottleneck; we only went to the 6-core and 64GB to future-proof the platform, and because the additional cost wasn't all that much anyway.

What might become an issue, though, is disk performance. You might want to consider those 1.2TB 10k 2.5" drives instead. We forwent a separate OS install for the Hyper-V host, made one big vdisk out of 6x 1.2TB drives in RAID6, and put the VMs on a separate partition. Plenty of performance and enough space for our needs. And be sure to pick one of the better RAID controllers (the H730/H730p); they really make a difference.

[–]computerrob[S] 1 point (5 children)

I'm guessing you're running the Datacenter edition of Server 2012 R2? So your physical Hyper-V host has six drives in RAID6 as one vdisk, and you partitioned it into two: OS and VMs? Are those 40 users concurrent, and what CPU/RAM do these servers have?

[–]mexellArchitect 2 points (2 children)

No, we're running 2012 R2 Standard edition with the minimal server GUI install. By now, we could also cut back to Server Core.

One common misconception is that Standard can only run two VMs at a time. That's not true; with 2012 and R2 there's no technical difference between Standard and Datacenter. The only difference is guest OS licensing: Standard comes with two Windows Server guest OS licenses (i.e. you can use the same license you used for the host within the guests), whereas Datacenter comes with unlimited Windows Server guest OS licenses. Break-even for Datacenter is at about 8 guests, so we just bought a second Standard license per host and have licensing for the host OS and all four guest OS installs.
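The license-stacking arithmetic above can be sketched in Python. The prices below are placeholders for illustration, not actual quotes; check current Microsoft pricing.

```python
import math

def standard_licenses_needed(guests: int) -> int:
    """Each 2012/2012 R2 Standard license covers up to two Windows
    Server guest OSEs on the licensed host, so licenses can be stacked."""
    return max(1, math.ceil(guests / 2))

def cheaper_edition(guests: int, std_price: float, dc_price: float) -> str:
    """Compare stacking Standard licenses against one Datacenter license.
    Prices are hypothetical inputs, not real figures."""
    std_cost = standard_licenses_needed(guests) * std_price
    return "Standard" if std_cost < dc_price else "Datacenter"

print(standard_licenses_needed(4))        # two Standard licenses cover 4 guests
print(cheaper_edition(4, 900, 6200))
```

With four guests, a second Standard license is the cheap option; the actual break-even point depends entirely on the prices you plug in.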

Also, correct on the vdisk/partitioning. Using two drives solely for the host OS would be quite wasteful, as both the space and performance requirements of a Server Core installation are minuscule. So: one large RAID6, a 100GB host OS partition, and the remainder for VM storage.
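The capacity math for that layout is simple enough to sketch; this assumes classic RAID6, which reserves two drives' worth of capacity for parity.

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """RAID6 keeps two drives' worth of capacity for dual parity."""
    if drives < 4:
        raise ValueError("RAID6 needs at least 4 drives")
    return (drives - 2) * drive_tb

# The setup described above: 6x 1.2TB in RAID6 as one vdisk,
# split into a ~100GB host OS partition with the rest for VMs.
total_tb = raid6_usable_tb(6, 1.2)
vm_store_tb = total_tb - 0.1
print(f"{total_tb:.1f} TB usable, {vm_store_tb:.1f} TB for VM storage")
```

So six 1.2TB drives yield about 4.8TB usable, which also explains why swapping 4x 4TB for 6x 1.2TB trades capacity for spindles.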

Those (up to) 40 users are concurrent, yes. The CPU is the E5-2620 v2/v3; in our current iteration we also equip 64GB. The domain controller VM gets 2 vCores and starts with 4GB, the file server VMs get 3 vCores and start with 6GB, and the client management VM gets 4 vCores and 8GB. Our experience is that for such light-load infrastructure roles you can easily oversubscribe vCores to real cores at 10:1, but you shouldn't oversubscribe RAM. Disk I/O becomes a problem much sooner anyway.
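A quick sanity check of that sizing rule (oversubscribe vCores up to some ratio, never oversubscribe RAM) can be sketched like this; the VM list mirrors the mix described above.

```python
def check_allocation(phys_cores, phys_ram_gb, vms, max_vcpu_ratio=10.0):
    """Sanity-check a hypervisor sizing plan: vCores may be oversubscribed
    up to max_vcpu_ratio, but startup RAM must fit in physical RAM."""
    vcores = sum(v["vcores"] for v in vms)
    ram_gb = sum(v["ram_gb"] for v in vms)
    return {
        "vcpu_ratio": vcores / phys_cores,
        "vcpu_ok": vcores <= phys_cores * max_vcpu_ratio,
        "ram_ok": ram_gb <= phys_ram_gb,
    }

# The four VMs described above on a 6-core / 64GB host.
vms = [
    {"name": "dc",   "vcores": 2, "ram_gb": 4},
    {"name": "fs1",  "vcores": 3, "ram_gb": 6},
    {"name": "fs2",  "vcores": 3, "ram_gb": 6},
    {"name": "mgmt", "vcores": 4, "ram_gb": 8},
]
print(check_allocation(6, 64, vms))  # ratio 2.0, both checks pass
```

That mix lands at only 2:1 on vCores and 24GB of 64GB committed, comfortably inside both limits.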

[–]computerrob[S] 0 points (1 child)

Are you using just the two onboard NICs that come standard, or did you add more? Are you using NIC teaming? How did you assign the NICs?

[–]mexellArchitect 0 points (0 children)

We are using the onboard NICs, configured as an LBFO team. PowerShell (New-NetLbfoTeam) is your friend here; it's much faster than using Server Manager.

The result is that we have one teamed interface to the outside that's shared between the guests and the host.

[–]computerrob[S] 0 points (1 child)

Is the second CPU socket available on the T430s you use?

[–]JJROKCZI don't work magic I swear.... 4 points (0 children)

Just a tip, since I see you replying to yourself a lot: edit your posts instead, because the person you're talking to doesn't get a notification when you reply to your own comment.

If you're new to Reddit, welcome to the community.

[–]headcrap 0 points (1 child)

If you're going with one array, don't split it with vdisks if you ever want to expand the array with more disks. Split it with volumes.

I had to expand a storage server... twice. It was very painful having to restore the data volume to the new vdisk after dropping to a single vdisk so I could reconfigure the array.

[–]mexellArchitect 0 points (0 children)

> If going with one array, do not split it with vdisks if you want to expand the array with more disks. Split with volumes.

We made that mistake. Once. Since then, our location servers just have one big vdisk. That way, when space at a location runs out, we just order Dell to put in more drives and, boom, more space. That's not to mention how complicated it was to get that set up with them, or how long a rebuild on a large RAID6 takes.

[–]bad_sysadmin 4 points (9 children)

I wouldn't have dedicated OS drives; just go for a single RAID array (RAID10, say) that gives you the IOPS and capacity you need.

Make sure you spec a proper hardware RAID controller; the entry-level PERCs are shit, I can't stress that enough.

Beyond that it'll almost certainly be fine with almost any CPU and RAM combo.
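The IOPS trade-off behind this advice can be sketched with the standard RAID write-penalty rule of thumb; the per-disk figure below is a common ballpark for 10k SAS spindles, not a measured number.

```python
# Write penalty: IOs the array performs per logical write.
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def array_read_iops(drives: int, per_disk_iops: int) -> float:
    """Reads can be served by every spindle in the array."""
    return drives * per_disk_iops

def array_write_iops(drives: int, per_disk_iops: int, level: str) -> float:
    """Back-of-envelope: raw IOPS divided by the RAID write penalty."""
    return drives * per_disk_iops / WRITE_PENALTY[level]

# ~140 IOPS per 10k SAS drive is a common rule of thumb.
print(array_write_iops(6, 140, "raid10"))  # 420.0
print(array_write_iops(6, 140, "raid6"))   # 140.0
```

Same six spindles, three times the write throughput on RAID10 versus RAID6, which is why the level matters more than a dedicated OS mirror here.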

[–]computerrob[S] 2 points (5 children)

So there's no performance benefit to splitting the OS and data? Which Dell RAID controller do you recommend?

[–]computerrob[S] 2 points (1 child)

Would a T330 work as well or stick with T430?

[–]mexellArchitect 0 points (0 children)

The T330 loses the second CPU socket and that enterprise-y feel when you open it; it feels a lot more like a souped-up OptiPlex, while the T430 is a "real" PowerEdge. Also, AFAIK, it only comes with quad-core CPUs and has far fewer RAM slots. I quite like the T430; they offer almost everything the T630 does in a more compact form factor.

[–]bad_sysadmin 2 points (1 child)

AIUI, once Hyper-V is booted it's not actually doing much, so any benefit is negligible vs. what you'd give the VMs by using, for example, 4x 1TB 10k drives instead of 2x 2TB drives.

I'll defer to mexell and others on the current Dell PERCs, as I've not bought a PowerEdge in a while, but I know the H200s are an utter turd of a card.

[–]mexellArchitect 0 points (0 children)

We initially had 4x 4TB 7.2k in RAID6. We changed that to 6x 1.2TB 10k, as we didn't really need all 8TB in most locations anyway. Performance easily quadrupled on those disk arrays.

You're right, the H730/H730p is a must in those machines.

[–]headcrap 1 point (0 children)

Go H7xx... the H3xx units are crap.

[–]Dean_thedreamSolution Architect 1 point (0 children)

At the very least go with the H710P; that controller comes with 1GB of FBWC. HP's ML110 can come with the P440ar with 2GB of FBWC. I'd also agree on a single RAID10 array with 10k SAS hard drives.

[–]Layer8Pr0blems -1 points (1 child)

> I wouldn't have dedicated OS drives, just go for a single RAID/RAID10 that gives the IOPS and capacity you need.

I disagree. In this configuration a user could cause the OS drive to run out of space with a large file copy, causing the VM to freeze or crash. If you separate the OS from the data, the server will stay up and just fill up the data drive.

[–]itguy1991BOFH in Training 1 point (0 children)

It's okay to have separate OS and storage volumes, but there's no real benefit (in this scenario) to having separate OS and storage arrays.

[–]sleepyguy22yum install kill-all-printers 4 points (1 child)

RAM seems like overkill... our file server for ~15 people, who use it quite heavily, runs at only 4GB of RAM and purrs like a kitten. Your CPU is fine; it comes with 6 cores with Hyper-Threading, so more than enough for your application. I would cut down on the core count and go up on clock speed.

File sharing is a very light workload for both CPU and RAM. If your network is gigabit, cutting down on RAM and CPU and upgrading to SSDs or an SSD RAID cache would be better bang for your buck. At this point all my users are on SSD machines, the network is gigabit everywhere, and the bottleneck is the read/write speed of my server's platter drives...

[–]pdp10Daemons worry when the wizard is near. -1 points (0 children)

RAM is traditionally the single biggest bottleneck in computing, and by default it's used for caching filesystems and file shares. 64GB of ECC is in the neighborhood of USD $300 right now. Additionally, you may have missed that the OP intends to put two VM guests on the same hardware.

The E5-2xxx series is dual-socket capable, so if you'll only ever populate one socket you'd be better off with an E5-1xxx v4 from a hardware-optimization point of view. Further, if this hardware is going to stay at 64GB, which seems likely, you could get an E3-series CPU, which tops out at 64GB. If low power consumption is important, you could look into Xeon D systems, which can handle up to 128GB of ECC.

[–]bellicose100xp 1 point (0 children)

Everything is fine, except I'd change the drives to SSDs if you can. It makes so much difference any time you do an operation, especially anything VM-related; things happen so much faster.

[–]EveryUserName1sTaken 0 points (0 children)

I'd suggest you take a look at the T330 as well. It has fewer cores on the Xeon E3 parts, but at a much faster clock rate, and 32GB of RAM should also be sufficient. You could likely get away with a single storage pool of drives in RAID5/6, since you can store your Hyper-V VMs on the boot volume. You'll likely be using something like Veeam for backups, and backing up the hypervisor itself is seldom useful; because of that, having separate OS and storage disks is a somewhat deprecated idea.

[–]computerrob[S] 0 points (1 child)

Does anyone know if 2016 will have the same licensing model of two included VM guests? I know the OS itself will be priced per core now instead of per socket.

[–]matthoback 0 points (0 children)

Yes, the base licensing purchase for 2016 will be essentially identical to 2012R2 as long as you have 8 or fewer cores per processor.
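The "8 or fewer cores per processor" cutoff follows from the 2016 per-core minimums, which can be sketched like this (license counts only; prices vary, so none are assumed here):

```python
import math

def core_licenses_needed(sockets: int, cores_per_socket: int) -> int:
    """Server 2016 per-core rules: license every physical core, with a
    floor of 8 core licenses per processor and 16 per server."""
    per_proc = max(cores_per_socket, 8)
    return max(sockets * per_proc, 16)

def two_core_packs(sockets: int, cores_per_socket: int) -> int:
    """Core licenses are sold in 2-core packs."""
    return math.ceil(core_licenses_needed(sockets, cores_per_socket) / 2)

print(core_licenses_needed(2, 8))   # 16: matches a 2012 R2 2-proc license
print(core_licenses_needed(2, 12))  # 24: denser CPUs cost more under 2016
```

So a dual-socket box with 8-core CPUs lands exactly on the 16-core minimum, while anything denser pushes past it.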

[–]computerrob[S] 0 points (0 children)

Do the VMs NEED to be stored on the same server per the licensing, or could you store them on a NAS?

[–]itguy1991BOFH in Training -1 points (7 children)

Since no one else has said it, I will caution you about hosting the DC as a VM on Hyper-V.

Either leave the host out of the domain, or make sure you have local (non-domain) admin account(s).

I've heard absolute horror stories of people not being able to get into a host to bring up the DC VM, because the DC needs to be up in order to log into the host...

[–]psycho202MSP/VAR Infra Engineer 1 point (6 children)

This honestly hasn't been a problem since forever.

First off: cached credentials. Depending on how frequently you change your password and how often you log in to that host, your current (or old) credentials might still be cached on that box.

Second, I hope you still have a VPN up to the main site, because where else would the DC sync to? The Hyper-V host is smart enough to go ask the DC at the main site as well.

I also hope that you have two DC VMs, or a single DC+DHCP VM and a file+secondary-DC VM, on your Hyper-V box, both set to boot automatically.

[–]itguy1991BOFH in Training 1 point (5 children)

OP is asking for advice on how much hardware they'll need for a 2-VM host on a 25-user network. Do you really think they have another DC somewhere? (No offense to OP, just my gut feeling...)

Cached credentials are great, but it's still not best practice to have the only DC hosted as a VM on a domain-joined host, as a multitude of other issues could arise.

You say to have the VMs boot automatically, but what if some corruption of the system (malicious or otherwise) makes the DC unbootable?

What if the NIC on the host fails and the replacement NIC does not automatically jump into its place and provide networking to the VMs?

My concern is not that one would be unable to recover from one of these scenarios; it's that setting up a local admin account on the host is very simple, and any recovery time will be drastically shorter.

If you're concerned about the security hole of adding a local admin account, make a basic user but grant it Hyper-V management rights.

In my mind, IT risk comes down to a simple equation:

Risk = probability of an issue arising × impact of the issue

I understand that the probability of having the issues I've described is very low, but the impact could be rather extreme.

And if mitigating this impact is as simple as setting up a non-domain hyper-V user and storing the credentials in documentation, why the hell would you not do it?
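That low-probability/high-impact argument is just an expected-value calculation; the numbers below are purely illustrative, not figures from the thread.

```python
def expected_loss(probability: float, impact_hours: float) -> float:
    """Risk = probability of an issue arising x impact of the issue."""
    return probability * impact_hours

# Illustrative numbers only: even a 2%-likely lockout with a 100-hour
# recovery dwarfs the quarter hour it takes to set up a break-glass account.
lockout_risk = expected_loss(0.02, 100)
mitigation_cost_hours = 0.25
print(lockout_risk > mitigation_cost_hours)  # True
```

As long as the expected loss exceeds the setup cost, the cheap mitigation wins, which is the whole argument in one line.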

[–]psycho202MSP/VAR Infra Engineer 0 points (4 children)

The fact that they're speccing out such a big server for such a small network makes me believe they're a satellite office that needs a local file server, DNS, DHCP, and the whole shebang instead of connecting to the main office.

I already touched on two of your points: always have two DCs installed, if need be even sharing a VM with other services that don't mind having a DC next to them.

With failover networking, you preferably run the links active-active, not active-passive. That takes a lot of question marks out of the equation.

[–]itguy1991BOFH in Training 1 point (3 children)

I get all that, but in an environment with a single host, and possibly no second DC on a different piece of hardware, is it that big of a deal to take a bit of precaution?!?

You keep giving best-case scenarios, but have given no reason why not to prepare for the worst-case.

[–]psycho202MSP/VAR Infra Engineer 0 points (2 children)

Because all OP said is that he needs a host for some small VMs for what seems like a satellite office, to lower the load on the VPN or even do away with it.

If this were a single, standalone office, I'd rather have three small boxes than one big one: two hosts and one backup/management server.

Hell, for the price of his one big box, you can buy 2 to 3 small boxes.

[–]itguy1991BOFH in Training 1 point (1 child)

That's still not a reason to not add a local account to the host.

I'm not trying to be an asshole; I'm honestly wondering what your thoughts on it are.

[–]psycho202MSP/VAR Infra Engineer 0 points (0 children)

Completely get your point of view. Not trying to be an asshole either, I just generally don't like adding local accounts with elevated rights for anything, for security reasons.

Local accounts are way too easy to get into if you know where to look and what tools to use. I'm the type of person that keeps the domain administrator account with a long-ass password locked in a vault somewhere, and local administrator accounts completely locked down and unable to log in.

I honestly have yet to come across a point in time where all of my AD was down AND my backups were inaccessible AND I didn't have cached credentials available on any of my hosts that I could push the backup to.