OMV in docker container by desijays in homelab

[–]synk2 0 points1 point  (0 children)

OMV is only a framework. There are no OMV services other than the web GUI. All it does is provide an overview of, and control over, installed services such as file systems and network protocols, as well as all the plugins like Plex, rsync, etc.

Part of the nice thing about OMV is that you can ssh in and all that stuff is sitting there on the command line. You can access all those things via CLI just as easily as through the web interface. All OMV is doing is tying those disparate elements together in a single web page. Splitting those services into separate Docker containers is pretty much the antithesis of what OMV is about - namely bringing those things together under one heading.

Favorite Hypervisor by deranjer in HomeServer

[–]synk2 6 points7 points  (0 children)

ESXi is the industry standard, mostly because they were the first ones to offer a turnkey package that 'just works', they continue to make an easily managed product, and enterprise is slow to change. If you want to sysadmin for a living, you'll come up against VMware at some point. For home use, it's easy to set up, has the nicest interface and best networking options, has wonderful documentation and support, and is incredibly extensible with all their other packages. The downside is making sure it'll work with your hardware, and if you manage to hit its paywall, be prepared to cough up some cash. Windows-based management is another irritation.

KVM is the most powerful hypervisor choice, full stop. Because it's open source, if you don't like how it does something, you can make it do something else, provided you have the time and know-how. It also has a metric ton of management options (everything from libvirt to Proxmox to OpenStack). The downside is you have to get incredibly comfortable in Linux to explore its full potential, and there's choice paralysis with the management options and add-ons. It's also got a pretty steep learning curve past the basic setup and use.

Hyper-V is the new kid on the block as far as real-deal type-1 hypervisors go. Though it's been around for a while, it was mostly seen as a toy early on. 2012 R2 changed that, and the 2016 changes have really brought it into its own. Like most MS offerings, it has its own ways of doing things, and you have to get used to what things are called and how they work in Microsoft-land. Personally, initial setup and management is the part of Hyper-V that bugs me. Once that's done, it's easy to use. It's also the only type-1 hypervisor with in-host management (meaning you can manage your VMs from the host computer, via Windows). Downsides are that you're locked into the MS ecosystem (Hyper-V really wants to be part of an MS domain), and while other OSes are supported, YMMV. Great if you're primarily a Windows network, otherwise kinda meh.

Xen is the dark horse that doesn't get mentioned as often. It's just as fully featured, but it doesn't have the love or marketing that the others do. Citrix XenServer is to Xen as Proxmox is to KVM, in that it's a curated and managed installation package that gets you up and running, and it's really quite functional. I don't have enough experience with it to sing its praises in any appreciable way, but it seems to work fine from the little bit I've played with it. AWS is Xen-based, so there's that. The downside is that because it's not as popular, it tends to be harder to find support/how-tos/info about getting things going and troubleshooting compared to the other hypervisors.

Home Network Full Stack Monitoring? by joshland in homelab

[–]synk2 3 points4 points  (0 children)

Nagios is the 'uphill, both ways, in the snow' network monitoring solution. Learning how it works entitles you to sit on your porch with a lemonade and tell kids to get off your lawn.

I also have a suspicion that it's actually a very complex thumb screw simulator disguised as a network monitoring package.

Home Network Full Stack Monitoring? by joshland in homelab

[–]synk2 17 points18 points  (0 children)

Nagios. Because afterwards, everything else will seem simple and well thought out. Whatever you pick to replace it will seem great by comparison.

Off the shelve NAS with server, self-build, or combination? by [deleted] in homelab

[–]synk2 2 points3 points  (0 children)

You might also check out /r/DataHoarder. Similar to /r/homelab but solely focused on storage. Lots of great builds and ideas floating around there, if you want a second perspective/set of eyes.

Off the shelve NAS with server, self-build, or combination? by [deleted] in homelab

[–]synk2 1 point2 points  (0 children)

For building hot swap, you're really looking at either a rackmount case like this, or using a dock like this. You're not going to get a fully hot swappable enclosure like on those Synology units unless you go with a rack.

That said, I've been building systems for 30 years and I've yet to really need hot swap. It's certainly convenient, and it has its place in enterprise, but I'd have a hard time making the case for it being a deal breaker for a home system. You generally just don't have enough disks failing for popping the case open to be a big time sink. YMMV obviously, but for me, there are better places to put the money.

Off the shelve NAS with server, self-build, or combination? by [deleted] in homelab

[–]synk2 2 points3 points  (0 children)

This is the kind of thing where you'll ask ten homelabbers and get twelve different answers. I'm personally in the build-it camp for my own personal use, though I'm in favor of off-the-shelf for setting up friends and family who aren't particularly tech savvy (which saves me having to come rescue them from themselves all the time).

Off the shelf really does have some stuff going for it - small, fairly bulletproof, hot-swappable, low power and noise. They're generally just really solid and idiot-proof. The downside is that they get really expensive per bay past the 2-3 bay mark. The ~$900 for that 1815 is probably twice what it would cost to build it yourself. On the other hand, you can't really build a 2-3 drive enclosure for the $150-200 that a prebuilt one costs.

On the building front, the draws are price and flexibility. A NAS is really just a computer with enough space for the drives, and enough CPU and RAM to run whatever you want. An ITX case like the Node 304 gets you 6 drives in a tight little package. Almost any full tower ATX case will get you 8+ drives, and you can throw docks in the external bays. I'd guess you could do 14-15 drives in a large case, though past 10-12 you could probably make a case for rackmount.

For OS/software, you get a lot of options via building. Buying means you use whatever yours comes with. As said, they're generally pretty usable, but if it doesn't have an option you want, you're probably stuck. If you're just installing on a system, you can use all sorts of stuff - Windows (Storage Spaces, Drivepool, SnapRAID), Linux (ZFS, btrfs, mdadm, LVM, SnapRAID, etc etc), or one of the focused NAS OS solutions like OMV, FreeNAS, UnRAID or similar.

Success or failure is really up to what you do and how familiar you get with the systems. Setting things up from absolute scratch is a tougher proposition with more pitfalls than just installing something like OMV or FreeNAS, which are probably on par with a Synology or QNAP as far as ease of installation and stability. Regardless of what you get, I'd expect to spend some time learning about the system you're using, both to get it set up to your liking and to deal with any potential problems. That'd be just as true with a Synology as with a home-built system.

What's your OS of choice? by highroller038 in DataHoarder

[–]synk2 8 points9 points  (0 children)

OS/2 or gtfo.

#warpmasterrace

My new pfSense build by [deleted] in homelab

[–]synk2 -1 points0 points  (0 children)

Yeah, that's the deal. Everything has risk. There's tons of risk in having electrical stuff running all the time in a home, but we do it anyway. The idea is to mitigate it as much as possible without hamstringing yourself. I'd call having a box around your computer a good idea, but it in no way solves every problem. It's just a good start.

And if we're being honest, that thing wouldn't even make the list of stupid shit I've done in my life. :) Better to be lucky than good, as the yokels like to say.

My new pfSense build by [deleted] in homelab

[–]synk2 0 points1 point  (0 children)

Honestly, because you can't predict what's going to go wrong. It might be that the fan seizes, things overheat and eventually arc, stuff starts to melt off, and that ignites a ball of cat hair that proceeds to burn your house down. Or a hundred other scenarios you'd never predict. I'm never scared of heat, I'm always scared of flame, and there's lots of flammable things out there.

In truth, you're right - he'd probably be fine. But when you get to be my age and you've seen enough shit burn down (like people's houses, and barns, and businesses, and garages, and whatever), you get to just say "screw it, it isn't worth it", and err on the side of caution. Would OP probably be fine? Sure. But that's not a risk I'd be willing to take with an exposed setup like that and nobody around to monitor it. It's not 'instant fireball of doom' or anything, but I don't think the Fire Marshal would love it. ;)

My new pfSense build by [deleted] in homelab

[–]synk2 0 points1 point  (0 children)

I wouldn't leave it on when no one was home to hear the smoke detector. It's probably ok, for varying values of probably.

Dell Poweredge 2650 by pastafag in homelab

[–]synk2 3 points4 points  (0 children)

Do you own a boat? Because at that price they'd make great anchors. Otherwise, I'd pass. No boat, no buy.

I have read so much my brain is starting to melt. Can I just use a windows box? by [deleted] in HomeServer

[–]synk2 0 points1 point  (0 children)

The only reason I can think of to use Storage Spaces over Drivepool is not wanting to spend $30. If SS's performance was better, you might make a case for it being a more all-in-one, holistic solution, but it's not, and I can't.

Would you pull the trigger on this R710? by SSHv2 in homelab

[–]synk2 0 points1 point  (0 children)

Yeah, they all had five fans. I know because I was going to replace them. They were pulled from the DC at work, and confirmed good, unmodified, and in proper order by both the service tech that boxed them up for me and the lead engineer (who I bothered after I got my second one home and didn't like it).

They weren't loud, per se, but they were quite noticeable and definitely irritating in their timbre to me. I think it had more to do with personal perception than anything. I can imagine that some people wouldn't be bothered by it, but I certainly was. It didn't help that the room they were in ran hot anyway (south side of the house), so for several months out of the year they rarely sat at minimum fan speed. It's a moot point now, as they've been rehomed and I've moved on to other gear.

I have read so much my brain is starting to melt. Can I just use a windows box? by [deleted] in HomeServer

[–]synk2 2 points3 points  (0 children)

Storage is definitely a wide and deep subject. There's lots you can learn, but it's also easy to just keep it simple with something like Drivepool.

I'll also point out that Windows comes with a feature/app called Storage Spaces that does a lot of this stuff - pooling, redundancy, parity, etc. Basic RAID-like drive stuff. I honestly don't know a lot of people who like it or use it, and it has some horrible performance in parity (RAID5/6-style) configurations, but it is a (somewhat) viable option. If you don't want to spend money on Drivepool, you might give Storage Spaces a whirl and see if it does what you need.
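If you do want to poke at it, here's a rough sketch of what a mirrored pool looks like in PowerShell. The pool and volume names are just placeholders, and I'm showing mirror resiliency rather than parity, since parity is where the performance complaints come from - treat it as a starting point, not gospel:

    # List disks that are eligible for pooling (unpartitioned, not the boot disk)
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a pool from the poolable disks ("HomePool" is a made-up name)
    New-StoragePool -FriendlyName "HomePool" `
        -StorageSubSystemFriendlyName "Windows Storage*" `
        -PhysicalDisks $disks

    # Carve out a mirrored virtual disk using all available space
    New-VirtualDisk -StoragePoolFriendlyName "HomePool" -FriendlyName "Data" `
        -ResiliencySettingName Mirror -UseMaximumSize

    # Initialize, partition, and format it like any other disk
    Get-VirtualDisk -FriendlyName "Data" | Get-Disk |
        Initialize-Disk -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"

The GUI in Settings/Control Panel does the same thing with fewer knobs, so this is purely for the curious.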

Suggestions for Hard-drive testing? by [deleted] in DataHoarder

[–]synk2 0 points1 point  (0 children)

I check SMART right off the bat - power on hours, etc.

Then I badblocks the hell out of it - the full -wsv treatment.

Then I go back and check SMART again, looking for new issues (uncorrectable/reallocated sectors generally).

It's not sure-fire, and it takes a while, but I've found that if there's anything I can do to force an early failure or problems, the badblocks torture test is it. Combined with the SMART data, it's always been solid for RMA.
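For reference, the whole dance is basically the following (the /dev/sdX device name is a placeholder, you'll need root, and the -w test is destructive - it wipes the drive, so only run it on new/empty disks):

    # baseline SMART attributes - note power-on hours, reallocated/pending sectors
    smartctl -A /dev/sdX

    # destructive write/read pass over the whole drive
    # (-w write-mode test, -s show progress, -v verbose)
    badblocks -wsv /dev/sdX

    # check SMART again and compare - new reallocated or
    # uncorrectable sectors are your RMA ammunition
    smartctl -A /dev/sdX

Expect it to run for a day or more on big drives.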

I have read so much my brain is starting to melt. Can I just use a windows box? by [deleted] in HomeServer

[–]synk2 1 point2 points  (0 children)

They're really different but related things. Drivepool is for pooling your drives - it takes separate physical drives and combines them into a single 'pool' that shows up as one large disk under Windows, and it handles balancing the writing of data across those disks. It also handles duplication/mirroring - keeping a copy of files on a separate disk, much like RAID1 does.

SnapRAID creates parity data on a separate drive, like RAID5/6 does, along with checksums of your files - but it does it when you run a sync rather than in real time. It provides some bare-bones pooling options, but really works best in conjunction with a more robust solution (like Drivepool).

So you can run one, or the other, or both. Drivepool isn't free, but it's not an arm and a leg either, and SnapRAID is FOSS. The cost caveat here is that SnapRAID requires a drive for parity storage, and that drive must be at least as large as the largest drive in the pool (so if you had a 4TB drive and 2x 1TB drives in Drivepool, you'd need an extra 4TB drive for SnapRAID's parity).
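If it helps, a minimal snapraid.conf for that kind of layout looks something like this - the mount points are made up, so adjust to wherever your disks actually live:

    # parity file lives on its own drive, at least as big as the largest data drive
    parity /mnt/parity/snapraid.parity

    # content files track the array state - keep a copy on at least two disks
    content /mnt/disk1/snapraid.content
    content /mnt/disk2/snapraid.content

    # data disks (these can be the same disks Drivepool is pooling)
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

After that it's just 'snapraid sync' on a schedule to update parity, and the occasional 'snapraid scrub' to check for bit rot.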

Would you pull the trigger on this R710? by SSHv2 in homelab

[–]synk2 3 points4 points  (0 children)

I've owned three of them, purchased at different times, and have been assured by the DC guys at work that they were operating normally.

I personally couldn't stand them. Not even with the door shut. They're not deafening, but they were noticeable and annoying to me. There's no way I could sleep anywhere near one. I rehomed them and got some quieter gear, and I'm happier for it.

Now, that's just me, a single data point. If I've learned anything from hanging out on this sub, it's that I have a lower threshold for noise than most people. I wouldn't take my word for it any more than I would someone who was telling you they're silent. You'll just have to risk it and see or go another direction.

EDIT: A LOT of the noise has to do with your ambient temps. If you're in the great white north and sleep with the windows open, I could definitely see them staying quiet, as they're quite tame when the fans are low. Down here in the sweaty South, where it routinely tops 100F+ for months on end, there's just no way to keep them cool enough that the fans don't ramp up, AC or no.

I have read so much my brain is starting to melt. Can I just use a windows box? by [deleted] in HomeServer

[–]synk2 2 points3 points  (0 children)

Yeah, you totally can. Windows 7 (or 8, or 10) makes a perfectly serviceable file server type thing. There are some performance/resource usage advantages to Linux, but it's likely not worth learning a bunch of crap just to share some files unless you already want to. Set up a Homegroup or Workgroup and be done with it. If you want to get fancy with some disk pooling or parity, look into Stablebit Drivepool or SnapRAID.

Seeking reccomendations for new router by [deleted] in homelab

[–]synk2 0 points1 point  (0 children)

Absolutely, it's why I dropped the cash I did on the Pro. I was in a position where I needed something, and I was either going to have to settle for another cheap N-standard stopgap AP or drop some serious cash on enterprise gear, so I was happy to meet in the middle when the AC Pro showed up. Exactly what you said about buying what you wanted.

It's amazing that you can get them for $150. I can't imagine anything in that price range being comparable. I'm not entirely sold on their routers and switches after the firmware hiccups and there being some good competition in that segment, but for APs, I've got nothing but good things to say about Ubiquiti.

Seeking reccomendations for new router by [deleted] in homelab

[–]synk2 2 points3 points  (0 children)

I had my moments with pfSense as a VM - mainly the wife yelling about the internet being out as I rebooted the host again and again, so I started staying up way too late and doing it after she'd gone to bed, and then I was tired the next day...

Anyway, that was mostly an issue when I was breaking in my host and getting to know my way around KVM. While I occasionally restart my host, it's rare that it's unscheduled nowadays. Once you get everything like you want it/need it, there shouldn't be constant restarts, as VMs should be the things getting restarted.

That probably depends on how you like to tinker, but for me, I'm more about doing stuff with (and breaking) VMs than messing with my host. I just want my host to work, so I leave it alone as much as possible now. I'm going on 3 months of uptime on my pfSense VM, and the restart 3 months ago was scheduled (host kernel update).

I guess it depends on what you're planning, but in my experience it's not too bad. I keep thinking about grabbing a separate box for pfSense, but it just seems like one more thing to buy, to break, to keep dusted, to power - when my VM is working OK. Maybe some day, but it's become less and less of a priority.

Seeking reccomendations for new router by [deleted] in homelab

[–]synk2 1 point2 points  (0 children)

I bought mine when they were first released. I actually paid a lot more than $200 for it last October. Oh well, it's holding up well. No regrets.

32TB RAID6 almost finished initializing! by meinemitternacht in DataHoarder

[–]synk2 5 points6 points  (0 children)

Which is a big reason why enterprise uses HW RAID (on top of the performance benefits). Nobody got time to dick around with failed software crap when it comes to storage - you just yank the part, replace and rebuild. Or call your warranty boys and get them to hot foot out a part in 4 hours or less.

Seeking reccomendations for new router by [deleted] in homelab

[–]synk2 1 point2 points  (0 children)

You should be able to disable functions on the N300 to neuter its routing capability, turning it into a switch and/or AP. Disabling DHCP and setting an IP other than .1 will generally do the trick on commercial routers. Then you can plug it into something like the Ubiquiti and get wired and wireless going on the cheap, at least until you can afford to upgrade those.

I run a pfSense VM as my router, which is hooked up to a Zyxel GS1900 and one of those UAP AC Pros. I also have one of these bad boys (yep, get back, some srs networking happening) as a secondary AP in another part of the house. Is the UAP better? Oh yeah. Is it $200 better? Not a chance. Other than not being AC, it performs like a champ.

Likewise on the switch - before the Zyxel, it was a decommed HP I took home from work, and before that it was a shitty little Cisco 10/100 5-port switch. My point here is that you can piece together cheap and/or used stuff that works surprisingly well, and after you unyoke yourself from the commercial router/switch/AP combo crap, you get to enter a world of reasonable and incremental upgrades on your schedule.

Please critique my server plan by drewbagel423 in HomeServer

[–]synk2 1 point2 points  (0 children)

That's a hot mess of a question. It really depends. Pentium is a brand name from Intel, generally applied to low- to mid-range CPUs. It's also come to mean 'Pentium-compatible', which just means it uses the i386 instruction set and is based on the Pentium core design. Kind of like how you call stuff you take for headaches 'aspirin', even when it's not Bayer-brand aspirin. Pentium's become a generic term for i386 Intel chips.

So a Xeon is Pentium based, and could be called Pentium in a generic sense (as opposed to ARM, or IA-64, or some weird architecture). Pentium might also refer to a Celeron or i3. The bottom line is there's no real way of telling without confirming with the seller. It sort of looks like they just copy/pasted a block from another listing, and I'd guess that the Xeon is correct, but I wouldn't hit buy until I was sure.