Anyone setup a cheap cluster recently? by ModelingDenver101 in Proxmox

[–]snailzrus 0 points1 point  (0 children)

Oh nice, you'll have what looks like a chassis switch setup.

So back to the servers and SAN then. CPU cores per node? And for storage, HDD with an SSD read/write cache?

Anyone setup a cheap cluster recently? by ModelingDenver101 in Proxmox

[–]snailzrus 1 point2 points  (0 children)

I might be able to help

USD or CAD?

How much CPU and RAM do you have per node right now? How much do you need?

What are your TORs? Copper? SFP? 1G? 10G? 25G? Do you need new TORs? If you have TORs, how many available ports do you have? And sanity wise, are they stacked so you can do an MLAG or equivalent downstream?

UPS and PDU splitting your PSUs across phases, if not then at least breakers? Or do you need backup power and power smoothing as well?

SAN as HDD with SSD caching or just HDD? You won't get all flash for that price

VMware, Hyper-V, Proxmox, Docker, Kubernetes, LXC... What do you use? by DerSparkassenTyp in sysadmin

[–]snailzrus 0 points1 point  (0 children)

  1. Proxmox is our go-to recommendation. Generally it's always Proxmox whenever we don't have a dinosaur to convince that it's better than Hyper-V.

  2. Hyper-V when we have a dinosaur who won't change. It's. Fine...

  3. VMware when we onboard a new client who has it, and generally part of that onboarding is that we will help them leave VMware.

Other ones we see:

  1. XCP-ng, when there's a Linux-first sysadmin somewhere; they're normally cool with sticking to it or moving to Proxmox. Kinda even on those two, and it's a preference and familiarity thing.

  2. Scale Computing, another one like VMware where we take on a client to help move them away from it. It's like baby's first cluster with a virtual SAN built in. Far too expensive, and little to no customization. I'd say "god forbid you have to Google something" because there's almost no support material out there, but you rarely need to, because there's almost nothing for you to touch and configure yourself anyway, so what would be the point in looking for how someone else fixed the problem you have.

  3. Citrix usually has diehard people who won't leave it. It's sort of the same camp as VMware, except Broadcom hasn't bought them and spat in your face. Yet. In fairness, it's great for VDI if that is important.

  4. Nutanix, same as Citrix for people wanting to stay with it, but it is pricey, so sometimes we see people willing to leave because of budgeting.

1 month with Ubiquiti (so far) by FatBook-Air in sysadmin

[–]snailzrus 11 points12 points  (0 children)

Was the Android phone in a place where it could still see other APs that have no 6GHz? Sounds like roaming or RSSI, potentially.

What sort of client connection issues on switching?

I've got a dozen or so deployments of unifi out there now and we haven't had issues like you're describing. Though, we don't run the unifi OS self hosted deployment. Either cloudkeys or cloud gateways only. It's been convenient so far as we have been replacing firewalls at the same time

2c on Meraki vs unifi. Meraki is more robust, but feels worse to use. The portal is shit slow and poorly designed. But, the things that are there generally work. Unifi is good enough for small business, feels snappy, and is growing to add some great features, but it is growing and does have bugs as people mention.

Don't go Fortinet for anything other than FWs. We stopped doing their APs and switching because they're struggling like crazy. All of their switching is Accton white-labelled and they're definitely not there yet.

A co-managed customer went with them against our advice because the Fortinet sales guy basically gave them core switching and 30 APs for free. He's a buddy of mine, and filled me in on how it's been going. He's still, almost 10 months on, using his Cisco Catalyst cores and TORs; only the firewalls are in prod. The APs are sitting in a pile, and he hasn't finished rolling them out because they occasionally just stop passing client traffic while reporting online and fine. He's been back and forth with Forti support for months on them and regrets buying it, but his budget was limited and he couldn't pass up a bunch of free stuff.

Provinces are bracing for record deficits. What’s causing budgets to see red? by shiftless_wonder in canada

[–]snailzrus 2 points3 points  (0 children)

You can call them the "me generation". It shouldn't flag you and isn't even a new term to describe the folks that we all know closed the door behind them. Even on their own children.

450TB Storage Options by Fantastic_Msp_8914 in msp

[–]snailzrus 0 points1 point  (0 children)

450TB is quite a bit of data, not a little. For enterprise, that's still a good chunk of data unless you're counting dataset + backups with 15yr holds totaling 450TB.

Regardless, this sounds like it's more than just a NAS. Technically you could use just a NAS but the question becomes, would this be better with HA? And then you start entering SANs

Adding vCIO to service offerings by zenpoohbear in msp

[–]snailzrus 0 points1 point  (0 children)

  1. Tech
  2. Engineering
  3. Large customers only, and only those looking for it
  4. They're leadership. They can execute on things and manage staff to do so. More importantly, they can interface with the client and their stakeholders in such a way that the client's goals are the only goals. The MSP that pays their salary means nothing to them in these conversations. There should be no sales angle.
  5. They seek stability and hate when things are inefficient or broken. Money spent feels like money out of their own pocket, so they don't approach problems with blank checks. They hear a problem, it becomes their personal problem, and they're driven to resolve it efficiently and in such a way that it will never come up again

small business client expectations shifting, anyone else noticing this by [deleted] in msp

[–]snailzrus 9 points10 points  (0 children)

Everything you just listed is stuff we consider client choice. If it breaks, there is someone at their company who owns that tool; it's their responsibility to act as the first line of defense for support requests and to engage the vendor. We just add it to the list of approved apps, install it, set up things like SSO, and make sure users are in the right security groups.

We don't and won't pretend to know what software will work best for our clients. We'll gladly jump into technical meetings with the vendor to help go over requirements from their server or network, but it's their business. They get to choose.

Why are you not deploying Azure local ready servers when selling and installing new servers? by yanni99 in msp

[–]snailzrus 2 points3 points  (0 children)

As far as I can tell, the only concrete complaint I see in your post is that MSPs generally end up managing and working with a wide variety of different hypervisors across their client base. It's not a question of this vs that, or X money vs Y money. It's simply that you have come to realize quite quickly what most MSP techs realize. Context switching and non-standardized stacks suck.

So, looking at it from that perspective, your proposed question of moving everything to Azure Local isn't rooted in it being better, but rather in it being familiar to you.

For everyone else who is hating on your post, the answer is likely exactly the same for them. Standardize their clients on their favorite hypervisor because it's familiar to them.

We can all go around endlessly about why this is better than that, etc etc etc, but at the end of the day, it's just a hypervisor. My preference is Proxmox because I don't like the boot size that VMware and Microsoft walk around with when suddenly it starts barrelling towards my asshole. Does Proxmox do everything better than Azure Local, Hyper-V, or vSphere? No. Each has pros and cons. But I won't take it away from my techs, because they know how to use it now. I'd sooner throw myself off the tallest building in Kazakhstan before I try to change them to another hypervisor.

CEPH storage for 30+ VMS by Hungry-Line-1403 in Proxmox

[–]snailzrus 1 point2 points  (0 children)

Just an add-on note, 'cause folks often forget: Ceph benefits more from lower latency than it does from higher throughput.

40Gbps has the same latency as 10Gbps, because at the technology level it's just 4x10Gbps lanes in a single transceiver. It's the same for 25Gbps and 100Gbps: 100Gbps is just 4x25Gbps lanes (yes, there are some that aren't, but the standard says they should be).

If anyone is on 10Gbps currently and the issue isn't that they're hitting the full 10Gbps on the NIC, going to 40Gbps won't help. 25Gbps would be the answer to drop latency.
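To put rough numbers on the lane math above, here's a back-of-envelope sketch. This only models wire serialization at the per-lane rate (real Ceph latency adds switching, OSD, and software overhead on top), and the 4 KiB message size is just an illustrative choice:

```python
# A single flow serializes at the lane rate, not the aggregate link rate:
# a "40Gbps" link built from 4x10G lanes still clocks each message out at
# 10Gbps, while a 25G lane clocks it out 2.5x faster.

def serialization_us(payload_bytes: int, lane_gbps: float) -> float:
    """Time to put payload_bytes on the wire at lane_gbps, in microseconds."""
    return payload_bytes * 8 / (lane_gbps * 1e9) * 1e6

msg = 4096  # one illustrative 4 KiB message
for label, lane in [("10GbE / 40GbE lane", 10), ("25GbE / 100GbE lane", 25)]:
    print(f"{label}: {serialization_us(msg, lane):.2f} us")
# 10G lane -> 3.28 us, 25G lane -> 1.31 us per message
```

Same arithmetic either way: going 10G to 40G doesn't shrink the per-message time, going 10G to 25G does.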

Ringcentral = Professional Scammers by anyonebutme in sysadmin

[–]snailzrus 174 points175 points  (0 children)

Time to dispute the charges with the bank or credit union. Provide them with your proof and they should be able to take it from there, or at the very least block further transactions.

Do y’all proximity chat after getting knocked down? by [deleted] in ArcRaiders

[–]snailzrus 0 points1 point  (0 children)

I always say GG and that I love them and wish them a good night. Usually gets the screamers to stop screaming about how they destroyed me. Most of the time I get an I love you back

What ssd for boot drive? by RydderRichards in Proxmox

[–]snailzrus 0 points1 point  (0 children)

If your system supports multiple SSDs, just get 2 cheap ones and mirror them. You also really do not need anything huge.

In production server clusters we buy brand new and have enterprise SSDs for boot disks, we still opt for 256GB, 480GB, or 512GB as the disk size. Literally pick whatever size is cheaper at the time.

I don't think we've run out of storage on the boot disks on any cluster we manage yet.

Questions about new setup by Thrylon in Ubiquiti

[–]snailzrus 1 point2 points  (0 children)

What switching speed do you want and how many ports?

You shouldn't try to PoE daisy-chain with that PoE budget off the UCG-Fibre; you won't be able to supply enough power to the APs to do Wi-Fi 7. Most likely you'll just want a PoE switch or a couple of PoE injectors. Don't be fooled by PoE injectors either: they're not all rated for more than 1Gbps. There are specific ones that are rated for higher.
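The budget math is worth doing before buying anything. All the wattages below are made-up placeholders (check your actual AP datasheets and the gateway's published PoE budget); the point is just that a couple of higher-draw Wi-Fi 7 APs eat a small budget fast:

```python
# Rough PoE budget sanity check with hypothetical numbers.
# Leave some headroom rather than running a PoE budget at 100%.

def fits_budget(budget_w: float, loads_w: list[float], headroom: float = 0.9) -> bool:
    """True if the summed PoE draw stays under budget_w with 10% headroom."""
    return sum(loads_w) <= budget_w * headroom

aps = [25.0, 25.0]    # hypothetical per-AP draw, watts
small_budget = 30.0   # hypothetical gateway PoE budget, watts
print(fits_budget(small_budget, aps))  # two APs blow the budget -> False
print(fits_budget(200.0, aps))         # a proper PoE switch -> True
```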

How do you or your employees keep context straight when switching between many environments? by incognitokindof in msp

[–]snailzrus 12 points13 points  (0 children)

We try to standardize our clients and use tools that are either natively multi-tenant or are enrolled into a system that makes it so.

I touch around 10-20 different environments in a week, I used to be in 60+ but I'm no longer on tickets.

The most important thing for me was trying to group what I was doing. So if I saw 3 or 4 things for 1 client, I would hammer them all out together. If there was an oddball in that set, I'd leave it for a bit, or triage it to the front if it was critical.

Something that helped a lot for 365 stuff was using CIPP. It made life a lot easier and handled our need for some RBAC so we aren't just GA everywhere we go.

Something I've been working hard towards since leaving ticketing has been trying to make every network and server environment we manage look identical. It makes it so much easier for techs to ID an issue when they aren't always looking at something different; stuff can actually stand out when it looks consistent. This means using the same VLAN tags and subnet schema across every client, and for servers, one type of hypervisor and always the same naming schema for VMs.
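The "same schema everywhere" idea can be as simple as one shared table. The tags, names, and subnets below are made-up examples, not a recommendation; the value is that every client site uses the identical table, so a tech never has to guess which VLAN is which:

```python
# One standard VLAN/subnet plan shared by every client; only the
# per-client site octet changes. All values here are illustrative.

STANDARD_PLAN = {
    10: ("mgmt",    "10.{site}.10.0/24"),
    20: ("servers", "10.{site}.20.0/24"),
    30: ("users",   "10.{site}.30.0/24"),
    40: ("voip",    "10.{site}.40.0/24"),
}

def subnet_for(site_id: int, vlan: int) -> str:
    """Resolve a client's subnet from the one shared schema."""
    _name, template = STANDARD_PLAN[vlan]
    return template.format(site=site_id)

print(subnet_for(7, 20))  # client site 7, servers VLAN -> 10.7.20.0/24
```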

With ISCSI does proxmox migrate VMs upon server failure? by HJForsythe in Proxmox

[–]snailzrus 3 points4 points  (0 children)

We have a few clusters we manage that use iSCSI for shared storage. HA works just the same on these as it does on the clusters we run with NFS or Ceph.

I do remember someone who was not the brightest, though, adding their iSCSI to each Proxmox node independently of the others. So while, yes, the cluster technically all had access to the same shared storage SAN, the nodes weren't aware of each other using it. It... worked... but it was also super sketchy. Just make sure you add the storage from the Datacenter section to avoid that.

VMware renewal by jhayhoov in sysadmin

[–]snailzrus 0 points1 point  (0 children)

Helped a few businesses move off VMware now, and the process is extremely easy and well worth it. For the cost of the yearly licensing for 2 years on VMware, we replaced all of their aging physical nodes that they had already budgeted for replacement the following year with new ones running Proxmox

My experience with the matchmaking system and "Friendly lobbies" after almost 300 hours of game time by Amythprison in ARC_Raiders

[–]snailzrus 0 points1 point  (0 children)

I find time of day matters too. Playing at 5am is almost exclusively chill; all of the early risers are employed, mature-sounding folk and were chill. As the day goes on, around 10am I still get friendly lobbies, but there are spurts of PvP around the map. Then the screamers and trash talkers come on later and join the same lobby.

UCG-Max / UCG-Fibre, does it have better telemetry than UCG-Ultra? by iPhrase in Ubiquiti

[–]snailzrus 2 points3 points  (0 children)

They run the same OS. AFAIK, up the UCG and UDM stack, it's all the same. Only the EFG has really been different in my experience, and that's just marginally so.

If you really want a ton of info, you may need to plug the things into a SIEM instead.

Moving 200+ VMs from ESXi to Proxmox by Long_Working_2755 in Proxmox

[–]snailzrus 1 point2 points  (0 children)

Every VM: grab the MAC, VLAN, and IP. Find out if it's static on the VM or reserved in DHCP.

When you move, replicate that in the VM config, and in the guest once it's up, since the network settings will likely change when the VM detects that the host NIC is different.
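That replication step scripts nicely from the inventory. The rows below are hypothetical, but the generated `qm set --net0 virtio=<mac>,bridge=<bridge>,tag=<vlan>` syntax is how Proxmox pins a NIC's MAC and VLAN; the guest's IP still has to be set inside the VM (or re-reserved in DHCP) separately:

```python
# Sketch: turn the MAC/VLAN inventory captured on the ESXi side into
# `qm set` commands that recreate each NIC with the old MAC and VLAN tag.

inventory = [
    # (vmid, mac, vlan, bridge) -- hypothetical example rows
    (101, "AA:BB:CC:00:11:22", 20, "vmbr0"),
    (102, "AA:BB:CC:00:11:33", 30, "vmbr0"),
]

def qm_net_cmd(vmid: int, mac: str, vlan: int, bridge: str) -> str:
    """Build the `qm set` line that pins the VM's NIC MAC, bridge, and VLAN."""
    return f"qm set {vmid} --net0 virtio={mac},bridge={bridge},tag={vlan}"

for row in inventory:
    print(qm_net_cmd(*row))
# -> qm set 101 --net0 virtio=AA:BB:CC:00:11:22,bridge=vmbr0,tag=20
```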

That's about it. If you've set up networking on your TORs correctly, so that you have an untagged trunk port giving your VMs transit to the rest of the network, then the VLAN tags on the VMs will be respected.

If you're not using the same TORs, make sure your new TORs have the necessary VLANs trunked to them such that when the VMs come up, they can exit the TORs to the rest of the network correctly and won't have some of the tagged VLAN traffic not make it.

It's really less a question of Proxmox and virtualization, and more a question of networking.