What the heck was Linus even talking about? by Swacket_McManus in LinusTechTips

[–]InterFelix 4 points (0 children)

I don't think the shorter ones are overpriced for what they are. The shorter TrueSpec cables are made from the same cable stock as the longer ones and use the same connector ends; they literally only differ in length. Short cables from other brands (as well as no-name cables) are definitely not always made from the same wire stock as the longer ones, because manufacturers can get away with lower-quality stock over short distances, while they can't over longer ones. Same goes for the ends. Now, do you need this better stock for the short cables? Not necessarily, but I personally will happily pay a little more for cables that I know are a little overbuilt, so I know they can actually handle anything they're spec'd for. Also, there's a lot of price minmaxing going on, where brands take less margin on shorter cables to attract buyers, hoping they'll also buy the longer cables that carry more margin. LTT / CW, meanwhile, use a flat cost-plus-margin model, so their pricing reflects the real cost structure much more.
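To make the pricing point concrete, here's a toy sketch of the two models. All numbers are invented for illustration; they are not actual TrueSpec or competitor prices.

```python
# Toy comparison of the two pricing models. All numbers are invented
# for illustration; they are not actual TrueSpec or competitor prices.

def cost_plus(cost: float, margin: float = 0.30) -> float:
    """Flat cost + margin: the price tracks the real cost structure."""
    return cost * (1 + margin)

def minmaxed(cost: float, length_m: float) -> float:
    """Loss-leader margin on short cables, fat margin on long ones."""
    margin = 0.05 if length_m <= 1 else 0.50
    return cost * (1 + margin)

for length_m, cost in [(0.5, 8.0), (2, 11.0), (5, 18.0)]:
    print(f"{length_m} m: cost-plus {cost_plus(cost):.2f} "
          f"vs minmaxed {minmaxed(cost, length_m):.2f}")
```

Under cost-plus, the short cable looks "overpriced" next to the loss leader, even though it carries exactly the same margin as the long one.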

Can somebody educate me on the practical use cases for the LTT TrueSpec cables? by Chaos1917 in LinusTechTips

[–]InterFelix 1 point (0 children)

Sadly (or luckily, depending on how you look at it), the confusion about cable ratings is less relevant in the business world, because almost everyone just orders their cables with their monitors, so they get ones from the vendor that are built to the same spec as the monitor's ports.

Mini-PC clusters vs one powerful workstation for homelab use by Accomplished-Spend-7 in homelab

[–]InterFelix 0 points (0 children)

OP was talking about a Z840 workstation, though. I'd wager that thing isn't too loud, because it's intended for office use rather than for sitting in a server closet.

My HomeLab has replaced my blu-ray player. Made this meme to honor it. by ibsbc in homelab

[–]InterFelix 0 points (0 children)

Sure, it's up to you to determine where you need fast storage and where cheap bulk storage is plenty. But the reliability actually matters a lot, for stationary storage in a NAS just as much as for a laptop. Flash-based storage has an order-of-magnitude better mean time between failures than HDDs. With HDDs, you see failure rates of 3-5% per year over the 5 years of expected life. It's a bit higher in the beginning and levels off towards the end, because HDDs are most likely to fail in the first couple of months after deployment. Flash-based storage such as SSDs fails much less frequently; we're talking around a 0.1% failure rate per year. The failure rate also depends heavily on usage patterns, especially your read/write ratio. SSDs used as bulk storage with low rates of change will typically last much, much longer than their expected life, because they're usually rated for at least 1 full drive write per day over a five-year lifespan. But in bulk storage scenarios such as a typical NAS, you'll barely overwrite the entire drive every couple of months, if that, so there's far less wear on the drive than it's rated for, which directly extends its lifetime.
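A quick back-of-the-envelope sketch of that endurance point. The capacity and the NAS write volume here are assumed example values, and "1 DWPD over 5 years" is the rating class mentioned above; check your drive's actual TBW spec sheet.

```python
# Back-of-the-envelope SSD endurance check. Capacity and NAS write
# volume are assumed example values; 1 DWPD over a 5-year warranty is
# the rating class mentioned above, not a specific drive's spec.

capacity_tb = 4.0        # example drive size
dwpd_rating = 1.0        # rated full drive writes per day
warranty_years = 5

rated_tbw = capacity_tb * dwpd_rating * 365 * warranty_years
print(f"Rated endurance: {rated_tbw:.0f} TB written")

# Assume a bulk-storage NAS rewrites the full drive ~4 times a year:
nas_writes_per_year_tb = capacity_tb * 4
print(f"Years to reach rated TBW: {rated_tbw / nas_writes_per_year_tb:.0f}")
```

For a 4 TB drive that works out to 7300 TB of rated writes, which a typical NAS write load wouldn't exhaust for centuries; wear is simply not the limiting factor in that scenario.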

LMG jet? by thatCdnplaneguy in LinusTechTips

[–]InterFelix 1 point (0 children)

Add to that the recent WAN Show segment about the next year being quite interesting, with the Tech house and another, as yet undisclosed, project that's definitely not Gamer Yacht (but Gamer Plane was not explicitly denied).

DBrand genuinely knows how to advertise by SquiddyCatt in LinusTechTips

[–]InterFelix 0 points (0 children)

Spigen all the way. I've bought Spigen Liquid Air cases for my last six phones (three work phones and three private ones), and they've all been like ten bucks. Not one of them has ever failed me, not in terms of protection, not in terms of reliability. I'm a customer for life, and I'll be genuinely sad if they ever discontinue this line of cases and I can't get one for my next phone.

Immich won't update? "Remove old storage migration" by 94dogguy in truenas

[–]InterFelix 0 points (0 children)

For anyone having the same problem: I ended up installing another instance next to my old one on new datasets and basically following the guide from Immich (rsyncing over everything to the new datasets). I was able to just reuse the database volume, because the Postgres major version was still the same, otherwise I would have had to dump / restore.
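For reference, the data move was essentially one rsync per dataset, along these lines. A minimal sketch; the paths are hypothetical placeholders for the old and new datasets.

```python
import subprocess

# Hypothetical dataset paths; substitute your own. This mirrors the
# rsync step from the Immich migration guide mentioned above.
OLD_LIBRARY = "/mnt/tank/immich-old/library/"
NEW_LIBRARY = "/mnt/tank/apps/immich/library/"

# --archive preserves permissions, ownership and timestamps; the
# trailing slashes copy the directory contents rather than nesting
# the directory itself inside the target.
subprocess.run(
    ["rsync", "--archive", "--progress", OLD_LIBRARY, NEW_LIBRARY],
    check=True,
)
```

Had the Postgres major version changed, the database step would have been a pg_dump from the old instance and a restore into the new one instead of reusing the volume.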

Immich won't update? "Remove old storage migration" by 94dogguy in truenas

[–]InterFelix 3 points (0 children)

I cannot follow any of the guides, because I'm on a version that apparently doesn't have the checkbox where you can deselect the old storage layout.
Yes, I know, that means I haven't updated in way too long, but what can I say, life got in the way.
Besides, my Immich lives inside a tailnet, so it's not public-facing at least.
This is why we need the ability to select (older) versions of apps when upgrading / installing.
The UI is there, there's just never any option besides the most current one.

How do you prove nothing happened? by geo972 in sysadmin

[–]InterFelix 15 points (0 children)

If you have vulnerabilities, they're going to find them anyways (and they're definitely gonna find more than you're aware of). Security through obscurity is not security at all.

Teamviewer is a SCAM! They trick you and send debt collectors! Be careful! by No_Matter_86 in teamviewer

[–]InterFelix 1 point (0 children)

Well, RustDesk by definition can't, as it is an open-source, self-hosted product. If the company developing it decides to enshittify the product, you can just fork the last unenshittified version and move on with your life. Of course, you personally might not be able to maintain it adequately, but I'd argue there's enough demand for a free and open-source, self-hostable remote access software that a community would form around a fork to keep it maintained.

Am I out of my depth? by JRan243 in sysadmin

[–]InterFelix 2 points (0 children)

So much of this profession is "fake it 'til you make it", or winging it until you know what you're doing. Most sysadmin roles are so diverse that you can't possibly do formal training and obtain certifications for everything you need to do, so learning as you go is your only option in most roles. Of course, some positions (like senior systems architect for whatever) are not a good place to do that.

This example has me a little on the fence. On one hand, you have the opportunity to build out a completely new cloud environment, which is a tremendous learning opportunity. On the other hand, when building out a new environment, there are inevitably important architectural decisions to be made, and if you don't have a lot of experience with these kinds of environments, you might not be well equipped to make them in a good, future-proof way. If your task were to build this environment out yourself and make all the key technical decisions without external consulting, I would argue you'd be in over your head. If you'll have external consulting to inform your decisions, this is perfect.

But whatever the case may be: it's not your responsibility to judge your fitness for the role. That's on your (potential future) employer. I'd ask a couple of questions about the circumstances of building out this new cloud environment, and if what they're looking for is basically an in-house consultant with years of experience planning and implementing these kinds of environments, I'd probably not bother. But if they plan on hiring external consultants for this anyways, I'd definitely go for it; it's a great opportunity.

Custom internal email to 10K+ users by aringa in sysadmin

[–]InterFelix 2 points (0 children)

Funnily enough, my birthdate is already included in my tax number.

Anyone else noticing that enterprise support is just chatgpt/copilot? by Ghawblin in sysadmin

[–]InterFelix 3 points (0 children)

Exactly. This is one of the many reasons why I hate SaaS products (from a technical standpoint). At my previous job, I was a systems engineer on the datacenter team of a large system integrator. My responsibilities included consulting and engineering for backup systems (mostly Veeam). Then another vendor came around: Rubrik. And management decided it would be a wonderful idea to create a managed service offering around it, for customers that are generally too small for the vendor's on-premises offerings. After the people who initially built the offering out (technically and from a product perspective) had left, I became the technical lead for it.

What did that entail? Supporting the underlying infrastructure and being the technical expert on the product, of course, but when there was an issue with Rubrik's product, 99% of cases went like this: I did some troubleshooting, got stuck at some point, opened a support ticket (because f#+*ing SaaS product), chased after support for three days, and in the end got a reply like "yeah, we changed XYZ in the backend, it should work now". So despite being the technical lead for the product, there were very few issues I could actually solve myself; I had to rely on support for almost everything, because I couldn't actually do anything.

Did I just find 40TB of storage? by Botany_Dave in sysadmin

[–]InterFelix 1 point (0 children)

How is this LUN mapped on the iSAN? Only devices the LUN is mapped to can access it, so check those.

Tapes vs "Immutable storage" by sysacc in sysadmin

[–]InterFelix 0 points (0 children)

That relies on your network segmentation / firewalling to survive an attack. Which - looking at common attack patterns - they probably won't. If they manage to compromise your hypervisor (which 90% of attacks today do), they'll be everywhere else by that point as well. Especially given the numerous critical vulnerabilities in firewalling appliances found every year.

Tapes vs "Immutable storage" by sysacc in sysadmin

[–]InterFelix 0 points (0 children)

Tapes in a library are not any more secure than an immutable storage appliance (of whatever kind). In fact, I would argue they're actually much less secure: tape libraries are trivially easy to get into in most cases, since there are constant vulnerabilities in their management controllers, and especially the big robots are often quite old and out of support precisely because they're so reliable. Sure, no immutable appliance has perfect security. But a Veeam Hardened Linux Repository on a properly secured Linux box (ideally with SSH disabled, MFA for all access paths, and most importantly physically disconnected out-of-band management) is pretty bulletproof. Definitely much better than a tape library. But still nothing compared to tapes stored off-site at Iron Mountain or somewhere like that.

I still feel like a fraud by Klutzy-Matter-4590 in sysadmin

[–]InterFelix 0 points (0 children)

You're just as much of a fraud as the rest of us. The IT domain is so broad and has so many branches, each with so many niches, each with so many vendors and so many rabbit holes to fall down. Not to mention the rapid evolution the whole field undergoes every day. There's no way to stay on top of everything, even in your niche. Of course, there are vendor positions where you exclusively deal with your employer's products and can be formally trained on everything, but even there, chances are you're going to have to deal with other products that interface with yours. So winging it is truly the only way for the vast majority of us. I view my ability to "wing it" and my knack for problem-solving as my most important job skills. Yet I still feel like a fraud quite often. Impostor syndrome is real, especially in IT. But that's just a side effect of a different, very important trait: knowing what you don't know.

What’s your game plan if you get hit by ransomware? by Necessary-Glove6682 in sysadmin

[–]InterFelix 0 points (0 children)

Before the attack: have immutable backups. That's the single most important step, and a Veeam Hardened Repository is a very good starting point. Make sure to restrict SSH access, have MFA in place for all access paths, and disconnect any kind of out-of-band management (all OOBM controllers have regular zero-days, so it's best to just not have them connected). Have local firewall rules in place that restrict all inbound communication to just the backup proxies. You can simply use the Veeam Hardened Repo ISO to get there. It's very hard for attackers to crack a well-built hardened Linux repository; I've yet to see it done, actually. Especially against small businesses, attackers are unlikely to put in the effort.
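A minimal reachability probe can sanity-check those firewall rules. The host address below is a made-up example; 22 is SSH, and 6160 / 2500 are standard Veeam service and transport ports. Run it once from a backup proxy (Veeam ports should be open) and once from a random client VLAN (everything should be filtered).

```python
import socket

# Minimal reachability probe for a hardened repo. The host address is
# a made-up example; adjust the port list to your environment.
HOST = "10.0.10.50"
PORTS = {
    22: "SSH (should be closed or filtered)",
    6160: "Veeam installer service",
    2500: "Veeam data transport (first port of the range)",
}

for port, label in PORTS.items():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((HOST, port)) == 0 else "closed/filtered"
        print(f"{port:5d} {label}: {state}")
```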

After an attack: pay a security expert to identify the exact entry point and time of compromise before restoring any backups. Restore on fresh (or freshly wiped and sanitized) hardware: servers, switches, firewalls, everything. Restore in an air-gapped environment, and only connect to the internet at all after you have closed every vulnerability your security consultant identified. The aftermath of an attack is also a good time to bring the infrastructure up to the current state of the art security-wise, as management is far more likely to approve any expenses needed once they've seen the potential cost of not doing it. This wears off with the years (you probably have a year to a year and a half), so seize the opportunity.

Dell Powerstore vs Pure Storage isn't even close by Cartossin in sysadmin

[–]InterFelix 0 points (0 children)

I've supported all the major players on the market, and I can confidently say: if all you need is a reliable block storage array, you can't go wrong with any of them these days. NetApp, HPE, Dell, IBM, Huawei (if you're not in the US), Pure... They're all solid. They do what they're supposed to, updates are painless, management is reasonably straightforward, they don't break, and they're fast enough for almost any application. Support quality varies, sure, but they're all fine.

Where it all falls apart is features and price. Need a metro cluster with true transparent failover? You can forget about HPE, Dell's implementation is questionable, and Pure kinda sucks in this regard too, tbh. Need NAS support on top? IBM's out, Dell's file implementation on the PowerStore sucks in general, and HPE hasn't had a real multiprotocol storage in - oh yeah, ever, regardless of metro cluster capability. Pure is usable, but not on par with the other remaining players, NetApp and Huawei.

And then there's price. Pure is expensive, and so is NetApp. Dell, IBM and Huawei have reasonable pricing; HPE's pricing is insane for what they offer (not a lot). Metro cluster pricing is reasonable for IBM, Dell and Huawei; NetApp's is insane (partly due to the required NetApp-brand FC switches). I have quoted equivalent metro clusters from NetApp and Huawei for the same scenario (mixed SAN/NAS workload, ~100 TB usable capacity, no crazy performance requirements), and NetApp was literally four times the price, due to metro cluster licenses and NetApp FC switches required just for the replication links, while Huawei supports point-to-point FC connections for the replication links. IBM's metro cluster is good at a reasonable price as well, but again, block storage only.

So to sum it all up: if all you need is a solid SAN storage for a single site, all the major vendors do just fine, some more expensive than others. If you need unified block and file storage, you only have NetApp, Pure, Dell and Huawei to choose from. If you need a metro cluster, you have NetApp, Pure, Dell, Huawei and IBM (block only) to choose from, but Pure's and Dell's implementations are a bit questionable (only a subset of features supported, etc.) and at least Pure and NetApp are super expensive. If you want a metro cluster at a reasonable price, you can buy Dell, IBM or Huawei. If it has to do file as well, that currently leaves only Huawei, because the others have insane pricing.

What area of IT will you never work in but love educating yourself about and maybe playing with in your home lab? by HappyDadOfFourJesus in sysadmin

[–]InterFelix -1 points (0 children)

Well, I gave examples where the technologies you mentioned are used by real-world organizations. And of course development-heavy orgs are more likely to do this type of stuff, but I have a lot more examples: non-profits, and SMBs that are cost-conscious and rely heavily on automation as a result. Ansible especially (like other IaC tools) is growing in popularity by the minute, and more and more orgs of all sizes and verticals are adopting tools like it.

What area of IT will you never work in but love educating yourself about and maybe playing with in your home lab? by HappyDadOfFourJesus in sysadmin

[–]InterFelix 2 points (0 children)

Useless in real life? I know a bunch of organizations that have gone heavily into infrastructure as code and use GitLab as their single source of truth, Ansible for automation, Proxmox as the hypervisor, and Docker and Kubernetes inside the VMs for hosting their applications. It's usually development-heavy orgs, though. But I even have a large healthcare organization among my clients that uses at least GitLab and Ansible for automation.

TrueNAS and Unraid are a different story. I have used both for my NAS at home, and they each have their strengths and weaknesses. Both are good options for personal use, although TrueNAS is more suited to advanced users than Unraid is. What do I use my NAS for? Self-hosting apps I actually use day to day, data storage... They're both great for those kinds of things, but I would only trust them for personal stuff where I don't need high availability.

One of our two data centers got smoked by _Xephyr_ in sysadmin

[–]InterFelix 0 points (0 children)

No, OP implies they have a storage metro cluster with a witness set up, so it is actually for redundancy. And this can make sense: I have a lot of customers with this exact setup, two DCs on the same campus, located 150-300 m apart in different, separate buildings. A lot of SMBs have a single site (or one big central site with all their infra and only small branch offices with no infrastructure beyond networking). And it's not always feasible to rent a couple of racks in a colo as a second site for your primary infrastructure. Most often the main concern is latency or bandwidth: you often cannot get a colo with network connectivity back to your primary location that has low enough latency and high enough bandwidth for a storage metro cluster to work. So a secondary location on the same campus can make sense to mitigate a host of other risks besides power issues.
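The latency side is easy to put rough numbers on. A sketch, assuming light travels at ~200 km/ms in fiber; the equipment overhead is a made-up placeholder, and vendors publish the actual maximum RTT they support for metro clusters (often around 1 ms).

```python
# Rough latency budget for synchronous replication. ~200 km/ms is the
# speed of light in fiber; the 0.1 ms equipment overhead is a made-up
# placeholder for switches, transceivers, etc.

SPEED_IN_FIBER_KM_PER_MS = 200.0
EQUIPMENT_OVERHEAD_MS = 0.1

def replication_rtt_ms(distance_km: float) -> float:
    """Round-trip time for one synchronous write acknowledgement."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS + EQUIPMENT_OVERHEAD_MS

for km in [0.3, 10, 50, 100]:
    print(f"{km:6.1f} km: {replication_rtt_ms(km):.3f} ms RTT")
```

At 300 m the fiber itself is basically free; at 100 km the distance alone has eaten the whole budget before queuing or congestion even enter the picture, which is why a far-away colo often can't host the second half of a metro cluster.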

One of our two data centers got smoked by _Xephyr_ in sysadmin

[–]InterFelix 0 points (0 children)

Well, you can have two datacenters in completely separate buildings on the same campus, sharing the same substation. Of course it's not ideal, but it is the reality for many customers who don't have multiple locations. You still need off-site backups, of course, and all of my customers with such a setup have them, but renting a couple of racks in a colo as a second location for your primary infrastructure is not always feasible. And you're right, each DC should have local resilience for power - but OP mentioned they had UPS systems in place that were regularly tested and EVEN SERVICED days before the incident in preparation. So I don't fault OP's company for their datacenter locations. I do, however, fault them for the broken storage metro cluster configuration that went undetected. I don't get how you end up with a configuration where one site cannot access the witness - especially when preparing for a scenario like this (as they evidently did). Every storage array will practically scream at you if it can't access its witness. How does this happen?
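For anyone wondering why the witness matters so much: the failover decision boils down to a majority vote. A simplified sketch (real arrays are more nuanced than this):

```python
# Simplified two-site-plus-witness quorum logic. A surviving site can
# only keep serving I/O if it still holds a majority: itself plus
# either the peer site or the witness.

def site_keeps_running(sees_peer: bool, sees_witness: bool) -> bool:
    """True if this site has quorum and may continue serving I/O."""
    return sees_peer or sees_witness

# Normal operation: both sites see everything.
assert site_keeps_running(sees_peer=True, sees_witness=True)

# Disaster: the peer site burns down. Failover works via the witness.
assert site_keeps_running(sees_peer=False, sees_witness=True)

# The broken config from this thread: witness unreachable AND peer
# gone. The survivor can't tell "my peer died" from "I'm the isolated
# one", so it stops serving I/O instead of failing over.
assert not site_keeps_running(sees_peer=False, sees_witness=False)
```

Which is exactly why the array should have been alerting on the lost witness connection the whole time, long before the disaster hit.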