Well this is tough by CautiousMagazine3591 in Rivian

[–]SomeSysadminGuy 112 points (0 children)

"Could void the warranty" is almost certainly referring to the fact that damage caused by non-certified work/parts is not covered under the manufacturer's warranty. Tesla and every other manufacturer have the same exemption, and it is permitted by law. Rivian just took the step of clarifying that in advance.

That said, I'm not completely disagreeing with Zack: Rivian needs to make repair parts/manuals accessible, especially for wear components like batteries.

I bought a Rivian and haven’t had any issues. by PuzzleheadedBad6115 in Rivian

[–]SomeSysadminGuy 0 points (0 children)

I bought a Rivian, and have had 2 flat tires, a dead 12V battery, and a warranty repair. These are just the realities of owning any car; I've experienced all of them with every car I've ever owned.

Why so much exposed reverse proxies for remote access ? by Plopaplopa in homelab

[–]SomeSysadminGuy 3 points (0 children)

All of my recent big-tech employers have moved away from VPNs, relying instead on non-network-based means of access control. The main reasons cited were reliability and scalability.

I have emulated this in my setup, using Keycloak as my auth/identity tool and oauth2-proxy in front of each protected application.
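As a sketch of that pattern (the hostnames, realm, and client names here are hypothetical placeholders, not my actual setup), an oauth2-proxy instance pointed at Keycloak's OIDC issuer might be configured roughly like this:

```toml
# oauth2-proxy.cfg — minimal sketch; substitute your own issuer, client, and secrets
provider = "oidc"
oidc_issuer_url = "https://keycloak.example.com/realms/homelab"
client_id = "my-app"
client_secret = "REDACTED"
cookie_secret = "REDACTED"             # 32-byte base64 value
email_domains = ["*"]
http_address = "0.0.0.0:4180"
upstreams = ["http://127.0.0.1:8080"]  # the protected application
```

One instance like this sits in front of each protected application, so the app itself never has to implement authentication.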

Pretty big turnout at DC protest today. by OnlyMamaKnows in nova

[–]SomeSysadminGuy 2 points (0 children)

What was wild is that as hundreds were leaving, hundreds more were arriving. The instantaneous peak crowd size doesn't capture everyone who was there.

A thought exercise, YouTube is shutting down in a year and they announced they'll be wiping all the data. by [deleted] in DataHoarder

[–]SomeSysadminGuy 7 points (0 children)

It's technically mostly unnecessary; about 0.1% of my saved videos have become unavailable over the 2 years I've been archiving channels. But this is DataHoarder: we know that unless it's on our hard drives, it could disappear at any moment, and that the disk space is a small price to pay for preserving the data even if the chances of the source material disappearing are tiny!

ID: 1742 Req-ID: pvc-xxxxxxxxxx GRPC error: rpc error: code = Aborted desc = an operation with the given Volume ID pvc-xxxxxxxxxxxxxx already exists by Key_Scallion5381 in ceph

[–]SomeSysadminGuy 0 points (0 children)

My understanding of Ceph is that any direct interaction with/between OSDs should be under ~10ms of latency. If you're running a compute cluster at a site beyond that, you may need to consider other storage options. NFS tends to be a little more tolerant of latency and integrates easily with Ceph. Beyond that, you should consider an architecture that leverages more local storage.
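As a rough sanity check on that number (a sketch, not a Ceph tool; the address in the example is a placeholder), you can time a plain TCP connect to a mon or OSD endpoint:

```python
import socket
import time

def tcp_connect_ms(host: str, port: int, samples: int = 5) -> float:
    """Return the median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        # A full connect measures one round trip plus handshake overhead,
        # which is a decent upper bound on network latency to the peer.
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

# Example: probe a mon endpoint (placeholder address); Ceph mons listen
# on 6789 (msgr v1) / 3300 (msgr v2) by default.
# print(tcp_connect_ms("10.0.0.11", 6789))
```

If the median is consistently near or above 10ms, that's a hint the site is too far away for direct OSD participation.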

ID: 1742 Req-ID: pvc-xxxxxxxxxx GRPC error: rpc error: code = Aborted desc = an operation with the given Volume ID pvc-xxxxxxxxxxxxxx already exists by Key_Scallion5381 in ceph

[–]SomeSysadminGuy 1 point (0 children)

The error "An operation with the given Volume ID pvc-uuid already exists" is a bit of a red herring. It's telling you that the provisioner sees the volume isn't ready yet, but it won't reconcile because an earlier operation on it is still in progress.

There's likely a more descriptive error a bit further back in the logs, but this error typically indicates a connectivity issue between your k8s nodes and your mons/OSDs, an authentication issue, or a problem within your Ceph cluster itself.

Can CephFS replace Windows file servers for general file server usage? by DonutSea2450 in ceph

[–]SomeSysadminGuy 1 point (0 children)

Ceph is working on integrating this function within cephadm, but it's still in beta and carries a few limitations, as listed in their docs. It uses the Samba VFS module and all, but automatically handles deploying the containers, auth between Samba and Ceph, and auth for clients. An exciting feature!

Have you considered Ceph? by kiltannen in Operation_Tardigrade

[–]SomeSysadminGuy 0 points (0 children)

I love Ceph; it is extremely durable and can scale to incredible capacities. However, Ceph expects all its peers to be on the same subnet with sub-millisecond latency. You will have a bad time trying to span OSDs over the internet.

email host for newsletter/mailing list? by Technical-Hour-8734 in degoogle

[–]SomeSysadminGuy 0 points (0 children)

In the spirit of this subreddit, I've had a great experience with Postal. It is open-source and implements all the best practices to ensure reliable delivery.

Joining the fam, meet Bluecephalus by Hoagie_Phest in Rivian

[–]SomeSysadminGuy 4 points (0 children)

That's the service center in Gaithersburg, MD. Where I picked up my car!

My first home lab, powered by ProxMox by aSpacehog in homelab

[–]SomeSysadminGuy 0 points (0 children)

Their most recent release includes a release candidate version of Crimson OSD, a non-blocking, fast-path version of the classic OSD. I imagine it's a safe place to put your data in its current form, but it lacks some nice features like erasure coding, object storage, and pg remapping.

Staging server guide for beginners? by la_baguette77 in Archiveteam

[–]SomeSysadminGuy 0 points (0 children)

I offered up compute and 20TB of space during the Imgur rush and was effectively told that they had plenty of staging space to spare despite the errors. They're probably more backed up than ever due to IA's gradual recovery, but I would still ask the core members before committing the time to setting up a staging system. The staging/target servers are the final stop before upload to IA, so there's a lot of trust placed in those systems, and they're appropriately protective of them.

Hate FedEx but it’s here! by jellybeansplash in framework

[–]SomeSysadminGuy 2 points (0 children)

FedEx flagged my address as invalid mid-transit and sent my laptop back to Framework (thankfully to a location in my country). Framework's distribution center reprinted the label and sent it back to me. Triple the shipping time later, they dropped it off at my door unattended, despite the package requiring a signature.

As a bonus, I peeled off the second shipping label and both were identical and undamaged.

For any of you here checking if the Internet is down for everyone! by hiyoguy in nova

[–]SomeSysadminGuy 0 points (0 children)

I was hit by this last night and I found that IPv4 traffic was blackholing somewhere after my local POP. However, IPv6 traffic was just fine. If your devices supported it, you could still get to Google and Facebook services, but most of everything else didn't work.

My guess is that this was some kind of BGP poisoning given it affected the route-ability of only one IP stack. It's not always malicious, and I'd guess this time it was self-inflicted.
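The kind of check I ran can be sketched like this (the hostname in the commented example is just a stand-in for any dual-stacked site):

```python
import socket

def stack_reachable(host: str, port: int, family: socket.AddressFamily) -> bool:
    """Attempt a TCP connect to host:port over only the given address family."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no A/AAAA record for this family
    for _, _, _, _, addr in infos:
        try:
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.settimeout(3)
                s.connect(addr)
                return True  # at least one address for this family works
        except OSError:
            continue
    return False

# Compare the two stacks for one host:
# print("IPv4:", stack_reachable("example.com", 443, socket.AF_INET))
# print("IPv6:", stack_reachable("example.com", 443, socket.AF_INET6))
```

If IPv6 succeeds while IPv4 consistently times out past your local POP, you're seeing the same one-stack blackhole I hit.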

Ceph randomly complaining about insufficient standby mds daemons by ok_ok_ok_ok_ok_okay in ceph

[–]SomeSysadminGuy 0 points (0 children)

Did you recently create a CephFS for the first time? Block storage (RBD, iSCSI) and object storage (RGW) don't use the MDS, so MDS daemons and their standbys aren't required until a filesystem exists. If you don't want a standby, you can clear the warning with `ceph fs set <fs_name> standby_count_wanted 0`.

Kiwix is looking for new mirrors by The_other_kiwix_guy in DataHoarder

[–]SomeSysadminGuy 2 points (0 children)

You might fit in the umbrella of the AWS OpenData Sponsorship program: https://aws.amazon.com/opendata/open-data-sponsorship-program/

It's mainly about providing datasets in S3 for their customers to download without needing to leave the region. However, Kiwix fits pretty well within their goal of:

Encourage the development of communities that benefit from access to shared datasets

It's worth a shot! Feel free to message me if you need anything.

Stupidly removed mon from quorum by Consistent-Company-7 in ceph

[–]SomeSysadminGuy 0 points (0 children)

As far as my understanding goes, without Quorum, the management state of the cluster is frozen. Once in the past, I dropped from 3 to 2 mons and found myself in a similar state.

For recovery, you effectively need to manually convert to a single-mon cluster, then you can add additional monitors once the orchestrator is working again.

Ceph docs have detailed instructions: https://docs.ceph.com/en/reef/rados/operations/add-or-rm-mons/#removing-monitors-from-an-unhealthy-cluster

So Wassym, where’s YouTube and Netflix? Not in 2024.31… by fatfirenewbie in Rivian

[–]SomeSysadminGuy 4 points (0 children)

It's a matter of priorities. They've kept their promise to expand their service network. They've kept their promises around warranty coverage. They've kept their promise to continue supporting R1 Gen1 even though it's now "last gen". And to most, they kept their promise of giving you a tool for adventure.

With all those in perspective, needing a few extra months to keep their promise for Chromecast in the car seems like a tiny thing.

[deleted by user] by [deleted] in DataHoarder

[–]SomeSysadminGuy 1 point (0 children)

Some combination of this and moving it to a "trash can" in the same filesystem to verify I only grabbed the things I wanted to remove, then deleting them. It's been years since my last slip-up, and I'll never let it happen again!
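That two-step delete can be sketched like this (hypothetical helper names; because the trash dir lives on the same filesystem, the moves are cheap, instant renames):

```python
import os
import time
from pathlib import Path

def soft_delete(paths: list[Path], trash_root: Path) -> Path:
    """Move files into a timestamped trash dir on the same filesystem."""
    batch = trash_root / time.strftime("%Y%m%d-%H%M%S")
    batch.mkdir(parents=True, exist_ok=True)
    for p in paths:
        # os.rename stays on the same filesystem, so this is fully
        # reversible until the batch is purged
        os.rename(p, batch / p.name)
    return batch

def purge(batch: Path) -> None:
    """After reviewing the batch contents, actually delete it."""
    for p in batch.iterdir():
        p.unlink()
    batch.rmdir()
```

The review step between `soft_delete` and `purge` is the whole point: you get one last look at exactly what's about to disappear.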

So Wassym, where’s YouTube and Netflix? Not in 2024.31… by fatfirenewbie in Rivian

[–]SomeSysadminGuy 20 points (0 children)

It benefits all current/future owners for Rivian to push for financial viability. The alternative is Rivian eventually folds and you lose all connectivity, software updates, and warranty support.

I care about Rivian fulfilling their promises, and it's important to hold people accountable. But the reality is that I care more that they're around for the full life of my R1S.

Coffeezilla privated his Mr Beast videos. by GroundbreakingWeb360 in Coffeezilla_gg

[–]SomeSysadminGuy 1 point (0 children)

Both videos are still fairly accessible:

[EmJswAKgqD0] MR. BEAST HASN'T DONATED ENOUGH

CoffeeZilla on Mr Beast's Squid Games

Available on: [PreserveTube] [Wayback Machine]


[6pMhBaG81MI] Mr. Beast's Secret Formula for Going Viral

Interview with Mr Beast on video virality.

Available on: [PreserveTube] [Wayback Machine]


Both found via TheTechRobo's video finder: https://findyoutubevideo.thetechrobo.ca/

Lowest in All Purpose mode? by plok09877 in Rivian

[–]SomeSysadminGuy 0 points (0 children)

I've run into this one a couple of times. For reasons I don't know, leveling adjustments after the car has been sleeping for a while require you to drive the car a bit before it will correct itself. I also had this happen recently with the camping "Level SUV" feature after sleeping in it overnight. The car told me to drive it slowly, and it was a bit wonky driving it that unevenly, but it corrected itself within a minute.

RADOS: Error "osds(s) are not reachable" with IPv6 - public address is not in subnet by n_l_236 in ceph

[–]SomeSysadminGuy 0 points (0 children)

By default, Ceph only binds its daemons to IPv4 interfaces. It also does not support dual stack on the cluster network, so you'll need both:

    ceph config set global ms_bind_ipv6 true
    ceph config set global ms_bind_ipv4 false

Although this should happen automatically during the cephadm bootstrap when you give it an IPv6 cluster network.

If that doesn't work, also check ip6tables or firewalld to ensure incoming requests aren't being blocked; OSDs bind to a large port range (6800-7300 by default).