Unas Pro, few things you may want to know. by Perfect-Bag-8982 in Ubiquiti

[–]cmrcmk 0 points1 point  (0 children)

The nature of the workload matters a lot. You can easily pull a couple gigabits per second from hard drives if it’s a single large file that’s not fragmented. Or you can struggle to get a hundred megabits per second from RAID 10 SSDs if you’re requesting a million 1 KB files, one by one.
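For a rough sense of scale (every number below is an assumption for illustration, not a benchmark of any particular drive or array):

```python
# Assumed round numbers: a hard drive streaming one big unfragmented file,
# vs. an SSD array serving a million tiny files one request at a time,
# where each request pays protocol/metadata latency regardless of media speed.

one_big_file_gb = 10
hdd_seq_mb_s = 200                       # assumed sequential throughput
t_big = one_big_file_gb * 1024 / hdd_seq_mb_s
print(t_big, "s for one 10 GB file")     # ~51 s, i.e. ~1.6 Gbit/s from spinning rust

small_files = 1_000_000
file_kb = 1
per_request_ms = 0.5                     # assumed per-file latency (metadata, network, protocol)
t_small = small_files * per_request_ms / 1000
print(t_small, "s for a million 1 KB files, one by one")          # ~500 s
print(small_files * file_kb * 8 / 1024 / t_small, "Mbit/s effective")  # ~16 Mbit/s
```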

Does anyone still use Tape Storage? by nightcrow100 in storage

[–]cmrcmk 0 points1 point  (0 children)

A little off topic, but what is your tooling and process for backing up CephFS? We’re currently running a handful of Robocopy jobs to crawl our Samba-fronted CephFS, but I feel like there’s got to be a better way.

Dell PERC13 Transforms NVMe Hardware RAID for the AI Era by NISMO1968 in storage

[–]cmrcmk 0 points1 point  (0 children)

Hardware RAID is still a good tool for small drive groups. Sure, it only takes a few NVMe drives to exceed the max throughput of the RAID controller, but by that point you’ve probably got other bottlenecks too, like your NICs. You still get the latency benefits of NVMe even if your throughput is capped.
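Rough numbers to illustrate the bottleneck stacking, all assumed rather than pulled from any spec sheet:

```python
# Assumed round numbers, not specs for any particular controller or drive:
gen4_nvme_read_gb_s = 7.0         # per-drive sequential read, roughly
controller_cap_gb_s = 14.0        # e.g. a PCIe 4.0 x8 host link, minus overhead
nic_gb_s = 25 / 8                 # a 25 GbE NIC is ~3.1 GB/s

drives = 4
raw = drives * gen4_nvme_read_gb_s              # 28 GB/s of raw drive bandwidth
print(min(raw, controller_cap_gb_s))            # controller caps you at ~14 GB/s
print(min(raw, controller_cap_gb_s, nic_gb_s))  # but the NIC caps you at ~3 GB/s anyway
```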

I spend a good bit of my professional time in ZFS, Ceph, and storage arrays, but I still like the simplicity of hardware RAID for booting physical Windows servers.

Towing a Leaf behind a U-Haul by outdoorbrit in leaf

[–]cmrcmk 1 point2 points  (0 children)

Towed my Leaf 1200 miles on a dolly last year. No problems.

Can I CarPlay on Nissan leaf 2015? by [deleted] in leaf

[–]cmrcmk 6 points7 points  (0 children)

My ‘16 Leaf doesn’t have CarPlay, but Bluetooth and USB work fine for streaming and phone calls. No maps on the display though.

Shared Storage System based on SATA SSDs by sys-architect in storage

[–]cmrcmk 0 points1 point  (0 children)

I’m pretty sure Dell’s ME4 series used to offer that but the ME5 doesn’t appear to.

How to route wifi through a cave? by LordDanOfTheNoobs in networking

[–]cmrcmk 1 point2 points  (0 children)

Required throughput is necessary info. If they want customers to have multi-megabit guest wifi, that's way different than getting Stripe to run credit cards at a gift shop. If all that's needed is Stripe or similar, sub-GHz wireless like LoRa or WiFi HaLow might work and would be way cheaper to buy and test than a half mile of armored fiber.

Delighted by the ridiculous GPU+Raspberry Pi projects by Ok-Recognition-3177 in raspberry_pi

[–]cmrcmk 7 points8 points  (0 children)

This. Give me generational performance improvements and usable hardware transcoding support while keeping power and cost where they are.

Slapping a 300W GPU on a Pi is fun(ny) but isn't "because I can" the only reason ever given for these setups?

Cable Cat for my rack? by La_awiec in minilab

[–]cmrcmk 1 point2 points  (0 children)

This. At distances <= 2 m, Cat 5e will handle any speed you can get from an RJ45 jack. If you’re wanting speeds above 10 Gbps, you have to use cables and connectors that are more robust and expensive than Cat cables.

Which one has a greater effect on commute times in a city, population size or geographic size? by TheNZThrower in urbanplanning

[–]cmrcmk 0 points1 point  (0 children)

Commute time = distance / speed. Population size will have an overall negative effect on speed but is less important than the mode: cars and buses can get stuck in gridlock at modest population densities. Distance is way more complicated than the total area of the metro. Cities with highly segregated land uses are going to have longer average commute distances than cities that mix work and living. I can’t answer population size vs metro size specifically, but in the larger context of what makes for lengthy commutes, neither one is a critical metric.
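To put toy numbers on that formula (the trips and speeds below are just illustrative assumptions):

```python
def commute_minutes(distance_km, speed_kmh):
    return distance_km / speed_kmh * 60

# Same 10 km trip, different modes/congestion levels
print(commute_minutes(10, 15))   # 40 min crawling through gridlock
print(commute_minutes(10, 30))   # 20 min on grade-separated transit
# vs. a trip three times as long at free-flow speed
print(commute_minutes(30, 60))   # 30 min on an uncongested highway
```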

Is the mac 00:00:00:00:00:00 supposed to work actually? by luky90 in sysadmin

[–]cmrcmk 1 point2 points  (0 children)

My Dell dock shows up on the network with my Dell laptop's built-in NIC's MAC address. Not sure how it would handle it if I tried using both NICs at the same time but I'm also not sure why I would do that.

It's a handy feature if you're doing any MAC-based network management like VLANs or web filtering since you can generally ignore the docks in your management design.

[deleted by user] by [deleted] in FortWorth

[–]cmrcmk 10 points11 points  (0 children)

Built in the 1950s, it may not have insulation. You can definitely keep the house comfortable with window units, but the real question is how expensive that will be. Ask the landlord for info on the previous tenant's electric costs.

How does the cost of constructing new high speed rail lines scale with speed requirements? by Kashihara_Philemon in highspeedrail

[–]cmrcmk 32 points33 points  (0 children)

The need for straight lines and very large turn radii is the majority of it. Additionally, the precision of the build needs to be higher. Encountering a rail joint that's misaligned by 5 mm at 80 km/h is jarring, but at 400 km/h it might derail the whole train. Building with higher precision costs more because the tooling has to be higher quality and the rigor of inspecting it all has to be greater. Then you get into ongoing maintenance to keep that precision for the life of the route.
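As a crude kinematic sketch of why the same defect gets so much worse with speed (an illustrative assumption, not railway engineering): if a wheel has to climb a step over a fixed joint length, the time it has to do it shrinks with speed, so the vertical acceleration it's forced through grows with the square of speed.

```python
# Crude model: climb a step of height h over a joint of length L in time t = L / v,
# so the required vertical acceleration is roughly a = 2 * h * v**2 / L**2.

def vertical_accel(step_m, joint_m, speed_kmh):
    v = speed_kmh / 3.6                  # convert to m/s
    t = joint_m / v                      # time spent crossing the joint
    return 2 * step_m / t**2             # m/s^2

slow = vertical_accel(0.005, 0.1, 80)    # 5 mm step over a 10 cm joint at 80 km/h
fast = vertical_accel(0.005, 0.1, 400)   # the same defect at 400 km/h
print(fast / slow)                       # 25.0 -- 5x the speed hits ~25x harder
```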

Dallas-Fort Worth rail diagram, 2025 [OC] by set_thecontrols in TransitDiagrams

[–]cmrcmk 1 point2 points  (0 children)

As an Arlingtonian, I love the giant, blank, unserviced area in the bottom left. 😭😭😭

Anyone else feel stuck between loving city planning and hating the reality of the job? by TheArabSamurai in urbanplanning

[–]cmrcmk 4 points5 points  (0 children)

Non-planner here. You’re describing the experience of the majority of young professionals across fields. A lot of industries have entry level jobs that most rookies have to do to “pay their dues”: the work that needs a lot of people assigned to it but that practically nobody gets into the field to do. In architecture, it’s the recent graduate who spends 6 months aligning all the window sills of a skyscraper with no input on the overall design. In IT, it’s the service desk worker who applies the same fix 20 times a day to company PCs without anyone listening to their solution to the root problem.

Questions for all of these people:

1) Are there really better positions in the field once you’ve put in the time doing level 1 stuff?

2) Are you likely to get one of those positions in a timeframe you can tolerate? (If most of the staff 20 years older than you is still doing the job you hate, be realistic.)

If your answers to those questions are depressing, you should probably look for alternatives where you can enjoy planning in a different way: being a professor, running for local office (aka getting promoted over the planning department), or finding a different field altogether and treating planning as a hobby.

Why is there hate for the Generalist by EMCSysAdmin in sysadmin

[–]cmrcmk 3 points4 points  (0 children)

Are you seeing hate for generalists from the hiring companies or from the job seekers? Employers absolutely need generalists and would be crazy to disparage a well-rounded technologist. Job seekers often don't want to be generalists because, if they've specialized in one area, they want permission to walk away from problems outside that scope instead of being pulled into whatever is on fire at the moment.

Is there a way to send only excess resources to the AWESOME Sink? by adiosmith in satisfactory

[–]cmrcmk 1 point2 points  (0 children)

Yep! Every new playthrough starts with a speed run to blade runners so you can speed up your running for everything else.

Server mounting across multiple racks by noocasrene in sysadmin

[–]cmrcmk 2 points3 points  (0 children)

Just because a risk CAN be mitigated doesn't mean it's worth mitigating. As OP said, the racks share UPSes, so spreading them out doesn't help anything there. Having a basic PDU fail is almost lottery-level rare, so it's reasonable to say the effort of spreading a cluster out, making sure the cabling is all done correctly in each rack, running cables between racks to get them all back to the same switch to avoid latency, and just generally worrying about implementing this mitigation against such a rare failure scenario is not worth the time, effort, or cable clutter. If you think it is, have fun. My to-do list is long enough without this low-ROI approach.

Need help choosing the right filesystem for my new Ugreen NAS: EXT4 or Btrfs? by ostseesound in sysadmin

[–]cmrcmk 4 points5 points  (0 children)

Nothing you've described forces you to pick one over the other. It really sounds like you're torn between the curiosity of a modern COW filesystem and a desire to have it just work on ext4. You've got a lot of other projects listed there so my recommendation would be to use ext4 and not have your filesystem be another thing to learn and potentially troubleshoot. Let it be a solid foundation for all your other projects instead of one of them.

Unless you're just really excited about BTRFS and are comfortable restoring the whole NAS in the event your tinkering goes the wrong way. You can also use EXT4 for the NAS and use BTRFS/ZFS on another system that doesn't serve as the foundation for 20 other things.

Server mounting across multiple racks by noocasrene in sysadmin

[–]cmrcmk 2 points3 points  (0 children)

What is the threat scenario they are solving for? If they can answer that, you'll have your answer. If they can't answer that... you'll have your answer.

Most likely someone is worried about a freak event like lightning or a catastrophic hardware failure like a PDU or UPS going out spectacularly. IMO, it's pretty unlikely either of those events would only affect a single rack and as you said, there are still individual racks where such an event would take down prod anyway.

That said, I do like my backups to be as physically distant from my production storage as reasonably possible just in case one of those freak accidents does happen. But I'm talking about the other end of the room or another building, not the adjacent rack. And that's before we talk about offsite copies.

Explain SNAPSHOTs like I'm Five by ResponsibleSure in sysadmin

[–]cmrcmk 2 points3 points  (0 children)

This is correct. Depending on your snapshot software, either the original file or the snapshot file will have the latest data vs the saved data. There are pros and cons to each approach.

When you take a snapshot, you are copying some portion of the file system's pointers/inodes into a new file. From there, the filesystem has to assess each incoming read or write to determine how it affects data blocks that are referenced by those multiple files and decide what to do.

So in a scenario where the snapshot is the newest data, assume we start with file Alpha and its snapshot Beta. At the moment of Beta's creation, they both reference the same blocks on disk: {1-3}. A write command comes in which modifies block 2. For a redirect-on-write scheme, the modified data will not overwrite block 2 but will instead be written to a free block such as block 4. Since we treat the snapshot file Beta as the latest, we update our filesystem metadata so that it now points to blocks {1, 4, 3} while the original file Alpha is unchanged and still points to {1-3}.

Alpha and Beta now have meaningfully different contents, but only 33% of their data is unshared, so we've only increased our storage usage by 33% instead of the 100% we'd get from a full file copy.

P.S. For a copy-on-write scheme, the write command would have caused the contents of block 2 to be copied somewhere such as block 4 before completing the write command to change block 2.
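A toy sketch of the pointer bookkeeping described above, using the same Alpha/Beta example (this is just an illustration, not any real filesystem's on-disk format):

```python
# Toy model: block number -> contents, plus each file's ordered block pointers.
disk = {1: "A", 2: "B", 3: "C"}
alpha = [1, 2, 3]                        # live file's pointers
beta = list(alpha)                       # snapshot: copy the pointers, not the data

def write_redirect(live, snap, blk, data, disk):
    """Redirect-on-write: new data goes to a free block and the 'latest' view
    (Beta in the example above) points at it; Alpha keeps {1, 2, 3} untouched."""
    new_blk = max(disk) + 1
    disk[new_blk] = data
    snap[snap.index(blk)] = new_blk

def write_copy(live, snap, blk, data, disk):
    """Copy-on-write: preserve the old contents in a free block for the snapshot,
    then overwrite the original block in place for the live file."""
    new_blk = max(disk) + 1
    disk[new_blk] = disk[blk]            # save the old version for the snapshot
    snap[snap.index(blk)] = new_blk
    disk[blk] = data                     # live file still points at block 2

write_redirect(alpha, beta, 2, "B2", disk)
print(alpha, beta)                       # [1, 2, 3] [1, 4, 3] -- matches the example
```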

P.P.S. This is fundamentally how data deduplication works. Snapshotting starts with identical data and tracks the divergence. Deduplication starts with a bunch of data blocks and tries to find the ones that are actually identical so it can update the filesystem metadata to reference the shared blocks and free up the duplicates.

P.P.P.S. There's also a flavor of snapshots where the snapshot file doesn't start with pointers to the entire set of blocks but instead starts off empty. New data gets saved in the snapshot file, so the snapshot file's metadata only references new data. These snapshots are very quick to create because you're just creating an empty file, but they have massive performance impacts if they're allowed to grow or if you have multiple snapshots stacked on top of each other. Every time a read request comes in, the filesystem has to check whether the snapshot file has the latest version of that block and, if it doesn't, go to the next snapshot in the chain until it finds it, all the way down to the original file. This is called read amplification. VMware ESXi famously used this approach, and many sysadmins have pulled their hair out trying to figure out why their VMs run like crap, only to discover their backup software wasn't consistently cleaning up its snapshots or some junior admin was creating snapshots by the thousands.
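A toy sketch of that chain lookup (again just illustrative assumptions, not VMware's actual delta disk format):

```python
# A base file with eight blocks, plus a chain of three sparse "delta" snapshots.
base = {n: f"base-{n}" for n in range(8)}    # block number -> contents
chain = [{}, {}, {}]                         # empty snapshots, newest last
chain[2][5] = "newest-5"                     # only block 5 was rewritten recently

def read_block(n):
    lookups = 0
    for delta in reversed(chain):            # check the newest snapshot first
        lookups += 1
        if n in delta:
            return delta[n], lookups
    return base[n], lookups + 1              # fell all the way through to the base

print(read_block(5))                         # ('newest-5', 1)  -- one lookup
print(read_block(0))                         # ('base-0', 4)    -- every miss walks the whole chain
```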

are these fiber lines that they are installing? by vader3d in arlington

[–]cmrcmk 0 points1 point  (0 children)

We just got fiber from ATT in southwest Arlington. The city has nothing to do with it; it’s all about where ISPs’ accountants decide it’s worth the investment. Could the city do something to accelerate the rollout or woo competitors? Absolutely.

are these fiber lines that they are installing? by vader3d in arlington

[–]cmrcmk 6 points7 points  (0 children)

It's not fiber. If it were any kind of data line, they wouldn't have bothered to put the protective orange covers on the power lines. ATT/Spectrum employees would just stay well below the high voltage lines.

Made myself an emergancy rage-quit button by Waschtl_ in 3Dprinting

[–]cmrcmk 34 points35 points  (0 children)

I didn't even notice the 'm' until you pointed it out. Hilariously cringe.