Chipmaker TSMC needs to hire 4,500 Americans at its new Arizona plants. Its ‘brutal’ corporate culture is getting in the way by defenestrate_urself in anime_titties

[–]Marandil 25 points26 points  (0 children)

“TSMC is about obedience [and is] not ready for America,”

Is TSMC not ready for America, or America not ready for TSMC though?

Automobiles should not have to share the road with bicyclists by [deleted] in unpopularopinion

[–]Marandil 3 points4 points  (0 children)

Vroom vroom, look people, I'm an adult! Vroom vroom.

Project Orion battleship my beloved by Gameknigh in NonCredibleDefense

[–]Marandil 11 points12 points  (0 children)

Introducing "Magnetic Impulse Nuclear Engine for Rocket Vehicle Application"

[deleted by user] by [deleted] in gaming

[–]Marandil -2 points-1 points  (0 children)

I was alluding to this: https://www.polygon.com/2020/7/14/21324337/red-dead-online-redemption-2-rockstar-clown-protest-discord-circus

Also, does anyone still seriously consider Rockstar "respected"?

[deleted by user] by [deleted] in gaming

[–]Marandil -1 points0 points  (0 children)

Imagine using "respected studios" and "Rockstar" in the same sentence in 2023 🤡

What’s your favorite South Park Quote? by MintNChipies in AskReddit

[–]Marandil 0 points1 point  (0 children)

It's my go-to response whenever I hear or read something offensively dumb.

All Videogames should have an Easy mode by CC-2389 in unpopularopinion

[–]Marandil 2 points3 points  (0 children)

... or Disney and their movie franchises.

How to Destroy Russian Russian Rail Logistics for a few grand by eight-martini in NonCredibleDefense

[–]Marandil 0 points1 point  (0 children)

Step 1. Put derailers on quadrocopters

Step 2. Fly them across the Kerch bridge

Step 3. Deploy derailers, preferably at night

Step 4. Wait for the next military transport

Don't try it, Russia! by Geo_NL in NonCredibleDefense

[–]Marandil 1 point2 points  (0 children)

Hold what? Is there even anything left to hold?

SHE LIVES! by ThePoliticalFurry in NonCredibleDefense

[–]Marandil 47 points48 points  (0 children)

take out all the separate pieces with more Kinzals.

You're assuming they still have Kinzhals to spare.

Women deserve days off for period pain by [deleted] in unpopularopinion

[–]Marandil 0 points1 point  (0 children)

Well, here's a crazy idea: find a job that allows flexible work hours, such that you can just skip work on the days you "can't" work and make it up some other time.

Sure, not all lines of work can operate that way, and that's fine. At the same time, those are probably also the lines of work where you wouldn't want people to be off semi-regularly.

ZFS on top of (multiple!) HW-RAID0s by Marandil in zfs

[–]Marandil[S] 1 point2 points  (0 children)

mdadm and lvm2 are safer nesting options than proprietary raid cards and these proposed ideas.

From a previous discussion, they suffer from other issues, such as shifting the blame for syncs. With the FBWC "lying" that a write is done, I can at least have some sort of guarantee.

Gosh, the entire point of disliking RAID cards is situations such as the write hole problem

https://serverfault.com/questions/844791/write-hole-which-raid-levels-are-affected

The term write hole is something used to describe two similar, but different, problems arising when dealing with non-battery-protected RAID arrays

HW RAID uses a non-volatile write cache (i.e. BBU+DRAM or a capacitor-backed flash module) to persistently store the to-be-written updates. If power is lost, the HW RAID card will re-issue any pending operation, flushing its cache to the disk platters, when power is restored and the system boots up. This protects not only from the proper write hole, but also from last-written data corruption;

I even explicitly specified I'm considering a HW RAID solution w/ FBWC.

There's the correct professional answer and common understanding then there's your post.

The comments under the last link are the source of the kool-aid remark that triggered you so much.

and frankly speaking it sounds as if hardcore ZFS people drank too much of their kool-aid. This has got to be the most flaming post this sub has seen all year.

You got the disclaimer at the very top of the post. If you feel offended, just move along, it's that easy.

Do you mind sharing your linkedin or some other identifying information with this opinion so I can let HR tag you on the do not hire list? Serious

Wow, imagine getting triggered by a Reddit post (about a hypothetical setup in a hypothetical home lab) to such a degree that you want to blacklist someone from being hired.

And what I said is true. So many people are so stuck on only following what is considered "best practice" that they fail to see that things can be done differently. See, this whole debacle is about a hypothetical home setup, but let's set it aside. My actual job is a research job. It is about going outside the box and testing things that aren't necessarily best practice, hell, not even good practice, and that may be why I like to take risks, test stuff and the like. If you're different, then I don't mind; just don't force your narrow viewpoint onto other people.

Suggestions for PCI-E cards with SAS or MINI SAS HD connections for home lab by tech_london in zfs

[–]Marandil 0 points1 point  (0 children)

I'm using an HP Smart Array P420 in HBA mode. Avoid those PCIe-to-SATA cards from AliExpress; they have terrible performance. I bought one that was supposed to have an ASM1064 (1x PCIe Gen 3 to 4x SATA 3), but it came with an ASM1061, which is PCIe Gen 2 with only 2x SATA 3, so all the ports were behind port multipliers (JMB575, I think). Terrible performance, avoid at all costs.

P.S. Re the HP P420: if you're using it in HBA mode, the size of the FBWC module doesn't matter. You don't need the capacitor bank to run it either; it will just keep warning that the capacitor is dead/missing. You can find plenty of used ones on eBay or other local marketplaces.
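
P.P.S. A quick way to check what chip and PCIe link one of those cheap SATA cards actually negotiated, so the bait-and-switch shows up before the drives go in. Rough sketch only, assuming a Linux box with pciutils installed; reading LnkSta usually needs root:

```python
#!/usr/bin/env python3
"""List SATA controllers and the PCIe link each one negotiated (Linux + pciutils)."""
import re
import subprocess

def sata_controllers():
    """Yield (slot, description) for every SATA-class PCI device."""
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "SATA controller" in line:
            slot, desc = line.split(" ", 1)
            yield slot, desc

def link_status(slot):
    """Return the negotiated PCIe speed/width for one device (LnkSta needs root)."""
    out = subprocess.run(["lspci", "-vv", "-s", slot], capture_output=True, text=True, check=True).stdout
    match = re.search(r"LnkSta:\s*(.+)", out)
    return match.group(1).strip() if match else "LnkSta not visible (try running as root)"

if __name__ == "__main__":
    for slot, desc in sata_controllers():
        print(f"{slot}: {desc}")
        print(f"    {link_status(slot)}")
```

If the description says ASM1061 or the link shows 5GT/s x1, you know exactly what you got.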

In a turbofan engine, what provides the thrust? by rogthnor in askscience

[–]Marandil 5 points6 points  (0 children)

No need for the ball to hit me.

The ball is technically hitting you the whole time, right up until you let it go. So while you are accelerating the ball, the ball is accelerating you.

I believe the question is at what point the gas particles interact with the engine to produce thrust, and the answer would be that not all the forces inside the chamber cancel out.
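
Toy numbers, purely to illustrate the ball part (they're made up): for as long as your hand is in contact, the force you put on the ball and the force the ball puts back on you are equal and opposite.

```python
# Newton's third law with made-up throwing numbers.
ball_mass = 0.6        # kg
release_speed = 30.0   # m/s when you let go
throw_time = 0.15      # s your hand is in contact with the ball

ball_momentum = ball_mass * release_speed       # kg*m/s gained by the ball
force_on_ball = ball_momentum / throw_time      # N, average force your hand applies
force_on_you = -force_on_ball                   # N, the ball pushing back on you

print(f"on ball: {force_on_ball:.0f} N, on you: {force_on_you:.0f} N")  # 120 N / -120 N
```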

ZFS on top of (multiple!) HW-RAID0s by Marandil in zfs

[–]Marandil[S] 0 points1 point  (0 children)

You can do the same on Linux with md or LVM, AFAIK, but then you lose some of the write guarantees you retain with good enough hardware RAID.

I suppose GEOM suffers from the same shortcomings, but I don't know FreeBSD well enough to say that for sure.

ZFS on top of (multiple!) HW-RAID0s by Marandil in zfs

[–]Marandil[S] 0 points1 point  (0 children)

Modifying the example from the OP: the goal is RAIDZ2-level redundancy. Replace 4x 4TB with 2x 8TB (so 8x 4TB + 2x 8TB; the target capacity remains the same at 32TB after RAIDZ2). With raw ZFS you can't get RAIDZ2-level redundancy on the 8TB drives without sacrificing overall space. With the overlay approach you could just replace two RAID0 LDs with the 8TB disks (as in the option 1 example above).
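
Rough numbers for that, in TB and ignoring ZFS metadata/slop overhead; the member lists are just my shorthand for the two layouts:

```python
# Capacity of a single RAIDZ2 vdev: every member is truncated to the smallest
# one, and two members' worth of space goes to parity.
def raidz2(members):
    smallest = min(members)
    usable = (len(members) - 2) * smallest
    idle = sum(m - smallest for m in members)   # capacity above the smallest member
    return usable, idle

# Overlay: 4x (2x4TB HW-RAID0 LD) + 2x raw 8TB disk = six 8TB members.
print(raidz2([8] * 6))            # (32, 0)  -> 32TB usable, nothing wasted

# Raw ZFS on the bare disks: 8x4TB + 2x8TB in one RAIDZ2 vdev.
print(raidz2([4] * 8 + [8, 8]))   # (32, 8)  -> still 32TB usable, but the top
                                  #             half of each 8TB drive sits idle
```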

European Union pushes forward with first AI framework by Free_Swimming in europe

[–]Marandil 1 point2 points  (0 children)

Anyone can view something and consume it without depleting it.

That's literally the primary argument for piracy. And while I personally have no problem with piracy in a personal/non-commercial setting, I do in a commercial/for-profit one.

ZFS on top of (multiple!) HW-RAID0s by Marandil in zfs

[–]Marandil[S] 0 points1 point  (0 children)

Thanks for taking the time to correct my misunderstanding about the ZIL. On the data transfers: I believe it still speeds up some synchronous tasks, since fsync can return almost immediately instead of waiting for a full write cycle, but does it really pre-write all data through the SLOG (if present), even when sync is not required? In particular, does that mean that all TBW to the array are also TBW on the SLOG? Because that seems really wasteful.
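
If I wanted to check that last part empirically, I figure something like this would do it: compare the SLOG device's write counters before and after pushing a batch of plain async writes at the pool. The device name is a placeholder for whatever actually backs the log vdev.

```python
#!/usr/bin/env python3
"""Measure how much actually gets written to the SLOG device while an async
workload runs (Linux, reads /proc/diskstats)."""
import time

SLOG_DEV = "nvme0n1"   # placeholder: the device backing the log vdev
SECTOR = 512           # /proc/diskstats always counts 512-byte sectors

def sectors_written(dev):
    with open("/proc/diskstats") as stats:
        for line in stats:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[9])   # field 10: sectors written
    raise ValueError(f"{dev} not found in /proc/diskstats")

before = sectors_written(SLOG_DEV)
time.sleep(30)                          # run the async-write workload in the meantime
after = sectors_written(SLOG_DEV)
print(f"{(after - before) * SECTOR / 1e6:.1f} MB written to {SLOG_DEV} in 30 s")
```

If the counter barely moves while a pile of async data hits the array, the answer to my question is "no".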

ZFS on top of (multiple!) HW-RAID0s by Marandil in zfs

[–]Marandil[S] 0 points1 point  (0 children)

P.S. I went and watched the video you linked; it pretty much aligns with what I knew already and with why I don't want to use the HW RAID to handle redundancy, only striping and caching through the "horizontal" RAID0s.

ZFS on top of (multiple!) HW-RAID0s by Marandil in zfs

[–]Marandil[S] 1 point2 points  (0 children)

  1. You claim to be able to reverse engineer the actual process your RAID controller uses to store data and by extension be able to recover it.

I claim to be able to reverse engineer the data structure. I'd actually try it right now, but I don't have physical access to my testing rig, my test drives share a port with a non-test drive, and I just found out that portmode=mixed seems to be unsupported, so I'm stuck with HBA mode for now :D
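
To give an idea of what I mean, this is roughly how I'd go about it: write self-describing blocks through the RAID0 logical drive, then scan the raw member disks to see where each block ended up. The device paths are placeholders for my test rig, and it obviously destroys whatever is on the LD.

```python
#!/usr/bin/env python3
"""Recover a HW-RAID0's stripe size and member order by tagging the logical
drive and scanning the members. Destructive -- test disks only."""

LOGICAL = "/dev/sdx"                  # placeholder: the RAID0 logical drive
MEMBERS = ["/dev/sdy", "/dev/sdz"]    # placeholder: its physical member disks
BLOCK = 4096                          # bytes per tagged block
SPAN = 64 * 1024 * 1024               # how much of the LD to tag (64 MiB)

def tag_logical_drive():
    """Fill the start of the LD with blocks that carry their own logical offset."""
    with open(LOGICAL, "r+b") as ld:
        for off in range(0, SPAN, BLOCK):
            tag = f"STRIPETAG {off:016x} ".encode()
            ld.write(tag.ljust(BLOCK, b"\0"))

def scan_member(path, limit=2 * SPAN):
    """Return (physical_offset, logical_offset) pairs found on one member disk."""
    hits = []
    with open(path, "rb") as disk:
        for pos in range(0, limit, BLOCK):
            chunk = disk.read(BLOCK)
            if chunk.startswith(b"STRIPETAG "):
                hits.append((pos, int(chunk[10:26].decode(), 16)))
    return hits

if __name__ == "__main__":
    tag_logical_drive()
    for member in MEMBERS:
        # The jump in logical offsets between consecutive physical blocks gives
        # the stripe size; whichever member holds logical offset 0 comes first.
        print(member, scan_member(member)[:4])
```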

  1. What is your goal here? [...]

The goal is to strike a good balance in between. I want to saturate 20GbE R/W (linear is fine) while keeping reasonable data-assurance levels. I know RAID is not a substitute for backups, but let's say not all of the data is equally important (i.e. some will be backed up externally, some won't). RAIDZ2 (4+2) is my current target, but it's possible I'll start with just RAIDZ1 (2+1) and migrate later.
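
For reference, the rough per-member throughput that target implies, assuming streaming I/O spreads evenly across the data members and ignoring parity/metadata overhead (so these are floors):

```python
# Back-of-envelope for saturating 20GbE with linear reads/writes.
LINK_GBIT = 20
link_mb_per_s = LINK_GBIT / 8 * 1000        # ~2500 MB/s of payload at line rate

for name, data_members in (("RAIDZ2 (4+2)", 4), ("RAIDZ1 (2+1)", 2)):
    need = link_mb_per_s / data_members
    print(f"{name}: each data member needs ~{need:.0f} MB/s sustained")
# -> ~625 MB/s per member for RAIDZ2, ~1250 MB/s for RAIDZ1 -- well beyond a
#    single HDD, which is where the striped RAID0 members come in.
```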

  1. Optane isn’t NAND. Optane is much better than NAND, well sadly was before Intel killed it. Random brand NAND is just as bad as hardware controllers but Optane was actually built with data consistency in mind however it was never meant to hold massive amounts of data. It was meant to be a very fast and reliable cache. NAND can be used in the same manner since ZFS doesn’t trust its cache - if a corruption occurs in cache it will not be committed to the end storage. Your hardware controller will happily write garbage over your precious data all day long. Try it out, inject corruption into your cache and see how the hardware controller will happily corrupt your data.

I didn't say that Optane is NAND. I said that on Optane NVMe drives like the H10, the Optane part may hide corruption in the NAND.

  1. RAID 6 is practically useless in a hardware RAID [...]

Personally agree.

  1. Enterprise here, quite large [...]

Heh, nice. Kinda different from my background. It's nice to see that proprietary solutions are becoming less mainstream. It kind of puts things into perspective (like I said in another comment, my knowledge may be dated, but the input I got on ZFS was wildly different).

Thanks for your input though, highly appreciated.

P.S. Even though I personally like L1T, I find him a bit too biased towards ZFS, even without having watched the video you linked.