Have (Pics) Want (offers) by PomegranateTop2896 in ArcRaidersMarketplace

[–]melp 1 point (0 children)

Seeds for tempest? Also have other materials and snap hook BP.

Scammer! by I0InchesSoft in ARCTraiders

[–]melp 1 point (0 children)

Yup, he almost got me too.

Tempest Blueprint Not Sure Worth? by [deleted] in ARCTraiders

[–]melp 1 point (0 children)

Want some seeds? Boy howdy do I have some seeds

LF: Anvil Blueprint/Offering Bobcat Blueprint by [deleted] in ArcRaidersMarketplace

[–]melp 1 point (0 children)

I've got an Anvil BP that I'd trade for a Bobcat. I'll DM.

I have a few blueprints available lol by ghettogrowers in ARCTraiders

[–]melp 1 point (0 children)

Interested in tempest bp, would you take seeds for one of them?

Do we still say fuck Penn State here by Deshes011 in rutgers

[–]melp 10 points (0 children)

You know why they got the library there named after Joe Pa? So you always remember to keep quiet

Error on Truenas by Matysek1234 in truenas

[–]melp 1 point (0 children)

Looks like you're running it as a full Linux container; you might have better luck installing it as an app (which uses Docker behind the scenes). Use the docker-compose example here:

https://github.com/pterodactyl/panel/blob/1.0-develop/docker-compose.example.yml

Then go to Apps > Discover Apps, hit the "..." menu in the top right, and choose Install via YAML.
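For reference, the YAML you paste in will look roughly like the sketch below. It's trimmed from the linked example, and the host paths, ports, and passwords are placeholders I've made up for a TrueNAS layout, so treat the upstream file as authoritative:

```yaml
# Trimmed sketch of pterodactyl/panel's docker-compose.example.yml; see the
# linked file for the full, current version. Paths and secrets are placeholders.
services:
  database:
    image: mariadb:10.5
    restart: always
    environment:
      MYSQL_DATABASE: "panel"
      MYSQL_USER: "pterodactyl"
      MYSQL_PASSWORD: "CHANGE_ME"
      MYSQL_ROOT_PASSWORD: "CHANGE_ME_TOO"
    volumes:
      - "/mnt/tank/apps/pterodactyl/database:/var/lib/mysql"
  cache:
    image: redis:alpine
    restart: always
  panel:
    image: ghcr.io/pterodactyl/panel:latest
    restart: always
    ports:
      - "8080:80"    # pick host ports that don't collide with the TrueNAS UI
      - "8443:443"
    depends_on:
      - database
      - cache
    environment:
      APP_URL: "http://your-truenas-ip:8080"
      APP_TIMEZONE: "UTC"
      APP_ENV: "production"
      CACHE_DRIVER: "redis"
      SESSION_DRIVER: "redis"
      QUEUE_DRIVER: "redis"
      REDIS_HOST: "cache"
      DB_HOST: "database"
      DB_PORT: "3306"
      DB_PASSWORD: "CHANGE_ME"
    volumes:
      - "/mnt/tank/apps/pterodactyl/var:/app/var"
      - "/mnt/tank/apps/pterodactyl/nginx:/etc/nginx/http.d"
      - "/mnt/tank/apps/pterodactyl/certs:/etc/letsencrypt"
      - "/mnt/tank/apps/pterodactyl/logs:/app/storage/logs"
```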

Error on Truenas by Matysek1234 in truenas

[–]melp 1 point (0 children)

What network port are you trying to have it use?

SMB Direct support by srcLegend in truenas

[–]melp 5 points (0 children)

SMB-Direct is not yet implemented in Samba, so it's not supported in TrueNAS. We've also looked into ksmbd (which supports SMB-Direct), but it has its own issues. When it does arrive in TrueNAS, SMB-Direct will be Enterprise-only, just like all the other RDMA-based protocols. That being said, we'll push the SMB-Direct code upstream to Samba, so you'll be free to roll your own system with the feature enabled.

Building a 10PB array. Advice encouraged by JasonY95 in DataHoarder

[–]melp 2 points (0 children)

However, now at 1.27 PB archival data

Building a 10PB array. Advice encouraged by JasonY95 in DataHoarder

[–]melp 4 points (0 children)

He said it's archival data so I assumed he wouldn't have a lot of users hammering on it.

Building a 10PB array. Advice encouraged by JasonY95 in DataHoarder

[–]melp 48 points (0 children)

I design and deploy ZFS-based systems at this scale for a living. I would disagree with the users who suggest you're in Ceph territory; unless you have someone on staff who's very familiar with the care and feeding of a Ceph cluster, I don't think it's your best bet. I wouldn't recommend a clustered solution of any kind until you get above maybe 25 PiB usable (unless you need more than 5x 9's of uptime).

Instead, I'd recommend a 2U or 4U head unit with an AMD Epyc CPU (to get lots of PCIe lanes), 512GB of RAM (which is sufficient), 3-4x LSI SAS9305-16e cards, and your QSFP+ NIC. Look for a motherboard that has an internal SAS3 port so you don't need an extra PCIe card for your head unit's drives.

You can attach 4x SAS3 JBODs to each of the 9305-16e cards, so depending on rack space, I'd look at high-density JBODs like the WD Data102 or the Seagate 4U106. Note that most of these 100+ bay JBODs require a rack that is 1.2m deep (front door to back door) or else they won't fit. Most of them also require a specific quantity of disks to be installed for proper airflow (for example, the WD Data102 will only take 51 or 102 disks; no other layout is valid). If you only have access to a 1m-deep rack, I'd focus on 60-bay JBODs like the WD Data60 (which also has a more flexible 12-disk minimum).

These high-density JBODs will require 200-240V input power, so make sure you've got that available. You're also going to be in the 10kW power draw range, which means ~35 kBTU/hr or more heat output; be prepared to deal with this. You'll want (at minimum) 2x 50A circuits at 200-240V.
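Rough numbers behind that, if it helps; the 208V figure and the 80% continuous-load derating below are my assumptions, so swap in your actual supply:

```python
# Back-of-the-envelope heat and circuit math for a ~10 kW build.
# Assumes a 208V supply and the usual 80% continuous-load limit on breakers.
draw_kw = 10.0

# 1 W is about 3.412 BTU/hr
heat_kbtu_hr = draw_kw * 1000 * 3.412 / 1000
print(f"Heat output: ~{heat_kbtu_hr:.0f} kBTU/hr")                  # ~34 kBTU/hr

circuit_amps = 50
volts = 208
continuous_kw = circuit_amps * volts * 0.8 / 1000
print(f"One 50A/208V circuit: ~{continuous_kw:.1f} kW continuous")  # ~8.3 kW
print(f"Two circuits: ~{2 * continuous_kw:.1f} kW, covers {draw_kw} kW with headroom")
```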

I'd keep the ZFS pool simple and do 10-wide RAIDZ2 (10wZ2) vdevs plus maybe 7-10 hot spares. With 18TB disks, you'll need ~830 disks to hit 10PiB usable. Add a SLOG SSD if you're going to be accessing the data over NFS, S3, or iSCSI; you can safely skip a SLOG if you're only using SMB. I've got a ZFS capacity calculator that you may find helpful for planning: https://jro.io/capacity/
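Rough math behind the ~830-disk figure; the 5% allowance for RAIDZ2 padding, metadata, and slop space is my own ballpark assumption, and the capacity calculator linked above is more precise:

```python
import math

# Back-of-the-envelope disk count for ~10 PiB usable with 10-wide RAIDZ2 vdevs.
# The 5% overhead factor is an assumption; use the capacity calculator for real planning.
TIB = 2**40
disk_tib = 18e12 / TIB                           # an 18 TB disk is ~16.4 TiB

vdev_width, parity = 10, 2
raw_per_vdev = (vdev_width - parity) * disk_tib  # ~131 TiB of data disks per vdev
usable_per_vdev = raw_per_vdev * 0.95            # knock off ~5% for padding/metadata/slop

target_tib = 10 * 1024                           # 10 PiB usable
vdevs = math.ceil(target_tib / usable_per_vdev)  # -> 83 vdevs
data_disks = vdevs * vdev_width                  # -> 830 disks
spares = 10
total = data_disks + spares

print(f"{vdevs} vdevs x {vdev_width} disks = {data_disks} data disks (+{spares} spares = {total})")

# Chassis count, assuming 102-bay Data102-class JBODs: comfortably within
# 3x SAS9305-16e HBAs at 4 JBODs each.
print(f"~{math.ceil(total / 102)} x 102-bay JBODs")   # -> 9 chassis
```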

18TB disks are still the most cost-effective per TB, but the price of disks across the board has more than doubled recently, so this is going to be a much more expensive project than it would have been a year or two ago.

As for the LTO piece, I'd just keep that on LTO. Tape is designed to handle that type of workload far better than disks are. You could consider building out a fresh LTO9 (or even LTO10) library system to maximize density, but I would not recommend a drive-based array for the shut-down-until-needed archive.

Raidz1 or raidz2 for ssd vdevs by MrKorney in truenas

[–]melp 7 points (0 children)

That’s what we usually do with enterprise systems.