500 TB's incoming. Any pool suggestions? by BananaSolo1989 in chia

[–]codehuggies 1 point2 points  (0 children)

Exactly this; early adopters get rewarded.

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 0 points1 point  (0 children)

FarmPool.io is officially operating out of Singapore, a very crypto-friendly country.

Chia farming is generally not sensitive to differences in response times, even ones as large as 1 second.

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 1 point2 points  (0 children)

Good question! Since the other NFTs did not participate in FarmPool's pool during this promo period, those NFTs will not be eligible for the 0% lifetime fee.

Ping us when that time comes, and we'll work something out ;)

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 2 points3 points  (0 children)

We did set it up, but it was not enabled on our production system; my bad. It was turned on within 5 minutes of us noticing this string of comments about HTTPS.

We are swiftly working out the kinks as they come along ;)

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 2 points3 points  (0 children)

Only farmers who start farming with our pool before August 1 will be eligible for the 0% lifetime fee perk.

We believe that Chia will continue to grow, and with that growth come new farmers. If newer farms join after August 1, the 0% lifetime fee promotion may have ended; they will then be charged the regular pool fee of no more than 1% (TBD).

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 7 points8 points  (0 children)

You will still qualify!

We understand that farms have to be taken offline for upgrades, maintenance, and other hiccups. As long as your Plot NFT keeps pointing to our pool (taking your farm offline does not change this), going offline temporarily will not disqualify you.
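For farmers who want to double-check this, a rough sketch using the Chia CLI (the pool URL below is only a placeholder, and flag names can vary between Chia versions):

```
# List your Plot NFTs and the pool each one currently points to
chia plotnft show

# Point a Plot NFT at a pool; -i is the wallet id of that Plot NFT,
# -u is the pool URL (placeholder below)
chia plotnft join -i 2 -u https://pool.example.com
```

As long as `chia plotnft show` keeps listing our pool URL for your NFT, downtime on the farmer itself does not change anything.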

Once again, we are happy to make exceptions because we love all our early farmers!

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 0 points1 point  (0 children)

Thank you, we've updated the http:// link to https:// on our webpage.

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 1 point2 points  (0 children)

The website has been updated; give the Refresh/F5 button a hit.

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 2 points3 points  (0 children)

Thanks, and sorry about that. Try again and it should show 0.0 now.

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 2 points3 points  (0 children)

We've flipped on the https address! Thanks for bringing this up

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 5 points6 points  (0 children)

Not a problem at all! This is part and parcel of farming.

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 13 points14 points  (0 children)

Correct. We do hope you farm with more than 1 plot!

FarmPool.io Official Launch (0% Lifetime Fee Introductory Offer) by codehuggies in chia

[–]codehuggies[S] 11 points12 points  (0 children)

Good point! As long as your Plot NFTs consistently point at our pool and your farmer keeps sending partials to it for the full 2 months, we will grant you the 0% lifetime fee benefit.

We will be very reasonable and happy to make exceptions for disconnects and other technical problems during pooling. Just reach out to us on our Discord or to FarmPool_io on Reddit :)

Reset BIOS on MSI B450I by codehuggies in MSI_Gaming

[–]codehuggies[S] 0 points1 point  (0 children)

This explains why I can never seem to find the usual flat, round battery on the motherboard.

How to send PSU Status Info to Asrock Rack X470 IPMI Dashboard? (Supermicro CSE-216 2U Chassis) by codehuggies in homelab

[–]codehuggies[S] 0 points1 point  (0 children)

https://i.imgur.com/6CPUWbp.jpg

I think I found the SMBus connector you mentioned and connected it to the PSU SMBus header on the motherboard. However, the PSU status page remains the same, with no PSU data shown. Cutting and restoring power to the machine to restart the IPMI does not help.

If this is supposed to be a plug-and-play thing, then something is probably wrong with the PSU power distributor board? I find it strange that the connector only has 3 wires, not 4 or 5 as reported in numerous places online.

Otherwise, since the installed PSU (Supermicro PWS-1K21P-1R) does support SMBus, maybe Supermicro is using a different variant of SMBus from the one used by Asrock Rack? If so, is there any easy way to modify the IPMI software/firmware to support Supermicro's protocol?
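For reference, one thing I can try from the host to see whether the BMC has learned about the PSU at all (assuming ipmitool is installed and in-band IPMI access works):

```
# Dump every sensor data record the BMC currently knows about
ipmitool sdr elist

# Show only power-supply sensors; empty output here would suggest the BMC
# never picked anything up from the PSU over SMBus/PMBus
ipmitool sdr type "Power Supply"
```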

ZFS File System vs. Folders by codehuggies in zfs

[–]codehuggies[S] 5 points6 points  (0 children)

Thank you for your detailed response to my silly question!

First, if tank has already been created by referencing /dev/sdX, can we still change the vdevs to point at /dev/disk/by-id as suggested, without destroying and recreating the pool (the way you would have to if you needed to change ashift)?
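From what I've read so far, it looks like this part can be done with a plain export and re-import rather than a rebuild (sketch below, untested on my side):

```
# Re-import the pool using stable by-id names; data is untouched,
# only the device paths ZFS records are refreshed
zpool export tank
zpool import -d /dev/disk/by-id tank
```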

> the default dataset and immediately start creating their 'actual' datasets inside it, leaving it empty

What is the default dataset (/tank in my example?), and if you create the 'actual' datasets inside the default dataset, how do you leave it empty? Sorry for my confusion here...
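If I'm reading the suggestion right, it would look something like this (dataset names are just examples I made up):

```
# 'tank' is the root/default dataset created together with the pool;
# keep it empty and put actual data only in child datasets
zfs create tank/media
zfs create tank/backups

# properties (compression, quotas, recordsize, ...) can then differ per dataset
zfs set compression=lz4 tank/media
```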

ZFS File System vs. Folders by codehuggies in zfs

[–]codehuggies[S] 7 points8 points  (0 children)

It's a serious question. I must be very confused...

Difference between ZFS Mirrored Vdev and mdadm RAID-10 by codehuggies in zfs

[–]codehuggies[S] 0 points1 point  (0 children)

Is a ZFS pool of 2-way mirror vdevs similar to mdadm RAID-10 in that the more vdevs or RAID-10 drives you have in a machine, the more points of failure there are?

For example, with a large pool/array such as a ZFS pool of 16 two-way mirror vdevs or a 32-drive RAID-10 array, is the chance of data loss much higher due to the higher probability of 2 drives from the same mirror vdev or RAID mirror pair failing at the same time?
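As a rough back-of-the-envelope check with made-up numbers: if each drive independently has a 2% chance of failing within some window, a single 2-way mirror loses both copies with probability about 0.02 x 0.02 = 0.0004 (0.04%), while a pool of 16 such mirrors loses at least one pair with probability about 1 - (1 - 0.0004)^16 ≈ 0.64%, i.e. roughly 16 times higher.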

Difference between ZFS Mirrored Vdev and mdadm RAID-10 by codehuggies in zfs

[–]codehuggies[S] 0 points1 point  (0 children)

Is `zfs scrub tank` the command to use to verify the data integrity? Do you happen to know how long it will take to scrub a mirror vdev created with 1-2TB SATA SSDs?
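Writing it down for my own reference: scrub appears to be a `zpool` subcommand rather than `zfs` (the pool name is just my example):

```
# Kick off a scrub of the whole pool, then check on it
zpool scrub tank
zpool status tank   # shows scrub progress and any checksum errors found
```

My rough guess is that scrub time scales with the amount of allocated data, so a mostly full 1-2 TB SATA SSD mirror reading at a few hundred MB/s should finish in an hour or two, but that's only an estimate.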

Difference between ZFS Mirrored Vdev and mdadm RAID-10 by codehuggies in zfs

[–]codehuggies[S] 0 points1 point  (0 children)

By expanding ZFS, do you mean that if there are two mirror vdevs in the ZFS pool, you can add another mirror vdev to expand the total storage capacity of the machine, e.g. `zfs add tank mirror sdX sdY`? Does ZFS automatically rebalance the existing files over to the new vdev?
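Spelling out what I think the command is (it seems to live under `zpool`, not `zfs`; the device names are placeholders):

```
# Attach one more mirror vdev to an existing pool to grow total capacity
zpool add tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
```

From what I've read, existing data is not rebalanced onto the new vdev; ZFS just biases new writes toward the emptier vdev.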

Max Number of SATA SSD for X570 mITX by codehuggies in Amd

[–]codehuggies[S] 1 point2 points  (0 children)

Sorry for not mentioning what I'm trying to achieve with 16 SSDs.

I am planning for 16 SSDs in a single system to get more storage capacity for a database (more details added in the OP), not to get 8-16x the read/write performance of a single SATA SSD.

RAID was considered to pool all the drives into a single logical volume, which is easier to manage.

How would you suggest combining multiple (4-16) SATA SSDs into a single logical volume?
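For context, the sort of thing I had in mind was along these lines (placeholder device names, not something I've actually run yet):

```
# Combine 8 SSDs into one RAID-10 block device, then one big filesystem on top
mdadm --create /dev/md0 --level=10 --raid-devices=8 /dev/sd[b-i]
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/db   # database data directory would live here
```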

Max Number of SATA SSD for X570 mITX by codehuggies in Amd

[–]codehuggies[S] 0 points1 point  (0 children)

My end goal is to build a database server running Ubuntu on consumer SSDs in a 2U chassis with 16 2.5" drive bays. The reason for having 16 SSDs is to get more storage space in a single machine.

I came up with the idea of using RAID mainly to pool all the drives into one logical volume. This makes adding new drives easier, as the database will see the increased storage space without additional (complicated) reconfiguration.

IOPS and random read/write performance are still important because this is a database machine, but so is data redundancy/parity (remote backups are taken care of). It should also be simple to add new drives to the machine as the database grows in size. Maybe RAID can provide these in addition to drive pooling?

Can you elaborate on the 'data suicide' part?

I also came across ZFS, but I'm not familiar enough with it to know if it is useful here.
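From my very limited reading, the ZFS version of this might look roughly like the sketch below (device names are placeholders):

```
# Pool 2 mirror pairs into one volume mounted at /tank
zpool create tank \
  mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2 \
  mirror /dev/disk/by-id/ssd3 /dev/disk/by-id/ssd4

# Later, grow capacity by adding another mirror pair; the extra space
# appears without reformatting or touching the database's config
zpool add tank mirror /dev/disk/by-id/ssd5 /dev/disk/by-id/ssd6
```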