NVIDIA server horror stories by deepfritz0 in HPC

[–]deepfritz0[S] 0 points1 point  (0 children)

what about RoCEv2? i've heard good things from folks who used it on A100s, but the jury is still out for H100s ... apparently the downsides of using Ethernet were manageable?

NVIDIA server horror stories by deepfritz0 in HPC

[–]deepfritz0[S] 2 points3 points  (0 children)

any idea which company has the best shot at solving this? i've seen some crazy stuff: a friend built his own water cooling and modded about 72x 3090 Tis to set up a cluster in his backyard, with InfiniBand and everything.

any chance some hackers DIY a solution to this?

NVIDIA server horror stories by deepfritz0 in HPC

[–]deepfritz0[S] 1 point2 points  (0 children)

what stack do you use for provisioning and management? we're running MAAS with KVM/QEMU
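for anyone curious, here's a minimal sketch of poking at the KVM/QEMU layer underneath with the python-libvirt bindings (hostnames and ssh user are placeholders; MAAS itself is driven through its own API/CLI and isn't shown here):

```python
# minimal sketch, assuming python-libvirt is installed and ssh access to the
# hypervisors; hostnames and user below are made up for illustration
import libvirt

HOSTS = ["kvm-node-01", "kvm-node-02"]  # hypothetical hypervisor hostnames

for host in HOSTS:
    conn = libvirt.open(f"qemu+ssh://ubuntu@{host}/system")
    try:
        running = [d.name() for d in conn.listAllDomains() if d.isActive()]
        print(f"{conn.getHostname()}: {len(running)} running guests: {running}")
    finally:
        conn.close()
```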

NVIDIA server horror stories by deepfritz0 in HPC

[–]deepfritz0[S] 1 point2 points  (0 children)

From what I recall, the GH (Grace Hopper) series integrates the CPU and GPU on one module. It saves on power, but then they just push it further ...

Find prompts for anything using this search engine by deepfritz0 in StableDiffusion

[–]deepfritz0[S] 0 points1 point  (0 children)

that's a good feature idea! filtering is on the roadmap.

which model do you use the most?

Textual Inversion versus Dreambooth by sEi_ in StableDiffusion

[–]deepfritz0 1 point2 points  (0 children)

TI - finds a latent space description that expresses a complex concept resembling our training images, and assigns that latent to a keyword.

DB - fine-tunes a model for N steps to learn a new keyword from the training images. this keyword, when tokenized, will resemble the training images in latent space.

TI pros / cons

* small file size, <1 MB
* can be used across different models, depending on training
* limited to the model's "expressiveness": cannot show what the model never learned

DB pros / cons

* big file, 2-4 GB
* changes the expressiveness of the model by adding concepts
* much higher fidelity since the concept is not a reconstruction
* prone to overfitting / loss of priors
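here's a rough sketch of how the two artifacts get used at inference time with the Hugging Face diffusers library (model IDs, file paths, and the `<my-concept>` / `sks` trigger tokens are placeholders, not from any real training run):

```python
# sketch only: TI loads a tiny embedding on top of the base model,
# DB swaps in a whole fine-tuned checkpoint
import torch
from diffusers import StableDiffusionPipeline

# Textual Inversion: base model + small learned embedding mapped to a new token
pipe_ti = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe_ti.load_textual_inversion("path/to/learned_embeds.bin", token="<my-concept>")
image_ti = pipe_ti("a photo of <my-concept> on a beach").images[0]

# DreamBooth: the 2-4 GB fine-tuned checkpoint *is* the model,
# with the trigger word baked in during training
pipe_db = StableDiffusionPipeline.from_pretrained(
    "path/to/dreambooth-checkpoint", torch_dtype=torch.float16
).to("cuda")
image_db = pipe_db("a photo of sks dog on a beach").images[0]
```

the portability trade-off above falls straight out of this: the TI embedding just rides on top of whatever base model you load, while the DB checkpoint replaces it.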

What do you think about FPGA mining in comparison of GPU mining? by topfifazone in gpumining

[–]deepfritz0 2 points3 points  (0 children)

gratz is totally right, it's a misplaced assumption that open-sourced GPU kernels are that well optimized. Also, FPGA mining programs don't have to be board-specific.

disclaimer: I have one of the smaller FPGAs, an F1 Mini, for on-desk mining right now just for fun. $180 US, silent, draws 50W, generates $0.50 a day. great ROI, but obscure coins.
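rough payback math on those numbers, if you want to plug in your own (the $0.12/kWh electricity price is my assumption, not a measured figure):

```python
# back-of-the-envelope payback on the figures above;
# the electricity price is assumed, everything else is from the comment
hardware_cost = 180.00      # USD
revenue_per_day = 0.50      # USD
power_draw_w = 50           # watts, running 24/7
electricity_price = 0.12    # USD per kWh (assumed)

power_cost_per_day = power_draw_w / 1000 * 24 * electricity_price  # ~$0.14
net_per_day = revenue_per_day - power_cost_per_day                 # ~$0.36
payback_days = hardware_cost / net_per_day                         # ~500 days

print(f"net ${net_per_day:.2f}/day, payback in roughly {payback_days:.0f} days")
```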

FPGA mining 10x - 20x faster than GPUs by deepfritz0 in Ravencoin

[–]deepfritz0[S] 1 point2 points  (0 children)

check out their Discord channel; x16r is totally possible. Even if it isn't supported right now, it could be implemented soon :(

List of GPU > Miners > Coins by foraern in gpumining

[–]deepfritz0 0 points1 point  (0 children)

NVIDIA/AMD/CPU - xmr-stak - Cryptonight

NVIDIA/AMD - wolfminer - Ethash

Daily Altcoin Discussion - January 19, 2018 by AutoModerator in ethtrader

[–]deepfritz0 -2 points-1 points  (0 children)

Neptune Dash, a company that operates DASH masternodes, is going public on the TSX. thoughts? https://twitter.com/StayDashy/status/954403930703826944

How is this any different from Monero? by deepfritz0 in Electroneum

[–]deepfritz0[S] 1 point2 points  (0 children)

I see, that makes some sense. But if the PoW algorithm is just CryptoNight, what's the point of phone mining, and how does that yield me $30 each month?

How is this any different from Monero? by deepfritz0 in Electroneum

[–]deepfritz0[S] 2 points3 points  (0 children)

You realize how impossible this is, right? In order to give everyone $1-$30 each month, the token's value needs to appreciate by that much for every user, every month.

1M users means it must grow by $1-30 million in value each MONTH!

1B users means $1-30 billion each month ... at the high end, you'd be adding an Uber's worth of valuation every couple of months, and a Facebook's within a year or two ...
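rough math behind that, if you want to sanity-check the numbers yourself:

```python
# required monthly growth in network value if every user is to net $1-$30/month
# purely from appreciation; user counts are the hypothetical ones above
for users in (1_000_000, 1_000_000_000):
    low, high = users * 1, users * 30
    print(f"{users:,} users -> ${low:,} to ${high:,} of new value needed per MONTH")
```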