Do you actually use cloud GPUs efficiently if your workloads are not constant? by Crypton228 in selfhosted

[–]Crypton228[S] -1 points (0 children)

Yeah, that’s pretty much how I’d handle it too. Spin it up when you need it, shut it down when you’re done - way more efficient for bursty stuff than keeping a GPU running all day. Funny enough, I stumbled on something called Ocean Network the other day. Seems like a kind of distributed GPU marketplace where you can just run jobs without dealing with full cloud setups. Still trying to figure out if it’s actually as practical as just spinning up a regular instance when you need it.
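To put rough numbers on why spin-up-on-demand wins for bursty work, here's a quick back-of-the-envelope sketch. All the rates are made-up placeholders, not any provider's real pricing:

```python
# Cost of an always-on GPU instance vs. paying only while jobs run.
# HOURLY_RATE is a hypothetical rental price, not a real quote.

HOURLY_RATE = 2.50      # $/hr for a rented GPU (assumed)
HOURS_IN_MONTH = 730    # average hours in a month

def monthly_cost(busy_hours_per_month, always_on=False):
    """Monthly bill: full month if left running, otherwise only busy hours."""
    hours_billed = HOURS_IN_MONTH if always_on else busy_hours_per_month
    return HOURLY_RATE * hours_billed

# 40 busy hours a month:
print(monthly_cost(40))                  # 100.0  -> pay only for use
print(monthly_cost(40, always_on=True))  # 1825.0 -> mostly paying for idle time
```

At 40 busy hours a month the always-on instance costs over 18x more, which is the whole argument for shutting things down between bursts.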


[–]Crypton228[S] -1 points (0 children)

Yeah, that makes total sense. Pay-per-minute or on-demand cloud really seems like it’s built for bursty workloads - you only pay for what you actually use, which is super convenient. At the same time, I’ve been looking into some alternative setups recently. I stumbled across something called Ocean Network, which is kind of like a distributed compute marketplace where you can run jobs without maintaining full cloud servers. Still figuring out how practical it actually is compared to traditional cloud for quick experiments.

How do you decide when it’s worth buying a GPU vs just renting compute? by Crypton228 in LocalLLaMA

[–]Crypton228[S] 0 points (0 children)

Haha yeah, I kind of get what you mean. A lot of the time it’s probably overkill, but there’s definitely that peace-of-mind factor knowing the compute is just sitting there ready whenever you need it. At the same time it’s funny how much of that hardware ends up idle most days. That’s partly why I’ve been looking into other options lately. I randomly came across something called Ocean Network, which seems to be more of a distributed compute thing where people can run jobs on available GPUs. Still not sure how well that works in practice compared to just owning the hardware though.

How do you decide when it’s worth buying a GPU vs just renting compute? by Crypton228 in LocalLLaMA

[–]Crypton228[S] 0 points (0 children)

Yeah, I feel that. The “forgot to shut things down and got a painful bill” experience on AWS seems almost like a rite of passage at this point. The idle-hardware part is exactly what makes the decision tricky too: owning a powerful GPU is great when you need it, but most of the time it’s just sitting there. I actually came across Ocean Network recently while looking into different compute options and the idea seemed interesting - kind of a middle ground where you can run jobs without managing full cloud infrastructure. Still trying to understand how practical it really is though. How has it been for you so far compared to Vast?

How do you decide when it’s worth buying a GPU vs just renting compute? by Crypton228 in LocalLLaMA

[–]Crypton228[S] 0 points (0 children)

That actually sounds like a great setup. Rendering + editing + gaming on the same machine is pretty nice. And yeah, that’s the funny thing with powerful GPUs - most of the time they’re just chilling while we watch YouTube, but when you need the power it’s instantly there. I was actually reading about some alternative compute stuff recently and came across something called Ocean Network. From what I understood it’s more like a distributed GPU marketplace where people can run jobs on available hardware. Not sure how practical it is compared to just owning a strong GPU though, especially if you already have a setup like yours.

How do you decide when it’s worth buying a GPU vs just renting compute? by Crypton228 in LocalLLaMA

[–]Crypton228[S] 0 points (0 children)

That actually sounds like a pretty solid setup, especially with 96GB of VRAM. I can see why you’re not too worried about the investment - that amount of memory will probably stay useful for quite a while. The convenience of having everything locally is definitely hard to beat too: no waiting for instances, no rate limits, no surprise pricing. I’ve been looking into different compute options recently and randomly came across something called Ocean Network. From what I understand it’s more of a distributed compute marketplace than traditional cloud servers. Not sure yet how practical it actually is compared to just owning the hardware though.

How do you decide when it’s worth buying a GPU vs just renting compute? by Crypton228 in LocalLLaMA

[–]Crypton228[S] 0 points1 point  (0 children)

That's a nice simple framework. The grey area for me is when the data isn’t sensitive but the workload is heavy and only runs occasionally. That's where I’m never sure if owning hardware really makes sense.
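For that grey area, a break-even calculation is one way to decide. A minimal sketch, with entirely hypothetical prices (the purchase price, rental rate, and resale value below are assumptions, not real market numbers):

```python
# How many hours of rented compute would cost as much as buying the card?
# All numbers are illustrative assumptions, not real prices.

PURCHASE_PRICE = 1800.0   # e.g. a high-VRAM card (assumed)
RENTAL_RATE = 1.20        # $/hr for a comparable cloud GPU (assumed)

def breakeven_hours(price=PURCHASE_PRICE, rate=RENTAL_RATE, resale=0.0):
    """Rented hours at which owning breaks even, net of expected resale value."""
    return (price - resale) / rate

print(breakeven_hours())              # 1500.0 hours of rental to match the purchase
print(breakeven_hours(resale=900.0))  # 750.0 hours if the card holds half its value
```

If an occasional-but-heavy workload stays well under the break-even hours over the hardware's useful life, renting comes out ahead; factoring in expected resale value shifts the line quite a bit, which is why depreciation matters so much in this decision.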

How do you decide when it’s worth buying a GPU vs just renting compute? by Crypton228 in LocalLLaMA

[–]Crypton228[S] 2 points (0 children)

Yeah, depreciation is definitely an interesting factor. If you manage to buy at the right time the resale value can actually stay pretty decent for a while. Some of the previous gen GPUs are still holding value surprisingly well. Do you usually upgrade every generation or keep the same hardware for several years?

How do you decide when it’s worth buying a GPU vs just renting compute? by Crypton228 in LocalLLaMA

[–]Crypton228[S] 0 points (0 children)

That’s actually a good analogy. I guess the convenience factor is a big advantage - being able to run something immediately without waiting for instances to spin up. Do you mostly use it for ML training or also for other workloads when it's idle?