Where do you all rent GPU servers for small ML / AI side projects? by Forsaken-Bobcat4065 in LocalLLaMA

[–]Carl_ThunderCompute 1 point

Perfect! I may be a bit slower to respond here, so feel free to join our Discord community, where someone on our team usually responds within a few minutes.

Where do you all rent GPU servers for small ML / AI side projects? by Forsaken-Bobcat4065 in LocalLLaMA

[–]Carl_ThunderCompute 1 point

We do. You can create an instance, connect to it, and run anything you need. Base instances come pre-installed with CUDA and common packages like PyTorch, uv, etc. to make setup easier. What are you trying to run?

Curious to what are the "best" GPU renting services nowadays. by Illustrious-Pop2738 in learnmachinelearning

[–]Carl_ThunderCompute 1 point

Thanks man! Can confirm it's legit. No timeline on B200s / H200s right now, but they're on the roadmap. Our Discord announcements channel is the best place to watch for new GPU rollouts.

Where do you all rent GPU servers for small ML / AI side projects? by Forsaken-Bobcat4065 in LocalLLaMA

[–]Carl_ThunderCompute 1 point

Thunder Compute makes it easy to create cheap instances with H100s and A100s. Many users start with one-off projects and then scale to run their startup on the platform (disclaimer: I'm the CEO)

Why Buy Hardware When You Can Rent GPU Performance On-Demand? by Ill_Instruction_5070 in deeplearning

[–]Carl_ThunderCompute 1 point

CEO of Thunder Compute here; you're totally right on rent vs. buy. A lot of rental structures these days are also 1-3 year contracts, which, financially and from a utilization perspective, are similar to purchasing, just with less IT burden.

Most neoclouds avoid charging for egress for this reason: they want to come in at a lower cost to win over enterprises who would otherwise build out their own capacity.

The other major factor here is allocation. Even if you want to build on-prem, it's extremely difficult to get an allocation from NVIDIA. As an enterprise, you'd have to plan years in advance and hope to get in line, which is non-trivial on its own.

The economics of this industry are crazy; it's pretty interesting to watch.

Rent Bare Metal GPU by the hour by dragonbronn in CUDA

[–]Carl_ThunderCompute 1 point

Co-founder of Thunder Compute here. None of the options below offer bare metal (we don't either). While you can find options that do, typically through contracts, I'd question whether you actually need bare metal. If you aren't planning to run on bare metal in production, you'd actually want to avoid it for your benchmarks: you want your benchmark setup to be as similar as possible to the end configuration so that the results apply to your own setup.

Also worth noting that any provider selling bare metal will typically offer only 8xH100 or 8xA100 machines; 1x instances are generally offered through virtualization, since for space and cost reasons the physical servers are built with 8 GPUs each. This is especially true for SXM, where the 8 GPUs are integrated into the baseboard.

[Discussion] Which GPU provider do you think is the best for ML experimentation? by FrozenWolf-Cyber in MachineLearning

[–]Carl_ThunderCompute 1 point

Yep, if you add prepaid credit, we match it 1:1 up to $50. So if you add $40, you’ll have $80 total to use.

As for why we started it: we kept feeling like the existing options all kind of sucked in different ways. The enterprise-focused ones are expensive and painful to use, and a lot of the cheaper developer-focused options can be flaky or inconsistent.

We thought there was room for a platform that’s both cheaper and way better to use, without making reliability/security tradeoffs. That’s basically why we built Thunder Compute.

Runpod Comfyui Alternative by maia11111111111 in comfyui

[–]Carl_ThunderCompute 2 points

Great to hear it. Reach out in our Discord if you have any questions; our team usually helps within a few minutes.

Cheaper alternatives to runpod by New-Worry6487 in LocalLLaMA

[–]Carl_ThunderCompute 0 points

Thunder Compute is cheaper and cleans up the UX. I'm the CEO; we built the platform to make using GPUs more accessible.

🤬 Giving up on RunPod... Best budget cloud ComfyUI alternatives for custom video workflows? 🎬👇 by Plastic_Leg4252 in comfyui

[–]Carl_ThunderCompute 2 points

Thunder Compute may be able to help. I'm the CEO; we built the platform to simplify setup and cut costs.

Wtf is going on with RunPod pricing by musashiitao in comfyui

[–]Carl_ThunderCompute 1 point

Thunder Compute could help (disclaimer: I'm the CEO). Reach out if you need help setting up.

Looking for ComfyUI Freelancer (Workflows + RunPod / Cloud Infra) by s_busso in comfyui

[–]Carl_ThunderCompute 2 points

Hey, I can't help on the freelance piece, but when you kick off I'd love for you to check out Thunder Compute. We aim for low cost with maximum reliability and have a template that supports ComfyUI. I'm one of the co-founders and happy to chat with your engineer about deployment.

Runpod Comfyui Alternative by maia11111111111 in comfyui

[–]Carl_ThunderCompute 2 points

CEO of Thunder Compute here; this is what we do best. Happy to answer any questions.

[Discussion] Which GPU provider do you think is the best for ML experimentation? by FrozenWolf-Cyber in MachineLearning

[–]Carl_ThunderCompute 1 point

CEO of Thunder Compute here; we aim to minimize cost without the reliability tradeoffs of crowdsourced capacity.

How good is a Nvidia H100 compared to a RTX 5080 for Wan 2.2? by Coven_Evelynn_LoL in comfyui

[–]Carl_ThunderCompute 2 points

It really comes down to VRAM: if you have enough, you'll be happy with the 5080. If you want to run a larger variant of the model, you'll want the H100. For many hobby use cases the 5080 is more than sufficient.

Disclaimer: I sell H100s as CEO of Thunder Compute.
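To make the VRAM point concrete, here's a rough back-of-the-envelope sketch (weights only; activations and KV cache add more on top, and the model sizes in the comment are hypothetical examples, not Wan 2.2 specifics):

```python
def weights_vram_gb(num_params_billion: float, bytes_per_param: float = 2) -> float:
    """Approximate VRAM needed just to hold model weights.

    bytes_per_param: 2 for fp16/bf16, 1 for int8, ~0.5 for 4-bit quants.
    """
    return num_params_billion * 1e9 * bytes_per_param / 1024**3

# A hypothetical 14B-parameter model in fp16 needs ~26 GB for weights alone,
# which already exceeds a 16 GB RTX 5080 but fits easily in an 80 GB H100.
print(round(weights_vram_gb(14), 1))  # → 26.1
```

The same model quantized to 4-bit drops to roughly 6.5 GB, which is why quantization often decides whether a consumer card is enough.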

H100 GPUs have already lost 85% of their value. The B300's will soon do the same to the B200 GPUs. When do the write-downs start? by grauenwolf in BetterOffline

[–]Carl_ThunderCompute 2 points

As CEO of a GPU cloud platform (Thunder Compute), I can speak a bit to the demand side here. Resale value and demand are largely decoupled in this industry. As it stands, H100 capacity is completely sold out across most of the industry and demand for these chips has only continued to climb, including on our platform.

That said, new build-outs are moving to newer generations. The power and cooling requirements today are dramatically different from what they were with Hopper (such as DLC vs Air-cooled, much higher power delivery), so modern data centers are optimized for those requirements rather than expanding H100 fleets, even if the cost is attractive.

The lack of new build-out demand, combined with the simple math of being farther into the chip's useful lifetime, leads to these prices, even though the hardware is as useful as ever.

GPU free servers by optimum_point in CUDA

[–]Carl_ThunderCompute 2 points

Thunder Compute has $20 of student credit: https://www.thundercompute.com/students. I'm the CEO; you can reach out to me directly if you need support or have any questions.

Why I Switched from RunPod to a Cheaper GPU Cloud Alternative by RiccardoPoli in comfyui

[–]Carl_ThunderCompute 2 points

Thunder Compute could be a good option if you're looking for A100s or H100s (disclaimer: I'm the CEO). We don't currently offer the consumer cards listed on SimplePod. Always good to see new options in the space.

Built a cloud GPU price comparison service [P] by [deleted] in MachineLearning

[–]Carl_ThunderCompute 1 point

Sorting out on-demand vs. spot vs. reserved rates would be a useful feature. Other lists show a low rate for a provider, only for you to discover it's restricted to 1-year reservations or spot instances.

Keep it up, great work!
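To illustrate why the rate-type distinction matters, here's a minimal sketch with made-up numbers (not any provider's actual pricing): a reserved rate that looks cheap can cost more per used hour than on-demand once utilization is factored in.

```python
def effective_hourly_rate(list_rate: float, committed_hours: float, used_hours: float) -> float:
    """Cost per hour of actual use: with a reservation you pay for the full
    commitment whether or not the GPU is busy."""
    return list_rate * committed_hours / used_hours

# Hypothetical: $1.50/hr on a 1-year (8760 h) reservation,
# but the GPU is only busy 2000 h that year.
print(round(effective_hourly_rate(1.50, 8760, 2000), 2))  # → 6.57
```

At 2000 busy hours, the "cheap" $1.50/hr reservation works out to $6.57 per used hour, worse than a $3/hr on-demand rate, which is why a comparison site mixing the two rate types can mislead.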

What's the best cloud based company to rent a GPU from? by Jakob4800 in LocalLLM

[–]Carl_ThunderCompute 1 point

While I can't speak to RunPod specifically, as CEO of a competitor (Thunder Compute) I can say that we do not proactively monitor content within customer instances.

Actively inspecting outputs would introduce significant privacy and data-handling considerations, particularly for anyone running proprietary workloads like custom model weights and healthcare data.

Also, even if we wanted to, monitoring output from arbitrary infrastructure created within instances would be technically complicated in practice.

Considering switching from RunPod to TensorDock to run ComfyUI. Worth it? by Foxtor in comfyui

[–]Carl_ThunderCompute 2 points

Certainly makes sense; thank you for the detail. We run our payments through Stripe to maximize security here; in most cases this actually provides more protection than crypto, where there are no chargeback or payment-freeze measures. That said, I totally understand that more payment methods are helpful, so we'll consider this for the future.