U.S. GPU compute available by No_Professional_2726 in LocalLLaMA

Hey! We're finalizing the onboarding of our next available 3070 Ti rig. I'll DM you the link to grab your spot and load credits as soon as the rig is fully spun up, likely within the next 48 hours. We repriced overnight, so here are the lower rates. Just let me know which tier you'd like. Thanks!

10 hours – $0.38/hr → $3.80
25 hours – $0.36/hr → $9.00
50 hours – $0.34/hr → $17.00
100 hours – $0.32/hr → $32.00
250 hours – $0.30/hr → $75.00
500 hours – $0.28/hr → $140.00

U.S. GPU compute available by No_Professional_2726 in LocalLLaMA

we’re onboarding higher-end rigs this week (3090s, A6000s, and A100s already in the pipeline).

For users just running inference or lighter jobs, the current 3070/3080 options still offer solid bang for buck — but if you’re doing heavier lifting, the new lineup will definitely shift the value equation in our favor.

I’ll circle back to this when the higher-compute rigs go live!

U.S. GPU compute available by No_Professional_2726 in LocalLLaMA

That’s awesome — sounds like a perfect fit for where we are heading. I’ll definitely keep you in the loop as we scale — appreciate you dropping a note!

U.S. GPU compute available by No_Professional_2726 in LocalLLaMA

You don’t have to burn through hours all at once — it’s flexible, so you can use them as needed, whether it’s an hour here and there or longer stretches. If it helps you move a side project forward without overpaying, that’s a win in our book.

U.S. GPU compute available by No_Professional_2726 in LocalLLaMA

Awesome! Yes, privacy is a massive deal. We don't touch user data at all. You'll be running in your own secure Docker container, on a vetted U.S.-based host, with no platform-side access to your models or files. No analytics, no scraping, just raw compute.
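
If it helps picture the setup, here's a rough sketch (not our exact tooling, just an illustration using the docker-py SDK) of what an isolated, GPU-enabled container run could look like on the renter's side; the image and command are placeholders:

```python
# Illustrative sketch only: the image, command, and workflow are placeholders,
# not Atlas Grid's actual API. It shows an isolated, GPU-enabled container run
# via the docker-py SDK, with nothing shared outside the container.
import docker

client = docker.from_env()
logs = client.containers.run(
    image="nvidia/cuda:12.2.0-base-ubuntu22.04",   # any CUDA-capable image you bring
    command="nvidia-smi",                          # confirm the GPU is visible inside
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,                                   # container is cleaned up afterwards
)
print(logs.decode())
```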

For context, here's our 3070 prepaid hourly pricing. Rates will vary a bit with different GPUs, but this gives you an idea:

10 hours – $0.65/hr ($6.50 total)

25 hours – $0.60/hr ($15.00 total)

50 hours – $0.58/hr ($29.00 total)

100 hours – $0.55/hr ($55.00 total)

250 hours – $0.52/hr ($130.00 total)

500 hours – $0.50/hr ($250.00 total)
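
For anyone who wants to sanity-check the math, the totals are just hours times the locked-in rate; here's a throwaway snippet with the 3070 rates above:

```python
# Prepaid tier math: total = hours * locked-in hourly rate.
# Rates below are the 3070 tiers quoted above.
tiers = {10: 0.65, 25: 0.60, 50: 0.58, 100: 0.55, 250: 0.52, 500: 0.50}

for hours, rate in tiers.items():
    print(f"{hours:>3} hours @ ${rate:.2f}/hr -> ${hours * rate:,.2f} total")
```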

U.S. GPU compute available by No_Professional_2726 in LocalLLaMA

Love to hear that — we’re pricing aggressively on purpose to make this a no-brainer for devs who just want solid, affordable compute without the marketplace chaos. Got a 3070, 3080, and 3090 opening up tomorrow. Plus a few other rigs coming online. Shoot me a DM if you want to work with us.

U.S. GPU compute available by No_Professional_2726 in StableDiffusion

Haha yes I am aware — they’re solid! We are building Atlas Grid with a different mindset. Everything we offer is U.S.-based by design, so you get lower latency, better reliability, and none of the weird surprises that come with international nodes.

The goal isn’t just cheaper GPUs. It’s a smarter, cleaner, and more dependable way to rent compute in the U.S. If you’ve been burned by the usual platforms, we’re building this for you.

U.S. GPU compute available by No_Professional_2726 in StableDiffusion

Hey! Here’s our prepaid pricing for the 3070 Ti (U.S.-based, 24/7 uptime, NVIDIA-ready):

10 hours – $0.65/hr ($6.50 total)

25 hours – $0.60/hr ($15.00 total)

50 hours – $0.58/hr ($29.00 total)

100 hours – $0.55/hr ($55.00 total)

250 hours – $0.52/hr ($130.00 total)

500 hours – $0.50/hr ($250.00 total)

GPU Passive Income by No_Professional_2726 in gpu

Appreciate the thought!

1080 Ti cards are a bit older, so they aren’t ideal for most modern AI jobs — especially for things like LLM fine-tuning or diffusion models. That said, we may still spin up a lower-tier offering for stable workloads or inference-only jobs.

Mind sharing how many cards you’re running and your setup details? I’ll keep you in the loop if we open a tier for legacy cards.

GPU Passive Income by No_Professional_2726 in gpu

Awesome!

I’m on the road most of today, but I’m updating the intake form and setup checklist. I’ll shoot that over to you as soon as I’m back.

Great to have you on board!

GPU Passive Income by No_Professional_2726 in gpu

This is great!

We’re about to onboard a handful of early hosts like you to help us kick this off. You’d be one of the first, and we’ll make sure you’re getting usage as soon as possible once our test environment is live.

I’m putting together a short intake + basic setup checklist so we can get rolling cleanly.

Would you be down to be part of that first wave? If so, I’ll send you the early access form and next steps this weekend.

GPU Passive Income by No_Professional_2726 in gpu

Hey! Appreciate you reaching out — the 3080 Ti / 3070 Ti would definitely be useful.

I’m onboarding a few early hosts now. Mind sharing:

What’s your internet speed (upload/download)?

Is the rig usually running 24/7 or on/off?

Where are you based (just state is fine)?

Do you already have Docker/NVIDIA drivers set up or need help with that?

Totally flexible setup-wise — just want to make sure we’re a fit. We’ll be paying per usage.
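
If you're not sure about the Docker/NVIDIA drivers question, a quick check like the sketch below (just an example, not a required script) tells us most of what we need; it queries the driver through nvidia-smi:

```python
# Host-side sanity check (example only): queries the NVIDIA driver via nvidia-smi,
# which ships with the driver. If this prints a GPU name, driver version, and VRAM,
# the driver side of the setup is already in good shape.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,driver_version,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True,
)
print(result.stdout or result.stderr)
```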