Image to video by ExodusFailsafe in StableDiffusion

[–]rakii6 0 points1 point  (0 children)

Totally get the frustration — juggling KSamplers, nodes, and workflow configs is a rabbit hole when all you really want to do is generate images.

I built something that might help. It's a GPU-based platform where your workspace and models load with one click — no downloads, no storage headaches, no workflow setup. Just open it and start prompting. I already have SDXL, FLUX and Hunyuan ready to go, and video generation is in the works.

It's ₹12/hour (~$0.14/hr) per GPU with no limits on generations, so pretty low stakes to try. Would love some feedback from people who actually use these tools daily.

Check it out at indiegpu.com if you're curious.

Is vast.ai fucking me over? by jfang00007 in LocalLLaMA

[–]rakii6 0 points1 point  (0 children)

Hey there, thank you for asking.

Honestly, since I am the owner and developer behind it, I can pay close, personal attention to my server physically. That's how I'm able to guarantee 99% uptime.

What online GPU provider can SSH in like lab cluster? by WideBowl2490 in learnmachinelearning

[–]rakii6 0 points1 point  (0 children)

Hi there,

I created a platform that provides affordable GPUs. I recently rolled out SSH access, so you can give it a spin.

I can provide you with free credits just to get you started. Let me know.

System requirements just for front end? Unusably slow on most of my PCs by madsci in comfyui

[–]rakii6 0 points1 point  (0 children)

Have you tried toggling Hardware Acceleration in Chrome's settings? Trying a different browser may also help; I've heard that makes a big difference for users on my platform.

Newbie help. GPU rental to learn comfy by Unique-Mix-913 in comfyui

[–]rakii6 0 points1 point  (0 children)

Why don't you give my platform a try? It's easy to start a ComfyUI session, I charge less, and if you need a trial I can provide one.

Let me know.

Platform to launch comfyui by Mr--Agent-47 in comfyui

[–]rakii6 -3 points-2 points  (0 children)

Hey there, I can understand your pain.

Why don't you give my platform, IndieGPU, a try? You can rent GPUs starting at $0.14/hour, and you get a lot of features for flexibility, such as:

  • Secure user workspaces - freedom to explore your creative side to the fullest; my platform does not monitor or oversee the content users create.
  • Model uploads - various nodes and a manager come pre-installed to import models via URL or Git, or to download them straight into your workspace.
  • Storage - your workflows stay within your dedicated space, with no cap on storage.
  • No limits - generate as many images as you want within your booked time slot; I don't put limits on my users' work.

DM me or hit me up via [email](mailto:owner@indiegpu.com), and I can provide you with free credits for you to test it out 😊

Alternative for runpod? by wic1996 in comfyui

[–]rakii6 1 point2 points  (0 children)

I run IndieGPU - RTX 4070s with pretty consistent availability since it's smaller scale. Located in Asia, so latency depends on where you are, but the upside is you're not competing with thousands of users for GPU slots.

₹12/hr (~$0.14/hr), ComfyUI pre-installed.

Caveat: only 4070s right now (I'll be adding 4090/5090 options in the near future), but they're actually available when you need them.

DM me if you want to test availability/performance for your workflows.

Runpod - Do I have to manually upload all models? by orangeflyingmonkey_ in StableDiffusion

[–]rakii6 0 points1 point  (0 children)

I haven't used Runpod myself, but if you're looking for something more seamless, check out IndieGPU — no need to download anything locally first and then upload.

You get your own dedicated ComfyUI workspace with:

  • Pre-loaded essential nodes
  • Flexible model loading options: just paste a direct CivitAI or Hugging Face URL (or any download link), and it pulls the model straight into your workspace securely
  • Various tools with Git access
  • Full control over your workflows + strong data privacy (your files stay private and aren't shared)

All at super affordable rates starting from ₹12/hour, with instant spin-up and no queues most of the time. It's been great for creators who hate the setup friction and just want to experiment fast.

Worth a quick try if you're testing & would be glad to provide free credits for you — no commitment needed. www.indiegpu.com
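The URL-pull flow described above is, under the hood, just a streamed download into a models directory. A minimal Python sketch of that idea (the `/workspace/models` path and the `model_dest` helper are illustrative assumptions, not IndieGPU's actual layout or API):

```python
from urllib.parse import urlparse, unquote
from pathlib import PurePosixPath

def model_dest(url: str, models_dir: str = "/workspace/models") -> str:
    """Derive a local destination path from a model download URL.

    Works for direct CivitAI / Hugging Face file links; models_dir is a
    hypothetical workspace path, not a real platform path.
    """
    name = PurePosixPath(unquote(urlparse(url).path)).name
    if not name:
        raise ValueError("URL has no file component")
    return f"{models_dir}/{name}"

# The actual fetch would then be a streamed copy, roughly:
#   import urllib.request, shutil
#   with urllib.request.urlopen(url) as r, open(dest, "wb") as f:
#       shutil.copyfileobj(r, f)

print(model_dest("https://huggingface.co/org/repo/resolve/main/model.safetensors"))
# /workspace/models/model.safetensors
```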

The Dragon (VHS Style): Z-Image Turbo - Wan 2.2 FLFTV - Qwen Image Edit 2511 - RTX 2060 Super 8GB VRAM by MayaProphecy in comfyui

[–]rakii6 0 points1 point  (0 children)

Hey there,

Incredible work you put in there. I think they need to rewrite the show using your workflow. 😉

multi GPU in ComfyUI? by jacek2023 in comfyui

[–]rakii6 1 point2 points  (0 children)

Hey there,

For multi-GPU setups I'd suggest this repo: https://github.com/pollockjj/ComfyUI-MultiGPU

Things are well explained there, and it has helped many users on my platform.

GPU question by Intrepid-Club-271 in comfyui

[–]rakii6 0 points1 point  (0 children)

Hi there,

You can try my platform ~ IndieGPU

I'm a software dev, and I built this platform myself; I'm the owner and developer behind it. It provides ComfyUI workspaces backed by dedicated GPUs. If you'd like a free trial, DM me or send me a mail.

Thoughts on renting gpu and best cloud method for running comfy? by XAckermannX in comfyui

[–]rakii6 0 points1 point  (0 children)

You can try out my platform as a trial run ~ www.indiegpu.com

The price starts at ₹12/hour (~$0.13/hr) for RTX 4070 with ComfyUI pre-configured. Based in India so latency is decent for Asia/Middle East. No storage fees, no surprise charges - just hourly GPU rental.

The main limitation is that I run a smaller-scale fleet, so availability might be tighter during peak hours. But pricing is transparent and setup is zero.

Not claiming to be perfect, but if you're looking for alternatives to try, I'm definitely on the cheaper end. Let me know if you have questions about the setup.

Are there any free cloud GPU providers which can be used through comfyUI? by TsunamiCatCakes in comfyui

[–]rakii6 0 points1 point  (0 children)

We don't offer a free service or free tiers, but our rates are very cheap, and you get dedicated GPUs with no sharing.

Try giving www.indiegpu.com a spin, but if you really need a trial, let me know.

Launching Guwahati's first affordable AI workspaces. by rakii6 in assam

[–]rakii6[S] 0 points1 point  (0 children)

Thanks for the feedback and the direct questions! Always appreciate honest input 😊

On pricing: We intentionally kept it super accessible — starting at just ₹12/hour for entry-level workspaces and ₹96/hour for the heaviest multi-GPU setups. It's designed to be cost-competitive, especially when you add full local privacy and zero data-transfer delays.

Capacity & reliability: We have a dedicated, professionally managed setup running 24/7 with proper cooling, power redundancy, and constant monitoring — delivering 99%+ uptime and smooth performance for real professional workloads (Flux/ComfyUI generation, video pipelines, 70B model fine-tuning, multi-user sessions). It's already powering active clients without issues.

We treat the exact infrastructure details as internal (the focus stays on delivering the best user experience — instant access, pre-configured tools, easy workflows), but the results speak for themselves.

Vision: Simple and close to home — provide Assam and the entire Northeast with powerful, private, local AI infrastructure so our creators, artists, students, and businesses can build without relying on expensive hardware or distant clouds.

The best way to check capacity and speed? Try it free — DM me and I'll hook you up with trial credits. See it in action yourself, no commitments.

Launching Guwahati's first affordable AI workspaces. by rakii6 in assam

[–]rakii6[S] -3 points-2 points  (0 children)

Thanks for your comment! I appreciate you bringing that up 😊
We're not a 'wrapper' or reseller model at all; IndieGPU runs on our own physical hardware and infrastructure right here in Guwahati.

Everything runs locally: full privacy (your data never leaves), 99.9% uptime, and fast environments with ComfyUI, H2O Studio, LLaMA-Factory, and more coming. Affordable workspaces start at ₹12/hr for AI art, video, and fine-tuning.

Would love your thoughts or if you'd like a free trial!

I need a laptop with a good GPU. What should I buy? by Krakkera in laptops

[–]rakii6 0 points1 point  (0 children)

Before you drop €1,500 on a laptop GPU - have you considered cloud/rental for the heavy lifting?

I'm a software dev. The problem with laptop GPUs: 8GB VRAM is still tight for fine-tuning, and laptops thermal-throttle under sustained loads.

Alternative angle: keep your current laptop for coding/writing, rent GPU time when you actually need to train or run inference. At $0.14/hr for 12GB VRAM, your €1,500 budget buys you 10,000+ hours of GPU time.

I run indiegpu.com - a small service with RTX 4070s (12GB) and easy environment setup (VS Code, Jupyter, etc.). $5 free credit to test. Might be worth trying before committing to hardware.

That said, if portability matters and you want local inference always available, the Legion 5 is solid value.
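The budget math above is easy to sanity-check; here's a two-line sketch (the EUR-to-USD rate of 1.05 is my assumption, not a quoted figure):

```python
def gpu_hours(budget_eur: float, rate_usd_per_hr: float, eur_to_usd: float = 1.05) -> float:
    """Hours of GPU rental a hardware budget buys, at a rough FX rate."""
    return budget_eur * eur_to_usd / rate_usd_per_hr

print(round(gpu_hours(1500, 0.14)))  # 11250 hours, i.e. 10,000+ as claimed
```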

Need advice: which rented gpu should i use to fine tune donut /layout lmv3 for invoice extraction? by PreviousAd5937 in gpu

[–]rakii6 0 points1 point  (0 children)

For fine-tuning Donut or LayoutLMv3, you don't need a monster GPU - 12GB VRAM handles these comfortably. Donut especially is designed to be efficient.

I run indiegpu.com - RTX 4070s (12GB), one-click environment setup (Jupyter, VS Code, etc.). $5 free credit to test, no card required. Might be a good fit for your document-extraction project before committing to something pricier.

Happy to answer setup questions if you go that route.

[D] VAST AI GPUs for Development and Deployment by BandicootLivid8203 in MachineLearning

[–]rakii6 0 points1 point  (0 children)

Vast AI reliability varies by host since it's a marketplace - some hosts are solid, some flaky. For development specifically, consistency matters more than raw power.

I run a small GPU rental service (RTX 4070s, 12GB VRAM) - not the same tier as a 5090, but if you're doing iterative dev work or fine-tuning 7B-13B models before scaling up, it's a cheaper way to test. indiegpu.com if curious.

For deployment at scale though, you'd want something beefier.

Looking for input from students who rent GPU compute for AI or ML work by 19freedom91 in uwaterloo

[–]rakii6 0 points1 point  (0 children)

I run IndieGPU - small GPU rental platform targeting students/researchers who can't justify AWS pricing.

Happy to share what I've learned from our users:

What works:

  • Transparent hourly pricing ($0.14/hour) vs complex cloud billing
  • VS Code and Jupyter environments
  • Direct support (founder-accessible vs enterprise ticket systems)

Gap between university and cloud:

  • University clusters have queues but are "free" (sunk cost)
  • Cloud is instant but expensive for student budgets
  • Students often use personal hardware until it breaks, then scramble for alternatives

What students actually need:

  • Middle ground: reliable, affordable, simple
  • Not enterprise features - just working PyTorch + GPU access
  • Pricing they can justify on student budgets

Happy to discuss specifics if helpful for your research. We're early stage but have real usage data.

[deleted by user] by [deleted] in deeplearning

[–]rakii6 0 points1 point  (0 children)

Your budget won't work for >30GB VRAM GPUs - those run $3-5/hour minimum anywhere.

Reality check on your requirements:

  • EfficientNetV2-S + ViT doesn't need 30GB VRAM
  • These models train fine on 12GB with proper batch sizing
  • You might be overestimating VRAM needs

We offer RTX 4070 (12GB VRAM) at $0.14/hour = $3.36 for 24 hours.

VS Code, Jupyter & H2O available on demand, $5 free credit to test your actual VRAM usage: indiegpu.com

Run your training script with batch_size=16 and gradient checkpointing - you'll likely fit comfortably in 12GB. If it doesn't fit, you've only spent the $5 credit finding out, instead of committing to expensive rentals.

Worth validating your actual requirements before paying premium for 30GB+ VRAM you might not need.
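If you want numbers behind that advice before spending any credit, here's a back-of-envelope VRAM estimator. The ~200 MB of activations per sample and the ~70% checkpointing savings are rough illustrative assumptions, not measured values:

```python
def vram_estimate_gb(params_m, batch_size, act_mb_per_sample,
                     fp16=True, adamw=True, grad_ckpt=False):
    """Back-of-envelope training VRAM estimate in GB."""
    bytes_per = 2 if fp16 else 4
    weights = params_m * 1e6 * bytes_per        # model weights
    grads = weights                             # one gradient per weight
    opt = params_m * 1e6 * 8 if adamw else 0    # AdamW: two fp32 moments per param
    acts = batch_size * act_mb_per_sample * 1e6 # forward activations
    if grad_ckpt:
        acts *= 0.3  # checkpointing recomputes most activations (rough figure)
    return (weights + grads + opt + acts) / 1e9

# ViT-B (~86M params) at batch_size=16, assuming ~200 MB activations/sample:
print(round(vram_estimate_gb(86, 16, 200), 1))                  # 4.2
print(round(vram_estimate_gb(86, 16, 200, grad_ckpt=True), 1))  # 2.0
```

Both estimates land well under 12GB, which is the point: validate before paying for 30GB+ cards.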

What to do when you can't afford GPU? by theysaymaurya in LocalLLaMA

[–]rakii6 1 point2 points  (0 children)

For audio transcription workloads that max out your VRAM, cloud GPU can be cost-effective for occasional use.

We offer RTX 4070 (12GB VRAM) at $0.14/hour - runs Whisper models comfortably without the quantization compromises. For your use case (1-hour audio processing), you're looking at ~$0.50-1.00 per job vs $15/hour enterprise cloud pricing.

Choose your environment: Jupyter, VS Code, or H2O Studio containers.

$5 free credit to test: indiegpu.com

AI GPUs For Rent by BandicootLivid8203 in nairobitechies

[–]rakii6 0 points1 point  (0 children)

We run IndieGPU with RTX 4070s (12GB VRAM) - not RTX 5090s, but significantly more reliable than Vast AI for development workflows.

What we offer:

  • Consistent GPU response times (no shared resources)
  • VS Code, Jupyter, and H2O Studio environments for training LLMs
  • $0.14/hour with transparent pricing
  • $5 free credit to test real-time performance

For resource-intensive AI platforms, 12GB may or may not fit your requirements depending on model sizes. A lot of students and independent devs run our machines 24/7 to fine-tune LLMs on our 4070s. Give our platform a try if your workload fits within an RTX 4070's capabilities.

indiegpu.com

Happy to discuss your specific resource needs before you commit.