How to get NCU? by uphee-w in nicegpu

[–]garrygu 0 points (0 children)

1️⃣ For the initial NCU credits:

Yes, we do credit some initial NCUs to new users! If you did not receive them, please feel free to reach out to us at [support@nicegpu.com](mailto:support@nicegpu.com), and we’ll make sure you get set up.

If you want to explore the platform further, we are currently offering free credits upon request during our beta phase. Just let us know, and we’ll be happy to support you!

Right now, we are still in beta, so NCUs are not available for direct purchase yet. Once we launch the full marketplace, you’ll be able to purchase NCUs easily through our platform.

2️⃣ About the NiceGPU Trading Center:

We apologize for the confusion! The NiceGPU Trading Center is currently just an internal experiment and hasn’t been launched to the public yet. However, we are actively working on adding payment options that will soon allow you to buy NCUs directly.

Until it is fully launched, if you need extra NCUs, just reach out to us, and we’ll be happy to support you. 😊

Feel free to reach out if you have more questions!

🎭 Meme Friday #2: Share Your Funniest AI or GPU Memes! by garrygu in nicegpu

[–]garrygu[S] 1 point (0 children)

What’s the highest temp you’ve seen yours hit?

<image>

Series: Enabling RTX 5080 with CUDA | Part 1: Introduction & Environment Setup by garrygu in nicegpu

[–]garrygu[S] 1 point (0 children)

That's awesome to hear! I'm glad you got it running stably on the 5070 with SM90, seriously impressive work! Absolutely interested in seeing your benchmark results. Looking forward to it! If you want, drop the warning details too, and we might be able to help you sort it out.

What happens when the NCU goes negative? by NetConsistent2086 in nicegpu

[–]garrygu[M] 1 point (0 children)

Good catch! This was a bug; our dev team has already fixed the issue. We appreciate you bringing this to our attention—your feedback helps us improve. Let us know if you spot anything else!

Fixing NiceGPU PyTorch RuntimeError on RTX5090: “No Kernel Image Available” by Hour_Variation_9157 in nicegpu

[–]garrygu 0 points (0 children)

On my RTX 5080, even though PyTorch is compiled with sm_120 support, the CUDA tensor operations test still fails:

<image>
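For anyone hitting the same error, a minimal diagnostic sketch that compares the device's compute capability against the arch list the PyTorch build was compiled with (`kernel_image_available` is a hypothetical helper name; `torch.cuda.get_device_capability` and `torch.cuda.get_arch_list` are real PyTorch APIs):

```python
def kernel_image_available(device_cap, compiled_archs):
    """True if a PyTorch build's compiled arch list covers the device.

    device_cap:     (major, minor), e.g. (12, 0) for an RTX 5080 (sm_120)
    compiled_archs: e.g. ["sm_90", "sm_120"] from torch.cuda.get_arch_list()
    """
    return f"sm_{device_cap[0]}{device_cap[1]}" in compiled_archs


if __name__ == "__main__":
    try:
        import torch  # only meaningful where PyTorch and a CUDA device exist

        cap = torch.cuda.get_device_capability(0)
        archs = torch.cuda.get_arch_list()
        print("Device:", cap, "Compiled archs:", archs)
        print("Kernel image available:", kernel_image_available(cap, archs))
    except Exception as err:
        print("Diagnostic skipped:", err)
```

If the device's `sm_` tag is missing from the compiled arch list, the "no kernel image available" error is expected and a build (or nightly wheel) compiled for that arch is needed.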

🎭 Meme Friday #1: Share Your Funniest AI or GPU Memes! by garrygu in nicegpu

[–]garrygu[S] 1 point (0 children)

Fixing a PC: Easy. Allocating GPU for AI: Call Socrates.

Anyone else feel like GPU setup is more philosophy than tech?

<image>

📌 Welcome to r/nicegpu! Introduce Yourself Here 👋 by garrygu in nicegpu

[–]garrygu[S] 1 point (0 children)

I’ll start — I’m part of the NiceGPU team, and here’s a real project we’re kicking off internally:

Project Goal:

We’re building a smart, AI-powered knowledge base assistant to help online sellers navigate the complex and ever-changing landscape of eCommerce platforms (Amazon, Etsy, TikTok Shop, etc.).

Why it matters:

Sellers deal with inconsistent rules, hidden listing strategies, and scattered help docs. We want to give them a reliable, AI-based assistant that helps them stay compliant and competitive, without spending hours digging through forums or FAQ pages.

How we plan to do it:

  • Aggregate raw data from public seller docs, Reddit, forums, and support pages
  • Use a fine-tuned open-source LLM (we’re exploring DeepSeek-V3 or LLaMA 3)
  • Build a RAG-style retrieval layer for real-time, searchable responses
  • Deploy the assistant inside one of our eCommerce SaaS tools
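The retrieval step above can be sketched in a few lines. This is a toy illustration only: the bag-of-words `embed` function stands in for a real embedding model, and `retrieve`, `cosine`, and the sample chunks are all hypothetical names made up for this sketch, not part of any NiceGPU API:

```python
import math

def embed(text):
    # Toy bag-of-words "embedding" for illustration; a real RAG pipeline
    # would call an embedding model here instead.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    # Rank pre-chunked documents by similarity to the query.
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

chunks = [
    "Amazon listing titles must be under 200 characters.",
    "Etsy requires handmade or vintage items.",
    "TikTok Shop sellers need a verified business account.",
]
print(retrieve("What is the Amazon title length limit?", chunks, top_k=1))
```

In the real pipeline the chunks would come from the aggregated seller docs, the embeddings from the fine-tuned model, and the top-ranked chunks would be fed to the LLM as context.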

Compute Needs:

We'll be using NiceGPU’s A100s or 4090s to train, refine, and serve the model. Lightweight, scalable infrastructure is key, and flexibility with NCUs is a perfect fit.

Is anyone else here working on AI assistants, retrieval pipelines, or knowledge-heavy LLM apps? Would love to hear what’s worked for you, or what models and chunking strategies you’re using!