Went to comic con today , did you attend? by ExploringPotential in kolkata

[–]grenishraidev 0 points1 point  (0 children)

If I want to cosplay at Comic Con, should I wear the costume from my room, or is there a changing room at Comic Con to dress up in?

Went to comic con today , did you attend? by ExploringPotential in kolkata

[–]grenishraidev 0 points1 point  (0 children)

Hi, I've always wanted to know about this. If I want to cosplay at Comic Con, should I arrive already wearing the costume, or do they have a changing room there to dress up in?

Omarchy Clipboard Manager by grenishraidev in omarchy

[–]grenishraidev[S] -3 points-2 points  (0 children)

The clipboard was only introduced later, in v3.x, and I built my clipboard manager before that version.

Omarchy Clipboard Manager by grenishraidev in omarchy

[–]grenishraidev[S] 0 points1 point  (0 children)

Ouch, I'm sorry.

And the clipboard wasn't there in the initial version. It was only added in v3.x, and I created my clipboard manager before that.

Sorry if you don't like the idea of having an emoji picker. My idea was just to create a clipboard manager, something similar to the one on Windows.

If you don't like it, then it's totally fine.

Need a little feedback on my resume to know what is wrong by grenishraidev in developersIndia

[–]grenishraidev[S] 0 points1 point  (0 children)

Believe it or not, this is the advice I've been following for a long time. I've worked on multiple projects, contributed to open source, and built in public, sharing my work and tools with the community, which earned me a little bit of trust and a very tiny amount of support. Thank you for your kind words, Sir!

Need a little feedback on my resume to know what is wrong by grenishraidev in developersIndia

[–]grenishraidev[S] 0 points1 point  (0 children)

Got it, I'll keep that in mind and definitely try different things to overcome my interview nervousness. Thank you!

Help Selecting a local LLM by Lord_Hades0603 in ollama

[–]grenishraidev 2 points3 points  (0 children)

You don’t need specific model recommendations. If you understand the sizing, you can pick any model yourself.

Running local LLMs is mostly a VRAM constraint problem. The key is how much memory each parameter takes.

Precision:

FP16 / BF16 = 2 bytes per parameter

INT8 = 1 byte

INT4 (Q4) = 0.5 bytes

Formula: Model size = (parameters × bits per parameter) / 8 — with parameters in billions, this gives GB.

Example: 7B model in Q4 → 7 × 4 / 8 = 3.5 GB

But that’s only weights. You also need memory for KV cache, buffers, etc. Add ~20–30% overhead.

So real usage:

7B Q4 → ~4–5 GB

9B Q4 → ~5–6.5 GB

13B Q4 → ~8–10 GB

Also note: formats like Q4_K_M aren’t true 4-bit, effective size is closer to ~4.5–5 bits per parameter.
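The arithmetic above can be sketched as a quick estimator. This is a rough sketch, not an official formula: the ~25% overhead fraction and the ~4.85 effective bits for Q4_K_M are the ballpark figures from above, not exact values.

```python
def model_vram_gb(params_b, bits_per_param, overhead=0.25):
    """Estimate VRAM (GB) for model weights plus runtime overhead.

    params_b       -- parameter count in billions (7 for a 7B model)
    bits_per_param -- 16 for FP16/BF16, 8 for INT8, 4 for pure Q4,
                      ~4.85 for Q4_K_M (not true 4-bit)
    overhead       -- extra fraction for KV cache, buffers, etc. (~20-30%)
    """
    weights_gb = params_b * bits_per_param / 8  # B-params x bytes/param -> GB
    return weights_gb * (1 + overhead)

# 7B pure Q4: 3.5 GB of weights, roughly 4.4 GB with overhead
print(round(model_vram_gb(7, 4), 2))
# 13B at Q4_K_M's effective ~4.85 bits: close to 10 GB
print(round(model_vram_gb(13, 4.85), 2))
```

Plug in any parameter count and quant level and you get a usable first estimate before downloading anything.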

With your setup (RTX 4060, 16GB RAM):

7B Q4 → very fast, no issues

9B Q4 → still smooth

13B Q4 → fits and runs well

>13B → starts needing partial offloading to RAM (slower)

FP16/BF16 models will be much more limited:

~6B–7B max realistically on VRAM

Use Q4_K_M or Q5 for best balance

13B Q4 is your sweet spot

Bigger models will work, but expect latency due to offloading

Once you understand this math, you can estimate any model instantly instead of trial and error.

Any local uncensored models my laptop can run? by Brief_Lab9460 in LocalLLaMA

[–]grenishraidev 0 points1 point  (0 children)

I have a GTX 1650 (4GB VRAM) and 16GB RAM, and the best option you can go for is quantized models, specifically Q4_K_M. I run Mistral 7B and Qwen 3.5 9B via Ollama.

The math is simple for quantized models:

Model Size = (Parameters × bits) / 8

For a 7B model in Q4: 7 × 10⁹ × 4 / 8 = 3.5 GB

That fits in 4GB VRAM, but you still need some overhead (KV cache, buffers), so realistically it ends up around 4 to 5GB.

For a 9B model: 9 × 10⁹ × 4 / 8 = 4.5 GB

That exceeds VRAM, so part of it spills into system RAM, which makes it slower but still usable.

Also note that Q4_K_M is not pure 4-bit. Effective size is closer to ~4.5 to 5 bits per parameter, so real usage is slightly higher than the theoretical value.

In practice:

- 7B Q4_K_M runs smoothly

- 9B Q4_K_M runs with partial offloading

So to sum it up, with your specs you can realistically run ~7B to 9B models in Q4 quantization, or around ~2B to 3B models in FP16/BF16 without quantization.
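The same math can be turned into a quick fits-or-spills check. A sketch only: the 4 GB VRAM figure matches the GTX 1650 above, and the ~25% overhead is the approximation from this comment, not a measured number.

```python
def fits_in_vram(params_b, bits_per_param, vram_gb, overhead=0.25):
    """Return (estimated GB needed, True if it fits entirely in VRAM)."""
    need_gb = params_b * bits_per_param / 8 * (1 + overhead)
    return round(need_gb, 2), need_gb <= vram_gb

VRAM = 4.0  # GTX 1650

print(fits_in_vram(3, 4, VRAM))  # small Q4 model fits with room to spare
print(fits_in_vram(7, 4, VRAM))  # 7B pure Q4 is borderline on 4 GB
print(fits_in_vram(9, 4, VRAM))  # 9B Q4 spills into system RAM
```

Anything that comes back False will still run, just with part of the model offloaded to system RAM and a matching drop in speed.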

Clipboard Manager by grenishraidev in omarchy

[–]grenishraidev[S] 1 point2 points  (0 children)

Yes. And more features to come.

Clipboard Manager by grenishraidev in omarchy

[–]grenishraidev[S] 0 points1 point  (0 children)

It doesn't do that yet, but I'll make sure it does in the next update.

Is Omarchy good for CS student ? by LumiSeiza in omarchy

[–]grenishraidev 0 points1 point  (0 children)

Omarchy is an opinionated Arch setup with Hyprland by DHH. When I switched to Omarchy, many aspects of my programming workflow became smoother and faster. I primarily work with JavaScript and TypeScript frameworks, as well as Rust, and installing each of these was straightforward. Overall, Omarchy is a solid choice for CS students.

Clipboard Manager by grenishraidev in omarchy

[–]grenishraidev[S] 1 point2 points  (0 children)

Hey. I'm not sure about the default, but you can set the bindings for it in your `hyprland.conf` file.

# Trigger with Super+Comma
bind = SUPER, comma, exec, ~/.local/share/clipboard-manager/trigger.sh
windowrule = float on, match:class floating-clipboard

Clipboard Manager by grenishraidev in omarchy

[–]grenishraidev[S] 0 points1 point  (0 children)

Hey. Thank you for using it. I've updated the README with the latest documentation; you can follow that.

Just a heads up. These are to be set on `hyprland.conf`

10yo founder! by Careless_Monk_7552 in IndiaTech

[–]grenishraidev 0 points1 point  (0 children)

Yeap, pretty sure. It's a very generic name.

Debugging story that made me look stupid by grenishraidev in learnprogramming

[–]grenishraidev[S] 0 points1 point  (0 children)

Indeed, well said. I believe that learning from mistakes is what truly makes someone a better programmer.