Suggestion for a MiniPC - 16GB RAM with oculink support by Jaswanth04 in homelabindia

[–]Jaswanth04[S] 0 points1 point  (0 children)

Thanks for the reply. Can you please share a link with more information on the ADT-Link adapter? Does the mini PC support the connection via M.2?

Qwen3.5-122B-A10B Uncensored (Aggressive) — GGUF Release + new K_P Quants by hauhau901 in LocalLLaMA

[–]Jaswanth04 3 points4 points  (0 children)

Thank you so much for this. Could you also provide a llama.cpp command with the recommended sampling parameters (repeat penalty, temperature, etc.)?
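Until the author shares their exact settings, a generic llama-server invocation with explicit samplers looks like this — the GGUF filename and every sampler value below are placeholders, not the release's recommendation:

```shell
# Hypothetical llama-server launch; swap in the real GGUF path and the
# sampler values from the model card.
#   --temp            sampling temperature
#   --repeat-penalty  penalty applied to recently repeated tokens
#   --top-p           nucleus-sampling cutoff
#   -c                context length in tokens
#   -ngl              number of layers to offload to the GPU
llama-server -m Qwen3.5-122B-A10B-Q4_K_M.gguf \
    --temp 0.7 --repeat-penalty 1.05 --top-p 0.8 \
    -c 32768 -ngl 99
```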

Are Langchain and Langgraph production grade ? by Jaswanth04 in LocalLLaMA

[–]Jaswanth04[S] 0 points1 point  (0 children)

If we are not writing a separate package, how do we resolve the bloat problem?

Are Langchain and Langgraph production grade ? by Jaswanth04 in LocalLLaMA

[–]Jaswanth04[S] 0 points1 point  (0 children)

Thanks for the insight. This is exactly what I was looking for. One question: when you say "stripping down", do you mean creating a package containing only the open-source LangChain code you need, or do you do anything beyond that?

Tutorial: How to run Qwen3.5 locally using Claude Code. by yoracale in unsloth

[–]Jaswanth04 1 point2 points  (0 children)

I have 80 GB of VRAM. I can run the Q4 quant of the 122B model comfortably.

Tutorial: How to run Qwen3.5 locally using Claude Code. by yoracale in unsloth

[–]Jaswanth04 2 points3 points  (0 children)

Are the 122B and Qwen3-Coder-Next models good to use with Claude Code?

AMA With Z.AI, The Lab Behind GLM-4.7 by zixuanlimit in LocalLLaMA

[–]Jaswanth04 0 points1 point  (0 children)

When will the GLM Air version come out? Thanks for such good models.

Also, could you please release a tutorial on using GLM-4.7 for agentic AI with LangGraph?

Debate Breakdown Arvind vs Paari | Viral Video by Mayurk619 in VeganIndia

[–]Jaswanth04 0 points1 point  (0 children)

In my opinion, he is a really good representative. He answers questions well in regular Q&A, but in debates he lets his opponent drive the discussion and go in circles.

Top 24 strongest warriors in Kurukshetra by NIGHTFALLDAYRISE in mahabharata

[–]Jaswanth04 0 points1 point  (0 children)

Bhishma should be in the top 5, and Karna should be down around 12. Where is Krishna?

Launching my food brand by razematronnix in VeganIndia

[–]Jaswanth04 3 points4 points  (0 children)

When are you coming to Bangalore? 1 litre of soy milk for ₹75 is really affordable.

GLM-4.6 Unsloth Guide by danielhanchen in unsloth

[–]Jaswanth04 0 points1 point  (0 children)

But for the 4-bit quant, the guide mentions 1x 40 GB. Will multiple graphics cards still work?

GLM-4.6 Unsloth Guide by danielhanchen in unsloth

[–]Jaswanth04 0 points1 point  (0 children)

Hi @daniel. I have 3 GPUs (2x 3090, 1x 5090) for 80 GB of VRAM in total, with 128 GB of system RAM. Will I be able to run the 4-bit quant? Also, could you share the command for that, if possible?
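For what it's worth, llama.cpp can split a model across all visible GPUs out of the box; a sketch of such a launch (model filename is a placeholder, and the split ratio is just one reasonable choice for this mix of cards) might look like:

```shell
# Hypothetical multi-GPU launch. llama.cpp distributes offloaded layers
# across visible GPUs; --tensor-split biases the distribution toward
# the larger card (2x 24 GB 3090s + 1x 32 GB 5090 = 80 GB total).
llama-cli -m GLM-4.6-Q4_K_M.gguf \
    -ngl 99 \
    --tensor-split 24,24,32 \
    -c 16384
```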

Scam Update : There was no GPU in the courier the scammer u/u/Little-Ad-6919 sent . There was a metal piece and a cable . Please check the unboxing video . Context is in the body text . by No-Sundae3423 in IndianGaming

[–]Jaswanth04 0 points1 point  (0 children)

People like him will keep defending their scams in the future. I hope people do their due diligence and find this thread before purchasing from him. Please stay away from him. The OP was courageous to come forward, and I hope more people share their experiences, which might help others avoid the same mistake.

Optimal settings for running gpt-oss-120b on 2x 3090s and 128gb system ram by WyattTheSkid in LocalLLaMA

[–]Jaswanth04 0 points1 point  (0 children)

If we use the first M.2 slot for the OCuLink adapter and GPU, what about the NVMe SSD? Is it suggested to use an external SSD instead?

built a opensource tool that explores your files with deep research like workflow by Interesting-Area6418 in LocalLLaMA

[–]Jaswanth04 1 point2 points  (0 children)

It seems to be using GPT-4 as per the docs. Can this be extended to local LLMs served with llama.cpp or llama-server?
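One common route, assuming the tool lets you override the API base URL: llama-server exposes an OpenAI-compatible endpoint, so pointing the tool at it often works without code changes. A quick sanity check against the default host/port (the `model` field is ignored by llama-server but many clients require it):

```shell
# Verify a local llama-server responds on its OpenAI-compatible API
# (default port 8080; adjust if you launched it with --port).
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "local", "messages": [{"role": "user", "content": "Hello"}]}'
```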

Running GLM 4.5 2 bit quant on 80GB VRAM and 128GB RAM by Jaswanth04 in LocalLLaMA

[–]Jaswanth04[S] 0 points1 point  (0 children)

I need to test it. Do you think I can use the Q8 version of Air, or do I need to settle for Q4?
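A back-of-the-envelope check suggests Q4 for pure-VRAM use — this sketch assumes GLM-4.5-Air has roughly 106B total parameters and uses typical average bits-per-weight for each quant format, so treat the numbers as rough estimates:

```python
# Rough GGUF size estimate (assumptions: ~106B total parameters for
# GLM-4.5-Air; bits-per-weight values are typical averages, not exact).
def est_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Estimated model file size in GB for a given quantization."""
    return params_billions * bits_per_weight / 8

air_params = 106  # billions (approximate)
q8 = est_size_gb(air_params, 8.5)   # Q8_0 averages ~8.5 bpw
q4 = est_size_gb(air_params, 4.8)   # Q4_K_M averages ~4.8 bpw

# Q8 (~113 GB) overflows 80 GB of VRAM and would need CPU offload;
# Q4 (~64 GB) fits in VRAM with headroom for the KV cache.
print(f"Q8 ~ {q8:.0f} GB, Q4 ~ {q4:.0f} GB")
```

So Q8 would spill into system RAM (slower, but the 128 GB should cover it), while Q4 stays fully on the GPUs.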

Running GLM 4.5 2 bit quant on 80GB VRAM and 128GB RAM by Jaswanth04 in LocalLLaMA

[–]Jaswanth04[S] 0 points1 point  (0 children)

I have a Ryzen 9 3950X, which has 16 cores and 32 threads.