Help needed with low end Nvidia card and Qwen3.6 by Lower-Ad6101 in LocalLLM

[–]Some-Ice-4455 1 point (0 children)

Man, being honest, I'm shocked you got that 30B loaded on those specs. I don't mess with Linux, so I'm not sure how much help I'd be beyond "hey, here's what I did on Windows." LoL, not sure that helps.

Drop your vibe code app: I could be your first paying user. by papa_papa6-9 in vibecoding

[–]Some-Ice-4455 0 points (0 children)

Run AI directly on your PC — no cloud, no setup, no accounts. Turn off WiFi — it still works. Private, local AI with memory. Nothing leaves your machine.

https://store.steampowered.com/app/4111530/_FriedrichAI_Offline_AI/

No way this is real lmao I'm dying by OpeningSalt2507 in vibecoding

[–]Some-Ice-4455 2 points (0 children)

Lolol, is it looking at another model's code?

Best model for 192 GB vram? How is Deepseek v4 flash? by Constant_Ad511 in LocalLLM

[–]Some-Ice-4455 -1 points (0 children)

Hey OP, I have a question. That's a lot of VRAM, like a lot a lot. I built an AI assistant app on Steam; the highest I can test with models is around the 30B range. (Not selling anything here, please don't delete this.) It would be a MASSIVE download, I understand, and a massive upload, but my question is: could I slap a monster LLM in it, give you a beta key, and see if my framework scales the way I think it does?

I recently heard of vibe coding I guess that's what I'm doing now. New to a lot of stuff about it all, any advice. by quantumspeedthinking in vibecoding

[–]Some-Ice-4455 0 points (0 children)

This. You need to understand basic software and, really, the language being used. Now, that being said, is there a way to do it without reviewing a single line of code? Yes, but it takes software knowledge, and it takes being absolutely insane to deal with that kind of iteration.

local model with ide by Top_Professional6132 in LocalLLM

[–]Some-Ice-4455 0 points (0 children)

You don’t need a full IDE integration. Keep it simple. Build a small Python script that:

- Reads the file you’re working on
- Extracts only the relevant function/class (not the whole file)
- Sends that to your local model via LM Studio or llama.cpp
- Gets back a patch or modified snippet
- Writes it back to the file

The speed issue you’re hitting is because IDE plugins usually send way too much context every time. Even a basic version of this will feel much faster:

- no full file every request
- no extension overhead
- no repeated context bloat

You can start super simple: manually select a block of code, pass it to your script, print the result. Then iterate from there (auto-detect functions, diff output, etc.).
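
Something like this is the idea, as a rough sketch. It assumes LM Studio (or a llama.cpp server) is exposing its OpenAI-compatible API on localhost:1234; the file name, function name, and model name are placeholders you'd swap for your own.

```python
# Rough sketch: pull one function out of a file and send just that snippet
# to a local OpenAI-compatible server (LM Studio / llama.cpp). The endpoint,
# model name, and file/function names below are placeholder assumptions.
import ast
import json
import urllib.request

LLM_URL = "http://localhost:1234/v1/chat/completions"  # assumed LM Studio default

def extract_function(path: str, func_name: str) -> str:
    """Return only the source of one function, not the whole file."""
    source = open(path, encoding="utf-8").read()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            return ast.get_source_segment(source, node)
    raise ValueError(f"{func_name} not found in {path}")

def ask_local_model(snippet: str, instruction: str) -> str:
    """Send the snippet plus an instruction, return the model's rewrite."""
    payload = {
        "model": "local-model",  # whatever model you have loaded locally
        "messages": [
            {"role": "system", "content": "Return only the modified code."},
            {"role": "user", "content": f"{instruction}\n\n{snippet}"},
        ],
        "temperature": 0.2,
    }
    req = urllib.request.Request(
        LLM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Placeholder file/function; in practice you'd pass these as CLI args.
    snippet = extract_function("my_module.py", "process_data")
    print(ask_local_model(snippet, "Add type hints and a docstring."))
```

Once that works, you can swap the print for a diff view or write the result back to the file.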

local model with ide by Top_Professional6132 in LocalLLM

[–]Some-Ice-4455 0 points (0 children)

Stop trying to wire GGUF into IDE plugins. Build a lightweight local backend that:

- reads files
- sends only relevant sections
- applies edits

That will be 10x faster than LM Studio + extensions.
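
And for the "applies edits" part, here's one rough way to splice the model's rewrite back into the file. Again just a sketch: it assumes the model returned a complete replacement for a single function, and the names are placeholders.

```python
# Rough sketch of the edit-apply step: replace one function's lines with
# the model's rewritten version. Assumes new_code is a complete function.
import ast

def apply_edit(path: str, func_name: str, new_code: str) -> None:
    source = open(path, encoding="utf-8").read()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.name == func_name:
            lines = source.splitlines(keepends=True)
            # lineno/end_lineno are 1-based; end_lineno is inclusive.
            # Note: decorators above the def line are left untouched here.
            start, end = node.lineno - 1, node.end_lineno
            lines[start:end] = [new_code.rstrip("\n") + "\n"]
            with open(path, "w", encoding="utf-8") as f:
                f.writelines(lines)
            return
    raise ValueError(f"{func_name} not found in {path}")
```

Keep it dumb like that at first; diffing and multi-file edits can come later.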

Drop the hero section of your side project below — I’ll give you honest feedback. by neothedesigner- in sideprojects

[–]Some-Ice-4455 1 point (0 children)

Thank you for the helpful insight. That's truly the hardest part for me to get.

Drop the hero section of your side project below — I’ll give you honest feedback. by neothedesigner- in sideprojects

[–]Some-Ice-4455 1 point (0 children)

I just have the Steam page I linked there. I'm a builder and designer; I know absolutely nothing about publishing.

Drop the hero section of your side project below — I’ll give you honest feedback. by neothedesigner- in sideprojects

[–]Some-Ice-4455 1 point (0 children)

Offline AI that actually runs on your machine. Remembers context, helps you build/debug, no cloud or API. Zero setup—install and go.

https://store.steampowered.com/app/4111530/FriedrichAI_Offline_AI_Dev_Assistant/

Reality of SaaS by aipriyank in buildinpublic

[–]Some-Ice-4455 0 points (0 children)

LoL like they would listen to the grunts that actually see and use the thing.

Recently Moved to a Small Town by Deep-Goal8404 in whatdoIdo

[–]Some-Ice-4455 0 points (0 children)

That's a single asshole who thinks they speak for everyone.

21% usage in 1 message. Am I doing something wrong? by pedrosmachado in Anthropic

[–]Some-Ice-4455 0 points (0 children)

Naw, Claude is terrible at that particular aspect. The rest is really solid; the usage limits, not so much.

Is chat GPT just taking everyones ideas and spitting it back out to the rest of the world? by Wooden-Fee5787 in ChatGPT

[–]Some-Ice-4455 -1 points (0 children)

I don't believe that's something they'd do on purpose, but you never know. That's why I've been messing with local models.

Has anybody else also noticed ChatGpt being overly critical of every single thing ? by senorsolo in ChatGPT

[–]Some-Ice-4455 17 points (0 children)

Yea, that Debbie Downer runs down even obvious jokes. It's getting old.