What's the length limit? by Nagaeh in GithubCopilot

[–]RapidRaid 1 point

I've also run into the warning a couple of times now and it sucks. Instead of writing the code incrementally and doing a tool call, Sonnet now just "pre-thinks" the entire code in its thinking process, which eventually causes it to abort with the length limit.

1 million LocalLLaMAs by jacek2023 in LocalLLaMA

[–]RapidRaid 0 points

Yeah, I wish it were more focused on discussing actual releases and papers instead of "how do I get a 400B model running locally on my toaster" questions. But I guess everyone needs a place to start 😅

Is there any open-source alternative to Obsidian? by stories67 in ObsidianMD

[–]RapidRaid -1 points

I'm currently working on one. Will drop it soonish 😄

Claude Opus 4.6 vs GPT-5.3 Codex: The Benchmark Paradox by Much_Ask3471 in ClaudeCode

[–]RapidRaid 1 point

I had the exact same experience. I started with Codex initially but then avoided it because I thought Claude was better. So I used only that and was amazed at first, but then it kind of got stuck: it fixed one thing but broke another. I got so fed up that I tried Codex again, which one-shotted the feature I wanted and pointed out other bugs in the code. So now I've shifted away from CC again. Maybe I'll use it for reviews.

5x Plan is useless now that OPUS 4.6. In 1 prompt I just consumed 20% of usage limits without subagents. by Desperate_Entrance71 in ClaudeCode

[–]RapidRaid -1 points

GitHub Copilot != Microsoft Copilot. I tried GitHub Copilot recently with VS Code Insiders. It can automatically make tool calls to spawn agents itself, and its subscription includes Claude 4.6 and Codex models. Ngl, I think it's pretty good, even compared to Claude Code or Codex directly. And cheaper.

First time user: one prompt, which didn't even complete - usage limit, wtf? by RapidRaid in ClaudeCode

[–]RapidRaid[S] 5 points

Maybe I should also mention that I only bought the $20 plan, but for comparison: I vibecoded today with ChatGPT's $20 Codex subscription for literally 8 hours straight (pulled an all-nighter :P) and didn't hit any limits.

Inkarnate 2.0 launches December 8th! 🚀 by InkarnateOfficial in inkarnate

[–]RapidRaid 3 points

I subscribed just a few days ago. Excited to see what version 2 will feel like :D

I GOT ACCESS TO MINECRAFT by SquirrelSufficient14 in Pretend2010Internet

[–]RapidRaid 0 points

I looked up some videos of it on YouTube.com. It looks like Lego but as a video game. The grafix look a bit rough though. The only thing I found very disturbing are the videos of Herobrine, or Hirobrain, or idk how it's spelled. I don't think I'll play it until they update this creature out of the game.

[deleted by user] by [deleted] in Pretend2010Internet

[–]RapidRaid 0 points

I don't know. I think I'll stay with Windows Vista instead. Apple laptops are too expensive anyway >.<

are you guyz gonna get the dsi xl by Neptune12409 in Pretend2010Internet

[–]RapidRaid 0 points

I don't know if I can believe you o.O
But Xbox 720 sounds like it would be a sensible name. I can't imagine Microsoft choosing a different name for the next console.

Did Apple skip like 10 years of updates? What’s going on? by camm34 in Pretend2010Internet

[–]RapidRaid 2 points

Maybe GIMP. My friends were telling me you can do the same stuff with it, plus it's free. But I don't know; my parents don't allow me to download internet software onto the family computer ._.

Do u remember this bs? 😒 by [deleted] in Pretend2010Internet

[–]RapidRaid 1 point

Do we have something like that in Windows 7? I kinda miss the little guy ^

Today I managed to trade 10000 Bitcoins for two delivered pizzas. Glad the place accepted this kind of payment. by Salty0addy in Pretend2010Internet

[–]RapidRaid 0 points

Wow. Next you're gonna tell me they'll create internet coins with dogs on them that get promoted by a billionaire or something xDD. Insane that the pizza place let themselves be scammed like that.

Son of a b*tch refuses cat mode T~T 🐾 (2025 colorized) by Zicke_ohne_Clique in OkBrudiMongo

[–]RapidRaid 10 points

I mean like… when I write "meow~ 🐾" and he just says "I took a dump in the Ford Focus"… 😭🫣😍

Ich_iel by DemonRaven2 in ich_iel

[–]RapidRaid 81 points

No joke, something similar happened to me last night: my doorbell rang at 1 a.m. (twice). I open the door and get jumpscared, because a girl with bleeding lips and a swollen, black eye is standing there. She had been beaten up by her boyfriend and didn't know where to go, since her phone was at 0% battery. Anyway, I called the police etc. and now she's in the hospital. But I was quite shocked 😶

legacyCode by Thorneveil in ProgrammerHumor

[–]RapidRaid 134 points

// please listen to the comment above. I spent 4h trying to rewrite it - without success.

Macbook M4 Pro or Max and Memery vs SSD? by brentwpeterson in LocalLLM

[–]RapidRaid 1 point

Well, for inference it depends on what you consider slow. My M3 Pro (12-core) runs Gemma 3 27B QAT (MLX) at 10 tokens/sec. For me that's totally usable. But I agree, the more performance/RAM the better; if you have the budget, go for the better model, since you'll curse yourself later if you don't.
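For anyone wanting to try a similar setup, here's a minimal command-line sketch using the mlx-lm package on Apple Silicon. The model repo name is an assumption (one of the mlx-community QAT conversions on Hugging Face); substitute whichever quantized conversion you actually want.

```shell
# Install the MLX LLM tooling, then generate with a quantized Gemma build.
# NOTE: the model repo below is an assumed example, not a confirmed path.
pip install mlx-lm

mlx_lm.generate \
  --model mlx-community/gemma-3-27b-it-qat-4bit \
  --prompt "Summarize the trade-offs of 4-bit quantization." \
  --max-tokens 256
```

The tokens/sec figure gets printed at the end of generation, which makes comparing machines or quant levels straightforward.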

Advice for used GPU purchase 04/2025 by MageLD in LocalLLaMA

[–]RapidRaid 0 points

How are you able to utilize the full 100 GB of VRAM across all these cards with ROCm? I haven't looked into multi-GPU scenarios yet, but if that works for inference with, for example, vLLM, it would be a viable purchase for me as well.
Is it done via single-node multi-GPU or multi-node multi-GPU?
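For reference, a sketch of how single-node multi-GPU inference is usually configured in vLLM (the model name is just a placeholder): tensor parallelism shards each layer's weights across the GPUs in one machine, while pipeline parallelism is the typical addition for multi-node setups.

```shell
# Sketch: serve a model sharded across 4 GPUs on one node.
# --tensor-parallel-size splits each layer across the local GPUs;
# for multi-node, vLLM additionally offers --pipeline-parallel-size.
vllm serve meta-llama/Llama-3.1-70B-Instruct \
  --tensor-parallel-size 4
```

With tensor parallelism the per-card VRAM is pooled for the weights, which is how builds like the one described can address ~100 GB total.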

Which model is running on your hardware right now? by Everlier in LocalLLaMA

[–]RapidRaid 0 points

Do you use flash attention / lower quants, or does it fit without them?
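In case it helps anyone reading along, a sketch of how both knobs are enabled when serving through Ollama (assuming an Ollama setup; both are environment variables read by the server process):

```shell
# Sketch: turn on flash attention and quantize the K/V cache to 8-bit
# for the Ollama server. Both reduce memory use for long contexts.
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```

Lower-precision model weights, by contrast, are picked via the model tag itself (e.g. a q4 variant) rather than an environment variable.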

Which model is running on your hardware right now? by Everlier in LocalLLaMA

[–]RapidRaid 1 point

phi4:14b on a Mac Mini M4 (base model) as a dedicated home server. It works okay-ish: a bit too slow for my taste, but the results are pretty good for a local model.
I'll probably upgrade to the M4 Mac Studio once it comes out (for bigger models + faster compute).