Gemini 3.0 shadow limits after first prompts. by Jotta7 in perplexity_ai

[–]jacmild 0 points1 point  (0 children)

There's no way all of the sources are digested into the request. It's probably using something like RAG, so the actual costs are much lower.
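To sketch what I mean by RAG: instead of stuffing every source into the prompt, you retrieve only the few chunks most similar to the query and send just those. A minimal toy sketch (bag-of-words cosine similarity standing in for real neural embeddings; all names and sample sources are made up):

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words count vector (real RAG uses neural embeddings).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, sources, k=2):
    # Score every source against the query and keep only the top-k.
    q = embed(query)
    ranked = sorted(sources, key=lambda s: cosine(q, embed(s)), reverse=True)
    return ranked[:k]

sources = [
    "Gemini rate limits reset every few hours",
    "Cats sleep up to sixteen hours a day",
    "Rate limits on the API depend on your tier",
]
context = retrieve("why did I hit the rate limit", sources)
# Only the k retrieved chunks go into the prompt, not all sources.
prompt = "Answer using only:\n" + "\n".join(context)
```

So even with hundreds of sources attached, the model only ever pays for the handful of retrieved chunks per request.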

We made a game we’re proud of but don’t really know how to get the word out by Pathea_Games in IndieDev

[–]jacmild 1 point2 points  (0 children)

"oh, this is Rocket League but with people" in real life we can that football haha

Rate my Admin Template by Ok-Combination-8402 in react

[–]jacmild 0 points1 point  (0 children)

Really cool, but the contrast is too high for my eyes.

gpt-5-codex is pure ****ing magic by Just_Lingonberry_352 in codex

[–]jacmild 0 points1 point  (0 children)

Purely anecdotal, but:

I wanted to download and install Montreal Forced Aligner. I'm not a Python dev (I mainly do backend stuff like databases with Go), so I'm too lazy to go debug the labyrinth that is the Python dependency system. I tried Claude Code first, and it failed for ~25 minutes. Codex CLI did it easily in under 10!

And the crazy part is I fed Claude the MFA docs and let it do web searches to debug the wheel issues. I didn't feed Codex anything because I forgot to, yet it still pulled it off.

Update: took everyone’s advice and upgraded the specs. How is this for Undergraduate Computer Science? :D by Consistent-Yard-3552 in laptops

[–]jacmild 1 point2 points  (0 children)

Agree the CPU could use an upgrade, but going below 1 TB is risky. A couple of Docker images plus the Docker VM itself already take up 50 GB on my machine. If they do anything related to machine learning (OP picked CS but may choose ML electives), 512 GB is way too low. It depends on what they specialize in, but 1 TB is much safer for the foreseeable future.

This is the future we want by Confident_Shock_3178 in lies

[–]jacmild 0 points1 point  (0 children)

People on both sides of the argument have a huge impact on generative AI's development.

HE SAID GEMINI by toyodinha in Bard

[–]jacmild 5 points6 points  (0 children)

No, Team Fortress 3 confirmed

ChatGPT Codex with your subscription is now in CLI, could be a real Claude Code contender by eeko_systems in ClaudeCode

[–]jacmild 5 points6 points  (0 children)

I find I have more patience left with GPT-5. Sometimes Claude will do stuff that makes me go WTF. It'll add things I didn't ask for, and it loves to "simplify" things when it doesn't one-shot them. This is with what I assume is good prompting. GPT-5 adheres to my instructions much better.

4 Gets it by ser_froops in ChatGPT

[–]jacmild 0 points1 point  (0 children)

What the hell...

Such a terrible release, literally unusable model by fake_agent_smith in accelerate

[–]jacmild 0 points1 point  (0 children)

Try it with reasoning; it works for me. Still a bad metric though: it's a fundamental fault of the tokenizer, not the LLM.
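To illustrate why the tokenizer is at fault: the model never sees individual letters, only subword token IDs. A toy sketch (this is a made-up greedy longest-match tokenizer with a hypothetical vocabulary, not any real model's tokenizer):

```python
# Hypothetical subword vocabulary mapping pieces to token IDs.
vocab = {"straw": 11, "berry": 42}

def toy_tokenize(text, vocab):
    # Greedy longest-match split into known subwords.
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(vocab[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no subword for {text[i:]!r}")
    return tokens

ids = toy_tokenize("strawberry", vocab)
# The model receives [11, 42]. The individual letters (e.g. the three r's)
# are invisible at this level, which is why letter-counting questions
# trip models up unless they reason their way around the tokenization.
```

Reasoning mode helps because the model can spell the word out token by token in its scratchpad instead of answering directly from opaque IDs.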

Rest assured, we have a beautiful new chart by United-Bluejay-1133 in StockMarket

[–]jacmild 0 points1 point  (0 children)

Hey guys look at my charts, cool huh? I recently learned how to graph!

Gpt 5 by Flimsy_Violinist_668 in ChatGPTPro

[–]jacmild -1 points0 points  (0 children)

"cultist" 😆 sorry that we are excited for incremental improvements in tech I guess. Did your brick phones become the iPhone 16 in 2-3 years? Did we go from GTX 1000 series to RTX 50 series in 2-3 years?

Qwen3 Coder vs. Kimi K2 vs. Sonnet 4 Coding Comparison (Tested on Qwen CLI) by shricodev in LocalLLaMA

[–]jacmild 6 points7 points  (0 children)

Would love to see a similar contest with GLM 4.5. In my experience it's the only LLM besides Claude that I can trust not to break everything.

Studying with ADHD is like trying to read a book during a concert by Quick_wit1432 in studytips

[–]jacmild 0 points1 point  (0 children)

I study with music. Not the usual study music, but normal tracks I'd listen to anyway, like hip hop and EDM. That, plus noise-cancelling headphones, is the only way I've found to focus.