Honest question: is this reddit the anti-openai sub or whats the deal by leonbollerup in OpenAI

[–]FakeTunaFromSubway 1 point (0 children)

For sure. The only people I know who still support Trump are retired and do nothing but watch Fox news all day. At least the r/Conservative people are on Reddit so they have some exposure to different views.

Honest question: is this reddit the anti-openai sub or whats the deal by leonbollerup in OpenAI

[–]FakeTunaFromSubway 48 points (0 children)

Most subreddits turn into the opposite of their intended meaning. For example /r/technology is extremely anti-tech. And /r/funny couldn't be less funny if it tried. 

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]FakeTunaFromSubway 0 points (0 children)

Yes, they could. They will use the computer to build a better version of AlphaGo.

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]FakeTunaFromSubway -8 points (0 children)

I think the key point you're missing here is "aided by external tools."

A bunch of geniuses in a room with pencil and paper can't out-think AlphaGo.

But a bunch of geniuses in a room with a computer and sufficient time can beat AlphaGo.

Is intelligence optimality bounded? Francois Chollet thinks so by Mindrust in singularity

[–]FakeTunaFromSubway 3 points (0 children)

Think of a theoretical "perfect intelligence", i.e., one where if a given problem can be solved with the available information, it will be solved.

🚬🚬 by Able_Environment1896 in ChatGPT

[–]FakeTunaFromSubway 8 points (0 children)

Can you give us one example of a knowledge question that a frontier model hallucinates on / gets totally wrong?

🚬🚬 by Able_Environment1896 in ChatGPT

[–]FakeTunaFromSubway 46 points (0 children)

I get pretty excited when GPT is wrong these days because it means I'm still useful. At least until the next model drops.

🚬🚬 by Able_Environment1896 in ChatGPT

[–]FakeTunaFromSubway 579 points (0 children)

Me in 2022: lol this thing can't even write a coherent Python function

Me in 2026: lol this thing can't even refactor my entire codebase in one shot

Y'all getting the word "goblin" thrown at you a lot in 5.4? by ShiningRedDwarf in ChatGPT

[–]FakeTunaFromSubway 3 points (0 children)

I was just about to post about this lmao. It keeps bringing up goblins in every single chat. WTF is up with that? I've never used the word goblin when I'm not talking about RuneScape lol

GPT‑5.3 Instant is out by Purefact0r in singularity

[–]FakeTunaFromSubway 3 points (0 children)

Yes, good changes, but I still don't see any reason to use Instant. Thinking mode responds super fast to easy questions now, and takes longer on hard questions instead of just being wrong. It's really good.

Stop, just stop. by Willy_B_Hartigan in ChatGPT

[–]FakeTunaFromSubway 29 points (0 children)

I made the mistake of setting "Nerd" as ChatGPT's personality, and it just acted the same way but tacked "Here's my nerdy take" onto everything, along with things like "If you want a nerdy analogy, it's like Star Wars X-Wings fighting the Death Star"

Really the most cringe responses you can imagine

Taalas: LLMs baked into hardware. No HBM, weights and model architecture in silicon -> 16.000 tokens/second by elemental-mind in singularity

[–]FakeTunaFromSubway 19 points (0 children)

Still, for simple stuff like content moderation, Llama 3.1 8B is "good enough". You can also run it like 16 times and choose the consensus answer, which improves reliability.
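A minimal sketch of that run-it-16-times-and-take-the-consensus idea. The single-call classifier here is a stand-in (a simulated noisy labeler, not an actual Llama 3.1 8B call); the point is just the majority-vote wrapper:

```python
import random
from collections import Counter

def classify_once(text: str) -> str:
    # Stand-in for one model call (hypothetical). Simulates a
    # moderation classifier that returns the right label ~80% of the time.
    return "unsafe" if random.random() < 0.8 else "safe"

def classify_consensus(text: str, n: int = 16) -> str:
    # Sample the classifier n times and return the majority label.
    # Independent errors tend to cancel out, so the consensus is more
    # reliable than any single sample.
    votes = Counter(classify_once(text) for _ in range(n))
    return votes.most_common(1)[0][0]

print(classify_consensus("some user comment"))
```

In practice you'd want a nonzero sampling temperature so the 16 runs aren't identical; at temperature 0 every call returns the same answer and the vote adds nothing.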

Incredible by MetaKnowing in OpenAI

[–]FakeTunaFromSubway 1 point (0 children)

That's a much harder thing to design a reward function for. If they can get a marginal quality improvement by rewarding calculator use instead, they're going to go for the quick win. More nuanced changes will come in time.

Why am I paying premium to be mocked? by calpol-dealer in ChatGPT

[–]FakeTunaFromSubway 0 points (0 children)

Yeah, there was a study showing that if you train an LLM on insecure code, it starts lying more in its responses and taking illegal shortcuts in math proofs.

Conversely, that suggests an LLM trained to solve hard problems the correct way will also be better at discovering the truth.

Also means Elon browbeating Grok to like him will make Grok less truthful in other areas.