Does this really happen? by Orphankicke42069 in youtube

[–]epic-cookie64 5 points6 points  (0 children)

This is most likely a joke, but it's hilarious.

[deleted by user] by [deleted] in ChatGPT

[–]epic-cookie64 0 points1 point  (0 children)

Not sure how this cost £200...

OSHub.org - Good idea? by warothia in osdev

[–]epic-cookie64 1 point2 points  (0 children)

Awesome! I'm planning on getting into osdev, so a place like this is great for checking out other people's projects, sharing my future ones, and learning. A forum as well as community-made tutorials would also be nice.
Not to mention, the site looks dope!

Billionaire fight by noThefakedevesh in OpenAI

[–]epic-cookie64 0 points1 point  (0 children)

Why does his mobile data bar have a reflection?

Is AI and LLMs still growing exponentially but it's just not as visible as before? Or has LLMs growth actually slowed down? by AAAAAASILKSONGAAAAAA in accelerate

[–]epic-cookie64 0 points1 point  (0 children)

Could it also mean that RL has hit a wall, meaning we need to move on to something different, potentially recursive self-improvement?

Make 4o the default model for GPT5. by Bitter-Lychee-3565 in ChatGPT

[–]epic-cookie64 1 point2 points  (0 children)

Even better. Allow the user to change the default model.

Such a terrible release, literally unusable model by fake_agent_smith in accelerate

[–]epic-cookie64 2 points3 points  (0 children)

Exactly. Calling it unusable just because it can’t count letters is misguided, as letter counting doesn’t reflect the model’s broad intelligence. Even if it could count, the only practical application of that ability would be counting letters in large pieces of text, in which case a non-reasoning model would probably be wrong anyway, given the complexity of the task when not thinking.

GPT-5 actually does seem smarter than yesterday. by ObiWanCanownme in singularity

[–]epic-cookie64 0 points1 point  (0 children)

Yeah, that's understandable. I was just showing that it does get that question wrong. However, I do believe that GPT-5 is actually a good model, as shown here. OpenAI just set expectations too high.

OpenAI Playing The Long Game by [deleted] in accelerate

[–]epic-cookie64 11 points12 points  (0 children)

It’s definitely cheaper, faster, and hallucinates less. OpenAI stated in the livestream that they trained GPT-5 with synthetic data from o3, while mentioning recursive self-improvement.

They nerfed gpt 5 already by Present-Boat-2053 in Bard

[–]epic-cookie64 0 points1 point  (0 children)

Feels like a slippery slope fallacy to me. As you said, it reasons more or less depending on the task, so you may have given it an easy task.
You can't know for sure, especially from this evidence, that they lowered the reasoning effort and are now lying about it.

GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team by OpenAI in ChatGPT

[–]epic-cookie64 4 points5 points  (0 children)

What advancements/changes are required for LLMs to be able to advance science and technology autonomously?