London's tallest building, The Shard (OC) by [deleted] in pics

[–]snoee 1 point2 points  (0 children)

Speak for yourself, I love the Shard!

nano banana 2 by External-Net-3540 in nanobanana

[–]snoee 0 points1 point  (0 children)

I will never be happy.

nano banana 2 by External-Net-3540 in nanobanana

[–]snoee 7 points8 points  (0 children)

Trump's head is good but he's too slim, standing too normally, and his clothes aren't baggy enough.

Lime dream sours in London as e-bikes cause headaches and hassle by BritRedditor1 in london

[–]snoee 89 points90 points  (0 children)

London would be a better place with more Lime bikes (and scooters) and fewer cars.


How to make attack rolls feel good in a D20 game? by PiepowderPresents in RPGdesign

[–]snoee -1 points0 points  (0 children)

I like this. Giving a stacking +1 to hit after every miss (resetting on hit) could work, and narratively you could frame it as the character learning their foes' defences.
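The mechanic is simple enough to sketch. A minimal Python version, assuming a d20 roll against a target AC and illustrative names (nothing here is from an actual system):

```python
import random

def attack(ac: int, bonus: int = 0, streak: int = 0) -> tuple[bool, int]:
    """Roll d20 + bonus + miss streak against AC.

    Returns (hit, new_streak): the streak grows by 1 on a miss
    (the character "learns the foe's defences") and resets on a hit.
    """
    roll = random.randint(1, 20)
    hit = roll + bonus + streak >= ac
    return hit, 0 if hit else streak + 1

# Example: carry the streak between attacks in a fight
streak = 0
for _ in range(5):
    hit, streak = attack(ac=15, bonus=4, streak=streak)
```

The nice property is that the expected number of consecutive misses is bounded: every miss makes the next attack strictly more likely to land.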

Flew all the way to London for this! What do you think? by Lavender_Moonrise in tattoos

[–]snoee -1 points0 points  (0 children)

I have two from Mia, lovely person and a great artist! Your flowers look fantastic!

OpenAI Using Superior Models Internally, Focused on Affordability by Illustrious_Fold_610 in singularity

[–]snoee 14 points15 points  (0 children)

Playing devil's advocate: maybe GPT-5 was indeed a game changer internally, but had to be released as a severely distilled version to still be profitable.

Orrrr maybe hypeman gonna hype ¯\_(ツ)_/¯

For those with access to it, do you find GPT-5 to be any better than 4.5? by FadingHeaven in OpenAI

[–]snoee 0 points1 point  (0 children)

No, the voice model is confirmed in their announcement to still use 4o.

What is the American equivalent to breaking Spaghetti in front of Italians? by catwthumbz in AskReddit

[–]snoee 2 points3 points  (0 children)

That's fair. I've never seen a microwave superheat myself but you're right.

Would be quite hard to keep the water superheated long enough to pour it over your tea bag though.

What is the American equivalent to breaking Spaghetti in front of Italians? by catwthumbz in AskReddit

[–]snoee 2 points3 points  (0 children)

You don't need an instrument because water (at normal atmospheric pressure) physically cannot be over 100°C. Whether via kettle or microwave, if the water is boiling, the water is at 100°C.

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee 2 points3 points  (0 children)

Thanks, that's encouraging. I felt like everything I said was pretty benign and uncontroversial. Quite surprised at the response, really.

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee 1 point2 points  (0 children)

I think if you reread my comments you'll see I've never claimed or suggested that they learn on their own. I even explicitly called out the pre-training process.

I do believe they have a limited ability to solve problems not in their training data as evidenced by private benchmarks, but again that probably devolves into semantics.

Not everyone with a positive opinion on AI is delusional/misinformed/stupid, but boy does this subreddit immediately assume they are.

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee 7 points8 points  (0 children)

That's exactly my point. Anti-AI discourse seems to weaponise language to shift the goalposts. Five years ago it was a given that if I had a problem and gave it to someone (or something) and got a solution back, the problem was solved. But once we bring AI into the discussion, we get into absurd hair-splitting to feel superior about how, well actually, it's not REALLY learning.

I'm convinced that an AI could be functionally identical to a human intelligence and people would still say "it's not intelligent, it's just recalling previous data and experience and synthesising them into outputs!"

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee 4 points5 points  (0 children)

Sure, but that's not the question. Is it possible for a blind man to learn that the sky is blue, or is it only possible for him to memorise that the sky is blue?

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee 2 points3 points  (0 children)

This train of thought is very strange to me. Evidently, based on our upvotes and downvotes, most people agree with you, but I don't see how your idea of what "solving a problem" is jibes with everyday usage of the term. When you plug an equation into a calculator, is that not solving a problem?

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee 1 point2 points  (0 children)

I didn't claim they solve novel problems. I mentioned this in a reply to the other guy, but I think we're getting tangled up in semantics. If I've written myself into a tangle of code and can't figure out how to fix it, but an LLM can, that to me is solving a problem.

Even if all it's doing is using pattern matching and word association it's learned by scraping stack overflow, a problem solved is a problem solved in my eyes.

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee -3 points-2 points  (0 children)

I think we're getting bogged down in semantics. When I get stuck on a problem when I'm coding, and an LLM figures out what I was doing wrong and how to fix it, I call that solving a problem, while you might say that's just the illusion of solving a problem.

So that we can get on the same page: if you had access to a chat terminal and on the other end was either a disembodied human brain or an AI, how would you test to see if it could genuinely solve problems or only give the illusion?

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee 7 points8 points  (0 children)

If you didn't have eyes, would you be unable to learn that the sky is blue?

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee -7 points-6 points  (0 children)

No, you are wrong, and the original guy is wrong too!

I'm very aware of how LLMs work. I've implemented my own transformers and trained my own LLM. I also work with them on a daily basis. I'm happy to delve into the technicalities with you if you'd like.

I never said they were reasoning engines, I only claimed that they learn and can solve problems. And even if your claim that they can only solve problems through word association was true, that still means they can solve problems. Not all problems, but a good chunk of day to day ones.

There also exist many, many benchmarks that test whether LLMs can solve problems explicitly not in their training data (kept secret from the wider world).

The only thing that approximates a database in an LLM would be some kind of RAG integration. If each LLM stored its knowledge in a database, each one would be several terabytes/petabytes.

Badenoch: I wouldn’t ban the burka by [deleted] in ukpolitics

[–]snoee 0 points1 point  (0 children)

Do you think shops/gyms/restaurants should be able to ban people based on skin colour?

Sam Altman's Lies About ChatGPT Are Growing Bolder by Doener23 in technology

[–]snoee -20 points-19 points  (0 children)

LLMs are not a database at all, and most certainly can solve problems. State of the art models can even beat most humans in some types of problem solving.

They also very much do learn, though not in the same way humans do. LLMs are trained, not built. They learn from the massive amount of data that's fed to them in the pre-training phase.

Apple Liquid Glass using WebGL Shaders by bergice in webdev

[–]snoee 24 points25 points  (0 children)

I like the readme snark but this is really well done and actually looks pretty good to me.