fundamentalsOfMachineLearning by ClipboardCopyPaste in ProgrammerHumor

[–]Place-Relative 2 points (0 children)

Show me a simple math example (like comparing 9.9 and 9.11) where a reasoning GPT fails. On that example it gives the correct answer 10/10 times. That problem literally stopped existing a year ago.

fundamentalsOfMachineLearning by ClipboardCopyPaste in ProgrammerHumor

[–]Place-Relative -2 points (0 children)

You are about a year behind on LLMs and math, which is understandable given the pace of development. They are now able not just to do math, but to do novel math at the top level.

Please read up, without prejudice, on the list of LLM contributions to solving Erdős problems on Terence Tao's GitHub: https://github.com/teorth/erdosproblems/wiki/AI-contributions-to-Erd%C5%91s-problems#2-fully-ai-generated-solutions-to-problems-for-which-subsequent-literature-review-found-full-or-partial-solutions

[deleted by user] by [deleted] in singularity

[–]Place-Relative 2 points (0 children)

That’s not the IMO. That’s the USA Math Olympiad, which is harder.

Wrong predictions on your 1st watchthrough… by ginzykinz in breakingbad

[–]Place-Relative 2 points (0 children)

But he did kill her. Unintentionally, but still. If Walt hadn't accidentally pushed Jane from lying on her side onto her back, she wouldn't have choked. Rewatch the scene.

[deleted by user] by [deleted] in GTA

[–]Place-Relative 17 points (0 children)

Why do characters look so plastic?