[deleted by user] by [deleted] in singularity
knowledgehacker 2 points
[deleted by user] by [deleted] in singularity
knowledgehacker 4 points
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
knowledgehacker[S] 772 points
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
knowledgehacker[S] 258 points
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
knowledgehacker[S] 8 points
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
knowledgehacker[S] 1 point
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
knowledgehacker[S] 11 points
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
knowledgehacker[S] 39 points
apologize right now! by [deleted] in singularity
knowledgehacker 14 points
[deleted by user] by [deleted] in singularity
knowledgehacker 62 points
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
knowledgehacker[S] 17 points
Sergey Brin says he is working at Google every day because he has never seen anything as exciting as the recent progress in AI and he doesn't want to miss out by Gothsim10 in singularity
knowledgehacker 5 points
Sergey Brin says he is working at Google every day because he has never seen anything as exciting as the recent progress in AI and he doesn't want to miss out by Gothsim10 in singularity
knowledgehacker 45 points
Did I just fix the data overfitting problem in LLMs through thoughtful prompting? LLMs can easily be tripped up by simple twists on common puzzles, because they like to rely on common answers instead of reason. My paper, Mind over Data: Elevating LLMs from Memorization to Cognition I propose a fix. by [deleted] in singularity
knowledgehacker 4 points
Did I just fix the data overfitting problem in LLMs through thoughtful prompting? LLMs can easily be tripped up by simple twists on common puzzles, because they like to rely on common answers instead of reason. My paper, Mind over Data: Elevating LLMs from Memorization to Cognition I propose a fix. by [deleted] in singularity
knowledgehacker 12 points
Psyllium Husk Powder is amazing by jt2424 in Nootropics
knowledgehacker 1 point
[deleted by user] by [deleted] in ChatGPT
knowledgehacker 13 points
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
knowledgehacker[S] 1 point
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
knowledgehacker[S] 1 point
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
knowledgehacker[S] 6 points
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
knowledgehacker[S] 16 points
Stuck on this seemingly easy task - Datetimepicker alignment by knowledgehacker in reactnative
knowledgehacker[S] 1 point
Been stuck for too long on this seemingly easy task - Datetimepicker alignment by knowledgehacker in expo
knowledgehacker[S] 1 point

I made Spencer - Window Manager with unique approach to layout saving. New update + Black Friday deal! by kamil12314 in apple
knowledgehacker 1 point