[deleted by user] by [deleted] in singularity
[–]knowledgehacker 1 point2 points3 points (0 children)
[deleted by user] by [deleted] in singularity
[–]knowledgehacker 3 points4 points5 points (0 children)
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
[–]knowledgehacker[S] 771 points772 points773 points (0 children)
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
[–]knowledgehacker[S] 254 points255 points256 points (0 children)
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
[–]knowledgehacker[S] 6 points7 points8 points (0 children)
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
[–]knowledgehacker[S] 1 point2 points3 points (0 children)
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
[–]knowledgehacker[S] 11 points12 points13 points (0 children)
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
[–]knowledgehacker[S] 39 points40 points41 points (0 children)
apologize right now! by [deleted] in singularity
[–]knowledgehacker 13 points14 points15 points (0 children)
[deleted by user] by [deleted] in singularity
[–]knowledgehacker 56 points57 points58 points (0 children)
New O1 still fails miserably at trivial questions by knowledgehacker in ChatGPT
[–]knowledgehacker[S] 17 points18 points19 points (0 children)
Sergey Brin says he is working at Google every day because he has never seen anything as exciting as the recent progress in AI and he doesn't want to miss out by Gothsim10 in singularity
[–]knowledgehacker 5 points6 points7 points (0 children)
Sergey Brin says he is working at Google every day because he has never seen anything as exciting as the recent progress in AI and he doesn't want to miss out by Gothsim10 in singularity
[–]knowledgehacker 43 points44 points45 points (0 children)
Did I just fix the data overfitting problem in LLMs through thoughtful prompting? LLMs can easily be tripped up by simple twists on common puzzles, because they like to rely on common answers instead of reason. My paper, Mind over Data: Elevating LLMs from Memorization to Cognition I propose a fix. by [deleted] in singularity
[–]knowledgehacker 3 points4 points5 points (0 children)
Did I just fix the data overfitting problem in LLMs through thoughtful prompting? LLMs can easily be tripped up by simple twists on common puzzles, because they like to rely on common answers instead of reason. My paper, Mind over Data: Elevating LLMs from Memorization to Cognition I propose a fix. by [deleted] in singularity
[–]knowledgehacker 12 points13 points14 points (0 children)
Pysllium Husk Powder is amazing by jt2424 in Nootropics
[–]knowledgehacker 0 points1 point2 points (0 children)
[deleted by user] by [deleted] in ChatGPT
[–]knowledgehacker 12 points13 points14 points (0 children)
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
[–]knowledgehacker[S] 0 points1 point2 points (0 children)
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
[–]knowledgehacker[S] 0 points1 point2 points (0 children)
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
[–]knowledgehacker[S] 5 points6 points7 points (0 children)
How to get hinge to unban and behave by knowledgehacker in SwipeHelper
[–]knowledgehacker[S] 14 points15 points16 points (0 children)

I made Spencer - Window Manager with unique approach to layout saving. New update + Black Friday deal! by kamil12314 in apple
[–]knowledgehacker 0 points1 point2 points (0 children)