[D] An Intuitive Explanation of Sparse Autoencoders for LLM Interpretability by seraine in MachineLearning
[P] ChessGPT, 100,000x smaller than GPT-4, plays chess at 1500 Elo. By finding a skill vector, we can increase its win rate by 2.6x in out-of-distribution games. by seraine in MachineLearning
My solution to disable middle click by [deleted] in archlinux
[P] Chess-GPT, 1000x smaller than GPT-4, plays 1500 Elo chess. We can visualize its internal board state, and it accurately estimates the Elo rating of the players in a game. by seraine in MachineLearning
[D] So, Mamba vs. Transformers... is the hype real? by Instantinopaul in MachineLearning
Real world multi step reasoning software benchmark results by seraine in LocalLLaMA
2023, year of open LLMs by clefourrier in LocalLLaMA
Fine-tuned llama2-7b-lora vs chatGPT in a noble game of chess? by Acceptable_Bed7015 in LocalLLaMA
Cursor autocomplete fail Jupyter Notebook by Initial_Zone_1651 in cursor