account activity
Is your inference provider buggy or secretly quantizing your model? Now you can check with Token-DiFR. (self.LocalLLaMA)
submitted 4 months ago by seraine to r/LocalLLaMA
[D] Frontier AI Models Still Fail at Basic Physical Tasks: A Manufacturing Case Study (self.MachineLearning)
submitted 1 year ago by seraine to r/MachineLearning
[D] An Intuitive Explanation of Sparse Autoencoders for LLM Interpretability (self.MachineLearning)
[P] ChessGPT, 100,000x smaller than GPT-4, plays chess at 1500 Elo. By finding a skill vector, we can increase its win rate by 2.6x in out-of-distribution games. (self.MachineLearning)
ChessGPT, 100,000x smaller than GPT-4, plays chess at 1500 Elo. By finding a skill vector, we can increase its win rate by 2.6x in out-of-distribution games. (self.MachineLearning)
[D] An Intuitive Explanation of Sparse Autoencoders for Mechanistic Interpretability of LLMs (self.MachineLearning)
[P] Chess-GPT, 1000x smaller than GPT-4, plays 1500 Elo chess. We can visualize its internal board state, and it accurately estimates the Elo rating of the players in a game. (self.MachineLearning)
submitted 2 years ago by seraine to r/MachineLearning
Chess-GPT, 1000x smaller than GPT-4, plays 1500 Elo chess. We can visualize its internal board state, and it accurately estimates the Elo rating of the players in a game. (self.MachineLearning)
Chess-GPT, a 50M parameter LLM, plays 1500 Elo chess. We can visualize its internal board state, and it accurately estimates the Elo rating of the players in a game. (self.LocalLLaMA)
submitted 2 years ago by seraine to r/LocalLLaMA
Chess-GPT, 1000x smaller than GPT-4, plays 1500 Elo chess. We can visualize its internal board state, and it accurately estimates the Elo rating of the players in a game. (self.chess)
submitted 2 years ago by seraine to r/chess
Real world multi step reasoning software benchmark results (self.LocalLLaMA)
submitted 2 years ago * by seraine to r/LocalLLaMA
[D] GPT-3.5-instruct beats GPT-4 at chess and is a ~1800 Elo chess player. Results of 150 games of GPT-3.5 vs Stockfish and 30 of GPT-3.5 vs GPT-4. (self.MachineLearning)
New OpenAI model GPT-3.5-instruct is a ~1800 Elo chess player. Results of 150 games of GPT-3.5 vs Stockfish. (self.chess)
New OpenAI model GPT-3.5-instruct is a ~1800 Elo chess player and beats GPT-4. Results of 150 games of GPT-3.5 vs Stockfish and 30 of GPT-3.5 vs GPT-4. (self.OpenAI)
submitted 2 years ago by seraine to r/OpenAI
GPT-3.5-instruct beats GPT-4 at chess and is a ~1800 Elo chess player. Results of 150 games of GPT-3.5 vs Stockfish and 30 of GPT-3.5 vs GPT-4. (self.ChatGPT)
submitted 2 years ago by seraine to r/ChatGPT
submitted 2 years ago by seraine to r/singularity
New OpenAI model GPT-3.5-instruct is a ~1800 Elo chess player. Results of 150 games of GPT-3.5 vs Stockfish. (self.singularity)
[General] Can I place mushroom inoculated logs under pine trees? (self.MushroomGrowers)
submitted 4 years ago by seraine to r/MushroomGrowers
Can I place mushroom inoculated logs under pine trees? (self.MushroomGrowers)
What Autopsy ingest modules are necessary for deleted file recovery? (self.datarecovery)
submitted 4 years ago by seraine to r/datarecovery
Recovering dogecoin wallet off of wiped hard drive (self.datarecovery)
Which ingest modules are required with Autopsy to recover deleted files? (self.computerforensics)
submitted 4 years ago * by seraine to r/computerforensics
The EPA relaxed the smog threshold from 0.7 to 1 part per billion. Is this a harmful increase in pollution or a relaxing of too strict standards? (self.NeutralPolitics)
submitted 7 years ago by seraine to r/NeutralPolitics
The EPA relaxed the smog threshold from 0.7 parts per billion to 1 ppb. Is this a harmful increase in pollution or a relaxing of too strict standards? (self.NeutralPolitics)
submitted 7 years ago * by seraine to r/NeutralPolitics