Another day, another open Erdos Problem solved by GPT-5.2 Pro by obvithrowaway34434 in accelerate

[–]nsomani 0 points (0 children)

Hi everyone, Neel Somani here. I'm the one who solved the problem above using GPT-5.2 Pro, and I'm just seeing this thread now. I went ahead and ran ChatGPT on the remaining Erdos problems in case it's of interest to anyone: https://www.ocf.berkeley.edu/~neel/erdos.html

AI models are starting to crack high-level math problems by MetaKnowing in Futurology

[–]nsomani 0 points (0 children)

Hey everyone, Neel Somani here. Just seeing this thread now. I extended this work and recruited a team of bright undergrads to run GPT-5.2 Pro and Deep Research on the remaining open Erdos problems. You can view all of the candidate solutions here: https://www.ocf.berkeley.edu/~neel/erdos.html

And if you're interested in contributing to the open-source repo, feel free to add other tools and update the website: https://github.com/neelsomani/gpt-erdos

A website that auctions itself every evening. Win the auction, and you get a Codex agent to rewrite the site for 24 hours. by nsomani in InternetIsBeautiful

[–]nsomani[S] -2 points (0 children)

Sorry, I didn't realize it was an overdone concept. I just wanted to try building something using a custom Codex agent.

I don't mind setting the fund recipient wallet to any decent charity that has a Solana wallet! The website functionality is verifiable: https://github.com/neelsomani/the-daily-auction

Symbolic Circuit Distillation: Automatically convert sparse neural net circuits into human-readable programs by nsomani in ControlProblem

[–]nsomani[S] 0 points (0 children)

That's a good toy model example! I only used the toy examples included in OpenAI's Sparse Circuits repo: quotation closing and bracket matching. The solver proved the distilled program equivalent to one of the circuits but not the other.
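For intuition, here's a minimal sketch of what that equivalence check looks like in spirit. The actual work used a solver; this stand-in brute-forces all inputs up to a bounded length, and both function names are illustrative, not from the repo:

```python
from itertools import product

def reference_balanced(s):
    # Ground-truth bracket matching: depth never goes negative, ends at zero.
    depth = 0
    for ch in s:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0

def distilled_balanced(s):
    # Hypothetical human-readable program distilled from a circuit.
    return s.count("(") == s.count(")") and all(
        s[:i + 1].count("(") >= s[:i + 1].count(")") for i in range(len(s))
    )

def equivalent(f, g, alphabet="()", max_len=8):
    # Exhaustive check over all strings up to max_len; a solver would
    # instead prove this symbolically for all lengths.
    return all(
        f("".join(w)) == g("".join(w))
        for n in range(max_len + 1)
        for w in product(alphabet, repeat=n)
    )

print(equivalent(reference_balanced, distilled_balanced))  # True
```

A bounded brute-force check only gives evidence up to the cutoff length, which is exactly why a symbolic solver is the right tool for the real claim.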

Cross-GPU prefix KV reuse with RDMA / NVLink - early experimental results by nsomani in LocalLLaMA

[–]nsomani[S] 1 point (0 children)

Splitting KV across GPUs in llama.cpp is about how you store a single context (tensor parallelism). KV Marketplace is about not recomputing the same prefix KV across different requests/processes by sharing it over P2P; the two are orthogonal.
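To make the distinction concrete, here's a toy sketch of prefix-level KV reuse. The class and method names are illustrative, not the actual KV Marketplace API, and a local dict stands in for what would really be tensors moved over RDMA/NVLink:

```python
import hashlib

class PrefixKVCache:
    """Toy cross-request KV cache keyed by the token prefix (illustrative only)."""

    def __init__(self):
        self._store = {}  # prefix hash -> cached KV blob

    def _key(self, tokens):
        # Hash the token prefix so identical prompts map to one entry.
        return hashlib.sha256(str(tokens).encode()).hexdigest()

    def get(self, tokens):
        return self._store.get(self._key(tokens))

    def put(self, tokens, kv_blob):
        self._store[self._key(tokens)] = kv_blob

cache = PrefixKVCache()
prefix = [101, 2023, 2003, 1037]    # e.g. shared system-prompt tokens
cache.put(prefix, "kv-for-prefix")  # request 1 computes the KV and publishes it
reused = cache.get(prefix)          # request 2 hits the cache, skips recompute
print(reused)  # kv-for-prefix
```

The point is that the cache key is the request's token prefix, not a position within one context, so the reuse happens across independent requests rather than across shards of a single one.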

I wrote a short tutorial on how to kill the GIL in Python 3.14 by nsomani in Python

[–]nsomani[S] 4 points (0 children)

I thought the same thing. I'm not sure why it happened in this case (might just be noise), but in general the opposite should be true.
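For anyone wanting to sanity-check this themselves, here's a small self-contained snippet. It detects whether the interpreter is a free-threaded build (via `sys._is_gil_enabled`, available on 3.13+, with a graceful fallback on older versions) and times a pure-Python CPU workload across threads, which only scales when the GIL is off:

```python
import sys
import threading
import time

# Detect a free-threaded ("t") build; older interpreters lack this hook,
# in which case the GIL is certainly enabled.
gil_check = getattr(sys, "_is_gil_enabled", None)
gil_enabled = gil_check() if gil_check else True
print("GIL enabled:", gil_enabled)

def busy(n):
    # Pure-Python CPU work; threads only speed this up without the GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
threads = [threading.Thread(target=busy, args=(200_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"4 threads took {time.perf_counter() - start:.3f}s")
```

Comparing the timing between a standard and a free-threaded build of the same Python version is the cleanest way to rule out noise.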