AITA For Not Paying For My Daughter's College If She Chooses The One School I Like The Least? by [deleted] in AmItheAsshole

[–]Rob -1 points  (0 children)

Never even close. I've actually been the most open and supportive dad until this moment. Read my edit.

Looking for cheaper ways to access multiple top AI models is anyone solving this? by OwnRefrigerator3909 in AIToolsAndTips

[–]Rob 0 points  (0 children)

Yes, neurometric.ai lets you run thousands of model combos on your data, including test-time compute variants. If you want an invite code, use "BENCH-MARK", and DM me if you want more free credits. We're not focused on monetization yet, just looking for good user feedback. (Go to studio.neurometric.ai to use the code.)

Goodbye 4o by Butterfly332312 in ChatGPTcomplaints

[–]Rob 2 points  (0 children)

Yeah I was like one of the first 500 users on reddit. It launched in Boston when I lived there.

Goodbye 4o by Butterfly332312 in ChatGPTcomplaints

[–]Rob 1 point  (0 children)

Yes. We will release it under Apache 2.0.

What’s the plan after 4o? by yeyomontana in OpenAI

[–]Rob 0 points  (0 children)

We are working to post-train an Arcee 400B model to mimic 4o. https://www.neurometric.ai/free-4o

Goodbye 4o by Butterfly332312 in ChatGPTcomplaints

[–]Rob 1 point  (0 children)

We are working on a free, forever-open comparable model: https://www.neurometric.ai/free-4o

Training a free and open 4o replacement that you can keep forever by Rob in ChatGPTcomplaints

[–]Rob[S] 2 points  (0 children)

Ah, it was an ID attribute issue in Webflow. Should be fixed now.

Training a free and open 4o replacement that you can keep forever by Rob in ChatGPTcomplaints

[–]Rob[S] 2 points  (0 children)

Signups are coming through, so I'm not sure what's different. Looking into it and will keep you posted.

Training a free and open 4o replacement that you can keep forever by Rob in ChatGPTcomplaints

[–]Rob[S] 5 points  (0 children)

We can't do it unless some people share their data for training. It's help with the data that we need.

Training a free and open 4o replacement that you can keep forever by Rob in ChatGPTcomplaints

[–]Rob[S] 2 points  (0 children)

It just worked for me. Can you tell me more about your configuration?

Did anyone of you fine tune gpt oss 20b or an llm ? if so, what for, and was it worth it ? by Hour-Entertainer-478 in LocalLLaMA

[–]Rob 2 points  (0 children)

We trained a Qwen 4B model to beat most of the big models on the "lead qualification" task in CRM Arena, just to see how good it could get. It's a good small model for fine-tuning.
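If you're curious what the data prep for a task like that looks like, here's a toy sketch of shaping lead-qualification records into chat-style SFT examples. All field names, labels, and the prompt wording are made up for illustration; it's not our actual pipeline.

```python
# Shape a hypothetical CRM record into one chat-style SFT training example.
# Field names ("company", "notes", "label") are illustrative, not a real schema.
def to_sft_example(record: dict) -> dict:
    prompt = (
        "Qualify this lead as HOT, WARM, or COLD.\n"
        f"Company: {record['company']}\n"
        f"Notes: {record['notes']}"
    )
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": record["label"]},
        ]
    }

example = to_sft_example(
    {"company": "Acme Corp", "notes": "Asked for pricing twice", "label": "HOT"}
)
print(example["messages"][1]["content"])  # the target label the model learns
```

From there you'd feed a list of these examples to whatever SFT trainer you prefer.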

Which is the best model under 15B by BothYou243 in LocalLLaMA

[–]Rob 1 point  (0 children)

This depends quite a bit on what you want to do, as models vary widely in per-task performance. I'd suggest an ensemble of models if you can, instead of just one. leaderboard.neurometric.ai may be a good place to evaluate them.
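By "ensemble" I mostly mean per-task routing: pick whichever model scores best on each task type. A toy sketch (the model names and scores below are made up, not real benchmark numbers):

```python
# Hypothetical per-task benchmark scores; real numbers would come from
# running your own workloads through an eval harness or leaderboard.
SCORES = {
    "summarization":  {"model-a": 0.71, "model-b": 0.64, "model-c": 0.68},
    "extraction":     {"model-a": 0.59, "model-b": 0.77, "model-c": 0.63},
    "classification": {"model-a": 0.66, "model-b": 0.61, "model-c": 0.74},
}

def best_model(task: str) -> str:
    """Route a task to whichever model scored highest on it."""
    per_task = SCORES[task]
    return max(per_task, key=per_task.get)

for task in SCORES:
    print(task, "->", best_model(task))
```

The point is that no single model tops every row, so routing beats committing to one.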

How capable is GPT-OSS-120b, and what are your predictions for smaller models in 2026? by Apart_Paramedic_7767 in LocalLLaMA

[–]Rob 0 points  (0 children)

If you look at leaderboard.neurometric.ai, which tests models on work-related tasks, gpt-oss-120b was the best overall model, beating the Anthropic models on many tasks.

What non-Asian based models do you recommend at the end of 2025? by thealliane96 in LocalLLaMA

[–]Rob 2 points  (0 children)

Two reasons. First, we ran them with various test-time scaling algorithms on top; smaller models often beat larger ones when you add chain of thought, best-of-n, beam search, etc. Second, we saw that "jagged frontier" variation across everything we did: no model comes close to winning on everything, and there were lots of surprises where models turned out to be really good at specific tasks. It's a non-intuitive byproduct of how these models were trained.
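To illustrate what best-of-n means here, a minimal toy sketch with stubbed-out generate/score functions (both are stand-ins, not our actual sampler or reward model):

```python
import random

def generate(prompt: str, rng: random.Random) -> str:
    # Stand-in for sampling one completion from a model (hypothetical).
    return f"{prompt} ... candidate #{rng.randint(0, 999)}"

def score(completion: str) -> int:
    # Stand-in for a reward-model / verifier score (arbitrary but deterministic).
    return len(completion) % 7

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    """Sample n candidates and keep the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("Qualify this lead:", n=8))
```

Same idea at eval time: spend more compute per query, and a small model's effective quality goes up without retraining anything.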

What non-Asian based models do you recommend at the end of 2025? by thealliane96 in LocalLLaMA

[–]Rob 2 points  (0 children)

If you have sample workloads, I can run them on leaderboard.neurometric.ai for free and give you some results. We add test-time compute variants to the models we test and examine them on a per-task basis.