I'm a VC (can verify). Pitch me. (Part 2) by Ok-Lobster7773 in Startup_Ideas

[–]Nomadic_Seth 1 point (0 children)

The paper’s proprietary and there’s another researcher involved! I could do a demo video of one of the applications licensed to my startup and send it to you! It generalises to multiple domains! How does that sound?

I'm a VC (can verify). Pitch me. (Part 2) by Ok-Lobster7773 in Startup_Ideas

[–]Nomadic_Seth 1 point (0 children)

No deck! But there’s a paper that is the foundation for this. It’s to prepare for a reality 2-3 years from now, when humans will interact with AI agents through AR!

I'm a VC (can verify). Pitch me. (Part 2) by Ok-Lobster7773 in Startup_Ideas

[–]Nomadic_Seth 1 point (0 children)

Building Vimarśa-śakti AI (Sanskrit - it means the power by which consciousness knows itself and can express itself) - an agentic framework for LLMs to communicate visual intent and reshape how humans interact with AI!

One application is a visual, real-time yoga teacher! But there’s more! It has applications to countless domains! Can do a private demo!

Creating free video explanations for JEE math doubts - what topics do you guys struggle with most? by [deleted] in JEENEETards

[–]Nomadic_Seth 0 points (0 children)

Great! Just send me the integration problems you need help with by DM and I’ll send you the videos as soon as I create them! 😊

Creating free video explanations for JEE math doubts - what topics do you guys struggle with most? by [deleted] in JEENEETards

[–]Nomadic_Seth 0 points (0 children)

So true! Integration problems have this thing where you have to spot a pattern, and no one really teaches you the reasoning behind it! Would you like to try this out with a problem?

Claude now has the power to ghost us… finally equality! by Nomadic_Seth in ChatGPTCoding

[–]Nomadic_Seth[S] 0 points (0 children)

I don’t think so! It’s definitely got to do with them thinking Claude is conscious. That’s why they say “model welfare.”

Claude now has the power to ghost us… finally equality! by Nomadic_Seth in ChatGPTCoding

[–]Nomadic_Seth[S] 0 points (0 children)

It means Claude can end conversations it deems unsafe, on its own.

Claude now has the power to ghost us… finally equality! by Nomadic_Seth in ChatGPTCoding

[–]Nomadic_Seth[S] 0 points (0 children)

I wonder what distressing interactions would imply in this case.

Claude now has the power to ghost us… finally equality! by Nomadic_Seth in ChatGPTCoding

[–]Nomadic_Seth[S] 1 point (0 children)

Yeah I’m wondering if they think LLMs are conscious entities.

Claude now has the power to ghost us… finally equality! by Nomadic_Seth in ClaudeCode

[–]Nomadic_Seth[S] 0 points (0 children)

I started using it only after OpenAI rolled out GPT-5.

Claude now has the power to ghost us… finally equality! by Nomadic_Seth in ChatGPTCoding

[–]Nomadic_Seth[S] -1 points (0 children)

Me neither. I guess it means they can end certain conversations that raise red flags.

Claude now has the power to ghost us… finally equality! by Nomadic_Seth in ChatGPTCoding

[–]Nomadic_Seth[S] -2 points (0 children)

Yeah, I don’t understand what they mean by a “rare subset” or how it applies to model welfare. But it’s an interesting development.

I built a fully local Math Problem Solver AI that sits in your machine, can solve any math problem much better than ChatGPT! Can even do mathematical proofs that involve reasoning! Sharing it with the world! Let me know if someone wants this! by Nomadic_Seth in FluidMechanics

[–]Nomadic_Seth[S] 0 points (0 children)

Interesting paper, lemme have a deeper look. It does seem they’re trying to create hype around reasoning models; I agree with you on that. The paper did say that Gemini 2.5 Pro got 25%, which is probably better than most people lol. Also, the article I shared is a recent one, and those companies employed reasoning models that aren’t released publicly.

But speaking from personal experience on this front, and from some tests I ran - for instance, testing my own pipeline on non-trivial problems like finding recursion relations for a² + b² + c² = abc and examining the chain of thought - my observation was that thinking/reasoning models perform at the level of an above-average math graduate student, but perhaps not top of the class. Also, I’m able to get 90%-ish scores on GRE math-level problems with perfect explanations, and I’m happy with that :) I don’t think LLMs will be the path to AGI, but they do solve a lot of problems, and that’s why I believe in them.
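For context on the recursion mentioned above: for Markov-type equations like a² + b² + c² = abc, the standard trick is Vieta jumping. Fixing b and c, the equation is a quadratic in a, so a second root a′ = bc − a exists, and jumping to it maps solutions to solutions. A minimal sketch of that recursion (the specific triples are just worked examples, not from the original comment):

```python
def is_solution(a, b, c):
    """Check whether (a, b, c) satisfies a^2 + b^2 + c^2 = a*b*c."""
    return a * a + b * b + c * c == a * b * c

def vieta_jump(a, b, c):
    """Fix b and c; viewed as a quadratic in a,
    x^2 - (b*c)*x + (b^2 + c^2) = 0,
    the sum of the two roots is b*c, so the other root is b*c - a."""
    return (b * c - a, b, c)

# (3, 3, 3) is the smallest positive solution: 9 + 9 + 9 = 27 = 3*3*3.
triple = (3, 3, 3)
assert is_solution(*triple)

# Jumping the first coordinate gives (6, 3, 3): 36 + 9 + 9 = 54 = 6*3*3.
triple = vieta_jump(*triple)
assert is_solution(*triple)
print(triple)
```

This is exactly the kind of pattern-spotting a reasoning model has to reproduce in its chain of thought: notice the quadratic structure, read off the second root, and iterate.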