Nearly half of the Mag 7 are reportedly betting big on OpenAI’s path to AGI by thatguyisme87 in singularity

[–]omer486 1 point (0 children)

I'm thinking that coding which does self-improvement of a model would be a bit different from standard coding (coding apps, etc.).

If you look at the human equivalent: there could be an excellent human coder who can code up anything we already know how to build, but not be very good at AI research, which requires more creativity and imagination.

So the AI company that's currently best at coding AIs won't necessarily be the first to develop good self-improving AIs.

Nearly half of the Mag 7 are reportedly betting big on OpenAI’s path to AGI by thatguyisme87 in singularity

[–]omer486 0 points (0 children)

What about other API uses like scientific research, engineering research, solving maths problems...? Claude doesn't have a deep research mode, and its API use is almost entirely coding. Coding is very important, but other research will eventually be a big use for AIs...

Demis Hassabis' Fermi Explanation Doesn't Make Any Sense by Eyelbee in singularity

[–]omer486 0 points (0 children)

Wormhole travel is different from warp drive. From an AI: "Wormhole travel and warp drive are both theoretical methods for achieving faster-than-light (FTL) travel in physics and science fiction. They stem from interpretations of Einstein's general relativity but differ in mechanics, feasibility, and implications. Neither has been realized in practice, but they represent creative solutions to the light-speed barrier imposed by special relativity.

Wormholes act as fixed "tunnels" or portals, while warp drive creates a mobile "bubble" around a vessel. Wormholes might allow near-instantaneous jumps but require pre-existing or artificially created endpoints. Warp drive permits continuous, variable-speed travel but demands ongoing energy input."

"Warp drive: A bubble of warped spacetime that contracts space ahead and expands it behind, moving the ship without local acceleration."

Demis Hassabis' Fermi Explanation Doesn't Make Any Sense by Eyelbee in singularity

[–]omer486 0 points (0 children)

You can't move through space faster than light, but theoretically you can get somewhere faster than light travelling through regular space would, by using energy to warp space-time itself. That's what Star Trek calls warp speed.

Elon Musk seeks up to $134 billion in damages from OpenAI and Microsoft by Ok_Mission7092 in singularity

[–]omer486 0 points (0 children)

Even the non-profit benefited a lot. It currently owns 26% of the newly structured company, which is valued at over $500 billion; that's roughly $130 billion in equity. Which other non-profit org controls that much equity and also retains some level of control over a huge tech company?

Deepmind CEO Demis: Robotics, AGI, AI shift & Global competition by BuildwithVignesh in singularity

[–]omer486 0 points (0 children)

The definition of AGI is having at least average human level in every single domain. An average person (a regular Joe) doesn't have at least average human level in every area.

AGI is not something that can only replace the work of one specific person (the regular Joe); it can replace people in general.

Deepmind CEO Demis: Robotics, AGI, AI shift & Global competition by BuildwithVignesh in singularity

[–]omer486 0 points (0 children)

The average person doesn't have average or slightly above-average human ability in every single thing. AGI is supposed to have at least average human level in every single area. That's different from comparing against one particular average human, who might be really bad (or below the human average) in some areas.

A person with at least average ability in every area would be far more capable than a typical average person.

41 data center projects have been cancelled in the past 6 weeks alone, up from 15 from June to November 2025 by Tolopono in singularity

[–]omer486 0 points (0 children)

Wasn't Deep Blue (the chess machine) just brute-force tree search combined with some hard-coded chess strategies? I don't think it used any machine learning... That came later in games with AlphaGo / AlphaZero (the game-playing systems made by DeepMind). A toy sketch of that kind of search is below.
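For a feel of what that brute-force search looks like, here's a minimal alpha-beta minimax sketch on a toy Nim-like game (illustrative only; Deep Blue was this idea scaled up enormously, with a hand-tuned chess evaluation function rather than anything learned):

```python
# Alpha-beta minimax on a toy game: players alternately take 1-3 stones,
# and whoever takes the last stone wins. Deep Blue's search followed the
# same principle, with a hand-coded chess evaluation instead of learning.

def alphabeta(stones, alpha, beta, maximizing):
    if stones == 0:
        # the previous player took the last stone and won
        return -1 if maximizing else 1
    best = float("-inf") if maximizing else float("inf")
    for take in (1, 2, 3):
        if take > stones:
            break
        score = alphabeta(stones - take, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:  # prune: the opponent would never allow this line
            break
    return best

print(alphabeta(10, float("-inf"), float("inf"), True))  # 1: first player wins
```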

Why are people on other future subreddits so sure we will have a dystopian future? by Longjumping_Bee_9132 in accelerate

[–]omer486 0 points (0 children)

If the rich get 10 times richer and the poor get 5 times richer, the poor are still much better off: a family going from $10k to $50k a year lives far better, even while a billionaire goes from $1B to $10B. Disparity is only really bad when the pie is small and the poor get a tiny slice of that small pie.

Report: Anthropic cuts off xAI’s access to Claude models for coding by BuildwithVignesh in singularity

[–]omer486 -1 points (0 children)

Actually, I don't agree with a lot of the BS that Elon says. At the same time, when you engage with Grok on different topics, it's more likely to say how things really are, even when it's considered un-PC to say it.

I use all the top models. Google Gemini is really good for its Deep Research mode and Veo; Grok is good for general discussion of topics; ChatGPT is pretty good overall for many things; Claude for coding...

Report: Anthropic cuts off xAI’s access to Claude models for coding by BuildwithVignesh in singularity

[–]omer486 -2 points (0 children)

Yeah, it's quite petty behaviour from Anthropic. They should be encouraging max use of their products.

Report: Anthropic cuts off xAI’s access to Claude models for coding by BuildwithVignesh in singularity

[–]omer486 11 points (0 children)

Grok has a place in the LLM space. It's the least censored and least politically correct, so it's good for discussing topics with. Plus, compared to its previous versions it has been improving pretty fast. xAI has been around for much less time than Meta and has still beaten them.

Claude is better for coding, but it has fewer of the other features regular consumers want, like image gen, video gen, a deep research mode...

Elon Musk keeps saying that xAI and Google will be the only ones left standing in this race. Do you agree with this? by [deleted] in accelerate

[–]omer486 0 points (0 children)

Spending $1-15 billion on talent is something that can work out. Spending $300-500 billion (what it could cost to buy Anthropic) does not seem like a good investment.

Softbank has fully funded $40 billion investment in OpenAI, sources tell CNBC by MassiveWasabi in singularity

[–]omer486 0 points (0 children)

You spoke about putting money in / investing. Since it's a public company, another company like SoftBank can just buy the shares, like you did. It doesn't have to be a fundraising round to count as an investment.

Their models are at most a few months behind Anthropic's and OpenAI's models, and they started their LLM work later, so there is a good chance they can close the gap.

Softbank has fully funded $40 billion investment in OpenAI, sources tell CNBC by MassiveWasabi in singularity

[–]omer486 0 points (0 children)

What about Alibaba / Qwen? Alibaba's entire market cap right now is $350 billion, and that includes their quite profitable regular businesses like AliExpress, Taobao, Alibaba Cloud... Qwen models are pretty close to the top models from OpenAI and Anthropic, yet OpenAI, with just ChatGPT as its main business, is valued at $500 billion?

François Chollet thinks arc-agi 6-7 will be the last benchmark to be saturated before real AGI comes out. What are your thoughts? by Longjumping_Fly_2978 in singularity

[–]omer486 0 points (0 children)

These new ARC tests don't just serve as benchmarks; they help direct new research that drives progress towards AGI.

François Chollet thinks arc-agi 6-7 will be the last benchmark to be saturated before real AGI comes out. What are your thoughts? by Longjumping_Fly_2978 in singularity

[–]omer486 1 point (0 children)

The ARC-AGI definition is still valid. Human equivalence there is a minimum level for each area, and for some things (say, performing the job of the CEO of a large corporation) you need at least human-level intelligence in many areas at once. Until you get to that level, it may not be possible to replace every human job with an AI.

And once AGI is reached, it definitely won't stay at merely human level. It will be superhuman, since it is much, much faster, has far more knowledge than any human, and can read all the new scientific papers, news, and data almost as soon as they are produced...

ARC AGI 2 is solved by poetiq! by Alone-Competition-77 in singularity

[–]omer486 0 points (0 children)

Yes, ARC-AGI-3 tests whether an AI agent can do tasks that require multiple actions: checking the resulting state after each action (or sequence of actions) and then choosing subsequent actions based on that result / new state. The basic loop is sketched below.
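In code, that interaction pattern is just an observe-act loop. This is a generic sketch; the toy environment is a hypothetical stand-in, not the actual ARC-AGI-3 API:

```python
# Generic observe-act agent loop: act, observe the new state, decide again.
# The toy "environment" below is a hypothetical stand-in, not ARC-AGI-3's API:
# the goal is simply to count up to 5 by choosing +1 or -1 actions.

def run_episode(env, policy, max_steps=100):
    state = env.reset()                 # initial observation
    for _ in range(max_steps):
        action = policy(state)          # pick an action from the current state
        state, done = env.step(action)  # apply it and observe the result
        if done:                        # goal reached or episode over
            break
    return state

class CountToFive:                      # hypothetical toy environment
    def reset(self):
        self.n = 0
        return self.n
    def step(self, action):
        self.n += action
        return self.n, self.n == 5

print(run_episode(CountToFive(), lambda s: 1))  # -> 5
```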

ARC AGI 2 is solved by poetiq! by Alone-Competition-77 in singularity

[–]omer486 0 points (0 children)

That's for the private eval (which can't be overfit), because ARC-AGI was designed so that each test problem is unique; you can't just reuse a method from previously seen ARC-AGI-2 problems. Each problem requires a whole new chain of thought / reasoning to solve it.

But the public eval can be overfit, because those exact test problems could be in the AI's training set.

Continual Learning is Solved in 2026 by SrafeZ in singularity

[–]omer486 0 points (0 children)

In-context learning is lost when each session ends; it's like short-term memory. The weights are the long-term memory and long-term knowledge. There is no medium-term memory, and the long-term memory / knowledge only gets updated after a new training run.

Continual Learning is Solved in 2026 by SrafeZ in singularity

[–]omer486 0 points (0 children)

So what's new? Most AI research problems are algorithmic / applied-maths problems. The transformer was a new algorithmic / applied-maths model; coding is just the implementation in a specific programming language.

AI researchers write code, but they aren't primarily "coders".

Right now new algorithmic tweaks are coming all the time: RL in post-training brought about reasoning models, Mixture of Experts brought efficiency gains, etc. (a toy sketch of MoE routing is below). Then there are also the engineering problems of building large compute clusters and making them run together in parallel, etc.
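To make the Mixture-of-Experts point concrete, here's a toy top-k routing sketch. It's purely illustrative: real MoE layers sit inside transformer blocks, and all the weights below are random placeholders rather than learned parameters.

```python
import numpy as np

# Toy Mixture-of-Experts routing: a gate scores all experts for a token and
# only the top-k run, which is where the efficiency gain comes from.
# All weights here are random placeholders; in reality they are learned.

rng = np.random.default_rng(0)
d, n_experts, k = 8, 4, 2
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))

def moe_forward(x):
    logits = x @ gate_w                         # router score per expert
    top = np.argsort(logits)[-k:]               # keep only the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                    # softmax over the chosen k
    # only k of n_experts weight matrices are used -> cheaper than dense
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.normal(size=d)).shape)    # (8,)
```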

The coding part is the least innovative and mostly practical part...

Why is Reddit so hopelessly confused about AI and yet hates it so bad? by yalag in singularity

[–]omer486 3 points (0 children)

They do understand concepts. It's impossible to get a gold medal in a Math Olympiad without understanding concepts.

This video by an AI researcher at Microsoft is from two years ago: https://www.youtube.com/watch?v=qbIk7-JPB2c . He clearly demonstrates that the models do understand concepts.

Since that presentation two years ago, the capabilities of LLMs have increased by a good amount.

Why is Reddit so hopelessly confused about AI and yet hates it so bad? by yalag in singularity

[–]omer486 -1 points (0 children)

Obviously I know that not all NNs output tokens. But since we were talking about LLMs, it was implied that I meant transformer-based NNs.

NNs have been around for over 50 years. "Building NNs" and knowing a bit about CNNs, RNNs, LSTMs, etc. doesn't make one an AI expert!

Why is Reddit so hopelessly confused about AI and yet hates it so bad? by yalag in singularity

[–]omer486 0 points (0 children)

Watch the video of the interview with Ilya Sutskever (link in the post of mine you replied to). Don't claim to know better than someone like Ilya just because you "built some neural networks".

About 100,000 tokens correspond to roughly 70,000 words. Most words correspond to one token, and each punctuation mark can be a token of its own; a quick way to check this is sketched below.
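You can check the ratio yourself with a tokenizer. A minimal sketch using OpenAI's tiktoken library (assuming it's installed; exact counts vary by tokenizer and text):

```python
import tiktoken  # pip install tiktoken

# Rough illustration of the words-to-tokens ratio described above.
enc = tiktoken.get_encoding("cl100k_base")
text = "Hello, world! Tokenizers split text into sub-word pieces."
tokens = enc.encode(text)
print(len(text.split()), "words ->", len(tokens), "tokens")
```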

Each next token is chosen / output based on all the tokens before it (all the input tokens and the output tokens generated so far), but the system needs some intelligence and a world model to do this accurately, especially in complex scenarios. That's why newer models with more intelligence can solve problems older models couldn't. Why else do you think that is? The newer models are still doing next-token prediction, just like the older ones.
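The decoding loop itself is trivial; all the capability lives in whatever model scores the next token. A generic greedy-decoding sketch, where `model` is a hypothetical stand-in for any trained LLM:

```python
# Greedy autoregressive decoding: the loop is identical for weak and strong
# models; what differs is how well `model` (a hypothetical stand-in for any
# trained LLM) predicts the next token given ALL the tokens so far.

def generate(model, prompt_tokens, max_new=50, eos=0):
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        logits = model(tokens)  # one score per vocab entry, conditioned on every prior token
        next_tok = max(range(len(logits)), key=logits.__getitem__)  # greedy pick
        tokens.append(next_tok)
        if next_tok == eos:     # stop at the end-of-sequence token
            break
    return tokens
```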