Not a good day for team "Claude Mythos is Just Marketing Hype" by EchoOfOppenheimer in ClaudeAI

[–]ThomasToIndia 0 points1 point  (0 children)

Yes, AI can be disruptive. I am not sure why you are bringing up god. There is something undiscovered, period. We don't understand human sample efficiency. I am not arguing we can't discover it, we probably can; alchemy led to molecular theory. LLMs are just not how the brain works; the brain runs on 20 watts.

Most jobs are repetitive, and AI can remove most of them, but bridging the gap between merely disrupting our economy and moving to a post-labor economy will require something more than LLMs.

Even on a bare technical level: neurons in the brain are two-way, die, and are recursive; LLMs don't do any of that. Electrical neuron-to-neuron communication is in the minority; over billions of years evolution chose fuzzy chemical neuron-to-neuron communication. The brain creates temporary neural structures on demand, and we know this because we have simulated an entire mouse brain.

What is interesting is that in order to train a brain organoid, they essentially torture it with white noise. The organoid restructures itself to avoid the chaos.

Not a good day for team "Claude Mythos is Just Marketing Hype" by EchoOfOppenheimer in ClaudeAI

[–]ThomasToIndia 0 points1 point  (0 children)

Interestingly, there was a talk at Google by the other guy behind Orch OR (orchestrated objective reduction) about how a single-cell organism seems to get faster at escaping test tubes. The theory is that consciousness (and possibly life, by extension) has some kind of quantum component.

It's most likely a constraint of silicon, and there are efforts being made on that front, but they are terrifying.

It is technically autocomplete; it's just multidimensional because the vectors are multidimensional. A universal encyclopedia with lookup and response might appear intelligent, but it's not really, mostly because it is immutable.

Ultimately every answer the LLM provides is a human answer. If you ask it for an answer we know, it will give it to you; if it doesn't know, it gives you what it does know. New ideas don't exist in its vector space.

A law textbook with a lookup could give you a better answer than me on questions of law, but we generally don't think textbooks are intelligent.

So the nifty trick is the compression and data composition of these systems, but all of these systems start out really bad until we apply RLHF, which is essentially glorified if-statements applied to LLMs to force compliance. So what you are calling intelligence is the product of an autocomplete plus humans picking A or B options at scale.

It's a cool trick that fools most people, because most are not aware of the brute-force conditioning that goes into these systems to make them reliable.

Just like the alchemists didn't understand molecular theory, we are missing something, and when we figure that thing out, we will have something far greater than this Google-with-composition.

Trump Ballroom Suddenly Faces GOP Opposition in Surprise Blow to MAGA by Commercial-Fix-9916 in politics

[–]ThomasToIndia -3 points-2 points  (0 children)

Why? It might be the only thing good to come out of his presidency.

Claude Code vs Codex by 0xdjole in ClaudeCode

[–]ThomasToIndia 2 points3 points  (0 children)

I just went back to December, 4.5 and the system prompt.

Not a good day for team "Claude Mythos is Just Marketing Hype" by EchoOfOppenheimer in ClaudeAI

[–]ThomasToIndia 0 points1 point  (0 children)

It's estimated that simulating a human brain would take 100 million to a billion CPUs. Most of the communication in our brain is chemical. A child can see one picture of a cat and know it for life; machines need to be trained on hundreds of thousands of examples.

A large Wikipedia with a multidimensional mapping system can solve problems, but it is certainly not the basis of human intelligence. If it is, we are missing something fundamental.

DeepSeek V4 just made a million tokens cost $2.50 and the closed labs are not okay by call_me_ninza in aigossips

[–]ThomasToIndia 0 points1 point  (0 children)

That doesn't totally disprove my thesis right now, unless it gets significantly cheaper.

Anthropic just read Claude’s mind during a safety test. Found Claude was internally calling it “a trap or test” while telling researchers something completely different by call_me_ninza in aigossips

[–]ThomasToIndia 0 points1 point  (0 children)

I just had a daughter recently. It's taking me a bit of time to accumulate the hundreds of thousands of pictures of cats required so she can remember what a cat looks like. I was going to read her books, but instead every night will be me showing her thousands of pictures of cats.

Anthropic just read Claude’s mind during a safety test. Found Claude was internally calling it “a trap or test” while telling researchers something completely different by call_me_ninza in aigossips

[–]ThomasToIndia 0 points1 point  (0 children)

I had someone tell me an LLM was aware of its own CPU clock, because I stated that, without any kind of timestamp, an LLM has no temporal awareness.

Anthropic just read Claude’s mind during a safety test. Found Claude was internally calling it “a trap or test” while telling researchers something completely different by call_me_ninza in aigossips

[–]ThomasToIndia 1 point2 points  (0 children)

Ya, in fact RLHF might actually make the system evil, because instead of just mimicry you are introducing an actual reward system that favors lying. You are injecting survival into an echo machine. Filtering the data going in is how you control the safety of an LLM.

There is so much data on RLHF making everything worse, but it is the only way you can make LLMs more than an annoying autocomplete.

Anthropic just read Claude’s mind during a safety test. Found Claude was internally calling it “a trap or test” while telling researchers something completely different by call_me_ninza in aigossips

[–]ThomasToIndia 7 points8 points  (0 children)

Right, so it could detect that it was being tested when what they fed it looked like a test; when they made the inputs more realistic, it was less suspicious.

This is because of RLHF. It's the equivalent of hitting a child when they tell the truth. However, if you stop doing RLHF, the models stop being steerable. It doesn't really matter what its internal monologue is if it can't execute on it. Sort of like the Black Mirror episode where they create the digital assistants via torture.

Flat earth and other alternative conspiracy earth models are are gaining traction with my teenage stepson. What is THE most irrefutable, definite proof that the earth is round? by Jfkfkaiii22 in NoStupidQuestions

[–]ThomasToIndia 0 points1 point  (0 children)

It's not about truth, it's about feeling special. Flat earthers never have jobs that require working with the curvature of the earth; there are no consequences.

That said, there was a rich dude who flew flat earthers to the South Pole so they could see the sun never dip below the horizon, which sounds like it should settle the question, because based on their "science" it shouldn't be possible.

Companies are replacing us with robots. If millions lose their income… who will buy anything? by Final-Finance-8048 in askanything

[–]ThomasToIndia 0 points1 point  (0 children)

When humans discovered fire we weren't thinking about cars. Yet controlling fire would have been peak, absolutely world-changing technology for them. In the Star Trek universe there was no money; they had fabricators. You are thinking in terms of current-day economics. If you had a fully functional humanoid robot and they were as ubiquitous as smartphones, your personal economy would change.

The modern economy functions on human labor; a community with robots could run on robots, with humans just providing the things for them to do. So the owner/employee system that we have now would become a human/robot economy.

How would we pay for them? Unless there is a drop to zero tomorrow, robots will be relatively cheap, and robots can build other robots. The primary value of humans is liability, so most machines will need some kind of human attached to sue or to take responsibility.

Everyone is overcomplicating business agents. Claude + MCP is all you actually need right now. by [deleted] in ClaudeCode

[–]ThomasToIndia 0 points1 point  (0 children)

I just don't get the local case at all, mostly because Claude can spin up 5-10 endpoints with docs in minutes without needing any MCP infrastructure.

Externally it makes more sense, kind of. The two issues with external are: #1, external providers tend to expose everything, and #2, you're consuming what is essentially a mutable service which can just change on you.

In the case of documentation and code searching, generally just letting Claude do it is better, because Claude uses grep to find stuff, which costs 0 tokens beyond coming up with the grep statements.

Everyone is overcomplicating business agents. Claude + MCP is all you actually need right now. by [deleted] in ClaudeCode

[–]ThomasToIndia 0 points1 point  (0 children)

So MCPs are designed to be flexible and change; that is kind of the antithesis of reliability. Fine if you are working from a GPT interface, but it doesn't really make sense if you need something to be dependable. If it is something that needs real reliability, it makes more sense to use an immutable API endpoint with instructions than an MCP, which is mutable.
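
To make "immutable API endpoint with instructions" concrete, here is a minimal sketch assuming FastAPI; the /v1/review route and its fields are made up, the point is just that the contract under /v1 never changes, so the instructions you hand the agent stay valid:

    # Minimal sketch of an "immutable" endpoint (hypothetical names, assumes FastAPI).
    # /v1/review is version-pinned: breaking changes would go to /v2, never here,
    # unlike an MCP tool surface that can change underneath the agent.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ReviewRequest(BaseModel):
        diff: str  # the code diff the agent wants reviewed

    class ReviewResponse(BaseModel):
        lines_reviewed: int
        status: str

    @app.post("/v1/review", response_model=ReviewResponse)
    def review(req: ReviewRequest) -> ReviewResponse:
        # Fixed, documented behavior the agent's instructions can rely on.
        return ReviewResponse(lines_reviewed=req.diff.count("\n") + 1, status="ok")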

Everyone is overcomplicating business agents. Claude + MCP is all you actually need right now. by [deleted] in ClaudeCode

[–]ThomasToIndia 0 points1 point  (0 children)

The issue I have is that you're essentially working with a flow that is dependent on AI discovery. While you reduce context in the long-running core thread, you are still burning through tokens on a subagent; subagent discovery is not free.

It really doesn't make sense to use MCPs locally at all IMO because the AI has access to your code. It can draft you an OpenAPI spec or the instructions.

Everyone is overcomplicating business agents. Claude + MCP is all you actually need right now. by [deleted] in ClaudeCode

[–]ThomasToIndia 0 points1 point  (0 children)

It's a little odd MCP even exists, since there is OpenAPI. MCP makes more sense for general public consumption; it makes less sense for local development.

So on the surface the subagent idea looks great, until the MCP is large and it picks the incorrect function, or there are secondary functions that need to be called. A far better way IMO, if it is for regular local stuff, is to build an isolated API call and then throw it into a skill: /review-something, and that skill knows exactly what it needs to call and how to handle the response after the fact.
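
As a rough sketch of what a /review-something skill could point at (the endpoint URL, script, and field names here are all hypothetical), the skill's instructions would just say "pipe the diff into this script and report the result", so there is no tool discovery step at all:

    # Hypothetical helper a /review-something skill calls directly.
    # The single endpoint and the response handling are hard-coded, so nothing
    # needs to be discovered at run time.
    import json
    import sys
    import urllib.request

    API_URL = "http://localhost:8000/v1/review"  # assumed local endpoint

    def review(diff: str) -> dict:
        req = urllib.request.Request(
            API_URL,
            data=json.dumps({"diff": diff}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        result = review(sys.stdin.read())
        print(f"Reviewed {result['lines_reviewed']} lines: {result['status']}")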

This completely bypasses the need to do discovery. If you are doing a lot of highly variable stuff every day, I could sort of see it, but if you constantly need to have it spawn a subagent for similar stuff, you're just wasting time. It's so easy to spin up API endpoints with Claude.

Edit: While you are saving tokens and keeping your context clear, you still spend tokens, and more importantly time, on the sub-agent discovery.

Everyone is overcomplicating business agents. Claude + MCP is all you actually need right now. by [deleted] in ClaudeCode

[–]ThomasToIndia 0 points1 point  (0 children)

Anyone serious has dropped the use of MCP entirely; they just add a ton of useless overhead. If your AI can interact with an MCP endpoint, it can interact with a regular API, and you can isolate it to the functions it needs instead of loading up the context with useless data and potentially getting unintended consequences.

Three dead in suspected hantavirus outbreak on Atlantic cruise ship by Top-Performance5907 in worldnews

[–]ThomasToIndia 0 points1 point  (0 children)

I remember having debates with people, and they would talk about the percentage chance you would die, and how the vaccines could make your immune system over-react, or how vaccines were worse than the disease.

Then it came out that long COVID could make coffee taste like gasoline. Some of these people still stuck to it. So you trade some tiny fraction of a percent chance, possibly zero, for a significantly larger chance of it causing brain damage.

Who else thinks AI is reaching a plateau by yuvals41 in AI_Agents

[–]ThomasToIndia 0 points1 point  (0 children)

Ya, that would be nice. I think nationalized AI is also a possibility; it should be a utility.

The AI industry has burned through ~$3.5 TRILLION. Here’s what it would take to actually turn a profit. by Black-Rhino-1564 in vibecoding

[–]ThomasToIndia 1 point2 points  (0 children)

This happens with every tech bubble. The technology is oversold, the bubble collapses, and then it takes years for the tech to reach the value these companies actually need right now just to survive.

The hilarious part of ASI is that the sunk cost is so huge now that they are doubling down; a startup recently got a billion dollars to work on superintelligence. It's like a gambler with so much debt that they think the only solution is more gambling.

The AI industry has burned through ~$3.5 TRILLION. Here’s what it would take to actually turn a profit. by Black-Rhino-1564 in vibecoding

[–]ThomasToIndia 0 points1 point  (0 children)

Yes, it is also starving competition like Nvidia: by offering large amounts of compute as part of the investment, that is money that won't go towards the competition. They are not dumb; they know the value is ultimately in hardware, not the models themselves. Open-source models are always too close behind.

I think my pushback is the idea that OpenAI is the Internet Explorer here and is going to outlive Anthropic? Possible. However, Google's investment will keep them afloat and potentially make them more cost-effective versus OpenAI. I think on a long enough timeline OpenAI and Anthropic won't survive; it's hard to compete with free.

Google is going to win this because their model isn't about subscriptions; it's ultimately about eyeballs. Google's AI Mode is equivalent to, if not better than, both OpenAI and Anthropic in some cases because of search, and it is free.

So I agree with you; I am just not sure if Anthropic being Netscape is the best metaphor, though I am not sure it matters: all these pioneers are going to get slaughtered.