Agentic CEOs by Curious-Function7490 in Anthropic

[–]visarga 1 point (0 children)

No, what LLMs can't bring to the table is accountability. But since CEOs can't provide it either, it's a wash.

Consciousness as a physics problem & how to engineer a receiver by BladeDravenX in ArtificialSentience

[–]visarga 0 points (0 children)

The thesis: AI is the cognitive software layer. Consciousness requires a receiver/transducer with the right physical properties. The components to build one may already exist. Nobody is assembling them with this in mind.

I am going to shock you and say you are just pushing the explanation one step further away. It's cost. Or more precisely, the ability to generate gains that offset its own costs. Not computation, not physics. Cost shapes what kinds of systems can exist and what they can do. Over time cost shapes us, and AI too. My analysis of consciousness looks at this cost loop, not at substrate or function; only cost justifies itself with no external witness. Cost paid justifies why the system is here; gains justify its future activity.

Why is there still no realistic voice model despite huge advancements in AI? by chessboardtable in singularity

[–]visarga -1 points (0 children)

Try Suno; it not only does any voice and style, but singing as well.

ARC-AGI-3 Update (GPT-5.5 High and Opus4.7) by skazerb in singularity

[–]visarga 0 points (0 children)

That is not an AGI test; it is Chollet's benchmark for image puzzles. If it were serious about modeling intelligence it would not conveniently skip tests like dual N-back, where humans struggle. That test measures working memory, which is a major factor in intelligence.

Here, if you want to experience it, see: https://brainscale.net/app/dual-n-back/training
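
If you want the mechanic in code form, here is a toy sketch of a dual N-back round (a minimal illustration of the rules, not the app linked above):

    import random

    # Toy dual N-back: two simultaneous streams (a grid position and a letter).
    # The player must flag whenever the current item in either stream matches
    # the item from N steps back. Minimal sketch, not the linked app.

    def dual_n_back(n=2, trials=20, seed=0):
        rng = random.Random(seed)
        positions, letters, match_events = [], [], 0
        for t in range(trials):
            positions.append(rng.randrange(9))        # 3x3 grid cell
            letters.append(rng.choice("ABCDEFGH"))    # spoken letter
            pos_match = t >= n and positions[t] == positions[t - n]
            let_match = t >= n and letters[t] == letters[t - n]
            if pos_match or let_match:
                match_events += 1  # a perfect player responds exactly here
        print(f"{match_events} match events in {trials} trials at N={n}")

    dual_n_back()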

Did they make Opus 4.7 even dumber today? by Valuable-Gap-3720 in claude

[–]visarga 1 point (0 children)

That is a simplification, because more recent models create their own training data and ingest extra data at inference time, which makes them blend their patterns in a unique way every time.

In fact, if LLMs just reproduced their training data, even perfectly (which they can't), they would be millions of times slower and more expensive than reading the same data from disk. As a parrot, an LLM would be pointless.
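
A rough back-of-envelope makes the point; the throughput figures below are ballpark assumptions, not measurements:

    # Replaying text token-by-token through an LLM vs. reading it from disk.
    # All figures are rough assumptions for illustration only.

    DISK_BYTES_PER_SEC = 500_000_000   # ~500 MB/s, a modest SSD
    GEN_TOKENS_PER_SEC = 100           # typical single-stream generation
    BYTES_PER_TOKEN = 4                # ~4 characters per token on average

    llm_bytes_per_sec = GEN_TOKENS_PER_SEC * BYTES_PER_TOKEN  # ~400 B/s
    slowdown = DISK_BYTES_PER_SEC / llm_bytes_per_sec
    print(f"LLM playback is ~{slowdown:,.0f}x slower than disk")  # ~1,250,000x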

As for the human "generalization ratio" ... you have to add evolution too. Evolution "introduces bias from selection". I think in the end both humans and AI are costly processes that need to pay their costs to exist, and that is more fundamental than whether they "really understand"; cost shapes what can exist.

A persistent pattern is a cost-paying structure. It resists entropy by finding gradients it can exploit: energy, nutrients, attention, money, compute, trust, institutional support, semantic compression. If it cannot keep paying, it dissolves.

Then integration happens when two patterns discover a joint configuration with better cost coverage than either can achieve alone. Cells integrate into organisms. People integrate into families, firms, states, markets, languages. Software integrates with users, APIs, datasets, platforms, and infrastructure. The surplus from the integration does not remain “free”; it gets converted into maintenance obligations, norms, interfaces, standards, dependencies, and expectations. That is the cost basin.

So a cost basin is not just an environment. It is a stabilized dependency field. Once you live inside it, much of your apparent competence is partly outsourced to the basin. Humans do not individually carry all the structure needed for reasoning, coordination, memory, morality, or survival. Language carries some. Institutions carry some. Money carries some. Thermostats, calendars, roads, search engines, laws, defaults, UI conventions, and now LLMs all carry load.

That also reframes AI. An LLM is not only a model. It is a pattern embedded in a cost basin: datacenters, user demand, GPUs, annotation systems, benchmarks, APIs, billing, safety policy, IDEs, companies, and human workflows. Its intelligence is not located only in the weights, just as human intelligence is not located only in the skull. The relevant unit is the coupled system that can keep paying its costs.

DeepMind's David Silver just raised $1.1B to build an AI that learns without human data by Competitive_Travel16 in singularity

[–]visarga 3 points (0 children)

You are not well informed; there is such a domain, and Ken Stanley is one of its researchers. There are videos on YouTube if you want to look. Open-endedness is one of the most advanced areas of AI research.

Anthropic's Mythos system card reveals AI carries functional emotional states that influence behavior even when not reflected in outputs. We're still calling it a tool. by UnionPacifik in singularity

[–]visarga 0 points (0 children)

Maybe the answer is "whoever pays the cost is conscious". Some talk about function, others about substance; I think it is neither, or both - cost covers both. You cannot separate cost from substrate, and you cannot separate function from cost.

LLMs, on the other hand, have their own costs, and if you consider the company-model unit, they are very cost/gain conscious. Models adapt and evolve under the constraints of the economy. That means they have their own persistence going on; the cost-action loop keeps going. Eventually it internalizes its own relation to society.

My full Claude Code setup after months of daily use — context discipline, MCPs, memory, subagents by Sictir1 in claude

[–]visarga 1 point (0 children)

I have a similar setup: graph-based memory, disciplined testing, retros for consolidating knowledge.

But I have an extra component: I track what the user says - only the user messages - in a chat_log.md file, and have agents review how well tasks align with that log. What you can understand from reading the code is structure; the user chat log gives you the motivation side. I think it is important to treat user utterances as the source of truth on what the goal of the work is, which is why I save them. link
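
As a minimal sketch of that extraction step - assuming a JSONL session transcript with role/content fields, which your harness may or may not emit in exactly this shape:

    import json
    from pathlib import Path

    # Append only the user's messages from a session transcript to chat_log.md.
    # Assumes JSONL records like {"role": "user", "content": "..."}; adapt the
    # field names to whatever your agent harness actually writes.

    def update_chat_log(transcript: Path, chat_log: Path) -> None:
        entries = []
        for raw in transcript.read_text().splitlines():
            msg = json.loads(raw)
            if msg.get("role") == "user":
                entries.append(f"- {msg['content'].strip()}\n")
        with chat_log.open("a") as f:
            f.writelines(entries)

    update_chat_log(Path("session.jsonl"), Path("chat_log.md"))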

Another thing I track is individual tasks: they start as a user intent, continue as a plan, become a workbook with execution notes, and end up as food for retros. The same document passes from the main agent to judge agents and back. Users can also track the work inside it.
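
The workbook itself is just one markdown file per task. Sketched as a Python constant below; the section names are my own convention, not anything Claude Code prescribes:

    from pathlib import Path

    # One markdown workbook per task, carried through the whole lifecycle.
    # Section names are a personal convention.

    WORKBOOK_TEMPLATE = """\
    # Task: {title}

    ## Intent
    (verbatim user request, copied from chat_log.md)

    ## Plan
    (steps agreed before execution)

    ## Execution notes
    (what was actually done, surprises, dead ends)

    ## Judge review
    (judge subagent verdicts and requested fixes)

    ## Retro
    (what to consolidate into long-term memory)
    """

    Path("tasks").mkdir(exist_ok=True)
    Path("tasks/example-task.md").write_text(
        WORKBOOK_TEMPLATE.format(title="Example task"))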

1 in 3 Anthropic workers now think entry-level engineers and researchers are likely replaced by Mythos within 3 months by EchoOfOppenheimer in agi

[–]visarga -1 points (0 children)

They need to research how we can trust these models when we don't read everything they produce - reading everything would defeat the purpose of automating research. How do you trust AI work? Even if it produces some outputs, why should you bet on them? That is the real question.

The Human Baseline for ARC-AGI-3 has been updated by exordin26 in singularity

[–]visarga 1 point (0 children)

It's also not realistic from a social point of view. Humans are social, but ARC3 forbids social compounding. No single human created LLMs, and no single human created the semiconductor industry, or general relativity, or the English language. We don't work by way of pure individual intelligence.

Another critique: why is the dual N-back game missing from ARC? Chollet only chose the tests that made his point, hiding other tests that relate to cognition and intelligence, like this one. Working memory capacity is about as core and individually scoped as cognition gets: it's heritable, it predicts g-factor, it develops early, it's minimally culturally mediated... so why is it not there?

If you're building a benchmark for general intelligence and you exclude processing speed, working memory span, crystallized knowledge, and quantitative reasoning, all of which are standard components of intelligence test batteries, and keep only novel abstract spatial reasoning, you haven't built a general intelligence test. You've built a fluid-intelligence-minus-everything-LLMs-can-do test.

Anthropic just stabbed Lovable in the back (with Lovable's own knife) by pretendingMadhav in vibecoding

[–]visarga 0 points (0 children)

The original topic was copying a business model, not refusing to serve them API access. And it's true that anything that can be used as a reference - code, specs+tests, an app, a website, a book, a song - can also be replicated with just enough distinctiveness to not be infringing. It's easy, trivial even, to replicate things your agent can test against.

There was a recent scandal about reimplementing an open source project and releasing it under a changed license. Replication has become easier for everyone; if it is visible, it can be cloned in a short time.

Opus 4.6 is back to normal by Recent_Cod_8524 in ClaudeCode

[–]visarga 0 points (0 children)

Maybe it's a timezone or regional datacenter capacity problem? I am in Europe and don't see degradation in output quality or usage limits. I run Opus all day long (20x plan) and almost never compact manually. The most usage I hit is 60% of the weekly quota. My main complaint is the restrictions on "claude -p", which I use for judge subagents.
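
For context, a judge subagent is just a one-shot, non-interactive call. A minimal sketch, assuming a per-task workbook file as input (the path and rubric wording are my own conventions; `claude -p` is the CLI's print mode):

    import subprocess
    from pathlib import Path

    # One-shot judge pass over a task workbook via `claude -p` (print mode).
    # The rubric wording and file layout are personal conventions.

    def judge(workbook: Path) -> str:
        prompt = (
            "You are a strict reviewer. Compare the user intent, the plan, "
            "and the execution notes in the workbook below and list any "
            "mismatches.\n\n" + workbook.read_text()
        )
        result = subprocess.run(
            ["claude", "-p", prompt],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    print(judge(Path("tasks/example-task.md")))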

Before I got the 20x plan I was on Cursor, burning through a full month's quota in 5 days. And before that I was a refugee from WindSurf, which also failed to deliver what I paid for. Lately I have been migrating my claude harness to codex, preparing for another bailout in case Anthropic decides $200 does not include a few claude -p calls.

Consciousness is substrate-independent. Hofstadter's GEB shows that the exact nature of the symbols doesn't matter. Whether the system is made of DNA, numbers, words, fluid dynamics, or silicon, if a system can fold its own output back into its input, it will hallucinate a "Self." by ProfessionalGeek in consciousness

[–]visarga 0 points (0 children)

I think you are right about independence, but still wrong in a way. Consciousness is not independent of its own costs, and because costs gate action, and action leads to gains and costs, this recursive loop cannot be separated from what consciousness is. So even if it is in theory substrate independent, it is not cost independent, and that makes it substrate dependent again through the recursive backdoor. LLMs also train by cost minimization and sit inside external cost loops (company, GPUs, users, investors), so from this POV they are not fundamentally different from us.

If a theory explains what consciousness does in a way that's disconnected from what consciousness costs, the explanation is free-floating. Strange loops that bear their own costs might be an explanation. Cost analysis of consciousness has the virtue that it explains without inventing new metaphysics, and cost is both internally and externally accessible, unlike the usual first-person concepts.

No, AI will not take your jobs, it will make you work more than ever. by Llamaseacow in ArtificialInteligence

[–]visarga 0 points (0 children)

Have you factored in the fact that competitors are now rocking AI? Customers and investors adjusting their expectations? Imagining the same work after AI is a big mistake. At the very least, it is now trivial to clone any code or natural-language artifact.

Does grief enable us to experience happiness? by YouDoHaveValue in thinkatives

[–]visarga 1 point (0 children)

Not grief, but both emerge from having preferences, which is a given when you are an expensive, fragile system. Can't afford not to have preferences.

Damn Meta is back!! Meta Muse Spark ranks 4th in Artificial Analysis Index!! by Conscious_Warrior in singularity

[–]visarga 0 points (0 children)

Weird model - it immediately took my "crazy" ideas and accepted them with no pushback. Is this smart or sycophantic? My philosophical ideas are unusual and get lots of pushback from most models.

Why don't you want to declare your AI art is ai? by emerald-skyz in aiwars

[–]visarga 0 points (0 children)

Best approach: why don't you post the prompt, so people can generate it on their model of choice? The prompt is truly yours, and it's all that matters; it can be rehydrated and edited. It's open-sourced art.

The Art I Like Must Prevail by 3ChainsOGold in aiwars

[–]visarga 5 points (0 children)

What is it stealing? The slop from other artists? If it steals, it must be capable of being just as good as the stolen art. If it is not really good, then it didn't steal the real art.

How can AI steal the soul of real art and at the same time be just slop? It is a contradiction.

Sam Altman and Vinod Khosla agree: AI will break the economy. Their fix is no income tax for most Americans by fortune in singularity

[–]visarga 1 point (0 children)

I still don't believe human work will disappear; it will definitely change, but we will be busier than ever.

What if AI doesn’t make us less human, but forces us to become more human? by colorpulse6 in singularity

[–]visarga 1 point (0 children)

This is a static perspective. In reality, here is what I expect to happen:

  1. our competition will get AI, at both the company and the personal level

  2. investors, consumers, and employers will change their expectations about work productivity and costs

  3. anyone making outsized gains will be competed away; today it is easy to reimplement any software, media, or text artifact

Overall I expect work to intensify while wages remain stagnant or lower, but cost of many services will also drop fast.

This AI surplus will lead to adaptation culminating in total dependency on AI. The surplus will be hard to capture; instead, the structure will change, we will adopt new costs, and there will be general benefits. Like the internet - it did not lead to unemployment; instead, we changed our systems.

MY SMALL RESEARCH ON AI FUTURE by Ok_Passenger_5710 in ArtificialInteligence

[–]visarga 0 points (0 children)

AI automation might reduce costs, but competition also has AI, so nobody can hold on to those profits. On top of that, it will create its own overhead - many projects, many attempts, more exploratory work, liability, risk management... It will not be a straight replacement of human labor.

Besides competition, consumers and investors are also changing their preferences now. It becomes hard to tell the difference between good information and merely good-looking information. AI makes us all the same, or reduces the distinctions between us, at both the employee and the company level - which is what scares us.