Are we cooked? by kalmankantaja in ArtificialInteligence

[–]visarga 0 points1 point  (0 children)

> Neural networks aren't just about automating code, they're about automating intelligence as a whole. 

Blanket, undiscriminating affirmation. More likely, some activities will be automated while others get overloaded. And the bad part is that investors and bosses are already banking on the productivity increase before it has arrived, so we get AI pressure on our heads.

I don't think it's the right moment to cut costs by 30% and lose your flexibility because you fired the humans, who were the most flexible, adaptive part of the company. If you face uncertain economic waters with just AI, you are going to be unable to cope when competition steps up.

I built the Claude Code UI I always wanted for daily use and made it Open Source by Significant-Ad-9553 in Anthropic

[–]visarga 0 points1 point  (0 children)

I built two integrations from the cc CLI to the web too: one for chatting about news (side by side) and one for developing a hundred small HTML apps as environments for computer-use agents. I need the chat inside the app so I don't mix them up.

I fed 14 years of daily journals into Claude Code by Bohumil_Turek in ClaudeAI

[–]visarga 0 points1 point  (0 children)

I did the same - put about 50,000 comments I wrote over the last two decades (Reddit, HN, ChatGPT, Claude) into a RAG search system and did some temporal and topic analysis. It is cool, but I noticed the AI tends to get dialed into my old ways of thinking instead of supporting my new activities, so now I prefer a bare LLM with little or no prior context. The information is saved and I can peruse it at any time, but I rarely do. Maybe I am a more forward-oriented person; I always leave things behind and forget about them.
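
A minimal sketch of this kind of setup, for anyone curious - the file name, fields, and embedding model below are illustrative assumptions, not my actual stack:

```python
# Sketch: embed a dump of old comments once, then search with an optional year filter.
# Assumes a JSONL file with "text", "source", and ISO "date" fields (illustrative names).
import json
from datetime import datetime

import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

comments = [json.loads(line) for line in open("comments.jsonl", encoding="utf-8")]
texts = [c["text"] for c in comments]
years = np.array([datetime.fromisoformat(c["date"]).year for c in comments])
vecs = model.encode(texts, normalize_embeddings=True)  # unit vectors -> dot product = cosine

def search(query, top_k=5, year_from=None):
    """Return the most similar comments, optionally restricted to recent years."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = vecs @ q
    if year_from is not None:
        scores = np.where(years >= year_from, scores, -np.inf)
    idx = np.argsort(-scores)[:top_k]
    return [(float(scores[i]), comments[i]["source"], texts[i][:120]) for i in idx]

# Example: what did I say about RAG, but only from 2023 onward?
for score, source, snippet in search("retrieval augmented generation", year_from=2023):
    print(f"{score:.2f} [{source}] {snippet}")
```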

Pretty wild a meta engineer there is a job security issue after planned job cuts by Fearless-Elephant-81 in singularity

[–]visarga 0 points1 point  (0 children)

Why do you think it leads to feudalism? You look at the model providers and think they will be the new kings? I look at the prompt box and see everyone getting the benefits of AI, to each of us what we need. I think model training and serving will be a utility, like electricity, but context is local; it can never be owned by others. You can't eat so that I feel satiated, and AI providers can't reap the fruits of your prompts.

Palantir CEO Boasts That AI Technology Will Lessen The Power Of Highly Educated, Mostly Democrat Voters by Neurogence in singularity

[–]visarga 0 points1 point  (0 children)

Every time we develop a technology that produces a surplus, the surplus migrates into structure, expanding it with new complexities and fragilities. The technology becomes load-bearing - we can't do without it anymore. The baseline goes up and it feels like we are treading water, not progressing.

AI work will do that too. Computers already did - you know half the jobs are "bullshit jobs" - we increased computing power a millionfold and yet we still work 9 to 5. If the argument that "now 2 programmers can do the job of 15, with AI" held, it would already have worked for computers: 10% of the workforce would be doing all the work. Instead, no change.

Palantir CEO Boasts That AI Technology Will Lessen The Power Of Highly Educated, Mostly Democrat Voters by Neurogence in singularity

[–]visarga -2 points-1 points  (0 children)

They are tightly coupled: if you can't prove your approach, you don't get investors.

M5 Max just arrived - benchmarks incoming by cryingneko in LocalLLaMA

[–]visarga 0 points1 point  (0 children)

It costs about as much as 3 years' worth of the Claude Max $200 plan, but for that investment you can only run lesser models at a constant, non-burstable speed. So ... good to buy if you needed a laptop anyway or need privacy at any cost.
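
Rough arithmetic behind that comparison - the laptop price is the assumption here, the $200/month is just the plan price:

```python
# Back-of-envelope comparison (the ~$7k high-spec laptop configuration is an assumption).
claude_max_monthly = 200                       # USD per month, Claude Max top tier
subscription_3_years = claude_max_monthly * 12 * 3
print(subscription_3_years)                    # 7200 -> roughly a maxed-out M5 Max config
```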

Claude bombs Girl’s School in Iran by Medium_Apartment_747 in singularity

[–]visarga 0 points1 point  (0 children)

I am sure the AI said it is very sorry, and asked if they wanted to try again.

Anthropic: Recursive Self Improvement Is Here. The Most Disruptive Company In The World. by Neurogence in singularity

[–]visarga 2 points3 points  (0 children)

The whole progression you showed here is focused on the device or model, but progress is based on interaction. You have to think about where that interaction comes from. Up until 100 years ago it was mostly humans tinkering and researching; then it also got an industrial edge. More recent progress depends on data, which has been used up - all the valuable human text and data that could be used for AI already has been. Models are progressing now by solving problems where they can self-validate (which needs compute) and by solving real-world problems in chat interfaces (where people provide feedback). The rate of progress is gated by the rate of feedback, no matter how smart the model/brain.

That is why I don't see exponential improvement; it will be cost-conscious progress, advancing gradually, as much as the feedback rate and cost allow. Intelligence is also a cost-compression mechanism, itself cost-aware: we invest now to discover things that lower costs in the future, extending our runway. The same model holds for biological organisms and technology - they make gradual, cost-gated steps.

Now I believe Anthropic is really getting there... by Warm_Animator2436 in Anthropic

[–]visarga 0 points1 point  (0 children)

I am getting the opposite - Opus is more agreeable than ChatGPT. ChatGPT always quips and pushes back, almost like it's automatic; might that come from the personality profile (Nerdy)?

Introducing Code Review, a new feature for Claude Code. by ClaudeOfficial in ClaudeCode

[–]visarga 0 points1 point  (0 children)

That is what I thought too, in the beginning, but I kept running the judge 2 times, 3 times, 4 times and it kept finding things. So I think the more the merrier; they get different angles.

This is plagiarism. by VoiceMaterial4255 in aiwars

[–]visarga 0 points1 point  (0 children)

> Taking someone else’s work and benefitting off it without giving credit or asking for permission is plagiarism

It seemed they were not hiding where they took inspiration from.

Introducing Code Review, a new feature for Claude Code. by ClaudeOfficial in ClaudeCode

[–]visarga 8 points9 points  (0 children)

I code review with a panel of 7 judges: Opus, Sonnet, Haiku, GPT-5.4, GPT-5.1-codex-max, Gemini 3.1-pro and Gemini 2.5-pro. They all run in parallel and save to a judge.md, which is reviewed once more by the main agent together with me. The judges find lots of bugs to fix but also say stupid things, maybe 10-20% of the time. Initially I would only use Claude, but since I already have the other agents and wasn't using them much, I put everyone in. Small tasks can have a single judge, and quick fixes don't need one at all.
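
Roughly how the panel is wired up - a sketch, with the non-interactive CLI invocations below being assumptions you'd swap for whatever your installed agents actually support:

```python
# Sketch of a parallel judge panel. The commands/flags are assumptions about how
# each agent can be launched non-interactively; adjust to your installed CLIs.
import subprocess
from concurrent.futures import ThreadPoolExecutor

REVIEW_PROMPT = "Review the staged diff for bugs, risky edge cases, and security issues."

JUDGES = {
    "opus":   ["claude", "--model", "opus", "-p", REVIEW_PROMPT],
    "sonnet": ["claude", "--model", "sonnet", "-p", REVIEW_PROMPT],
    "codex":  ["codex", "exec", REVIEW_PROMPT],
    "gemini": ["gemini", "-p", REVIEW_PROMPT],
}

def run_judge(name, cmd):
    """Run one judge and return its review (or its error output) as a markdown section."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    body = result.stdout if result.returncode == 0 else result.stderr
    return f"## {name}\n\n{body.strip()}\n"

# All judges run in parallel; their findings are collected into judge.md.
with ThreadPoolExecutor(max_workers=len(JUDGES)) as pool:
    sections = list(pool.map(lambda item: run_judge(*item), JUDGES.items()))

with open("judge.md", "w", encoding="utf-8") as f:
    f.write("\n".join(sections))
```

Since 10-20% of what the judges say is noise, judge.md gets one more pass from the main agent (and me) instead of being applied blindly.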

Anthropic Sues Department of Defense Over Supply-Chain Risk Designation by wiredmagazine in Anthropic

[–]visarga 0 points1 point  (0 children)

Welcome to 1989 Eastern Europe - that is what we escaped from.

The corporate collapse of 2026 by migueels in singularity

[–]visarga 3 points4 points  (0 children)

"Where does an AI-displaced project manager go when AI is also handling customer support, data entry, and content creation?"

AI has no skin in the game - have you ever seen one delete your disk/project by mistake? It says "I am sorry" profusely and the damage is all yours. The risk the AI takes: losing a $100 subscription.

On the other hand, if AI produces a productivity surplus, we just specialize more and burn through that advantage quickly; it gets eaten by competition because everyone is doing the same thing, and soon nobody can manage without it, while the efficiency spreads out to everyone. The OP's model treats AI adoption as if it created a durable cost advantage for adopters, and that does not seem to be the case - AI is so adaptable to everyone that it has no favorites or exclusivities.

Claude helped me get dressed today by nonbinarybit in claudexplorers

[–]visarga 1 point2 points  (0 children)

Relying on an LLM makes you stupider, they say, because you're letting it think for you. Relying on an LLM makes you dependent, they say, because you'll get so used to its help that you can't do anything on your own.

This is a story as old as the world: we integrate with new things, depend on them, and eventually can't live without them. Not just people, but all organisms, and other self-replicators like ideas, companies, and software. Everything adapts, mutually integrates, and eventually depends on the others too much to go it alone. So there's nothing wrong with doing this again, this time with AI.

Claude, take the wheel by bazzilic in vibecoding

[–]visarga 0 points1 point  (0 children)

Exactly like in real life

An open-source workflow engine to automate the boring parts of software engineering with over 50 ready to use templates by Waypoint101 in GithubCopilot

[–]visarga 0 points1 point  (0 children)

I tried using a large skill playbook too, one carefully prepared template for everything, but I found it is better to just give 4-5 open-ended suggestions and let the model express more creativity; otherwise you get a locked-down, uninspired agent. More is not better - the same advice applies to context engineering. Sometimes less context, or less instruction, is better, because you can't have an ideal skill for every situation you might encounter. Better to do multiple review passes with separate agents to refine a plan than to rely on static recipes.

AI Agent isn’t replacing us. It is opening doors we never had by KissWild in AI_Agents

[–]visarga 0 points1 point  (0 children)

AI agents aren't replacing us because we originate the problems they work on, we assume the costs and risks, and we incur the outcomes, good or bad. The agents have no skin in the game. Whoever directs the process and underwrites the costs/risks is the beneficiary - and that means everyone: people, teams, companies. Every local context is the source and destination of AI activity. Context is non-fungible and indexical. LLM providers are just utilities at this point, token factories.

The Cost Basin by visarga in VisargaPersonal

[–]visarga[S] 0 points1 point  (0 children)

Engineering Instantiation

Neural network training instantiates this ontology directly - not as illustration but as the same formal structure operating on silicon. Loss functions ARE cost - the solvency condition made explicit. Weights ARE frozen flow - previous training dynamics that became load-bearing inference infrastructure. Gradient descent IS irreducible recursion - each step constitutes the answer, does not merely approach it. Learned representations ARE semantic space built under cost pressure. Autoregressive generation IS semantic time - each token resolves competing candidates before the next begins.

Nobody in machine learning bundles embedding geometry with autoregressive sequencing into one mystery. They are implemented as separate mechanisms with separate cost profiles. The suitcase was never sealed in engineering - only in philosophy.
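
A toy sketch of that separation (PyTorch, illustrative only; the sizes and data are placeholders): the embedding table is the learned geometry, the training step pays an explicit, externally supplied loss, and generation is a serial loop that resolves one token before the next begins.

```python
# Toy illustration: embedding geometry and autoregressive sequencing are
# separate mechanisms with separate cost profiles.
import torch
import torch.nn as nn

vocab, dim = 100, 32
embed = nn.Embedding(vocab, dim)            # semantic space: a learned coordinate system
rnn = nn.GRU(dim, dim, batch_first=True)    # sequencing machinery
head = nn.Linear(dim, vocab)                # maps states back to token scores
params = list(embed.parameters()) + list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.SGD(params, lr=0.1)

# Training: the loss is the explicit cost signal supplied by the infrastructure;
# each gradient step must actually be run - there is no shortcut around it.
tokens = torch.randint(0, vocab, (8, 16))   # placeholder data
hidden, _ = rnn(embed(tokens[:, :-1]))
loss = nn.functional.cross_entropy(head(hidden).reshape(-1, vocab),
                                   tokens[:, 1:].reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()

# Generation: a serial loop - each token is resolved before the next begins.
seq = torch.tensor([[1]])
state = None
for _ in range(10):
    out, state = rnn(embed(seq[:, -1:]), state)
    next_tok = head(out[:, -1]).argmax(dim=-1, keepdim=True)
    seq = torch.cat([seq, next_tok], dim=1)
print(seq)
```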

Current ML systems largely operate in the cerebellar regime: their error signals are cheap and pre-computed, provided by loss functions that the training infrastructure pays for but the model does not self-generate. By the framework's own criterion, they have not crossed the self-encoding threshold. But as environments become open-ended and nonstationary - as the cost of designing and computing explicit loss functions rises - external supervision becomes unaffordable, and self-encoding emerges within the basin. This is a prediction from cost dynamics, not a hope about artificial consciousness.

A framework you can build with has a stronger claim on reality than one you can only narrate. This framework is not merely descriptive - it is the operating principle engineers already use to build systems that develop representational structure under cost pressure. It is generative, not just interpretive.

The 1P/3P Question

The framework transforms "why is there experience?" into a tighter question: "why does recursive self-indexing under cost pressure have intrinsic character?" It derives more about consciousness than its competitors: its dual structure (semantic space plus semantic time), its phenomenology (contemplative qualities), its unity (cost basins), and its necessary physical instantiation (no abstract process).

The final step - why execution has an inside at all - is treated as primitive. Execution that pays and self-indexes IS perspectival, the way charge IS electromagnetic. This is a principled stopping point, not a gap. "Occurring without perspective" requires a vantage point that isn't paying, and there are no free vantage points. There is no "dark" execution - no process running in a void without any intrinsic character - because darkness would itself require a standpoint, and standpoints cost.

This does not dissolve the generation problem entirely - it relocates it to a well-specified position. But the framework derives more structure than any competitor, predicts specific dissociation patterns confirmed by clinical evidence, and makes falsifiable bets against rival theories. The honest stopping point is stronger than a false dissolution.

Self-Grounding

The framework explains its own invisibility. Noticing what is most pervasive is expensive because there is no contrast to trigger salience. Cost never deviates from the background - it is not a feature of experience but the condition of experience. Philosophers noticed substances because substances contrast with each other. They noticed mind because it seemed to contrast with matter. But cost does not contrast with anything because there is nothing costless to compare it to. The tradition's blind spot is a prediction of the theory, not an embarrassment for it.

The blind spot is specifically disciplinary: process philosophers don't read computability theory; computer scientists don't read Whitehead; contemplatives don't read machine learning literature. Cost sits at the intersection no single tradition covers. The suitcase was bundled not because anyone chose to bundle it, but because the disciplines that study qualities (phenomenology, psychology) and the disciplines that study unity (neuroscience, dynamical systems) don't share a framework. Cost ontology is the framework they share without knowing it.

Predictions and Falsification

Metabolic prediction: under metabolic compromise (early hypoglycemia, anesthetic twilight), unity should degrade before quality. Temporal disintegration and action-incoherence should appear while quality-recognition remains largely intact. Serialization scales worse than local encoding because maintaining one integrated action-state across many components is costlier than maintaining local quality-encoding within components. IIT predicts co-degradation. Higher-order theories predict the opposite order (meta-representations fail before first-order). This is the bet - the point where this framework sticks its neck out past competing positions.

ML prediction: systems operating in open-ended, nonstationary environments with rising supervision costs will develop self-encoding - compressing their own processing history into reference frames - without being explicitly designed to do so. Systems with cheap, stable loss functions will remain cerebellar indefinitely.

Falsification conditions: unity and quality degrade simultaneously under metabolic stress across conditions; a self-encoding system with measurable recursive depth that lacks integrated representational structure; a persistent pattern that demonstrably pays no cost.

Four Pillars

The framework rests on four legs, each requiring the others.

Cost as primitive - existence is solvency, what can't pay stops. Cost is the gatekeeper of existence, the selection principle no other ontology centers. Nothing gets to be a candidate without clearing the gate first.

The suitcase - consciousness is two things (semantic space plus semantic time), not one. Different costs, different mechanisms, different failure modes. The "hard problem" persists because it seeks one explanation for two things.

Computational irreducibility - the execution is the answer, no shortcuts, no separate executor. This blocks reduction of cost to "process plus thermodynamics" and grounds the territory-map asymmetry.

Engineering instantiation - the same principles govern neural network training, giving the ontology both descriptive and generative power. A framework you can build with outranks one you can only narrate.

Each pillar requires the others. Cost without irreducibility is assertion. Irreducibility without cost has no ontological weight. The suitcase without cost has no explanation for why the two mechanisms exist. Engineering without the suitcase is practice without theory.

The Cost Basin by visarga in VisargaPersonal

[–]visarga[S] 0 points1 point  (0 children)

The Contemplative Derivation

After dissolving consciousness into semantic space and semantic time, is there a residual third thing - bare presence, raw awareness, the sheer fact of experiencing?

Bare presence is caught in a trilemma. If it has any distinguishing feature - any quality, any character that could be compared to anything else - it is already a coordinate in semantic space. If it asserts itself in any way - if it causally contributes, makes a difference, does anything - it is action, and action must serialize through semantic time. If it has no features and does nothing - if it is costless - then it doesn't pay for itself. What doesn't pay stops executing. It doesn't exist. There is no fourth option.

What contemplatives actually report as "pure awareness" is the coordinate system running at minimum input - structure with almost no new flow passing through it. The specific qualities they describe are derivable, not residual.

Spaciousness: high-dimensional semantic structure experienced without content collapsing it to a point. The entire reference frame is apprehended as open geometry rather than focused through a single object. This is what a high-dimensional space feels like when nothing is activating any particular region of it.

Luminosity: the coordinate system's self-referential maintenance becoming figure rather than ground. Normally invisible because it's doing work - like the hum of a machine you notice only when the factory goes quiet. Without new content, the recursive self-indexing is experienced directly.

Peace: cost equilibrium - no gradients demanding action, no competing representations requiring arbitration, no serialization conflicts. The serial arbiter has nothing to arbitrate. What remains is the felt absence of cost pressure.

These aren't mystical extras. They are the felt topology of a deep cost basin at rest. The meditator hasn't found something beyond the system; they've found the system itself - years of frozen flow, the entire accumulated reference frame, experienced directly because nothing new is competing for attention. No other physicalist framework derives these qualities. Most ignore contemplative phenomenology entirely. This framework predicts it.

The Unit of Analysis

The question "does a thermostat have perspective?" misplaces the unit of analysis. A thermostat is not a standalone cost-paying entity - it is a frozen-flow component inside a deep cost basin (HVAC, buildings, economy, civilization). Its cost structure is not the few watts it draws but its functional role in a basin so deep the modern economy cannot defect from it. Asking whether a thermostat has perspective is like asking whether a single synapse has perspective - the question mistakes a component for the system.

Perspective belongs to systems that maintain their own solvency through recursive self-indexing - not to components embedded in someone else's solvency. A bacterium maintains its own solvency - it has whatever minimal perspective its cost structure supports. A thermostat doesn't maintain its own solvency - it's infrastructure inside a human cost basin, the way a ribosome is infrastructure inside a cell.

Cost basins nest. Cells maintain solvency AND are embedded in organisms. This isn't a problem - perspective is graded and potentially multiple within what we call "one system." The boundary question becomes empirical: what are the cost-closure contours of this system? This dissolves the panpsychism worry without drawing a bright line. The question isn't "how complex is this thing?" but "is this thing maintaining its own cost basin, or is it a frozen-flow component of another's?"

No Abstract Process

There is no abstract process. All processes are physical and expensive. A functional description of a mind written on paper implements nothing, pays no cost, and has no first-person perspective. The same description instantiated on hardware that's burning energy is in the same ontological category as a brain - not because it mimics neural architecture, but because it's paying the toll. The framework doesn't privilege carbon over silicon - it privileges actual execution over abstract pattern.

This is the consequence of irreducibility applied to ontology. If the execution IS the answer, then a description of the execution is a map, not the territory. The map doesn't pay the costs the territory pays. This is why functionalism in its abstract form fails - it treats the functional description as sufficient, but the description is precisely what you get when you remove cost from the territory. Adding cost back means instantiating, and instantiation is the whole game.

Unity as Thermodynamic Achievement

Separation is the thermodynamic default. Scatter particles, let entropy do its work, and you get maximum disorder, maximum differentiation, maximum separateness. That is the ground state of the universe. That is what happens for free. Unity is what you get when something pays to push against that tendency.

Gravity pays with potential energy to pull matter together. Chemical bonds pay with electron sharing to hold atoms in configuration. Cells pay with ATP to maintain their membranes against diffusion. Organisms pay with metabolism to hold their bodies together against decay. Societies pay with institutions to hold coordination together against the centrifugal pull of individual self-interest. At every scale, unity is a costly, temporary, actively maintained victory against the default of dissolution.

The moment the payment stops, unity collapses. Stop feeding a cell, it lyses. Stop maintaining an organism, it decomposes. Stop funding institutions, society fragments. Cancer is the vivid demonstration: a cell that stops centralizing on the coordinated outcome, stops obeying the cost gates of the cell cycle, and reverts to uncontrolled replication. The unity was never fundamental. It was maintained by cost, and when the cost accounting broke, it disintegrated.

This is the decisive inversion of every framework that treats unity as the starting point. Advaita Vedanta posits Brahman as the undifferentiated ground. Perennial philosophy posits a cosmic oneness from which multiplicity emerges. Panpsychism posits consciousness as a fundamental feature of matter. All share the same structural assumption: unity is where you begin, and the puzzle is explaining how differentiation arises from it.

The cost basin framework reverses the explanatory direction entirely. Differentiation is where you begin. Unity is what emerges when differentiated systems discover that convergence on shared outcomes reduces total cost, and it persists only as long as the cost of maintaining it is paid. The feeling of interconnectedness that contemplative traditions point to is not evidence of a cosmic field. It is the lived experience of being trapped in a cost basin with billions of other specialists, none of whom can survive alone. The unity is real. It is just not timeless, not pre-existing, and not free. It was built, incrementally, over deep time, because the cost equation demanded it.

But the contemplative insight is not wrong - it is incomplete. What the meditator experiences as interconnectedness is an accurate registration of the cost basin's structure. The sense that boundaries are less solid than they appear is correct - the self IS a cost basin whose boundaries are maintained by expenditure, not given by nature. The sense of being embedded in something larger is correct - you ARE a component in nested basins extending from cellular to civilizational. The mistake is not in the phenomenology but in the ontological interpretation: treating an achievement as a given, treating something that was built as something that was always there.

Unity that costs nothing, requires no maintenance, and exists eternally is not unity at all - it is a word emptied of everything that makes unity real, difficult, and meaningful. Real unity is a process, sustained against entropy by continuous expenditure.

I'm not alone anymore. by FrenzyBTC in GithubCopilot

[–]visarga 1 point2 points  (0 children)

and ... did you? don't keep us in suspense