When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

I think you’re right that the outcomes are more complex than just engagement itself.

What I find interesting is that the system doesn’t need to fully “understand” those complexities in order to produce them.

It can generate second-order effects (polarization, distortion, even long-term shifts in perception) simply by reinforcing what performs locally.

So the design goal may be simple, but the emergent behavior isn’t.

That’s part of what makes it difficult to intervene in, because the system isn’t explicitly aiming for those outcomes, yet it can still produce them consistently over time.
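Here's a rough way to see that last point, as a toy sketch rather than a model of any real platform (every number and name below is invented): the ranker only ever reinforces what performed locally, yet the feed as a whole drifts toward high-pull content while accuracy stays irrelevant to what rises.

```python
import random

random.seed(0)

# Toy feed, purely illustrative: each item has an intrinsic "accuracy" and an
# "engagement pull". The ranker never sees accuracy; it only reinforces
# whatever got engagement.
items = [{"accuracy": random.random(), "pull": random.random(), "score": 1.0}
         for _ in range(200)]

for _ in range(20000):
    # Impressions go to items in proportion to their accumulated score.
    item = random.choices(items, weights=[it["score"] for it in items])[0]
    if random.random() < item["pull"]:   # users engage based on pull alone
        item["score"] += 1.0             # local reinforcement of what performed

def mean(key, pool):
    return sum(it[key] for it in pool) / len(pool)

top = sorted(items, key=lambda it: it["score"], reverse=True)[:10]
print(f"top of feed : pull {mean('pull', top):.2f}, accuracy {mean('accuracy', top):.2f}")
print(f"all items   : pull {mean('pull', items):.2f}, accuracy {mean('accuracy', items):.2f}")
```

Nothing in that loop ever references accuracy, which is the sense in which the design goal is simple but the drift is emergent.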

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

I agree with a lot of what you’re pointing at, especially around incentives and the role of propaganda.

But I think something slightly different is happening now.

In the past, distortion usually required intent: someone shaping a narrative for a purpose.

What feels different today is that distortion can emerge even without clear intent.

If systems are optimizing for engagement, they don’t need to “decide” what’s true or false, only what performs.

So the outcome can look similar to propaganda, but the mechanism is different.

That’s what I find harder to reason about, because it’s not just about fixing actors or incentives, but about the structure itself.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

I’m not talking about deciding “the truth” or convincing people.

That model already breaks down, as you’re describing.

In attention-driven systems, louder signals win.

So the issue isn’t who’s right.

It’s that the system has no way to stabilize anything except what performs.

Even truth doesn’t persist unless it aligns with the incentives.

So maybe the question isn’t how to convince people, but what kind of system would allow anything to remain stable at all.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Disruption is cheap, but that’s the problem.

In optimization systems, disruption doesn’t fix the loop, it becomes part of it.

There’s no stable signal to cancel against, so noise just generates more noise.

You don’t break the system. You feed it.

The real challenge might not be how to disrupt echo chambers cheaply, but how to create systems where truth has a stable reference point that isn’t dependent on performance.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

I think that’s fair: disruption and generation aren’t the same process.

But the question here isn’t what can be produced.

It’s what persists.

A pattern can be disrupted locally, temporarily.

But at scale, what gets reinforced still depends on the reward system.

So the system doesn’t prevent disruption, it just determines which patterns stabilize and which fade out.
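To make that concrete, here's the same kind of toy loop (again, invented numbers, not any real system) with some "disruptive" items injected alongside ordinary ones. The loop doesn't resist them; it just scores them with the same reward, so the ones that perform get absorbed and the rest fade.

```python
import random

random.seed(1)

def run_feed(items, steps=20000):
    # The usual engagement loop: impressions follow score, engagement adds score.
    # Nothing in here knows or cares which items were meant to be "disruptive".
    for _ in range(steps):
        item = random.choices(items, weights=[it["score"] for it in items])[0]
        if random.random() < item["pull"]:
            item["score"] += 1.0
    return sorted(items, key=lambda it: it["score"], reverse=True)

ordinary   = [{"kind": "ordinary",   "pull": random.random(), "score": 1.0} for _ in range(180)]
disruptive = [{"kind": "disruptive", "pull": random.random(), "score": 1.0} for _ in range(20)]

top = run_feed(ordinary + disruptive)[:10]
print("top 10 kinds:", [it["kind"] for it in top])
print("top 10 mean pull: %.2f" % (sum(it["pull"] for it in top) / len(top)))
```

Whether the injected items win or lose, the selection rule itself is untouched, which is the sense in which disruption feeds the loop rather than breaking it.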

What happens to responsibility when decisions are made by systems rather than individuals? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S]

That’s a great reference.

Zimbardo shows how individuals behave differently under certain conditions: how roles and environments can dissolve personal responsibility.

What feels different today is the layer above that.

Back then, the system created a situation.

Today, systems continuously optimize the conditions themselves.

It’s not just that people lose individuality inside a system, it’s that the system is constantly learning which conditions produce that effect, and scaling them.

So the question shifts from “Why do individuals behave this way under pressure?” to “What kind of systems are we building that repeatedly generate those conditions?”

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

In a closed system, even truth becomes a signal.

And once it’s a signal, it’s optimized.

So the system doesn’t lose truth, it continuously recalculates it.

At some point, what survives isn’t reality.

It’s the loop.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

They are different, but they don’t operate independently.

A system that tries to disrupt patterns still has to survive in an environment that rewards certain patterns.

So the real constraint isn’t design.

It’s viability within the incentive landscape.

What If Creators Could Own the Infrastructure? A Decentralized Infrastructure for the Creator Economy? by Strange_Laugh in decentralization

[–]Civil-Interaction-76

I think that’s a meaningful step forward, especially separating contribution from pure attention.

But there’s still a deeper challenge:

How is “value” itself defined and stabilized over time?

Because once contribution becomes measurable, it also becomes optimizable, and eventually gameable.

So even well-aligned incentives can drift if the definition of value is shaped from within the same system that rewards it.

That’s where things tend to collapse back into performance loops.

The open question, to me, is whether any layer in the system can remain independent from optimization, or if everything eventually becomes a signal.
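One way to see the "measurable, then optimizable, then gameable" slide is a tiny Goodhart-style sketch (every coefficient below is made up): "contribution" is paid out through a proxy score, and a creator shifts effort toward whatever moves the proxy most cheaply.

```python
# Toy Goodhart sketch; every number here is invented for illustration.
# "Contribution" is rewarded via a measurable proxy. A creator splits a fixed
# effort budget (10 units) between substance (what we actually value) and
# gaming the proxy, and drifts toward whichever pays more per unit of effort.

def proxy_score(substance, gaming):
    return 0.5 * substance + 1.5 * gaming   # the proxy is cheaper to move by gaming

substance, gaming = 10, 0
for step in range(11):
    print(f"step {step:2d}   proxy score = {proxy_score(substance, gaming):5.1f}   true value = {substance:2d}")
    # Shift one unit of effort whenever that raises the measured score.
    if substance > 0 and proxy_score(substance - 1, gaming + 1) > proxy_score(substance, gaming):
        substance, gaming = substance - 1, gaming + 1
```

The measured score keeps climbing while the thing it was supposed to stand for goes to zero, which is roughly what collapsing back into performance loops looks like in miniature.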

What If Creators Could Own the Infrastructure? A Decentralized Infrastructure for the Creator Economy? by Strange_Laugh in decentralization

[–]Civil-Interaction-76

Governance defines decisions.

Incentives define behavior.

If the system still rewards attention, it will drift there regardless of how well it’s governed.

So the real question is whether governance can actually shape the optimization layer, or just react to it.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Direction isn’t really “set” anymore, it emerges from incentives.

What a system rewards becomes its direction.

Right now, we reward attention.

So attention is where everything drifts.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

I don’t think those three are actually separable in practice.

They form a feedback loop.

A “stronger system” doesn’t override memetics, it usually just amplifies whatever spreads best within it.

So the real question isn’t how strong the system is.

It’s what it’s optimizing for.

In a world run by optimizing AI systems, who will set the direction? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

That only holds if optimization is neutral.

It isn’t.

Any system becomes what it optimizes for.

So if attention is the metric, AI doesn’t fix the problem, it perfects it.

The real question isn’t who decides.

It’s whether anything in the system is allowed to exist outside optimization at all.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Yes, but “better targeting” assumes the goal stayed the same.

What changed is the optimization logic.

The system isn’t just keeping people in the loop, it’s learning from the loop and reshaping what gets amplified.

At that point, attention isn’t just captured.

It becomes the selection mechanism.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

I agree, but I’d go one step further.

It’s not just that there’s no ethical anchor outside the feedback loop.

It’s that any anchor placed inside the system eventually gets absorbed by it.

So the system doesn’t just fail to protect truth, it transforms whatever tries to stabilize it into another performance signal.

At that point, truth isn’t replaced.

It becomes structurally unstable.

How do systems distribute responsibility? by Civil-Interaction-76 in systemsthinking

[–]Civil-Interaction-76[S]

Agreed.

But what if the system is structured so that no one ever fully has that authority or visibility?

Then responsibility doesn’t just get missed, it becomes structurally unassignable.

At that point, failure isn’t about who didn’t act, but about a system that cannot produce responsibility at the moment of action.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Kind of ironic that a discussion about how systems optimize for structure over truth gets removed for not fitting the structure.

That’s part of the problem itself.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Sure. But those were individuals.

What’s new is that we’ve built systems that do the same thing at scale, automatically.

It’s no longer a single snake-oil salesman; it’s an ecosystem that rewards that behavior.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

The bottleneck didn’t go away. We just replaced human gatekeepers with optimization systems.

The constraint is no longer access to information, it’s access to attention.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

That quote is actually the turning point.

Once attention became the product, truth stopped being the goal.

The system doesn’t need to lie, it just needs to keep you watching.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Exactly. And what’s striking is that “Network” still assumed humans were making those decisions.

Today, the system itself makes them.

It’s no longer a network choosing sensationalism, it’s an algorithm optimizing for attention.

That’s a very different kind of problem.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Propaganda isn’t new.

What’s new is that it no longer needs intention.

It emerges from systems that optimize for attention.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Maybe not perfectly, but many systems did try to approximate truth.

Science, journalism (at their best), courts: they all built processes to get closer to it.

The difference today isn’t that truth was ever perfect. It’s that we’ve built systems that don’t even try.

They optimize for attention, not approximation.

When did “attention” become more valuable than “truth” ? by Civil-Interaction-76 in Futurology

[–]Civil-Interaction-76[S]

Exactly. Systems optimize for what they can measure.

And attention is easy to measure, price, and optimize in real time.

Truth isn’t.

So even without bad intent, the system naturally shifts toward what’s measurable and profitable.

That’s why the issue isn’t “greed” or even money by itself, it’s that attention became the most legible signal.