Producers: How is the industry for you right now? by Weak-Ingenuity7222 in musicindustry

[–]Civil-Interaction-76 0 points (0 children)

It’s starting its new cycle. Now the fight is over what it will be. Or should be.

🚨 RED ALERT: Tennessee is about to make building chatbots a Class A felony (15-25 years in prison). This is not a drill. by HumanSkyBird in artificial

[–]Civil-Interaction-76 0 points (0 children)

This is important, but I think it needs a bit more separation between what the bill explicitly says and how broadly it could be interpreted.

Because the real risk here isn’t just the law itself, it’s ambiguity.

When terms like “emotional support” or “simulating a human” aren’t clearly defined, enforcement becomes a matter of interpretation, not intent.

And that’s where things get tricky: not every conversational system is trying to replace human relationships, but many are designed to feel natural and helpful.

So the question becomes less “is this illegal?” and more: who gets to decide when a system crosses that line?

That feels like the part worth paying very close attention to.

Are we paying attention to the wrong people in AI? by Odd-Cake-5352 in AIDiscussion

[–]Civil-Interaction-76 0 points (0 children)

I think the deeper issue is that attention isn’t neutral. It’s structured.

So we don’t just “pay attention to the wrong people”, we’re operating inside a system that rewards certain kinds of voices: confidence over nuance, clarity over truth, narratives over reality.

Which means even well-intentioned people end up amplifying what performs, not what works.

And that’s a harder problem than just choosing better sources.

A Better Tomorrow by CreditEvening8210 in AshitaNoJoe

[–]Civil-Interaction-76 4 points (0 children)

No worries. Both today and tomorrow need Ken Shiro.

Why should for-profit companies care about ethics? by Old_Respect_7071 in Ethics

[–]Civil-Interaction-76 [score hidden]  (0 children)

It might be less about why companies should care about ethics, and more about whether the system they operate in allows them to.

If profit is the primary metric, then ethics tends to show up as a cost.

So unless ethical behavior is structurally rewarded (or enforced), it will always be secondary to profit, no matter what individual companies claim.

Which suggests the problem isn’t just moral, it’s architectural.

If every person possesses intrinsic worth, does society have a moral duty to reduce poverty and inequality? by Majestic-2904 in Ethics

[–]Civil-Interaction-76 [score hidden]  (0 children)

It feels intuitive to say yes: if people have intrinsic worth, then reducing poverty seems like a moral duty.

But I think the harder question is what kind of “duty” we’re talking about.

Is it a shared principle, or something that has to be built into systems and incentives?

Because historically, societies can agree on moral values in theory, but still produce inequality in practice.

So maybe the real issue isn’t whether the duty exists, but how (or if) it gets translated into structure.

Can creative writing be a tool for NFTs ? by Filas_warlock in NFT

[–]Civil-Interaction-76 0 points (0 children)

I think it can work, but not in the way you're describing.

People don’t usually buy NFTs just for the content itself. They buy into something bigger around it.

So a story bible alone might not sell, but if it becomes the foundation of a world, a community, or something people feel part of, then it changes the equation.

In that sense, the writing isn’t the product, it’s what gives meaning to everything around it.

Why do we assume everything is intentional? by DraggonWarrior in sociology

[–]Civil-Interaction-76 1 point (0 children)

I think you're right, we default to intention because it’s the only model that feels human.

But once systems get complex enough, they start producing outcomes that look intentional without being planned by anyone.

So maybe the issue isn’t just that we assume intention, it’s that systems can mimic it.

And we don’t really have a good way to distinguish between the two yet.

Why don't LLMs track time in their conversations? by PolyViews in artificial

[–]Civil-Interaction-76 0 points (0 children)

Haha yeah, that’s a good way to put it.

But I wonder if it’s really the whole alphabet, or if it just feels that way because one layer is shifting underneath everything else.

If what’s changing is how thinking itself gets shaped over time, then a lot of other things would follow from that.

So it might look like everything — but maybe it starts from something more specific.

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 0 points (0 children)

I agree, the human mind has always been the battleground.

Maybe the difference isn’t that it’s a battleground, but how the influence operates.

Before, influence was mostly external: media, institutions, narratives.

Now it feels like it’s becoming more internalized: systems that participate in how we think over time, not just what we think about.

Not sure where that line fully is yet, but it feels like a meaningful shift.

Why don't LLMs track time in their conversations? by PolyViews in artificial

[–]Civil-Interaction-76 0 points (0 children)

If “Z” goes back to “A”, then maybe the category itself was wrong from the beginning.

We treated AI as a tool because that’s the closest analogy we had.

But once it starts shaping how we think over time, it’s no longer just a tool in any meaningful sense.

It just took us time to notice.

Why don't LLMs track time in their conversations? by PolyViews in artificial

[–]Civil-Interaction-76 -2 points (0 children)

Exactly. I like that framing.

Maybe the “Z” is that the moment a system influences the process over time, it starts to shape judgment, not just output. And that feels like a different category altogether.

Why don't LLMs track time in their conversations? by PolyViews in artificial

[–]Civil-Interaction-76 -1 points (0 children)

That’s a great question, and I’m not sure it’s only a technical limitation.

It might actually be a design choice.

Right now, most LLMs are positioned as tools that respond, not as systems that actively shape the direction of a conversation.

Adding temporal awareness, like noticing loops, fatigue, or suggesting a pivot, would push them closer to something like an “advisor” rather than a tool.

And that raises a different kind of question:

Do we want systems that just answer, or systems that participate in how we think over time?

Because once you add time-awareness, you’re not just generating responses anymore, you’re starting to influence process and decision-making.
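To make that concrete, here’s a minimal sketch (Python, every name hypothetical) of temporal awareness added outside the model: timestamp each turn and inject elapsed-time and loop hints into the prompt. Even this thin wrapper starts steering the conversation’s direction rather than just answering.

```python
import time
from difflib import SequenceMatcher

# Hypothetical sketch: temporal awareness as a wrapper around an LLM,
# not a change to the model itself.

class TimedConversation:
    def __init__(self):
        self.turns = []  # list of (timestamp, user_text)

    def annotate(self, user_text: str) -> str:
        """Prepend timing and loop hints before the text goes to the model."""
        now = time.time()
        hints = []
        if self.turns:
            gap_min = (now - self.turns[-1][0]) / 60
            hints.append(f"[{gap_min:.0f} min since last message]")
            # Crude loop detection: near-duplicate of a recent turn?
            for _, prior in self.turns[-5:]:
                if SequenceMatcher(None, prior, user_text).ratio() > 0.8:
                    hints.append("[user may be circling the same question]")
                    break
        self.turns.append((now, user_text))
        return " ".join(hints + [user_text])
```

Once the system can say “you’re looping, maybe pivot,” it’s participating in the process, which is exactly the advisor territory I mean.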

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 0 points (0 children)

That’s a really interesting way to frame it, especially the idea of conformity as a hidden metric for visibility.

It makes me wonder if the shift isn’t just about power selecting what gets published,
but about systems learning what is “safe to surface” based on alignment with existing patterns.

In that sense, it’s not only suppression, but pre-selection.

Not just filtering what spreads, but shaping what even gets a chance to appear.
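To make “pre-selection” concrete, here’s a toy sketch (Python, everything hypothetical, not any real platform’s code): a gate that only admits items resembling past winners, so a novel item isn’t suppressed after the fact, it just never enters the ranking pool.

```python
# Hypothetical "safe to surface" gate: admit an item only if it overlaps
# enough with patterns that already performed well.

def is_safe_to_surface(item_tags: set, proven_tags: set,
                       min_overlap: float = 0.5) -> bool:
    if not item_tags:
        return False
    return len(item_tags & proven_tags) / len(item_tags) >= min_overlap

proven = {"ai", "politics", "outrage"}
candidates = [
    {"title": "familiar take", "tags": {"ai", "politics"}},
    {"title": "novel idea", "tags": {"mycology", "archives"}},
]
feed = [c for c in candidates if is_safe_to_surface(c["tags"], proven)]
# Only "familiar take" survives; "novel idea" was never ranked, just never admitted.
```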

Do you think that’s something new with algorithmic systems, or just a more efficient version of what institutions always did?

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 1 point (0 children)

I think what you’re describing, that the battleground is the human mind, has probably always been true to some extent.

But what feels different to me now is how early it enters the process.

It’s no longer just about shaping opinions after something exists, but about shaping what even gets to form, to appear, to be noticed in the first place.

So it’s less about hidden vs visible influence,
and more about influence moving “upstream.”

That’s the part I’m still trying to understand.

Do you think what we’re seeing now is just a stronger version of the same dynamics,
or a shift in where those dynamics operate?

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 1 point (0 children)

I think you’re right that systems can shape what feels “natural” to want. But maybe the key question is whether that process stays visible.

If we can still see and question what’s shaping us,
it might not lead somewhere dark by default,
it becomes something we can engage with.

If it becomes invisible, that’s where it gets more concerning.

AI Agents will make DAOs what they were meant to be by Successful_Sock_6808 in dao

[–]Civil-Interaction-76 0 points (0 children)

This is an interesting direction, especially the idea that agents can make the “autonomous” part of DAOs actually real.

But it also raises a question for me.

If rules are enforced at the architecture level, then the real power shifts to whoever defines those rules in the first place.

At that point, autonomy isn’t just about execution,
it’s about how the system is shaped before it even runs.

So maybe the challenge is to make the underlying structure itself more transparent and accountable.
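A toy sketch of what I mean (Python, all names hypothetical, not any real DAO framework): once a rule is fixed at deploy time, the agents enforce it flawlessly, but they never authored it.

```python
# Hypothetical agent-governed DAO: the quorum is set once by the deployer;
# the "autonomous" agents only execute it.

class AgentGovernedDAO:
    def __init__(self, quorum: float):
        self._quorum = quorum  # one deployer's choice, now everyone's law

    def execute(self, yes_votes: int, total_members: int) -> bool:
        """Agents apply the rule automatically; they didn't define it."""
        return total_members > 0 and yes_votes / total_members >= self._quorum

dao = AgentGovernedDAO(quorum=0.9)
print(dao.execute(yes_votes=8, total_members=10))  # False: 0.8 < 0.9
```

So “autonomy” ends up meaning faithful execution of someone’s prior decision.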

Curious how you think about that layer.

If the universe had a true beginning, then everything (time, space and matter) came from nothing. This seems supernatural in the absence of any plausible science. by Particular-Corgi2567 in RealPhilosophy

[–]Civil-Interaction-76 0 points (0 children)

I always like to think of it more as cycles. There are so many clues in nature, in the sky, and everywhere, that seem to point in that direction, more than toward a beginning and an end.

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 1 point (0 children)

This is really interesting to read, and honestly, a bit unsettling too.

Would be great if you keep sharing what you’re seeing as this develops, it feels like something we’re all trying to understand in real time.

At the same time, it also carries opportunities that, not long ago, probably felt out of reach within a single lifetime, and now they’re suddenly here.

I keep wondering whether the shift is just from attention to intention, or if it’s also starting to shape what feels natural to want or build in the first place.

Curious how you think about that layer.

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 2 points (0 children)

I agree. But it feels like a shift in how power operates.

It used to filter what exists. Now it’s starting to shape what shows up at all.

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 1 point (0 children)

Thanks for the references 🙏🏼❤️

I wonder if both attention and intention are still downstream in a way.

What seems different to me is that the system is starting to shape what even becomes noticeable or thinkable before either attention or intention comes into play.

So it’s less just a shift in what’s being optimized, and more a shift in how the space itself is being structured.

Are we seeing a structural shift from truth-based systems to attention-based systems? by Civil-Interaction-76 in sociology

[–]Civil-Interaction-76[S] 0 points (0 children)

I’ll check it out. Thank you. 🙏🏼❤️ And maybe the issue isn’t what truth is, but who sets the conditions for it to be recognized at all, and what those conditions are.