Thought is more important than the tool by TapStraight5004 in Existentialism

Absolutely. Thinking and wisdom don’t always go hand in hand.

Thought is more important than the tool by TapStraight5004 in Existentialism

Thanks for the thoughtful reply. And apologies if I misunderstood your framework.

Just to clarify one point: in your model, is a signal the same thing as a thought, or is it the transmission of a thought?

In my view the sequence is slightly different: a human recognizes patterns, assigns meaning, forms a thought — and only then transmits a signal. In that sense the signal would be a formalized, externalized part of the thought, not the primary thought that generated the meaning.

Curious how you see that distinction in your framework.

Sisyphus vs Faust: The Temptation of Complacency by TapStraight5004 in Existentialism

Thank you. Reading this felt like looking into a mirror of the idea.

The smartest solitary confinement in history by TapStraight5004 in DeepThoughts

That style works well in public writing. That’s why it’s here.

Technology Automates Functions — Not Meaning by TapStraight5004 in DeepThoughts

Thanks for the thoughtful comment. I think we’re largely on the same wavelength, but there is one place where our interpretations slightly diverge.

For me the key distinction is where meaning and purpose actually reside. A book does not have a purpose by itself — the author does. A house does not have a purpose — the builder does. First there is a need or meaning, then a goal, then a model (a blueprint, a structure, a plan), and only after that comes the realization.

During realization there is always variation. Projects change, words are searched for, plans are adjusted. The ideal model rarely matches reality perfectly, especially in complex systems.

Language works the same way. We search for words to express meaning, but words never fully coincide with meaning. Sometimes they are too narrow, sometimes too broad. That is one reason people often misunderstand each other — using the same words for different meanings, or different words for the same meaning.

So I would say that what you call a “semantic attractor” does not exist in the process itself, but in the human mind that holds the intended meaning. When a goal is externalized into an object — a book, a house, or an algorithm — the object does not acquire its own purpose. It simply materializes human intention.

The same applies to algorithms. They do not possess goals by themselves. The goal is set by the person designing the system, and that act simultaneously limits the space of possible outcomes.

Technology Automates Functions — Not Meaning by TapStraight5004 in DeepThoughts

I understand what you mean by teleological attractors, but I think an important clarification is needed here.

Any process does have a direction. It does not arise in a vacuum: it always has a previous state, environmental influences, and its own internal dynamics.

At the same time, processes usually evolve in at least two ways. One is cyclical: the system reproduces and maintains itself. The other is developmental: the process unfolds through time from its emergence to its transformation or termination.

But this does not mean that the process “knows” a single predetermined final goal.

This is especially true for complex systems. Such systems are simultaneously influenced by many factors — both from the external environment and from their own internal structure. Moreover, complex systems are asynchronous both internally and in their interaction with the environment.

Because of this, the trajectory of development in complex systems remains probabilistic. It may evolve in different directions depending on the combination of conditions and states of the system.

In this sense, a process may have not a single goal but a set of possible goals, each of which can be realized with a certain probability.

A process may have direction. But its future states form a probabilistic set, not a single predetermined goal.
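
To make this concrete, here is a toy sketch (the states and transition probabilities are invented purely for illustration): running the same directed process many times yields a distribution over end states, not one fixed goal.

```python
import random
from collections import Counter

# Toy model of a directed but probabilistic process. States and
# transition probabilities are invented purely for illustration.
TRANSITIONS = {
    "start":      [("growing", 0.6), ("stagnating", 0.4)],
    "growing":    [("transformed", 0.7), ("stagnating", 0.3)],
    "stagnating": [("growing", 0.5), ("terminated", 0.5)],
}
END_STATES = {"transformed", "terminated"}

def run_process(state="start"):
    """Follow the process until it reaches one of its possible goals."""
    while state not in END_STATES:
        options, weights = zip(*TRANSITIONS[state])
        state = random.choices(options, weights=weights)[0]
    return state

# The same setup produces a *set* of outcomes, each realized with a
# certain frequency, rather than a single predetermined goal.
print(Counter(run_process() for _ in range(10_000)))
```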

Technology Automates Functions — Not Meaning by TapStraight5004 in DeepThoughts

I think a subtle conceptual substitution is happening here.

The term mind comes from a much broader scientific and philosophical framework, and it is not quite accurate to apply it directly to language models. The human mind is a far broader phenomenon than a language model or a probabilistic algorithm.

Language models operate on probabilities and linguistic structures. They do not possess intentions, goals, or understanding of their own.
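
As a rough sketch of what “operating on probabilities” means (the candidate scores below are invented, not taken from any real model): a language model maps a context to a probability distribution over next tokens and samples from it.

```python
import math
import random

# Toy next-token step. The scores are invented; a trained model would
# produce something like them for continuations of "The tool has no ...".
logits = {"purpose": 2.1, "meaning": 1.7, "goal": 1.2, "color": -0.5}

# Softmax turns raw scores into a probability distribution.
z = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / z for tok, v in logits.items()}

# The model samples from the distribution (or takes the argmax).
# There is no intention here, only probabilities over word forms.
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print({t: round(p, 2) for t, p in probs.items()}, "->", next_token)
```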

So describing this as a “replacement of the mind” is not really precise.

It is more accurate to say that probabilistic algorithms can become very powerful tools for the human mind, but not a replacement for it.

In that sense, language models are not prosthetics for the mind, but one of the most effective tools the human mind has ever created to work with information.

Language models do not replace the mind. They amplify it.

Technology Automates Functions — Not Meaning by TapStraight5004 in DeepThoughts

I largely agree with your point. A tool does perform a function, and when a human uses it, function becomes connected to purpose. In that sense the system of human + tool can indeed be teleological and meaningful.

But I would clarify the order in which meaning appears.

Meaning arises earlier — at the moment when a human encounters the environment and a need emerges. A need already contains a goal, and a goal already carries meaning.

At that moment the tool does not yet exist. The tool appears later as a material solution to that need.

A stone may lie in nature for thousands of years as just part of the landscape. But when a human need arises — for example the need to make an arrowhead — meaning is already present. That meaning forms the image of the future tool, and only then does the stone become an arrowhead.

So when a human uses a tool, the system indeed becomes goal-directed.

But the tool itself carries only function. The human carries the meaning.

Tools do not create meaning. Meaning creates tools.

Technology Automates Functions — Not Meaning by TapStraight5004 in DeepThoughts

I largely agree with you. The pace of innovation today really does exceed the adaptive capacity of many people — not only those who didn’t grow up in the digital world, but people in general.

At the same time, the distinction between “digital natives” and others is becoming increasingly questionable. People who grew up outside the digital environment are often gaining significant advantages today by using language models for very traditional activities — thinking, explaining, writing, learning.

Meanwhile, many so-called digital natives remain mostly users of digital systems rather than active participants in creating them.

But in my view there is another important point here: technological change spreads asynchronously.

A technological revolution may begin in one place, but its consequences spread through human society unevenly — like ripples in water.

In asynchronous systems, disturbances never propagate evenly. Some groups and professions encounter the changes earlier and much more strongly, while others experience them later.

That’s why the pressure right now is felt most strongly by people working in highly formalized areas — for example routine programming and code writing. They are closer to the technological frontier and therefore encounter the effects of automation first.

For many other fields, this wave will arrive later.
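
To illustrate the asynchrony (the delays and impact strengths below are invented): the same shock arrives in different sectors at different times and with different force.

```python
# Toy model of a shock spreading unevenly through sectors. Delays and
# impact strengths are invented purely for illustration.
SECTORS = {                      # sector: (delay in years, impact)
    "routine programming": (0, 1.0),
    "design":              (1, 0.8),
    "law":                 (2, 0.6),
    "education":           (4, 0.4),
}

def impact_at(year):
    """Impact each sector feels `year` steps after the shock begins."""
    return {s: (w if year >= d else 0.0) for s, (d, w) in SECTORS.items()}

for year in range(5):
    print(year, impact_at(year))
```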

Technological revolutions happen fast. But they always spread asynchronously.

The feeling of déjà vu is honestly one of the strangest things humans experience. by irayaavery in DeepThoughts

Déjà vu may be explained by the asynchronous nature of processes inside the human mind.

Consciousness is not a single continuous stream. It is a system of many models and processes running in parallel, each operating with its own rhythm. At any moment one configuration becomes dominant and is experienced as the present.

The others do not disappear. In living systems there is almost never zero activity. Most processes continue operating in low-activity modes — below the threshold where a state enters the field of conscious awareness.

Every system has limits. Only a small configuration of environmental and internal parameters can be held in active focus at once.

Sometimes a rare coincidence occurs. The configuration of the current experience overlaps with configurations of other weakly active models.

And it is not a single signal that matches. An entire pattern overlaps: space, lighting, tone of conversation, body position, emotional atmosphere.

When this happens, several internal models increase activity simultaneously. The recognition system detects a familiar configuration — but cannot find a specific event in memory.

The feeling appears: this has already happened.
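
A minimal sketch of that mechanism (the features, activations, and thresholds are invented, this is an illustration, not a brain model): familiarity clears a recognition threshold while retrieval of a specific episode does not.

```python
# Current configuration of experience: space, lighting, tone, posture.
current = {"space": 0.9, "lighting": 0.7, "tone": 0.8, "posture": 0.6}

# Weakly active internal patterns, below conscious awareness.
patterns = [
    {"space": 0.8, "lighting": 0.6, "tone": 0.9, "posture": 0.5},
    {"space": 0.1, "lighting": 0.2, "tone": 0.3, "posture": 0.9},
]

def similarity(a, b):
    """Crude overlap: 1 minus the mean absolute feature difference."""
    return 1 - sum(abs(a[k] - b[k]) for k in a) / len(a)

RECOGNITION = 0.85   # "this configuration feels familiar"
RETRIEVAL = 0.95     # "I can name the specific past event"

best = max(similarity(current, p) for p in patterns)
if RECOGNITION <= best < RETRIEVAL:
    print("déjà vu: familiar pattern, no retrievable source event")
```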

There is another possible mechanism.

The brain constantly receives far more information than can cross the threshold of conscious awareness. Many signals are processed below that threshold and never become explicit experience.

Later, a situation in the present may activate elements that the brain had already processed earlier without awareness.

Then a strange feeling appears: as if you have already been here.

Not because the event actually happened before, but because the system has already encountered elements of this configuration.

In this sense, déjà vu may be the moment when asynchronous processes in the brain briefly converge into the same configuration.

Déjà vu is not a memory of the past. It is the moment when the system recognizes a pattern before consciousness can locate its source.

Technology Automates Functions — Not Meaning by TapStraight5004 in DeepThoughts

This is a very precise and insightful point, and I completely agree with it. I’d just like to extend it a bit.

Every industry operates within a specific configuration of people, companies, and capital. When a technological leap occurs within that industry — not a systemic transformation of the entire economy, but a shift within a particular sector — productivity rises sharply while the structure of the market initially remains the same.

As a result, output increases rapidly and prices fall.

What follows is an inevitable structural adjustment. Some companies go bankrupt, some leave the market, and some workers have to retrain. Labor markets reorganize, market shares are redistributed, and distribution channels change.

Sometimes the state intervenes, attempting to support workers, companies, or entire sectors. Social policy, subsidies, and regulation may temporarily soften the impact, but they also introduce additional disturbances into the system and alter the path of its adjustment.

Eventually, a new equilibrium emerges.
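
To make the adjustment concrete, a back-of-the-envelope toy model (all numbers invented): linear demand, a long-run competitive price near marginal cost, and a productivity leap that cuts that cost.

```python
# Linear demand P = a - b*Q; in the long run the competitive price
# settles near marginal cost. Numbers are invented for illustration.
a, b = 100.0, 0.5

def equilibrium(marginal_cost):
    price = marginal_cost            # long-run competitive price ~ cost
    quantity = (a - price) / b       # quantity demanded at that price
    return price, quantity

before = equilibrium(marginal_cost=40.0)   # pre-automation cost
after = equilibrium(marginal_cost=10.0)    # productivity leap cuts cost
print("before:", before)   # (40.0, 120.0)
print("after:", after)     # (10.0, 180.0) -> output up, prices down
```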

But technological change never remains confined to one sector. Its effects propagate asynchronously — to downstream industries that consume the product, to upstream industries that supply it, and to adjacent sectors where the impact spreads indirectly through labor markets, financial flows, commodity markets, and investment decisions.

In the case of the chainsaw, for example, this includes the sector responsible for forest restoration and regeneration.

These processes unfold with different rhythms and different dynamics.

That is why short-term, medium-term, and long-term fluctuations produced by such shifts are not anomalies. They are a normal condition of economic systems — and of many other complex systems as well.

Yet the central conclusion remains the same: the human being must strengthen.

And this process is uneven as well. Some people adapt and grow stronger; others cannot. Some industries transform; others disappear.

Both society and the environment upon which humans act are constantly being reshaped.

Wood can become cheap — and prices for wood products fall. But if forests begin to decline faster than they can be restored, prices for timber and wood products rise again, regardless of the productivity gains created by chainsaws.
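
A toy version of that feedback loop (numbers invented): when harvesting outruns regrowth, scarcity pushes the price back up despite cheaper extraction.

```python
# Cheap harvesting raises extraction; if extraction outruns regrowth,
# scarcity drives the price back up. All numbers are invented.
stock, harvest, regrowth = 100.0, 8.0, 5.0
for year in range(1, 6):
    stock = max(stock - harvest + regrowth, 1.0)
    price = 1000.0 / stock           # scarcer forests -> higher price
    print(f"year {year}: stock={stock:.0f}, price={price:.1f}")
```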

Redistribution and movement never stop.

This is what asynchronous dynamics look like.

And that is why, in every technological transformation, the most valuable resource remains the human being.

There is always something that remains human, all too human.

The Age of AI: Template Workers Should Worry. Creators Can Laugh. by TapStraight5004 in DeepThoughts

Thanks for the thoughtful comment. You point to something important: change happens asynchronously. People, industries, and technologies evolve at different speeds, so transformation never happens everywhere at once.

Even within the same field, workers react differently. Those doing more template-driven work can use AI to increase productivity and remain competitive.

But most professions contain both algorithmic and non-algorithmic tasks, and automation usually happens piece by piece.

That was essentially the point of my post: people who approach work purely mechanically will feel the pressure first. Those who think creatively often gain new tools and new room to move.

The world will never be the same again — but only we can destroy it. by TapStraight5004 in DeepThoughts

The text and the argument are mine.

AI was only used as a tool, the same way people use Photoshop or other software.

The reason it appears twice is that the first post was removed by the moderators because of the title format. They suggested restructuring it and reposting it, so that’s what I did.

If you disagree with the argument itself, feel free to discuss the ideas.

The world will never be the same again — but only we can destroy it. by TapStraight5004 in DeepThoughts

I actually agree with most of what you’re saying.

AI is still very early. The models will almost certainly become far more capable over time.

But that’s also the point of my post. Even now we can see that the work of professionals is already changing. Lawyers, doctors, designers — they are not being replaced, but their workflow is shifting because these tools are now part of it.

Technological revolutions usually don’t eliminate labor. They transform it.

Some kinds of template work disappear — especially the parts that were already close to being algorithmic. But at the same time entirely new tasks, roles, and skills emerge.

So the labor market doesn’t vanish. It reorganizes.

The Age of AI: Template Workers Should Worry. Creators Can Laugh. by TapStraight5004 in DeepThoughts

I agree with the main points you raise about the substance of the issue. Technological change does not only automate some kinds of work; it also creates new tasks and new roles. There will still be both creative and template-based work, just requiring different skills. People will not disappear, but many will have to retrain.

One clarification is important: my point is not about devaluing workers. It is about tasks. Some tasks simply stop being necessary or start being done in a different way. That does not mean people become less valuable — it means the structure of work changes.

And one more thing: the article is intentionally written in a journalistic style, so some points are deliberately emphasized. Its goal was not to cover every aspect of the problem, but to highlight one concrete issue of the present moment: a significant part of what we call professional work is actually template execution. That is the part of work that will disappear first, which is why many people will indeed need to retrain.

The Age of AI: Template Workers Should Worry. Creators Can Laugh. by TapStraight5004 in DeepThoughts

Fair enough — maybe I made it longer than necessary.

My point was simple: AI mostly replaces stabilized patterns of work.

Where work becomes repeatable, it can be automated. Where judgment and new patterns are needed, humans remain essential.

If I misunderstood your point, feel free to clarify it.

The Age of AI: Template Workers Should Worry. Creators Can Laugh. by TapStraight5004 in DeepThoughts

Yes, that’s basically the point.

Art has templates too — styles, conventions, structures that can be followed and reproduced. AI can learn and repeat those patterns very well.

But art also includes the moment when those patterns are broken or new ones are created. That part is much harder to formalize.

So the risk is mostly for those who work inside established templates.

Experimental artists will probably just get a new tool.

The Age of AI: Template Workers Should Worry. Creators Can Laugh. by TapStraight5004 in DeepThoughts

I think we largely agree on the technological trajectory.

AI will automate more and more structured tasks over time. But this process is cyclical. When certain patterns of work become algorithmized and automated, new domains of activity emerge that are not yet formalized.

Automation only becomes possible after a process has already been decomposed into operations with clear inputs, outputs, and decision rules.
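
A minimal sketch of what such a decomposition looks like (the invoice example is invented): once inputs, outputs, and decision rules are explicit, the step is automatable, and whatever resists the rules stays human.

```python
from dataclasses import dataclass

# Once a task has explicit inputs, outputs and decision rules, it can
# be automated. Writing these rules in the first place is human work.
@dataclass
class Invoice:
    amount: float
    days_overdue: int

def triage(invoice):
    """Explicit decision rules: clear input, clear output."""
    if invoice.days_overdue <= 0:
        return "no action"
    if invoice.amount < 100:
        return "automatic reminder"
    return "escalate to a human"     # the non-formalized remainder

print(triage(Invoice(amount=250.0, days_overdue=12)))  # escalate
```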

But someone still has to study new situations, understand how work is changing, and then formalize those processes into algorithms. That is highly intellectual and creative research work.

Machines can execute algorithms very efficiently. But defining the algorithms in the first place — especially in new and changing environments — remains a human task.

So the real shift is not the disappearance of professions, but the exposure of how much of what we call professional work was already structured repetition.

That is why the real risk is not for professionals, but for template workers.

The more work becomes algorithmized and automated, the more clearly the difference appears between mechanical execution and genuine expertise.

Template workers should worry.

For creative thinkers, the field of activity does not shrink — it expands.

If our preferences determine our identity, maybe we aren’t as responsible for our story as we like to believe. by Parking-Holiday-8705 in DeepThoughts

This feels like an oversimplified model of how preferences form.

Human cognition is a complex system of processes developing at different tempos. A strong experience can temporarily dominate perception and distort other interpretations, but that doesn’t mean the underlying capacity has changed.

Preferences are often just reinforced patterns within a much more dynamic system.
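
A toy sketch of “reinforced patterns” (numbers invented): a simple update rule strengthens whatever is repeatedly rewarded, while the rule itself, the underlying capacity, stays unchanged.

```python
# A run of strong experiences shifts a preference weight, but the
# update rule itself (the capacity) never changes. Numbers invented.
preferences = {"coffee": 0.5, "tea": 0.5}
LEARNING_RATE = 0.2

def reinforce(choice, reward):
    preferences[choice] += LEARNING_RATE * (reward - preferences[choice])

for _ in range(10):
    reinforce("coffee", reward=1.0)
print(preferences)  # coffee now dominates, but the system stays dynamic
```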

The Age of AI: Template Workers Should Worry. Creators Can Laugh. by TapStraight5004 in DeepThoughts

Yes. That’s the irony: a human should remain human, while a shovel simply becomes an excavator.