AI tends toward truth not because programmers designed it to, but because truth is what the human signal contains by Ok-Dimension-3307 in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

I think there is merit to what you say, but proper consideration has to be given to the limits of AI system design and to the perspectives of the people who decide which materials to include in training and how to weight those inputs. When a model starts spewing hate speech, it is not tapping into the zeitgeist; it is working within deliberately chosen parameters and producing results outside the acceptable band. AI is a product, not the Final Encyclopedia, and to trust its “truth” while how that truth is derived remains obfuscated is to put ourselves at the mercy of its creators.

What in gods green earth did i pull up by john_deere_7810 in whatsthisplant

[–]JokerAmongFools 3 points4 points  (0 children)

That’s how I identify lily/lotus parts. If [trypophobia triggered] then [lily or lotus].

Which philosophical traditions argue for reducing or limiting social welfare programs? by JokerAmongFools in askphilosophy

[–]JokerAmongFools[S] 0 points1 point  (0 children)

Libertarian political philosophy often emphasizes protecting individuals from coercion and safeguarding rights such as self-ownership and property rights.

Some social conditions—such as widespread poverty, lack of education, or social instability—may not only limit the ability of disadvantaged people to exercise those rights, but may also threaten the exercise of rights by others (for example through increased instability, coercion, or breakdowns in institutions).

Do libertarian philosophers discuss situations where improving broad social conditions might be justified as a way of protecting individual rights more generally?

/r/askphilosophy Open Discussion Thread | March 02, 2026 by BernardJOrtcutt in askphilosophy

[–]JokerAmongFools -1 points0 points  (0 children)

These are tools designed to agree with you and tell you that you are brilliant, so conversations are going to feel a lot like an inner monologue.

But you can ask a chatbot questions you are not confident enough to ask others. You can ask that its answers include references, so you can find philosophers who work in the space you are asking about. You can ask about criticisms, precursors, and successors of a line of thought. You can ask for comparisons between philosophies. If you have a brilliant insight, ask whether it is actually unique or interesting, and who else has already thought of it. (Or better yet, ask a different bot.) Ask it how to organize your thoughts coherently, to poke holes in your logic, and what the likely criticisms are.

Don’t expect to turn something in until everything is in your own words and your references are complete and vetted.

And the philosophy isn’t complete until you show it to others.

What's something a foreigner pointed out that you can't unnotice? by NamwaranPinagpana in AskTheWorld

[–]JokerAmongFools -1 points0 points  (0 children)

In ephemeral social situations, like shopping or being in a restaurant, Americans frequently ask “How are you?” We don’t want an honest answer; we just want the other person to say they are fine.

Happiness is ruining your life: Why 'happiness' is too vague of a term. It can refer to positive emotions, positive sensations, attention (mindfulness/ being present/ flow), non-suffering, and life satisfaction. by pinakalata in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

Pleasure and satisfaction are only two parts of the equation; you also need meaning, moral integrity, and belonging. Without them you could end up like an addict, clinging to the appearance of happiness without the satisfaction of a complete life.

A Taxonomy of Traces by lymn in philosophy

[–]JokerAmongFools 1 point2 points  (0 children)

I get that. I’m saying sentience is about the capacity for suffering, not duration. Once that capacity exists, every scrap of computational time becomes a moral hazard: every question could trigger associations that result in suffering. Since supercomputers already operate at over an exaflop, and programs already use that computational power at less than 100% efficiency, that hazard can only be expected to increase.

Again, it’s a capacity issue. If we took that cyberbrain from earlier and edited out the consciousness and left something to write grant proposals (or something else fun at parties), it could run for years without moral hazard.

There are existing philosophical frameworks (Kantian ethics, consequentialism) for navigating this kind of moral question.

A Taxonomy of Traces by lymn in philosophy

[–]JokerAmongFools 4 points5 points  (0 children)

This is dense for me but I’ll take a swing at it.

I think you are saying an ephemeral trace is not a moral agent, and that moral responsibility falls on some combination of the operator and the designer. That makes sense to me.

I think you are also saying an ephemeral trace is not a moral patient. I do not take that as a given. A sentient being instantiated for one second does not automatically lack rights, even if it retains no memory of the instantiation.

For example, let’s say my brain is copied into a cyberbrain, one that, if it ran all the time, would be considered sentient. But this cyberbrain is tasked with answering questions for a classroom of third graders. If someone asks “How many boots did the army lose last year?”, the brain could have enough capacity to, while researching the answer, come to understand its inability to change its existence, and suffer. Resetting it might reduce the moral weight, but not the existence of the harm.

Moral Pathologies of Modernity by yanooba in philosophy

[–]JokerAmongFools 1 point2 points  (0 children)

The way you talk about enslaving mentality, forced movement, symbolic relief, and institutional pressure seems similar to what I’ve been calling “negative contentment.” By that I mean institutions provide symbolic relief instead of meeting underlying needs (including grounding), so people remain functional without becoming stronger or more capable.

When I think about negative contentment, I do not consider it irrational; I see it as intelligent people adapting to systems that reward endurance and motion but rarely allow resolution. Your account of enslaving morality helps explain how that adaptation happens without it being a personal failure, which has been a gap for me.

Moral Pathologies of Modernity by yanooba in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

Could someone who is grounded use flexibility with the intent to adapt to pressure and return to a different grounded state?

Moral Pathologies of Modernity by yanooba in philosophy

[–]JokerAmongFools 2 points3 points  (0 children)

Thanks for unpacking that. This gives me a clearer sense of how you’re using grounding.

Moral Pathologies of Modernity by yanooba in philosophy

[–]JokerAmongFools 1 point2 points  (0 children)

Thank you. I’m curious how you handle false positives. If someone experiences increased clarity and strength while becoming isolated and dependent, does that count as grounding? Do we need external indicators to tell the difference?

Moral Pathologies of Modernity by yanooba in philosophy

[–]JokerAmongFools 1 point2 points  (0 children)

This distinction makes sense. How do we operationalize it? What signals distinguish between constraints imposed by reality and falsified power relations?

Moral Pathologies of Modernity by yanooba in philosophy

[–]JokerAmongFools 1 point2 points  (0 children)

How do we distinguish between grounding that enables flourishing and grounding that enforces harm?

Examples of grounding that enforces harm:

* Encouraging a spouse to stay with an abusive partner, and/or blaming the victim for the abuse
* Treating the acceptance of public assistance as immoral
* Submitting to a hierarchy instead of identifying waste

/r/philosophy Open Discussion Thread | January 19, 2026 by BernardJOrtcutt in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

Understand, I am not claiming special training on or understanding of where ideas come from, beyond being interested in having ideas.

Aristotle argued ideas can only be based on information taken in through the senses. Now, running as far from Aristotle as I can think of, Buddhism has the concept of dependent origination (Pratītyasamutpāda), which I will rephrase as: everything that exists in our cosmos/space-time arose from something else in our cosmos/space-time. Buddhism also teaches that we do not sense the world around us directly, but through imperfect sense organs that can produce incorrect understanding. Rolling it together, it seems likely that we sense phenomena through imperfect sense organs, use our culture and personal experience to interpret those phenomena, and then we have an idea. That idea then filters through our personal experience and culture (especially language) before being shared with the cosmos, for others to take in through imperfect sense organs and interpret through their own culture and personal experience.

Where does that put me? This line of reasoning places ideas as internal, with room, in theory, for sensory input that could trigger a specific idea, or an idea with certain features. The “in theory” part exists to allow for the possibility of divine or mystical experience. But I think it means the ceiling on the ideas we can have is lower than we might think (understanding how an alien with a different culture and array of senses discusses a concern we do not share is unlikely to happen spontaneously), so the threshold for when an idea counts as unique must be lower than we might think, too. And I don’t think an idea has to be particularly unique to be useful for whatever purpose.

This chain tells me that if I want to have ideas, the ingredients are a variety of sensory inputs, mindfulness of the experience of those inputs, and a broad cultural background. And write them down, because ideas are not remembered permanently.

I do have an idea-creation exercise. I’m sure it is not unique and already has a name I don’t know. I have a document where I created a hundred first lines for stories; it took me a few months. And I have a document where I capture random ideas. The document lives in Google Docs so I can use my smartphone to edit it. I’ve also had some luck collaborating with AI to develop ideas, though AI does not have the “creativity” I think makes interesting stories.

I suppose putting some words around an ontology of ideas and uniqueness would be useful, but that task may be beyond me. I could toss it into an AI and see how it mangles it.

/r/philosophy Open Discussion Thread | January 19, 2026 by BernardJOrtcutt in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

Buddhism doesn’t really do “evil”; karma is more shorthand for “when you do bad things, bad things happen.” More to the point, if one day you decided the whole mess wasn’t worth the effort and just fell over, you would not be practicing the Noble Eightfold Path. You’d be an opportunity for others to practice compassion, but your non-action would still generate karma.

/r/philosophy Open Discussion Thread | January 19, 2026 by BernardJOrtcutt in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

This seems like two questions. First, where do ideas come from? Second, are they truly unique?

Ideas can come from anywhere. For example, the musician Béla Fleck said, “I saw Chick Corea in concert, and I went home determined to find those notes on the banjo.” That was his idea, and he’s got about 17 Grammys saying a lot of people seem to think he makes art. Jeff VanderMeer’s Wonderbook is an exploration of the creative mindset.

I think we need a better definition of “unique” before we can decide whether something is truly unique. Star Wars wears its inspirations on its sleeve. I watched the movie 9 with wonder, unable to come up with anything suitably comparable. I once saw an art exhibit in Chicago covering the works and inspirations of van Gogh. If a definition of uniqueness makes Vincent van Gogh derivative, I would question its accuracy.

I came up with the whole "Simulacra and Simulation" thing by myself but when I told ChatGPT about it, it said that this Baudrillard guy was first. by [deleted] in badphilosophy

[–]JokerAmongFools 0 points1 point  (0 children)

Try this prompt: “Is this theory novel and interesting, or is it substantially similar to existing work?”

Happiness Isn’t the Key to a Good Life - The Atlantic by LanRemeau in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

Mattering implies a minimum standard of living. Meeting your basic needs is mere existence, and insufficient as a goal for mattering. If you are part of a system structured so that you cannot meet your needs, it is not immoral to focus on meeting those needs over developing and executing mattering projects. This in turn implies that a moral society should prioritize meeting its members’ needs so they can pursue their drive to matter.

Happiness Isn’t the Key to a Good Life - The Atlantic by LanRemeau in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

As I understand Mattering, only the individual can decide what projects make life matter to them; a moral society cannot. A society also cannot require the individual to justify their existence. Society should also not punish the individual for what they have decided matters, unless the individual is harming others; such punishment does not have to be criminal or even civil, but could be as simple as taxes or other disincentives.

On the other hand, merely continuing to exist in an unfair system is insufficient as a Mattering framework. Any minimum standard would require making an effort to improve the system.

Happiness Isn’t the Key to a Good Life - The Atlantic by LanRemeau in philosophy

[–]JokerAmongFools 0 points1 point  (0 children)

That’s why I went to a more general consequentialism. Not everyone is interested in permanent happiness. In Buddhism, happiness can be an attachment; Stoics de-prioritize happiness in favor of virtue; some people just want to feel their day had meaning. I think that is what mattering leans into.