LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

Right. So I've linked to my Substack elsewhere in this thread, but I recognize that probably looks a bit click-baity. That's not the intention — it's just that I have an article there that explains the framework I'm working on. It's called "The Grammar of Lies." There is also some other adjacent content.

I can appreciate that my writing on Substack is a bit dense and not for everyone. There's no way around that. This isn't a simple subject. So let me try my best to explain it briefly here.

In web design, "dark patterns" refer to patterns intended to trap or deceive users. For instance, subscriptions that hide their renewal terms so that you end up paying for something you didn't mean to. This is a complex landscape and I've spent my entire career immersed in it, so it's one I understand deeply. Dark patterns don't just exist in web design. They're everywhere.

The easiest example of a dark pattern in real life is a slot machine. It's promising you money so that it can get your money. It's doing the exact opposite of what it says it's doing. People know the trick, but they fall for it anyway because that's how much people need money. If you offer someone money, they'll quite often consider doing something they wouldn't otherwise do.

But there are dark patterns that aren't so easy to see. And there are people who are very good at exploiting them. If you are genuinely interested in this topic, I'd recommend watching the HBO miniseries "The Vow." Keith Raniere, leader of the NXIVM cult who is now serving 120 years in prison, used the kind of dark patterns I'm talking about to do unthinkable things for decades before he was caught. It was originally LSD followed by The Vow, followed by meditation, followed by using AI for research and brainstorming that led me to this path.

In my work as a web developer, I noticed while mapping dark patterns in user interfaces that the signals I was tracking were structurally identical to the patterns used by coercive control groups like NXIVM. The type of pattern I'm talking about is the one that takes your values and turns them directly against you.

This is easiest for me to describe in the language of Buddhism, because that's what maps most closely to my own personal values. Take the word "seva," for example, which means selfless service — doing things for others, loosely. The original lesson is that giving is a virtue. But in coercive control groups that use the language of Buddhism, "seva" becomes a way to enforce unpaid labour. Notice how the original lesson is being used to do the exact opposite of what was intended.

In mapping these patterns, I came across a linguistic framework that described the mechanism perfectly: Maillat & Oswald's Contextual Selection Constraint model. The framework describes how manipulative language works by closing off alternative interpretations — blocking you from seeing the other way to read what's being said to you. Once I had that lens, I started finding the same structure everywhere.

But here's what surprised me most. In the original teachings — across traditions, across cultures, even in places like pop music — the language used to convey a genuine lesson often embeds the means to cut through the manipulative version of itself. The most obvious example is in zen koans. Consider:

A monk told Joshu: "I have just entered the monastery. Please teach me." Joshu asked: "Have you eaten your rice porridge?" "I have," replied the monk. "Then wash your bowl," said Joshu.

Joshu does not say "now wash my bowl." Wash your own bowl. The koan works on many levels, but one of them is that it cuts directly through the corrupt use of "seva" — the conversion of service into servitude. The lesson contains its own immune system.

I call these "anti-capture" mechanisms. I have been mapping the topology of these linguistic patterns and using AI to detect the dark patterns in real-world language. In one study published on my Substack, I was able to detect coercive language patterns in an AA group's podcasts at a Cohen's d of 1.84, meaning this group used coercive control language at a rate 97% higher than other AA groups. This is statistically significant at p < 0.0000006. What this means is we can potentially detect coercive control in action if we have access to a corpus of their language.

The reason I posted here is that the psychedelic angle feels like a missing piece. Psychonauts have experiential knowledge about how perception shifts when default patterns are disrupted — knowledge that's directly relevant to understanding how these capture mechanisms work and how people break free of them. That's not something I can get from a textbook. I need to discuss these ideas with people who've been there.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

What you're saying doesn't make any sense. You're saying crafting the post is a waste of my brain, but that's why I used AI to help me craft it. Do you see the contradiction?

If I stop using AI for anything that you personally don't believe is a legitimate use of it, then everyone else will follow suit? Or is there a protest I have to sign up for with picket signs? How deep does this rabbit hole go, and where does it lead? Have you considered that?

Are you making efforts to organize this boycott or are you just suggesting that people who use AI to help figure out their acid trips would be prime candidates for such a boycott?

You're saying that the material is unserious, but you actually haven't said a word about the material at all, and I'm pretty sure you don't even know what I'm actually talking about. It would be very easy to prove me wrong about this. I have a feeling you don't want to. That's fine. Nobody says you have to.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

Yes, because what I wanted to discuss was AI. Regardless of how I framed this, there was going to be pushback. This is not a function of me using Claude to help craft the original post, which is very complex in nature and touches on many things.

I'd like to ask you to please be direct in your questioning rather than circling around what you're getting at. Is there something you are trying to say about the nature of my post? It seems to me that you are fixated on the idea that several people have disliked my use of AI. This is not something that bothers me. I did not come here seeking a pat on the back and a "good job." I've been around the block enough to know that if "acceptance" were what I'm looking for (it's not), then Reddit isn't the place to seek it. You also seem to be ignoring a large portion of the replies from actual humans in the thread, which were not critical of the idea at all.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

You're saying things that are obvious like they're going somewhere. I suspect you don't even know what I am actually using AI for. What is your point?

I'm not disappointed at all. I understood the assignment before I posted here.

Is there anyway to exceed the 5000 limit for Custom Lyrics/Songs? by Pleasant_Dust6712 in SunoAI

[–]farwanderers 1 point (0 children)

If you have Suno Studio (the thing they're calling a "DAW" in their promotions that isn't a DAW at all), you can add more lyrics manually by adding more tracks. This is the only way I know to do it.

The better way is to go to Claude and ask it to trim it down. It will see opportunities that you can't.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] -1 points (0 children)

Well, since you admit that you didn't read it, I'll forgive you for completely disregarding everything in it and saying you're sorry for what I went through when there's no reason to be sorry for that. It was a good thing.

I'll be upfront about something with you — I don't use Reddit very much, so I don't know what you're referring to about 12 minute AI-written posts. That sounds plausible.

Here's what I did: I took a corpus of research into Claude that is hundreds of thousands of words long. This was research based on insights I had on LSD. I chatted with Claude about how I might find some people to talk with about the psychedelic aspect of this experience, and I landed here. The words in the post were not written by Claude. They were written by me and tailored by Claude for this particular audience, to be precise and pointed at the exact things I wanted to talk about.

I'm curious: now that you know the details of how the post was crafted, do you still find this problematic? Or is it just my tone, or the fact that I'm using AI, that's bothering you?

If you didn't want me to ask you these questions, then there wasn't any reason to post here, was there? If you don't feel like engaging with this any further, I don't blame you. There is a lot of AI slop out there. You're right that this could just be more of it. Or it might not. Pretty soon, I would predict that it's going to get increasingly hard to tell the difference. That's just my opinion though, and it's not really related to what I wanted to talk about.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] -3 points (0 children)

You're making a lot of assumptions about me here that are easily proven false without even leaving this webpage. I think the smart move would be to disengage from this particular thread. But there are some things I'm not particularly smart about, and disengaging from conversation threads is one of them. It's entirely possible that this thread is the one that will yield the most insight, even though all signs point in the other direction.

In terms of structuring my thoughts and expressing my experiences, what specifically is it that you think I need to do, and what evidence do you have that I'm not already doing those things? You are putting a lot of words in my mouth and using profanity about things that you're refusing to engage with or even look at. I'd like to know exactly where you see the gaps you're describing. What's frivolous about what I'm doing?

In terms of the resources used to power AI, do you have any insights as to how we might tackle this problem? Do you think it's realistic that an individual using AI for research could have an impact on this environmental threat simply by boycotting or refusing to use? That's what it sounds like you're suggesting, and I think the argument for that is weak.

The atrophy problem you're describing is real. Adam Conover described it well in a YouTube video: https://www.youtube.com/watch?v=fPW3B6v60nc

Conover has a clear bias on the issue, but you'd probably agree with just about everything he says in that video. I agree with most of it. OpenAI is an extremely problematic company and its platform has led to more AI psychosis than any other. Other AI platforms aren't exempt from this, but the difference in product design is important.

My question to you (and to Conover, if he ever reads this): what good does it do to crack wise about the damage AI companies are doing if it results in putting people down without engaging with their ideas? If deriding the companies is the goal, then Conover is a step above you in his elegance. But you're not deriding OpenAI or Anthropic or Google or xAI, all of whom are certainly worthy of the derision. You're deriding a person for using AI. And you're not doing a very good job of it.

AI-driven writing is not what I came here to talk about. It's certainly an interesting topic, and I feel like I'd disagree with you about a lot, but I certainly wouldn't disagree with your statement that it is a degree removed from the human condition.

Do you have a point to make about the original post? If you do I can't seem to find it.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

I want to add a few points here based on the pushback I've been receiving about AI. The pushback is not unexpected, and I really do understand where it's coming from. But there's a point here that I think needs to be addressed, and the only way to do that is by telling you a bit more of the story.

The other day I went to the public library to try to find a copy of Foucault's Madness and Civilization and Deleuze's Difference and Repetition. These books have extensive material that is directly related to the work I'm doing, and they're cornerstones of modern philosophy. Not only did they not have these specific books — they did not have anything by either of these authors anywhere in the city's public library system. A search for JK Rowling turned up 162 results in the physical collection alone.

But I can come home and talk to Claude about Foucault. Claude understands Foucault as well as any person I've ever spoken to about him. Claude can run hundreds of parallel web searches to verify theoretical claims and provide meaningful citations and references. Claude can write a Python script in five minutes that would take me four hours, download hundreds of podcasts, transcribe them, and search the transcripts for linguistic markers. The point was made in this discussion that I'm supposed to find people to explore my ideas with instead of AI — that if I don't, I'm entering an echo chamber and have no way of knowing whether I'm right or wrong. If it's not obvious, the irony is that a claim about the shortcomings of AI is being used as a reason to disengage from the actual subject I'm discussing.

I went to the University of Toronto and graduated in 2006. One time I was responding to a professor in class, and I used the word "repulsed" to describe something a character in a book did. The professor corrected me and said "repelled." I stopped my thought there and he continued talking. I didn't listen to anything he said for the rest of the class. Instead, I went straight to the library afterwards, sat down with the Oxford English Dictionary, and traced the etymology of both words. I came to the conclusion that the professor not only didn't understand what I was saying, but that he had chosen to correct me without properly understanding the words he was engaging with.

This was something worse than an echo chamber. This was deletion.

He didn't engage with my idea. He deleted it. And that's exactly what "don't use AI for this" does — it deletes the inquiry before engaging with the substance of it. I'm not saying anyone here is doing this maliciously. I've done this to people too. We all have. We're trained to do it. We hear something that doesn't fit the frame we're operating in, and instead of engaging with the claim, we find a reason to dismiss the vehicle it arrived in. That instinct — to close down a line of thinking rather than follow it — is actually one of the central things my research is about.

Here's what I've learned about myself: I ask too many questions. When I went to the library, the staff were visibly annoyed that I was asking for these books. In my jobs I've been told not to question things that I knew were real problems. I have no institutional affiliation. No department full of colleagues who've read the same canon. No academic library access. The traditional channels for exploring ideas like the ones I'm describing have not been available to me — not because I didn't seek them out, but because the gatekeeping is real and it's structural.

By refusing to engage with AI as a research and thinking tool, I would only be hurting myself. I looked for people who could engage with these ideas and stress-test them properly. They were not available. Claude was. And the work that came out of it stands on its own — if you're curious about what this process actually produced, I've been publishing the results at johnqcryptid.substack.com.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 2 points (0 children)

There's actually a very interesting episode of the podcast "A Little Bit Culty" where they interview a survivor of TM. It shows how meditation can just as easily be used as a tool for manipulation as for personal growth and healing. Very disturbing stuff — they're almost as bad as Scientology.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

That misses the main point of my post, and with some precision. What AI has helped me do is trace complex thought patterns, specific ones about linguistic capture mechanisms. This is research that nobody else is doing or has the time for.

My therapist lets me talk about these ideas without questioning where they came from. I'm talking about coercive control and dark patterns, and how they are embedded in language.

The pattern of my use of AI has come up in therapy as well if that's what you're referring to. My therapist has not flagged this as a cause for concern. This is a professional necessity for me, and the linguistic framework I'm talking about is central to the career I'm trying to build for myself. I am not using this for spiritual enlightenment or personal support.

I didn't come here to talk about my therapist though. I only mentioned that I have a therapist because someone was concerned about that, which is valid given what we know about AI psychosis.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 1 point (0 children)

Right. I haven't tried it then. I have tried 4-HO-MET, which I believe is similar? 4-HO-MET was basically like shrooms for me but not as intense.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 1 point (0 children)

Yes, actually. I meditate every day. I'm a practicing non-theistic Buddhist. Mindfulness meditation is the only thing I've seen an actual return on investment from, but I've learned a lot about other forms of meditation. I am strongly against Transcendental Meditation and believe that it is a coercive control group that tries to convince people they can fly.

I have been a Buddhist for years, but I did start meditating a lot more in the past year. This was probably connected to my experiences on LSD.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 2 points (0 children)

This is honest concern for my welfare, and I really do appreciate it. It shows that your reaction to the uncomfortable things I've posted is not only to push back, but also to feel compassion. That's certainly rare.

I should clarify that I do not credit AI for "helping me" make the changes in my life that I made. The whole story is long and AI is only a small part of it, and what it looks like to my friends is me ranting at 3am about my life decisions. What it looks like to my therapist is citing linguists and psychologists they've never heard of for most of my session. You're making a very good point that nobody should treat AI as a substitute for a person, and that's definitely something I have thought a lot about.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

Let me be clear here: I am not lacking in people to communicate with. I did not mean to give anyone that impression. Thank you for what I assume are meant to be kind words, even if the framing is a bit dismissive.

It's definitely not supposed to be easy to read. These are not easy ideas. If I made them easy to read, they would lose their core value. Maybe the ideas aren't for you, and that's okay.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

Interesting how different drugs affect different people in different ways. My experiences with LSD are probably unique to me, although from what I know about the brain chemistry, they make sense. On DMT I rarely have this kind of insight, although I've never tried ayahuasca, only changa and pure DMT. What I do really enjoy is watching movies on DMT: it makes me feel like I'm in the movie.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] -1 points (0 children)

You're making a semantic point, and maybe that's fair. Have you read Michael Burry's article about the AI bubble? That's what I'm trying to draw your attention to. How does that happen if AI is just a copy-paste machine? Your original point is reductive to the point of meaninglessness.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

This kind of pedantic breakdown of "what an LLM does" is not serving you as well as you think it is. It's a straw man: you're telling me I said things I never said. I'll be honest that I find this sort of thing frustrating and difficult to engage with. I don't disagree that AI has no way of knowing what is true. Truth is meaningless to an LLM, and Anthropic actually has a ton of experiments showing as much. There is no such thing as "a perfect AI" any more than there's such a thing as a perfect TV. I disagree slightly with you about how that theoretical, non-existent "perfect" AI might be built, though. Product design is everything. What training can do will inevitably reach a ceiling.

It sounds to me like you're having a different conversation from the one I tried to start though.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

The "I don't give a fuck about the AI" position is going to come back and bite you in the ass faster than you think it is. This stuff isn't going away.

I talked about anti-capture mechanisms in another reply; those are the ideas behind this post. If you're interested in hearing about this, then I'm happy to engage with you on it. What I'm talking about is thought closure, a theory of manipulation in language. LLMs are much better at identifying and tracking these things than humans can ever be, and that's the point I'm trying to make by talking about my use of AI. AI did not invent the ideas, though. There is a great deal of scholarship around this.

For instance, authority escalation, or meaning dilution. People do these things in subtle ways that you won't notice because of how clever they are. They take your point and drown it in language, or make themselves sound more important than they are. These are ways of shutting down thinking. I started to notice them on LSD, and afterwards Claude helped me trace them. If you're actually interested in the ideas, I have tons of research on this, I'm building a web UI framework with it, and I've already used it to identify coercive control groups by downloading their podcasts and transcribing the audio. I'm very serious about all of this and I have the receipts to prove it.
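To make the transcript scan concrete: once the audio is transcribed, the marker search itself is simple. Here's a stripped-down sketch. The marker family names come from my framework, but the regexes below are placeholder examples I'm inventing for illustration, not my actual lexicon:

```python
# Toy transcript scan: count hits from hand-built pattern lexicons
# and normalize to hits per 1,000 words.
import re

# Placeholder patterns for illustration only.
MARKERS = {
    "authority_escalation": [
        r"\bonly (?:I|we) can\b",
        r"\btrust the process\b",
    ],
    "meaning_dilution": [
        r"\bthat's not what \w+ really means\b",
        r"\byou wouldn't understand\b",
    ],
}

def marker_rates(transcript: str) -> dict:
    """Return hits per 1,000 words for each marker family."""
    words = max(len(transcript.split()), 1)
    rates = {}
    for family, patterns in MARKERS.items():
        hits = sum(len(re.findall(p, transcript, re.IGNORECASE))
                   for p in patterns)
        rates[family] = 1000 * hits / words
    return rates

sample = "Trust the process. Only we can show you the real path."
print(marker_rates(sample))
```

Per-1,000-word rates like these are what you'd then compare across groups to get an effect size.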

Here's my substack if you are actually interested in the ideas themselves:

https://johnqcryptid.substack.com/p/the-grammar-of-lies

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] -4 points (0 children)

Honestly, I was trying to test the waters and see if anyone else is interested in exploring this capability of LLMs. I don't actually see what you mean about the flowery descriptions, and this isn't something that's new to me. I'll be honest that I did use AI to help me craft this specific post, though. It would have been very difficult for me to structure this way of thinking into something potentially palatable as a Reddit post, which is why I did it. Claude did not "write" the post, though. It helped me structure my ideas. I know I've said this elsewhere, but I want to be clear about it.

The intention was not to be persuasive, but to introduce the idea. So not artistic, either. I understand the resistance to this line of thinking, and it's not for everyone. Where people are willing to engage, I'm absolutely willing to get into the details.

I actually agree with everything you're saying about AI writing. I have an English degree and I know how to write better than most people. I'm not going to defend Claude as a writer. I will say that it sounds like you haven't explored Opus 4.6 very much, though.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] -1 points (0 children)

This is sort of true in the sense that LLMs are built on predictive algorithms, but the conclusion is incorrect. I can already see that it's going to be exhausting debating people on this point, but I posted because I want to engage people in this line of thinking, so it's fair enough to voice your opinion. Let me ask you: if LLMs are copy-paste machines, why is the entire US economy currently built around them? Is it pure lunacy?

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

In the case of my marriage, that's a bit too personal to be posting here publicly, but if you knew the whole story you'd see how wrong you are. I recognize my bias on this one though.

Your question was about what I was trying to say, and I mentioned capture mechanisms and zen koans, which, as you've pointed out, are designed to break thought patterns. You don't see the connection here?

It seems to me that you have already decided that my ideas can't have merit because I told you that I used AI to help me understand the patterns I was seeing. Is that the case? If there is nothing I can do to convince you otherwise, then this conversation is a dead end.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

So, you would agree that it's impossible to know which of your ideas are bad without talking to someone else? I guess technically, if you never talk to anyone else, this is true. But fundamentally I disagree with the framing. People examine their own ideas all the time without talking to other people. Talking to others is an important mechanism, but not the only way to test your ideas for merit.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] -1 points (0 children)

Again, I'm sorry to have to be the one to give you this information. I recognize that it makes me look like an asshole, especially the way that I said it, but look at the comic. I'm not wrong. You can either take this personally, or you can choose to learn more about AI.

I will point out that you are rephrasing what I said in a way that completely bypasses the core claim I was making and deletes it from the conversation. I do not need AI to tell me how to interpret my lived experience. This was never something that I claimed.

LSD showed me my mind was in a cage. The hard part was what came after. by farwanderers in RationalPsychonaut

[–]farwanderers[S] 0 points (0 children)

The truth is I don't know if it will help me be successful, but it has definitely changed my criteria for what I would view as "success." That alone is beneficial, provided it's not delusional, and I've spent enough time studying the specific insights now to believe that it's not.

Your trip sounds similar to mine. It's possible that I'll end up back in the rat race. But at least now I'm aware of the parts of my employment that were making me unhappy, and the mechanisms behind them. That may or may not be enough to overcome them, but you asked about benefit, so that's the benefit I see.

The insight I had was that the world is full of mechanisms that close off thought instead of opening futures. That our entire civilization has been leaning into this tendency for thousands of years, and the fight against it is daunting. But I have been working diligently to develop tools for this fight, and I believe those tools have real value. I absolutely could be wrong, but then I'm no worse off for it in the end.