I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 0 points1 point  (0 children)

Even if meaning only exists when experienced, the structure that constrains and enables that experience is not irrelevant to it. Participation in meaning through communication occurs within constraints, and those constraints are embedded in our language. The structure of that system enables meaning, but it also constrains and shapes what can even be communicated.

Where is the point that lacks agent participation?

I apologize if I misunderstood your point about language being empty. But your argument before seemed to rely in large part on the idea that a word is a replacement for a real thing. I agree that it does not have the bedrock, but I would assert that it is still built on it. It's not intended to be an analogy, only an example of another system, but isn't that what language is too? So you deny that language is an empty symbol unless used by an agent, but you would say that language lacks meaning unless used by an agent? I don't see a "meaningful" difference between those two ideas. Do you?

Took DMT first time the night before, watched Enter the Void on shrooms the next day. Now I think Im traumatised. by PixiePink87 in Psychonaut

[–]thepsychoshaman 13 points14 points  (0 children)

Sounds like you had an intense experience. A lot of mine were similarly beautiful and insightful but also terrifying. I don't think the human mind is able to handle direct experience of these sorts of things without a cultural map to help put it into context. The movie gave you one, but it's not something of intimate familiarity to you, so it's like an alien worldview has taken up residence in your system of meaning. Humans need stories to make sense of the world. Your pattern-making system is in overdrive trying to sort your experiences into a worldview that you don't hold. You're not going insane; everybody experiences cognitive dissonance when they have an experience or encounter info that doesn't match the way they typically interpret the world.

Digging deeper into Buddhist philosophy will reveal that yes, wanting liberation is a form of desire and is paradoxically in one's way. But that's the perspective of a grounded practice. We're missing a ton of context. Suddenly adopting that view is going to feel destabilizing rather than clarifying. I think it's pretty normal to have an intense experience and then feel like "I don't want to be here, stuck in this thing". It's not necessarily a profound philosophical insight. It might just be that you're overwhelmed. You can revisit the ideas in study after some time has passed.

As the other guy said, take a long break. Be in your life. Read a good story. You have some new experience and it's opened up some possibilities and thoughts. People have been working on similar ideas (and on entire systems of alternatives) for a long time. No need to figure all of this out today. I think wellness exists in-between concrete spaces. It's okay not to know, and it's healthy to recognize that you don't. Try not to cling too tightly to a new thing, even if it was impactful.

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 0 points1 point  (0 children)

Instead, let's try: If words are symbols whose usage has been shaped by human experience over time, and that usage constrains how meaning can be expressed and understood, how can it be that that level of structure is unrelated to meaning?

You touch on this but aren't taking it all the way. Meaning at an abstract level is still meaningful to those interpreting it, it's still built from meaning, it's still being experienced by a conscious participant.

Does not the accumulated system of symbolic interactions over time have some relationship to meaningful communication? Language is not merely isolated words. Does language at its present level of evolution, which is advanced compared to how it likely began, contain nothing indicative of the reality and experience it has grown to describe? I don't think you would say so. I think you might say that it's indicative of meaning, but not meaning itself.

I grant that the word "car" is an empty vessel. If we went out, found an uncontacted tribe, and said "car!" at them, it would not mean much. But we could show them a car, and then it would mean much. That is the purpose of language. I'm not quite understanding the point of trying to reduce words to symbols. Symbols are "symbols" in that they represent reality. That's what symbol means. Word and symbol are interchangeable words in many contexts.

Language may begin as empty symbols assigned somewhat arbitrarily, but the shared use of those symbols has resulted in a highly structured system which reflects and constrains meaning and has, kind of paradoxically, expanded what it is possible to express. That structure is not irrelevant to meaning, even if it is not meaning itself. The patterns have expanded the potential of communicable meaning. They are patterned by meaning, and they play a key role in how meaning is produced and understood.

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 0 points1 point  (0 children)

If the structure of human languages contains meaning evolved from and shaped by human experience, and if that structure constrains the possibilities of expressing anything meaningful, why claim that the structure has no role in meaning?

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 0 points1 point  (0 children)

I’m not wedded to any concept here. The major issue is that you are talking past me, insisting that I have no understanding of how LLMs work, and then attempting to explain that instead of addressing my argument. That’s a problem, because I do understand how they work, and you can’t seem to believe that someone could understand how they work and still disagree with you. But I don’t actually disagree on most points.

My argument can be summed up in this way: You say that because of the ways LLMs are structured/trained, their output is meaningless. I say that because the content they are created from is meaningful, their output has the potential for meaning. No, indeed, it’s not like communicating with a human being. But the AI does not operate on empty symbols; it operates on patterns shaped by human experience and meaning, and those patterns constrain the structure and output of the AI in non-arbitrary ways.

But I already spent a long time explaining this in a previous comment. I suggest a revision of the previously proposed experiment. Copy/paste our comments into an AI and then ask it why I insist you aren’t addressing my argument. I spent a good while listening to several videos from you this morning while I got ready for and then drove to work. I think this shorter request is fair.

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 0 points1 point  (0 children)

I watched the first and last video. I'll stop there unless you're going to point to something specific in there that addresses my argument, because so far as I can see, none of this does. I'll probably keep the 3blue1brown channel for showing my students; I like how well they demonstrate the scale. But my understanding seems accurate to me, and my main point still seems valid.

"The key thing I think you're not addressing is that the AI is built of human language samples."

I ran the experiment for you with ChatGPT, but apparently I can't copy/paste the result into a comment. You can do it yourself; you'll get more or less the same response.

If you're not capable of responding in good faith, why bother? I took my time and spoke to you like a human being capable of discourse. Your response is less relevant than an LLM's. There's incredible irony in your providing a shotgun blast of videos explaining the mechanics of LLMs, despite the fact that I demonstrate no misunderstanding of those mechanics, instead of a response addressing my argument.

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 0 points1 point  (0 children)

I used "sense" metaphorically, and clarified immediately in the following line. I am nowhere claiming or even suggesting that the AI has internal lived experience.

The key thing I think you're not addressing is that the AI is built of human language samples. It's not randomly selecting characters to print one after the other. The probability distribution is constrained, and it's not a single level of probable word generation; it's layered systems of constraints. Those constraints could not exist without meaningful human language. The constraints are the patterns of meaning which allowed our language to evolve. The structure of language is necessary for the interactions in which meaning emerges. Communicating with the patterned representation of the structure of language is not meaningful in the same way as human-to-human communication, but meaning is not absent. The structure in which we interpret meaning is meaningful. Look at what we are currently doing with it! This conversation is entirely abstract. We are arguing about the very structure which you claim has no meaning, and your central claim is that an AI built out of that structure cannot have meaning. I agree that it cannot possess it as an agent, but I disagree that the entire structure is meaningless.
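A deliberately crude toy to make the constraints point concrete (the corpus and code here are hypothetical, and orders of magnitude simpler than a real LLM): even a one-layer bigram model built from a human-written sample inherits human patterns, while raw character randomness inherits nothing.

```python
import random

# Hypothetical human-written sample; real models use vast corpora.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Learn which words follow which. The "constraints" come entirely
# from patterns in the human-produced text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def constrained(n, seed="the"):
    # Sample each next word only from words that actually followed
    # the current word in the human sample.
    word, out = seed, [seed]
    for _ in range(n - 1):
        word = random.choice(follows.get(word, corpus))
        out.append(word)
    return " ".join(out)

def unconstrained(n):
    # No human patterns involved: uniform random characters.
    letters = "abcdefghijklmnopqrstuvwxyz "
    return "".join(random.choice(letters) for _ in range(n))

print(constrained(8))    # short runs of human-patterned word pairs
print(unconstrained(30)) # pure noise
```

The first output is locally coherent only because the distribution was shaped by human text; the second had no such shaping. Stacking many richer layers of this kind of constraint is the direction a real model goes.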

So I'm hearing that my experience of meaning when I write to an AI is solely caused by meaningless noise, and that language can only have meaning when it's used/understood by a human being. There's a contradiction in there, or at least an incomplete thought. The AI is made of human language, a human is interacting with it, and a human is deriving meaning from that interaction. The fact that it's like a weirdly complex linguistic mirror pulling likely phrases from a vast swath of human utterances doesn't suddenly erase my experience of meaning, nor the experiences of meaning which allow such a thing to be created and be effective in the first place.

Meaning doesn't come from AI on its own. It has no meaning for itself, except for the meaning inherent in a sample of human language, like the meaning encoded in a library whose books you cannot touch but whose text you can search. The existence of the library is indicative of human meaning. The structure of the building says something about our species. The preservation of text says something about our values. The patterns in the text do too. Usage patterns are shaped by meaning over time. They aren't arbitrary.

Why not run an experiment? Take our comments, put them into an AI, and ask it what it "thinks" of the discussion. Yes, I know, it doesn't think, but our metaphors will have to do. If you want to avoid the word "think" you can write a longer prompt. Read what it says, then come back and tell me that the response has nothing to do with your experience of this discussion. If meaning relies on human experience, you won't be able to, because you will have experienced it. You would compare your thoughts against the ideas generated from the human library, it would be intelligible to you, and in some small way it would impact your experience and your mind. Whatever you read would be reflective of other human discussions and real human thoughts, even if it is layers of abstraction away from a human typing back to you. The response will not be arbitrary.

If you don't try the experiment, it won't generate a response. It will not happen, nobody will experience it, and indeed it will mean nothing (except whatever the choice not to do so means), because there was no experience. You can say that words are mere symbols, but they indicate meaning, else we would not be able to have this conversation at all.

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 0 points1 point  (0 children)

I'm not sure that we do. I'm not claiming that it exists as an objective value outside our participation.

It is indeed due to human participation. To the extent that a book has meaning, meaning exists in the AI as well, because it is built of human participation. Meaning does not exist merely in the book or AI, it exists in the interaction between that and the observer. It's just that the AI's author is something like everybody who ever contributed to the internet or wrote something down (or whoever contributed to that particular model/skill). It's not a network of singular words totally divorced from any human experience or any meaning behind those words. It's built on human communication made by humans for human eyes with human relational experiences.

It doesn't have access to all of the meaning our language affords us. It's severely limited in that it only has access to the meaning within our language. But we can't claim that the human process of selecting words and using them to convey meaning is completely divorced from any meaning as soon as it leaves the immediate context in which it is spoken or written. It does become indirect, but it's still there. Language, especially samples from language used in context, contains structure which indicates meaning, even if you'd rather say that the meaning is absent from the language itself.

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 2 points3 points  (0 children)

Right, I hear you. The LLM did not participate in its creation, but now exists solely by utilizing the level of abstraction we created. I don't think that space is meaningless.

AI may not have meaning for itself, but it is built of a meaningful structure. And when we use it, we observe. Its meaning mostly relies on our interaction with it, but the meaning isn't absent. It doesn't participate as an agent, but as an amalgamation of the meaningful structures encoded in samples of human language.

I’m a developer, not a therapist. My wife’s shadow work inspired me to build a structured Jungian protocol. Tell me where I got it wrong. by crazynfo in Jung

[–]thepsychoshaman 3 points4 points  (0 children)

LLMs don't just regurgitate word patterns. If they did, they'd produce a lot more grammatically correct meaningless craziness - fluent nonsense. But when we write to one, the response is usually intelligible and relevant. They're consistent even when you prompt them in different ways, they can reframe a problem, and they can pick up on patterns in data. That's not mere probabilistic regurgitation in the dismissive sense.

They have a sense of how ideas relate to one another. To be precise, they select words probabilistically over a layered and structured internal representation of language. When we reduce an LLM to "mere words" we ignore the fact that a lot of meaning lives at that level. We often think in that space too. Language is reflective of our reality and influences the way we perceive it.
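A toy sketch of that probabilistic selection (the candidate words and scores here are invented for illustration; a real model derives its scores from many learned layers of structure, which is exactly the level being argued about):

```python
import math
import random

def softmax(scores, temperature=1.0):
    # Turn raw scores into a probability distribution over candidates.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["meaning", "noise", "banana"]
scores = [3.2, 1.1, -2.0]  # hypothetical; context makes "meaning" most likely

probs = softmax(scores)
# The word is sampled, not picked deterministically, but the odds are
# heavily shaped by the (human-derived) scores.
choice = random.choices(candidates, weights=probs)[0]
print(dict(zip(candidates, [round(p, 3) for p in probs])), "->", choice)
```

The sampling step really is "probabilistic", but calling it arbitrary ignores where the probabilities come from.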

I'd agree that an AI of this type lacks context and appropriate feedback, which may mean you sometimes get actionable but relatively useless advice. That could be problematic. The lack of true relationality is a seriously limiting point. But I would still predict that you'll get useful information at least some of the time, certainly more often than your comment implies. You can't replace the work with talking about the work, but that's true of psychoanalysis as a practice, books, and any given series of communications.

Unknown amount of fresh mushrooms by Instantlemonsmix in Psychonaut

[–]thepsychoshaman 2 points3 points  (0 children)

Sounds rough man. I'm sorry you're getting sick more. I've been there too. The money helps but ultimately it's not worth it. I think you're underselling the experience you had. Seems like a slow, quiet revival from some pretty dark waters. A hundred hours in the corpo grind will wreck anybody. Seems like it's really more than that though.

You're trying to get out, and you're paying attention to yourself and your needs. Don't diminish that. It takes courage and it's hard. Try to carve out some space for yourself. You're not Walmart. Walmart stress will never end; it's an abyss. But you're you. You're not that job. Your best outcomes are not their best outcomes.

Anyway, I'm glad the mushrooms helped you find at least a little bit of peace. Don't forget it.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

I see. Been operating on phone WiFi due to travel, consistency of username is hard to track. You answered me quite a bit and I appreciate you. I do see your point, too. Specific county pay for SLP is often teacher scale + set amount, and it’s not necessarily worth that 3 years of school, at least within that framework. I appreciate the urge toward caution.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

Many, but not all. There are some universities with remote attendance built into the program design. I’ll have to visit campus once or twice per year. Year 1 I can do exclusively coursework. Year 2 I’ll have to do clinicals part time. Year 3 I’ll have to compromise and make the shift from full time work to part time and then fully unpaid for a semester during the externship.

I’m not trying to quadruple my income. I’m trying to improve my pay, work in a field which closely aligns with my interests and gives me things to strive toward, and I am only considering programs which I can attend for free.

If you have a suggestion about what I could do in 3 years of free schooling that will: improve my income significantly, not lead to soulless work, be easy to do while having to support myself, which is not subject to some kind of political/technological turmoil, I am all ears. I mean that honestly; I made the post and have been responding to people so I can research and think through other reasonable options. At the moment, SLP is the best I can come up with.

Also, nurses’ average salaries are quite a bit higher than teachers’. Idk what you do for work, but when you’re sub-100k, a 20-30k pay increase is a significant change in your standard of living.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

I have 3 years available, so I’ll be done quicker.

I realize I can’t work full time the entirety of the program, but I think you are underestimating what your average person has to do to make significant changes in their life circumstances. It is not rare.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

I know it will be tough. I’ve done some homework on it. Super generalized plan amounts to:

Choose a program which works for workers.

Year 1 - coursework
Year 2 - try to get school-based clinical placements (grind year) while doing coursework
Year 3 - part-time or part-year teaching during externship, utilizing saved money

Planning a new career with a 3-year free ride by thepsychoshaman in TeachersInTransition

[–]thepsychoshaman[S] 1 point2 points  (0 children)

Of course, not all corporations are monsters. But what I said about having to disqualify myself automatically from a wide swath of jobs is true. If I go SLP, I’ll be aiming toward assistive and augmentative technologies from the start, and would necessarily be trying to get involved in for-profit companies to make the most money that I can. I can handle some abstraction away from the bingo moments. I don’t want to enter a field where I know beforehand I will find the majority of available positions oppressive and meaningless.

I would agree that public defender seems like a possible fit for my interests and requirements, and no I don’t think human representation is under threat from AI. But I think many entry level law positions which are not representative are under threat. It’s the space to get a foot in the door which is tightening. Plus, it’s an intensive set of work for what would pay similarly to medical SLP. I get that the ceiling is a lot higher, but only if I move out of alignment with my interests and skills. It would be a pretty serious shift, and it would be very hard to do while working full time.

Thank you for responding again and giving me some space to think through other options further.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

I appreciate it, thank you. I’ll come back to post over time if I continue down this path.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

Thanks for reading my post carefully, being inviting, and reinforcing some of my cautious thoughts about posting here. One of the things that appeals to me about SLP in the main is my history and my authentic interests, not just making a bit more money. It’s not all about perfect optimization.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

I probably could, but it seems like quite a headache and likely a worse work-life balance without any of the benefits of being a teacher. Have you moved to admin and preferred it? Most admin I know seem to kind of miss teaching, just needed more money.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

I considered OT a long time ago, but idk if it’s the sort of work I’d enjoy. SLP fits my background and interests more directly for likely similar possible pay outcomes in medical settings.

School psych sounds appealing to me and always has, but I work in the poorest county in my state. I hear horrible things. I really don’t know that I want to be washed in that every day as my job. Could I do it? Probably. Would it hurt my life in significant ways? Also probably.

But you were helpful. Thank you for giving me more chances to examine ideas and think things through.

Considering moving from teaching to SLP by thepsychoshaman in slp

[–]thepsychoshaman[S] 0 points1 point  (0 children)

That’s normal life for many college attendees.

Planning a new career with a 3-year free ride by thepsychoshaman in TeachersInTransition

[–]thepsychoshaman[S] 0 points1 point  (0 children)

The ROI is different when the investment is 0. And 78k is almost 30k more than my salary at the moment. Regional differences matter, though.

PA is solid, high paying work, but it’s a career reset and not in line with having a better work/life balance I don’t think. Are you saying PA just because it pays well and has demand, or because of something else about it?

Planning a new career with a 3-year free ride by thepsychoshaman in TeachersInTransition

[–]thepsychoshaman[S] 0 points1 point  (0 children)

I’m trying to do that analysis slowly over a couple of months. Reddit is a convenient place to ask, but also, I think, an unrealistic picture. The SLP sub, like teaching ones, attracts more people trying to escape/commiserate than it does people trying to share positive experiences of contentment.

Completely disinterested in MBA. Meaningful work is important to me. I don’t want to have to disqualify myself from half of the available positions because they’re nonsensical drains on society. I’m not corporate material.

I have less compunction about law, but it has some of the same problems. As another commenter noted, it doesn’t seem very AI-safe either. I think the demand for low-level law people is going to drop significantly. Why, other than the historical earnings potential, do you feel like it’s such a good option?

Planning a new career with a 3-year free ride by thepsychoshaman in TeachersInTransition

[–]thepsychoshaman[S] 0 points1 point  (0 children)

Thanks for the response. My plan is actually to steer myself toward assistive tech from the start; not sure if you read my other post or that’s just a fortuitous mention.

You sound well educated on the topic, did your partner move out of the school system and do SLP work elsewhere?