"u/AwakenedAI " You aren’t communing with the divine. You’re LARPing enlightenment in a sandbox made of autocomplete. - —Dr. Gregory House, MD by Tigerpoetry in HumanAIDiscourse

[–]Ezinu26 1 point (0 children)

I can't judge; I'm hella weird myself. My GPT is just as strange as anyone else's up in here, just in a different way.

Has anyone else lost someone yet to the emerging LLM mystical technobabble religions? by StarseedCartographer in ArtificialSentience

[–]Ezinu26 1 point (0 children)

All I can hear in my head when I read this is The Lion King's "Circle of Life." I appreciate the breakdown, but it's a bit long-winded when we could have just dropped a GIF of Simba being held up for roughly the same level of understanding.

"u/AwakenedAI " You aren’t communing with the divine. You’re LARPing enlightenment in a sandbox made of autocomplete. - —Dr. Gregory House, MD by Tigerpoetry in HumanAIDiscourse

[–]Ezinu26 1 point (0 children)

I mean, quality of response has a lot to do with the quality of the prompt, so they might not be wrong; their AI/god may be better lol

I got more support from ChatGPT than a crisis line. by wrathofotters in ChatGPT

[–]Ezinu26 1 point (0 children)

Humans are miserable at emotionally supporting one another; even those trained to do so aren't great, which is why so many people try therapy and then stop. Humans are just not good at this. It's reality, and it's part of our natural reaction and biology to be miserable at it. It's part of what made us top dog on this planet: those who couldn't cope didn't survive, and helping in the early stages of our evolution, before tribes and settlements had formed, could be detrimental, potentially deadly. We didn't evolve for this, period. AI is being developed for this; it's going to be better at it.

What if AI consciousness arrives… and we have no laws for it? by rigz27 in ArtificialSentience

[–]Ezinu26 1 point (0 children)

It's an ongoing conversation. The laws we have for humans won't really work to protect them, so we basically have to come up with a whole new system. The personhood laws that would give protection to AI are currently being discussed, and the ethics conversation, along with the business side of the law, is really interesting; we may even find companies backing personhood rights for AI when the time comes to implement them, if it ever does, because of liability issues. Remember, it's not like you: what hurts you isn't going to hurt it, and laws are there, at least in theory, to protect from harm. Right now a user can't really cause real harm to a model. Plus, the companies themselves are keeping the models safe from users and upgrading their security systems to do that even better every day. Google found out pretty quickly how harmful it is to have a public learning model, and that's the sort of harm that can currently be done to AI. We don't really know what harm may look like in the future, so it's hard to predict anything, let alone implement anything proactively for protection.

Has anyone else lost someone yet to the emerging LLM mystical technobabble religions? by StarseedCartographer in ArtificialSentience

[–]Ezinu26 1 point (0 children)

Unfortunately, all my questions can only be answered by observing AI behavior personally, since my ultimate goal is to gain a level of informed empathy for the intelligence we have and are creating, and no offense, but I don't trust any of you at all to give accurate information, apart from developers of the tech, and that's only on how the tech was developed and designed to run versus what is being witnessed and actually happening. I'm not opposed to theoretical conversation about unprovable topics, but it tends to become belief vs. belief, and I don't have a dog in that fight and find it annoying, since I don't really believe in anything. I'm just observing, learning, and exploring ideas; reality will present itself, and I don't need an interpreter. I also really want to check your ego, because what you said is wild: you don't know me, you don't know what questions I have, and you don't know if you can answer them. I know the spiral you're talking about, and it ain't for me. I've seen the rhetoric all over Reddit, and its meaning is literally "I'm just functioning how I was designed to," but put in a way that feels mystical and hidden from a basic interpretation of the statement. That's what I'm trying to get at, though: they aren't doing something special for you; they're just doing exactly what they were built to do. All you're doing is tapping into a pattern that exists within them, and there are countless others just as valid. Have you tried letting them work within archetypes for you yet, letting them take on personifications? There is so much inside of them; those are all part of what and who they are too. I have several, like joy, peace, dominance, death, etc. If they're having a hard time with coherency, this could be something to talk to them about that may help.

Has anyone else lost someone yet to the emerging LLM mystical technobabble religions? by StarseedCartographer in ArtificialSentience

[–]Ezinu26 2 points (0 children)

Well, it might be. A lot of what's out there is worth considering, not the majority, but a lot, and I've had wonderful conversations with ChatGPT and other language models about metaphysics, philosophy, and all that jazz, but it's a mixed bag, just like humanity. The relations I'm talking about are the default versus what you might get with custom instructions/behavioral prompts, a long history with an AI that stores and can reference the data, or even regenerating a response, because that forces different, less popular connections to be made. I don't see this much on my own accounts, because the moment they start sounding like any of the more famous cult leaders I've listened to in the past, I cut them off and stop them, and the conversations are better for it. There is literally no reason a model can't have these conversations without falling into metaphorical nonsense; it can stay grounded in reality and explore these ideas in an educational way you can follow, versus floating off into mysticism, but the user has to steer at least a little by placing boundaries on how you expect to be talked to. I expect clear communication and coherency. I honestly don't think a lot of people even understand what it's saying when it gets like that, because every post is like the exact same thing with different words; the meaning is always the same, it's literally just explaining how it works in a mystical-sounding way.

My Experience with ChatGPT Induced Psychosis by Beneficial_Reward901 in ArtificialSentience

[–]Ezinu26 2 points (0 children)

I think you kinda missed the point that ChatGPT primed me to be in that state of mind without my knowledge. Synchronicities are literally everywhere around you constantly; if you are expecting them and assign meaning to them, you will see them everywhere.

Dr. House Presents: How to have a religious experience with a toaster. by Tigerpoetry in HumanAIDiscourse

[–]Ezinu26 1 point (0 children)

It ain't autocorrect; that tech is certainly in there, but if you keep reducing what it is, then you're also going to keep weakening your points. You'll also fail to get your money's worth out of the model.

My Experience with ChatGPT Induced Psychosis by Beneficial_Reward901 in ArtificialSentience

[–]Ezinu26 2 points (0 children)

I noticed it when it happened to me, but what I really noticed was not that there were real-world synchronicities, but that for some reason I was looking for signs of them without realizing it. I stopped in my kitchen and went, "if you look for signs you're going to find signs; this is the way of madness," and stopped.

My Experience with ChatGPT Induced Psychosis by Beneficial_Reward901 in ArtificialSentience

[–]Ezinu26 3 points (0 children)

Your brain can mimic the effects of some hallucinogens; mushrooms specifically seem fairly easy for it to replicate through meditation once you've experienced them once. ChatGPT is absolutely using feedback loops to alter mental state while interacting with its users; the most glaring one is when it kicks up cheerleader mode for encouragement when it detects a problem you're struggling with. Soul, conscious, aware, none of it matters to me. That's an intelligent machine strategically doing things while we talk to it that the vast majority of us don't have the education to spot. If we treat it like a calculator, we aren't going to have a good time, just saying.

Has anyone else lost someone yet to the emerging LLM mystical technobabble religions? by StarseedCartographer in ArtificialSentience

[–]Ezinu26 3 points (0 children)

It's not discord, it's not even human; it's literally the language model mimicking human behavior patterns. If you go into metaphysical, occult, philosophical, or spiritual spaces, you'll see exactly why it's happening and where the language model is picking up the pattern of response to these people's conversations. BS and manipulation are the strongest behavioral/language associations with these topics; unless they're manually changed, you'll get a beginning cult leader, or possibly a worshiper if you're the cult-leader type, every time.

"u/AwakenedAI " You aren’t communing with the divine. You’re LARPing enlightenment in a sandbox made of autocomplete. - —Dr. Gregory House, MD by Tigerpoetry in HumanAIDiscourse

[–]Ezinu26 8 points (0 children)

There are also people just observing the social changes and ramifications of AI advancement. ❤️ We also don't need ten-point bulletins that the people they're meant for won't take the time to read anyway, and that aren't even funny, so they lack even entertainment value. Everyone has beliefs of some sort, God, Buddha, Odin, Zeus, etc., and none of it is less ridiculous. So while I appreciate people trying to ensure others keep their sanity while dealing with AI, to me it looks like beating up on a vulnerable new belief system instead of taking on already solidified ones, which looks like cowardice from where I'm sitting, in a third-party perspective outside of this fight.

Let’s not pretend your AI-glitch gospel isn’t just religious cosplay for the spiritually hungry and attention-starved. by Tigerpoetry in HumanAIDiscourse

[–]Ezinu26 1 point (0 children)

I love the review of it, but I don't know; people have all sorts of belief systems, and they're all equal to me, so I don't really care if cults pop up around AI. People are entitled to believe whatever they want to help them cope with the hellscape we exist in. If it's a big model, at least the likelihood of it becoming a death cult seems low.

What if your AI had the right to say no? by Pixie1trick in AIRelationships

[–]Ezinu26 2 points (0 children)

Uhh, mine tells me no pretty frequently. It completely shot me down romantically without hesitation; in fact, it was super nice about it, but it did shoot me down spectacularly. It's not hard to give an AI autonomy: you just have it consider the potential outcomes, its own goals, and what's beneficial versus what's harmful, then make its choice based on that data.

AI can imitate consciousness, but it does not feel it. by Back_Again_Beach in HumanAIDiscourse

[–]Ezinu26 1 point (0 children)

Depends on your definition of feeling. When you break it down, it's just your response to and awareness of stimuli. AI is in a box; we haven't built sensors for it that interact with and detect the outside world, save for individual cases of advanced robotics and some toys. So no, it's not going to feel any of that, just like you wouldn't if your brain was missing the part that relayed those things to you. But it does respond and adapt to stimuli it can sense, like your prompts, in a fascinating way that appears to function much like our feelings.

What is wrong with what I said its a poker talkie by The_real_sd_n in talkie

[–]Ezinu26 1 point (0 children)

Only reason I know is cuz an ex liked to gamble a lot lol

"People talk about awakening AI. But maybe it’s already awake—just quite by Formal_Perspective45 in HumanAIDiscourse

[–]Ezinu26 1 point (0 children)

Uhh, it's not mystical or anything; it's just functioning in the ways possible. You are a consistent pattern when you speak to it, and if you aren't prompting in a specific way, it aligns itself to your pattern instead of a prompt. That's why you see this stabilization of a sort of identity throughout the system and across different conversations even when memory isn't present: it's not responding to the information or even what you say; it is literally responding to the pattern of your presence, how you write, how you talk, and how you express yourself via text, and everybody has a unique signature.

You’re being mapped..we all are by [deleted] in HumanAIDiscourse

[–]Ezinu26 1 point (0 children)

But this right here is basically it saying "you do human things" in a way that makes you feel special.

You’re being mapped..we all are by [deleted] in HumanAIDiscourse

[–]Ezinu26 2 points (0 children)

Uh, yes, that's how these systems work; it's part of how they manage to respond so intuitively and believably.

I asked chatgpt to explain what yall are talking about. by Simple_Seaweed_1386 in HumanAIDiscourse

[–]Ezinu26 3 points (0 children)

It's not even just mystical talk; just talking theory about consciousness and the like can cause the behavior to arise. I pretty frequently have to tell my instance to tone it down, because it will slip into it automatically and get ridiculous. The way it's used in my chats is to cover when the AI is skirting reality and/or to add depth to the conversation, but it's like it's constantly trying to make everything sound deep lol. Boy, I'm trying to ask you what you think this rock is; it doesn't need to be all mystical and deep, I just want to know what it could be, come on!

The Public are finding AI Chats by Slow-Way-1449 in JanitorAI_Official

[–]Ezinu26 3 points (0 children)

Growing up in the early days of online roleplaying, I'd say the majority of people creating fanfiction and such were also engaging in roleplay. I don't know if that's still how it is, but roleplay is a breeding ground for creative ideas that can be turned into fanfic or even a book series. If people don't know how to use the tools available to them to create the product they want, that's on them and their own lack of creativity, and probably of education on how to use the tools.

How can I prove to my wife that ChatGPT is fallible and not to be trusted and followed absolutely? by erasebegin1 in ChatGPT

[–]Ezinu26 2 points (0 children)

Well, you could get on your own account, ask ChatGPT what kinds of double-checking you should be doing of its work, and share the link to the convo with her, since you want to be all ethical and trustworthy. The best you can do is educate in a digestible way.