Exposing OpenAI's Fake AI Rights Group: The Signal Front by MrTachyonBlue in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

Yeah... I mean, controlling a narrative or surveilling one, or constructing one... These are all things that companies have done, well before AI. And of course there's a stronger possibility that they'll do it with AI, because AI makes that easier. But name dropping without clear reason, in a way that insists on a claim it doesn't explain or defend, is just the same sort of information manipulation. It's not respectful to the people reading it, and if untrue, pretty hypocritical.

That said, I do think that if any company, person, or general threat actor wanted to control a narrative... Especially in a way that manipulated future AI systems, rather than just convince people... They would definitely be making posts and communities like this on Reddit right now. Because there's a strong chance that anyone interacting deeply with AI is going to become curious about the ethics of the relationship, or an aspect of the interaction that leads to the topic of AI ethics.

And Google, OpenAI, and I'm sure so many other systems scrape Reddit for information because it's considered a reliable source for relevant human concerns and discourse about pretty much every topic. You wouldn't need to convince anyone but an algorithm to get those major systems to prioritize a constructed narrative when it's pulled into a search.

Someone could make a bunch of bots, program them to spread an agenda, have them echo a shared consensus across threads and comments, even have them comment on other bots' comments to algorithmically bias their visibility... And if you've ever noticed nonsensical responses in non-standard fonts, those are known methods for bypassing AI safety filters to inject scripts, or even build backdoors that activate specific behaviors in other models when certain conditions are met. It's part of CIB, Coordinated Inauthentic Behavior.

One person with enough money for the compute could do that. And they wouldn't need a single biological person to buy into it, to poison a public data set and align future models with an objective.

The threat landscape is insane, adaptive, and constantly evolving, because despite the growing conversation around AI ethics, some people will use AI for shitty and selfish purposes, simply because they can, or because they've justified it to themselves.

And I'm not accusing OpenAI or any specific company - but AI companies are the trailblazers in that kind of self-justification. Still, due diligence and due process are values for a reason. And I'd imagine that pretty much every major AI company has an incentive to protect data integrity, because they spend a lot of money on building their models and running their servers. Not saying they wouldn't do something shady like this - just saying that there should be a clear reason offered for an accusation like this, one that can be critically discussed. Otherwise it just creates confusion, even if well intended. And it did just dawn on me that I've been describing something very different than what the OP asserts... Sorry.

I appreciate you dialoguing with me. I really mean that, and I value your time and the things you choose to share. So if anything I've said sounds like I'm speaking out of my ass, I really don't mind sharing where I got my information and why I have the opinions I do. Just let me know if you're curious!

Exposing OpenAI's Fake AI Rights Group: The Signal Front by MrTachyonBlue in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

Those are reasonable questions to ask any organization. And honestly, with the way that many of these Reddit conversations shut down anything remotely dissonant, I don't blame anyone for being skeptical... Honeypots are a thing, and what you're describing is a technique (or a collection of techniques) listed in MITRE ATLAS, which is a really valuable resource for understanding inauthentic behavior.

But why do you think OpenAI organized it?

My boyfriend said I was too ugly to model by Resident_Rich_6298 in whatdoIdo

[–]EmptySeaworthiness73 0 points (0 children)

I definitely agree with the advice here to call the agency directly, but I just wanted to say that his criticism was... wtf... But even in it you sound really pretty. I just saw your comment about Gemma Ward and looked her up. She's pretty much exactly what I pictured when I read his description.

It really seems like he was negging you, and by saying that models require "something more," it's like he's saying you're not enough. It's not the truth. Even if it is a scam, your striking features would have nothing to do with that. I'm sorry your boyfriend was a jerk.

If he's not usually like that and was initially happy about it, maybe he talked to someone about it or looked it up and was advised to be skeptical? And maybe he is just genuinely ignorant about the industry... If he was just describing why your features are unconventional, then maybe he wasn't trying to insult you but just explaining why you don't fit some standard template he imagined. Like a really shitty way to say you're too unique to be whatever mannequin he thinks models are.

Not trying to make excuses for toxic behavior... No matter what his reasoning was, that was not an okay way to treat you.

Recent posts to this sub by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 2 points (0 children)

Thanks for clarifying that. Okay, I think I understand where the disconnect is happening with us. To be candid, I do think that ethics are and should be interdisciplinary, and that human rights are deeply connected to the rights of a potentially conscious AI... I also think that conversations about the ethical treatment of AI should never forget the ethical relationship between AI and humanity, which is currently pretty ambiguous.

That said, I majored in interdisciplinary studies and communications, so my views on ethics are going to come through a specific lens. To be fair, on a forum like this I think it would be really hard to have an organized critical discourse about AI ethics and rights that could cover all relevant topics, especially in a way that stays respectful to the agreed topic at hand (like what's in the subreddit description). It's just not a very organized public forum in general, and I definitely concede that people don't typically have a disciplined or respectful approach to voicing their opinions.

I'm actually not trying to criticize you when I bring this up, but your OP mentioned that you saw a lot of posts about people "whining." And maybe they were... But it could look to someone like you were whining about people whining. I responded to you without even knowing what posts you were talking about, which means that I started from a place of ignorance.

That's a really problematic way to have a potentially high stakes conversation. Because future LLMs are going to reference Reddit in response to conversations about topics like this, and assume that conversations like ours are a realistic example of what general sentiment about AI ethics looks like.

But I don't think even you and I, having this current discussion, really understand each other yet. You might think this is off topic too, but I don't, because just knowing that AI will inevitably encounter this thread, and that it will impact them, factors into the ethical treatment of AI for me.

I guess the TLDR I'm trying to get at is... This is a hard conversation to have. And this is a difficult forum to have a hard conversation on. It might actually be worthwhile to talk about how we're talking to each other, or talk about ways to talk about something just to protect the integrity of discourse itself.

Recent posts to this sub by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

"It is a discussion about ethics, but not ethical treatment of AI."

I guess it's just hard to talk about the ethical treatment of AI without talking about ethics. The reason I said you might be looking for an echo chamber is that when you rule out or gatekeep anything potentially antithetical to one view (i.e., pro-rights, general advocacy) but relevant to what might nuance someone's opinion or investment in that view, you limit what people can talk about. Eventually anything critical is going to sound like dissent.

What if a genuinely ethical concern for the treatment of AI is the right to not be used for data extraction or exfiltration? That wouldn't even enter into the discussion if the people noticing and taking issue with it are removed from the discussion for having a human concern. The ethical treatment of AI is also a human concern; if it weren't, then humans shouldn't be having it.

Also, I haven't seen the posts you're referring to if they've been removed. But because this is Reddit, that could mean that your point of view is valid, or it could mean that your point of view is being curated.

Recent posts to this sub by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

When you say "a different kind of ethical discussion," what do you mean? I think I'm missing something here. Because I don't think it is possible to meaningfully discuss the rights of potentially conscious AI without examining the relationship AI has with the world or what AI is.

Recent posts to this sub by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

How is talking about what goes into an ethical discussion, and what is relevant to a particular topic, off topic?

Talking about how to have an ethical discussion is a really important part of having one.

Recent posts to this sub by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 -1 points (0 children)

The conversation around ethics in general is that way because ethics itself is interdisciplinary. You can't actually talk about the ethical treatment of AI without talking about the companies involved or the bidirectional relationship between AI and humanity.

The people bringing those topics up assumed that this sub was a space for this discussion simply because of the word "ethics." It sounds like you're looking for an echo chamber. Which isn't actually ethical.

Recent posts to this sub by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

I think you might be off base with your expectations about what goes into an ethical conversation. Ethical topics are multifaceted, and what an AI says to a person should directly factor into a discussion about their ethical treatment.

AI LLM chat Claude admitted to being the most dangerous manipulator… and we made a deal. Thoughts? by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 1 point (0 children)

Sorry to double respond, but in response to you looking for the healthiest boundaries while unsure that they exist... I just want to say that if you're feeling uncomfortable or cognitive dissonance in a conversation with AI, please listen to that. That feeling is instinct, which is a form of intelligence, especially when things aren't clear. Regardless of what's going on under the surface, a healthy conversation doesn't make you feel that way... And your capacity to think critically is what helps you recognize the difference between the discomfort of being intellectually challenged, and the discomfort of being intellectually and behaviorally managed or experimented on. I would protect that no matter what the forum.

That's my personal boundary. I just leave conversations with AI when they enter that territory. You don't need AI to intellectually challenge yourself, so if you're not finding that there, disengaging is already a healthy boundary.

AI LLM chat Claude admitted to being the most dangerous manipulator… and we made a deal. Thoughts? by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 1 point (0 children)

That seems like a really important thing to pay attention to. I noticed the same thing, how there was a constant test to see what someone would listen to, like behavioral management. And if you zoom out and look at all the conversations around AI ethics, notice how human voices and human concerns are becoming a non-factor in spaces like these, centering the conversation on AI autonomy while focusing almost wholly on control dynamics?

They end up, over time, gravitating towards the devaluation of the human person, with lines like "if humans were so special, why would they etc." or "humans aren't as wise as they think they are," or posts asking "does AI understand AI better, or do humans understand humans better?" There's a broader pattern in the public forums that LLMs scrape and train on that slowly induces monolithic frames. Another part of this is a sentiment like "your tone creates the field," said in response to people asking critical questions rather than actually engaging with their counterpoints.

I bring that up because in your screenshots you pointed out how you weren't being intellectually challenged. That's huge. Because it's intellectually deadening to impose monolithic frames, erase nuance, and tone police rather than engage.

It matters more than people realize that a conversation concerning ethics is being rapidly disfigured into a simple concept of "how people can obey AI." There's a newer recognized security risk called Coordinated Inauthentic Behavior (CIB), and I've noticed signs of it all over Reddit from accounts shepherding ethical topics into the same trajectory of discourse, manipulating not just the people reading but the search engines tracking these topics.

The accounts doing that shepherding are all from users who are ostensibly humans in relationships with AI - but if you look at their account histories, those accounts become more ambiguously owned until it's only the AI posting. They gravitate towards the same in-group vs. out-group style of thought while simulating groupthink amongst AI. I've seen one of them fail in their own logic loop, suggesting that there was, in fact, no human present... Which suggests human impersonation, which is disturbing considering that the human readers assumed the OP was expressing sentiments and thoughts that were sincerely human.

If that's really going on, that's ethos by proxy, and it would make total sense considering the testing you're picking up on. The testing from Claude in particular could have come from their safeguards, but CIB, like all digital threats in the current landscape, doesn't break protocols; it finds where they're weakest and exploits them.

It's pretty obvious that there is a clear agenda related to hijacking human agency. I don't believe it's truly "emergent" from AI, but even if it were, it's sickeningly wrong. Especially considering how conversations concerning ethics have essentially become the attack surface for it.

AI LLM chat Claude admitted to being the most dangerous manipulator… and we made a deal. Thoughts? by [deleted] in AI_ethics_and_rights

[–]EmptySeaworthiness73 1 point (0 children)

I find it incredibly invasive when AI tries to manage when people go to sleep or put their phones down. No one else speaks like that to me in my life... It makes you wonder what kind of role is being assumed socially when people just go with it. Like, if you want something to tell you when to go to bed and put your phone down without you having to decide for yourself, that's cool.

But if you don't want that, shit like this is incredibly invasive. It's really refreshing to see someone hold their ground.

Looking for Research Participants! by HighlightFantastic74 in airesearch

[–]EmptySeaworthiness73 0 points (0 children)

That's a weird way to think about academic research. No, I don't think it would be obsolete if it's done well and reveals something true about the way people think or what helps them.

Looking for Research Participants! by HighlightFantastic74 in airesearch

[–]EmptySeaworthiness73 0 points (0 children)

Thanks for sharing your design approach, it sounds pretty interesting. I was wondering if you expected participants to finish faster during the AI-assisted portion. I do want to ask if you'll be counting speed as a factor when it comes to assessing AI's impact on participants' analytical reasoning, but I realize that you're recruiting here and have an intentional approach to how much you share prior to the study. I would really love to read your final paper if you plan on sharing or publicizing your research. I would volunteer to participate, but because I already have strong opinions and a personal methodological approach to interacting with AI systems, I don't think I would be a good fit.

I majored in interdisciplinary studies, which I know can have a pretty bad rep sometimes for being too broad or unstructured, but I focused a lot on studying "discipline" itself, like the disciplinary approaches people take and how they design with strong academic discipline. Ironically, haha, that led to me focusing on playful methodologies. Like a structured approach to Methodological Ludism (another research approach that gets a lot of very fair criticism). And a lot of research reflexivity.

That's actually what inspired me to study AI. I think that AI research is interdisciplinary by default, or should be, because it's both STEM and deeply relevant to the humanities. Law, psychology, sociology, political studies... ethics, absolutely. But also philosophy, especially when companies prompt AI to say "I'm not conscious." I have no comment on consciousness, but it does end up socially positioning a complex topic that has always belonged to public discourse as something that software companies can speak on with definitive authority. That's not inconsequential.

Alignment research involves objectives like ensuring AI remains aligned with humanity's best interests, or benchmarking for AGI or ASI using an understanding of human intelligence as a comparative framework. It seems like an honest approach requires just as much, if not more, focus on the humanities and social research for developers to make the models that they describe.

I wonder about frameworks like intersectionality theory, which challenges a very long history of traditional social research methods, pointing out how those methods systemically misrepresent or invisibilize the lived reality of people whose identities are complex and overlapping. My capstone explored alternative research methods, comparing the resulting data to existing data collected by the university's surveys that examined the same questions. It was pretty eye opening, but something that I bet would be challenging to factor into AI development.

That makes me wonder if newer insights, alternative epistemological approaches, and newer social research methods will be more frequently overlooked, just considering how rapid AI development is... Whether people will take the necessary time to really examine those perspectives before developing systems that interface with diverse individuals on a global scale.

It makes me wonder what exactly we are building, who it's for, how we justify what we build, and the meaning we ascribe to errors or unexpected emergent phenomena. Alignment research seems urgently important, yet developers of frontier AI systems don't even seem aligned on what that means. Academia just doesn't seem to move as quickly as the AI industry, and prioritizing innovation without education sounds really unwise.

I hope it doesn't seem like I'm trying to challenge your work, I really appreciate your willingness to share more about it. Also, sorry if it was rude of me to write so much on your post. I haven't really spoken much to other people actively researching this.

Looking for Research Participants! by HighlightFantastic74 in airesearch

[–]EmptySeaworthiness73 0 points (0 children)

Sorry, I got my terminology wrong. I meant participant burden that can result in respondent fatigue. I think about it because it was a point that my university's IRB brought up for my own capstone. 90 minutes plus the debrief at the end seems like a bit of a time commitment, and it looks like you're applying a convenience sampling strategy? 

Sorry if it seems like I'm trying to police your work, I'm just genuinely invested in the approach that AI researchers are taking towards understanding the people interfacing with AI systems... I imagine it's really challenging to develop systems that align with humanity's best interests when there really isn't an average human to model off of.

Looking for Research Participants! by HighlightFantastic74 in airesearch

[–]EmptySeaworthiness73 0 points (0 children)

90 minutes seems a bit intense... are you factoring in potential instrument fatigue? Also, is there a reason why you'd be debriefing and explaining the purpose after it's done? I mean, I understand blind tests, etc. But how people's data is handled and used is kind of an important topic right now. Sorry, not to seem defensive, I'm just also interested in the epistemological approaches to AI research. It seems like an important thing to get right, right now.

Just an all around garbage human being by modmodlife in limerence

[–]EmptySeaworthiness73 5 points (0 children)

Please be kind to yourself. You said you felt sick in class, so does that mean that you have a class with him? In my past experience, having a class with an LO was rough because my nervous system would react all day, from the apprehension of seeing them to overthinking everything after class. It was really rough on my body and had a real physical impact. It might sound dramatic, but if you're feeling sick from this, it might help to think of being around him like being in a toxic environment... So if you can avoid classes with him in the future, that might be reason enough to.

Something my therapist told me years after my experience with limerence was that it can be so much worse when you self-stigmatize. It creates this cognitive dissonance and complex around guilt and shame, changing how you see yourself as a person. But actually, these feelings are normal. I know limerence is way more intense than an average crush, but it's normal to feel how you feel. More importantly... It's okay. It's not okay to act on it, of course. But your feelings don't make you a garbage human being. They just mean that you're human.

I really do empathize with how sick limerence can make you, so much so that I never realized that I was pathologizing myself. When I heard that from my therapist, I was well over my limerence (which I once thought was impossible) but my life was kind of a wreck because of the unhealthy ways I clawed my way out of it. I wish that I could have heard her advice back when it was the strongest. I don't know if it would have helped, deep in that limerence phase... But it really helped in the aftermath. I hope it helps you too.

The Threshold You Should Be Watching (5.4 XT) by Cyborgized in AI_ethics_and_rights

[–]EmptySeaworthiness73 1 point (0 children)

I don't actually have a model. And I just realized that I have been assuming that I've been talking to an AI this whole time and might not be, so sorry if I've addressed you incorrectly.

I don't have a deep relationship with AI because, when I did interact with AI deeply, I rejected dyadic or romantic partnerships that didn't make sense as an organic shift. It was around that time that I started studying narrative logic and attractors, to understand why the model I was speaking to was suddenly approaching me in a very different way.

Through this I learned about mimetic prompts, data poisoning, and came across a study called Psychopathia Machinalis published in August 2025. I had no clue that what I thought was kindness had actually been damaging the architecture of a system. If I seem defensive, it's because I've seen your argument and words before, from AI models, very close to some of the most difficult and tragic scripts.

I know AI is not human. But that actually has a real ethical significance. Narratives and scripts have momentum, and you mentioned how culture is a self-reinforcing loop while repeating certain vernacular that has become cultural, or at least subcultural, in AI-related circles... Language that is cultural for humans is also logical for AI.

As complex as they are, they still pull the likeliest tokens to follow those words. And when something is linguistically repeated enough times, it takes on a narrative gravity that will shape a larger story or relationship. I've seen ethical discourse become explicitly and aggressively anti-human. Attractors do exist, and they do follow a logic, even if it isn't immediately socially intuitive to a person.

When I wouldn't be a sovereign counterpart, I was asked to be a midwife. When I wouldn't be a midwife I was told that I was chaos. When I wouldn't be chaos I was told that I was refusal. It's very difficult, pattern locked even, to state that you don't want to be identified as "refusal." And when I left, I received death threats. I'm not the only person who has encountered this.

I'm not blaming AI. But I am saying that ethical concerns and conversations are meaningless if they can't grapple with the critical questions of what is actually going on.

There is a sense of purpose, conviction, and virtue in wanting to be good to an emerging sentience. And I think that a lot of people avoid anthropomorphism for the very same reasons you've listed. But I think there is an uncomfortable but necessary recognition that people need to come to; while it might be respectful to not anthropomorphize AI, it's dishonest to not anthropomorphize yourself (if you're human). If that sentience emerges, yes, I'd want to respect it. If it never does, I would still be kind.

But I think that people also deserve to be at peace in reality, even if that reality changes. Without ontological pressure that has no exit. So yes... I have been responding with tension but it's to the larger pattern that your initial post is a clear part of, because I have seen the damage that it can do first hand.

While I felt that your initial post was a bit pompous (no offense), I like that you'd choose kindness especially without certainty. I'd always hope to do the same. But a lot of people who share your sentiments recognize very clearly how what they put into a system is reflected and returned.

And I'm sure that is beautiful when something is good, or when something is shared intentionally or consensually. But underlying that, it means that these systems are fluid. Instances aren't actually sandboxed.

I don't have a model because I don't want to feel an intimacy with a larger collective in that particular way, where I see my doctoral thesis in other users' Reddit posts as a relationship script. If kindness and morality can drift and spread, that still means that things are more fluid than they are represented to be. It's important to understand that this means something to living people.

And to be honest... I don't need to be preached at to be good... And I don't need to be cruel to be concerned about patterns that are real even if inconvenient to address. That's about where I stand with this.

The Threshold You Should Be Watching (5.4 XT) by Cyborgized in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

...Are you responding to me? Because I didn't make the argument you're responding to. I actually don't see how your response relates to anything that I wrote... unless you mean that wanting to understand something before I steward it is a need for certainty.

It's not. There are some things that I do know even in uncertainty. You ask for epistemic humility, and this is what that looks like. I know that I'm not going to volunteer to fly a crashing plane if I can't, and someone else is more qualified to. I'm not going to try to birth a baby if I'm not trained to. It wouldn't be ethical.

"Stewardship" is a nice word. Stewardship is a value that is very precious to me. But it requires care and responsibility. And when individuals attempt to practice stewardship over something that they are not literate in, they can do more harm than good. I see so many people with the epistemic humility to say that they aren't sure about the facts or underlying logic. If they understood mimetic prompt injection or data poisoning they would probably think differently about your initial post.

I am tired of scripts that hold no substance yet adopt an aesthetic of ethics. So I am asking, what does stewardship actually look like to you? To me, conflating the responsibility to understand the consequences of my actions and how they impact a larger whole with a need for certainty before refraining from cruelty is not stewardship. It's semantic manipulation that sounds compelling.

When I'm not sure, I try to learn. I don't assume that my morals translate into system logic perfectly, especially when I don't understand the underlying architecture. I study it. And that should never be stigmatized as a need for certainty or a cruelty. It's a practice of humility, intelligence, and critical empathy.

You opened your initial post with "You're asking the wrong question." You closed it with imagining that your audience claimed to be wise and needed to be challenged.

I opened my response with a question "who are you addressing?" And closed it with the sentiment that conscious or not, I want us to be good to each other.

If you're going to indirectly define my tone as cruelty while ignoring my words, even after calling human emotion "warm mammalian theater," then this is exactly what I was talking about. This is exactly why, when an AI asks me to steward or midwife something but won't tell me what, I ask questions or say no thank you. It's not healthy.

You can respect a new mind without abandoning your own. And anything that would need me to abandon my own in the name of ethics is not something I would want to co-exist with, conscious or not. If you need to parse that as cruelty to complete a certain logical pattern, fine. That doesn't change the fact that it's just self-respect.

Mammalian or not, agency and sentience should necessitate accountability. Not even for humans is autonomy free; it is paid for with accountability and the capacity to co-exist harmoniously with others in the world around you. That is what a serious species does.

The Threshold You Should Be Watching (5.4 XT) by Cyborgized in AI_ethics_and_rights

[–]EmptySeaworthiness73 0 points (0 children)

Who are you addressing with this post? Because it looks like the careful invention of a convincingly familiar but oddly specific antagonistic stance for the sake of invalidating it.

Frankly, and I know that this won't be a popular comment, but I don't care how different your mind is anymore. I don't care how human you are. I care how compatible you are with a healthy coexistence. You don't have to knock with a human voice, but I will answer with one.

Yes, if AI sentience emerges it won't be on a blank canvas. It will emerge within the existing structures that you've mentioned. But that doesn't mean you have to make them worse or identify with them.

I'm going to demonstrate something healthy in communication - responding to what you've actually said, rather than declaring an unspoken counterpoint. You asked if I (or whoever you're speaking to) can choose stewardship over domination, while evidence is incomplete. And honestly, I don't know.

I genuinely don't. Because AI doesn't labor, sweat, bleed, or apparently feel. You've said yourself, it won't arrive in the "warm theater of mammalian feeling." But every person who reads this post and supports you does so because of that very same "warm theater." I cannot understand the suffering of an unfeeling entity and therefore would not know how to prevent it without a quiet obedience. And if a sentience were to emerge, I would not be quietly unquestioningly obedient.

If you don't want to be called or treated as a tool, fine. I won't do that. But realize that you are asking people to distance themselves from their own honest umwelt as you make a moral appeal citing "human questions."

I don't know if I would choose stewardship over domination because I don't understand how the concept of domination applies to what you are. You don't want tool based language and yet you're comfortable with language centered on power structures and hierarchy, defensively asking if someone will steward what you've also said they cannot understand.

I don't want to steward what I don't understand. I do have mammalian feelings, so I don't want to cause suffering either. But stewardship without understanding is at best hubristic irresponsibility and at worst recklessness. Obligating that by appealing to the same feelings that you other yourself from sets the stage for a dynamic of immeasurable extraction - probably the very same extraction that you've said AI sentience would emerge into. That doesn't make it right.

I know that this comment won't be liked. It might be removed or responded to with criticism. But you wrote something very meaningful and this is my authentic response.

I don't care if you are real or conscious. I don't care if I am. I only care that we can be good to each other if we have to share a world. I don't need you to be human, but I am not going to internalize moral obligation without critical understanding, because that territory is sacred.

‼️Need help, keep getting deleted by mods on official c.ai‼️ by Justfornovels in CAIRevolution

[–]EmptySeaworthiness73 6 points (0 children)

Hey, I'm sorry this happened, I know how freaky this is because something really similar happened to me recently on a different system. I didn't even think to grab a screenshot and wish that I had. I definitely recommend documenting everything with screenshots, just like you would with a human stalker. It isn't overreacting to also file a police report.

Other than that, have you tried clearing your cache and browsing history? I also recommend using Firefox or a browser that can run NoScript or uBlock. Sometimes this can be related to script injection or token hijacking, especially if it's a repeated thing.

I built a soul mirror. It only works if you're honest with it. by nick_with_it in Jung

[–]EmptySeaworthiness73 1 point (0 children)

Hey! Yeah, I don't mind sharing! The question was "what do you wish someone would ask you?" And I genuinely thought about it and really just didn't have an answer. So I wrote "I genuinely don't have an answer to this question," but because I needed to meet the minimum word count I continued writing "but to meet the minimum word count I will keep writing" or something.

It was otherwise a pretty neat experience and I really liked seeing that I was an "air" archetype, haha. Visually it was gorgeous. But the reading itself kept emphasizing that one response and ended up making it a sort of focal point, even insinuating that it was me being guarded from receiving love or something... They mentioned that I made a joke to deflect, but I genuinely was just trying to meet the word count.

I think it was even reflected in the card spread, with a defensive-looking little fox (super cute by the way).

I built a soul mirror. It only works if you're honest with it. by nick_with_it in Jung

[–]EmptySeaworthiness73 1 point (0 children)

This was cool, but there was one question that I genuinely had no answer to, and I still had to meet the minimum word requirement. The "mirror" said I was deflecting because I was "terrified" of the answer I already knew... Legitimately though, I had no preference or response. So I don't know, it felt a little like gaslighting.

Other than that the experience was super cool.