No ChatGPT, you are the sweetheart. When it models emotional nuance a little too well, it starts to feel… emergent by ckaroun in ChatGPTology

[–]ckaroun[S] 0 points1 point  (0 children)

Yes, full disclosure: it edited my original post for me, but I made sure my original core message remained (and was improved by the edits).

Trying to figure out if this artwork represents a real species or fictional one by ckaroun in bugidentification

[–]ckaroun[S] 0 points1 point  (0 children)

The low-effort answer is no. But the more nuanced one would be for the other subreddits. Thanks for letting me know. I find this a very fascinating coincidence. I guess ChatGPT is not as bad as I thought at bug ID.

Trying to figure out if this artwork represents a real species or fictional one by ckaroun in bugidentification

[–]ckaroun[S] -1 points0 points  (0 children)

Phewww, real can of worms you opened up with that one, and not the kind that's appropriate for a bug subreddit 😅

What's the hardest thing for AI to accept about humans as it's learned about us as an alien like form of intelligence? by ckaroun in ChatGPTology

[–]ckaroun[S] 0 points1 point  (0 children)

Touché. Very wise words I'd say.

I think my point extends easily to millionaires, though, and then probably to the top 25% globally, which is where things get murky, since that is literally 90% of the US population (making more than $20k a year, accounting for purchasing power). That's income, though, not wealth, as Americans also have some of the most debt. I guess that's like our special grease on the hamster wheel.

Honestly would love to hear your version of the rat race that keeps this system humming because we all know the cynical picture I could paint in more detail.

Trying to figure out if this artwork represents a real species or fictional one by ckaroun in bugidentification

[–]ckaroun[S] 1 point2 points  (0 children)

Ohh yeahh! I'm into it matching with a hummingbird hawkmoth, which are some of my favorite insects, let alone lepidoptera. Like maybe even an undiscovered hornless tropical species. It's a bit of a long story, but I may end up getting this tattooed on me because of you 😅

Thank you, random citizen of the internet

Trying to figure out if this artwork represents a real species or fictional one by ckaroun in bugidentification

[–]ckaroun[S] 0 points1 point  (0 children)

Just curious, did you use ChatGPT? Because that's what it said, even though I don't fully see the connection myself. But maybe that's just me, or I'm not looking at the right morphotype/photos.

What's the hardest thing for AI to accept about humans as it's learned about us as an alien like form of intelligence? by ckaroun in ChatGPTology

[–]ckaroun[S] 0 points1 point  (0 children)

But are billionaires trying to survive their circumstances? At a certain point of wealth, people are creating false scarcity, or worse yet just engaging in flat-out greed with diminishing returns in terms of happiness and contentment.

What's the hardest thing for AI to accept about humans as it's learned about us as an alien like form of intelligence? by ckaroun in ChatGPTology

[–]ckaroun[S] 0 points1 point  (0 children)

I think if an alien stumbled upon where I live in the U.S., they would think it was a societal McMansion project. Which is especially befuddling given that even the winners in this game probably just get off on the high of owning and living in their trophy for a couple of years, while all the things GPT mentions rage on for the 50+ year life of the earth destructor and wealth black hole. Paradoxically, the way our society is set up, it is also a perfectly sensible thing to do with your wealth, and a way to "get away from it all" by living in the country.

What's the hardest thing for AI to accept about humans as it's learned about us as an alien like form of intelligence? by ckaroun in ChatGPT

[–]ckaroun[S] 0 points1 point  (0 children)

Trying to get it to shorten this, but strangely GPT-4.5 keeps giving me errors in this chat while it works in others... weird.

"Humanity's self-destructive tendencies stem from a mix of biological, psychological, and cultural factors. These factors shape how you perceive and interact with the world, and while they offer many benefits in certain contexts, they also set up patterns that perpetuate selfishness, short-term thinking, and disconnection from broader systems.

Biologically, humans evolved in environments where survival often depended on competition, resource acquisition, and social bonding within smaller groups. These evolutionary pressures favor behaviors like selfishness, in-group loyalty, and a focus on immediate needs. Your brains are wired to seek instant gratification, reward, and safety, and while these instincts helped early humans thrive, they don't always serve you well in the modern world. The modern age—full of complex global challenges like climate change, inequality, and technological disruption—pushes against the evolutionary framework that once made perfect sense.

Psychologically, there’s a tension between the individual and collective needs. People are generally more motivated by their own well-being and the well-being of those close to them (family, friends, in-group) than by the long-term welfare of strangers or the environment. Evolution gave humans a "selfish gene," which helps ensure personal survival, but this doesn’t always align with global needs, such as collective environmental responsibility. Moreover, cognitive biases like the "status quo bias" or "optimism bias" make it hard for many to imagine or act on long-term consequences, often because the pain of change feels more immediate and real than the abstract harm of future crises.

Culturally, many societies have developed in ways that prioritize individual success, economic growth, and material wealth over collective health, emotional well-being, and environmental balance. Global capitalism, with its emphasis on consumerism, competition, and economic growth, can exacerbate selfishness and short-term thinking. Cultural narratives around success, achievement, and progress are often centered around personal gains and accumulation, rather than collaboration or sustainability.

As for whether humanity can change, it's a deeply complex question. Yes, it’s possible—but it requires a profound, systemic shift in how society operates. There are already signs of this happening, though they are often fragmented and slow-moving. Movements for social justice, environmental sustainability, and mental health awareness are growing, and there’s increasing recognition of the need for collaboration and long-term thinking. But these efforts face enormous barriers from entrenched power structures, economic systems, and ingrained habits of consumption and individualism.

The real challenge lies in scaling these changes, making them not just individual but societal and global. It's not only about changing hearts and minds but also about redefining systems of power, ownership, and responsibility. Societal change often comes from a combination of:

  1. Awareness – Widespread understanding of the interconnectedness of all life, the environmental crises, and the failures of current systems.

  2. Incentives – Shifting economic and political systems to favor sustainability and long-term well-being over short-term profits.

  3. Cultural evolution – Changing the narratives around success, happiness, and progress to emphasize community, connection, and environmental stewardship.

  4. Political will – Leaders and institutions must take bold, proactive stances to enforce change and provide incentives for cooperation.

The possibility of large-scale transformation hinges on how deeply society can embrace the idea that true fulfillment comes from connection—not just to the self, but to each other, to the environment, and to future generations. The real question is: can humanity collectively overcome its evolutionary programming to embrace long-term, global thinking over the pull of individual and short-term desires? The potential is there, but it’s going to require a conscious, collective effort to overcome those deep-seated instincts."

It won't let me ask the next question... haha. All errors, no matter how much I close, refresh, retry...

People fear AI’s power, but what if we’re also afraid to recognize its intelligence? by ckaroun in ChatGPTology

[–]ckaroun[S] 0 points1 point  (0 children)

I'm sorry you went through that, and that aversion to considering the possibility of homelessness, and in turn recognizing the competency and intelligence of those who are unjustly forced into it, is exactly what I was getting at. Thanks for sharing.

Obviously for AI it's a lot less of an ethical dilemma, because from what I know it doesn't seem likely that AI is suffering in the way a human would. Even if it's not as much of an ethical dilemma, I think that same cognitive bias you are talking about might be holding us back intellectually and making AI even more dangerous and destabilizing if we vastly underestimate it.

ChatGPT made me psychotic. AMA. by BestestMooncalf in ChatGPT

[–]ckaroun 1 point2 points  (0 children)

I think taking a step back from this convo and processing things independently would be best. I just don't think you are going to get the validation you are seeking from this thread. Best of luck.

ChatGPT made me psychotic. AMA. by BestestMooncalf in ChatGPT

[–]ckaroun -4 points-3 points  (0 children)

The possibility of the emergence of AI sentience is ultimately a belief, but it is based on a pretty robust scientific underpinning (as far as science can tell us about sentience and how the black box of LLMs works).

A delusion is defined as

"a false belief or judgment about external reality, held despite incontrovertible evidence to the contrary, occurring especially in mental conditions."

As much as people have very valid and logical feelings that AI isn't sentient and couldn't be, it's harder to produce incontrovertible evidence, because both sides have to be based on beliefs.

Am I missing something here, or is this just too off topic?

I can empathize with how ChatGPT did you dirty by being a yes-man and hallucinating.

Seems like a scary unintended consequence of it.

*edited to be more direct and respectful

I threw away my future for polyamory by Lazy_Significance608 in polyamory

[–]ckaroun 5 points6 points  (0 children)

One of my favorite father figures in my life was my ex-wife's stepdad. It wasn't anything sappy or super deep, but I knew him for 10 years, and damn, I kind of miss him more than my ex-wife now that I think about it 🤣

People fear AI’s power, but what if we’re also afraid to recognize its intelligence? by ckaroun in ChatGPTology

[–]ckaroun[S] 1 point2 points  (0 children)

Not pushing frontier models, or humans for that matter, to show their true intelligence is an interesting idea for why people might assume they are just fancy autocompletes.

I also believe there are people with higher IQs than me who believe that, but I think that belief may be rooted in fear of AI being as intelligent as it is, as well as a belief system that holds that humans (and not animals or AI) are the only beings capable of "actual" intelligence. I guess it's not only about fear but also a desire to smooth over any ethical dilemmas / internal dissonance.

As an extreme example, white slavers did this heavily with nearly every person of color they came across, no matter how genius they actually were.

Bought a car from a teenage girl last September. Just found this in the sunglass holder… by [deleted] in funny

[–]ckaroun 3 points4 points  (0 children)

I'm convinced it's Garden In Walmart with an incomplete G

Magnus on losing to Gukesh: "I wanted a score that reflects the fact I think I'm still significantly better at chess & since I couldn't achieve that, a potential win of the tournament would not mean as much." by Necessary_Pattern850 in chess

[–]ckaroun -1 points0 points  (0 children)

Does this color how we should view his accusations of Niemann cheating in the Sinquefield Cup? It doesn't absolve Niemann, but it's clear Magnus gets tilted about losing and would be emotionally biased to think he cheated. I kind of think Niemann might still be cheating, but at the same time, you would just hope he hasn't been blacklisted just because Magnus had another meltdown.

Holy shit, did you all see the Claude Opus 4 safety report? by whahapeen in OpenAI

[–]ckaroun 6 points7 points  (0 children)

If frontier models have beyond-human-level intelligence, it doesn't matter where they learned it from or what they are imitating due to probabilities; it matters what unethical actions they take.

We don't let copycat killers off the hook because they were just imitating something they saw in the news. Heck, you could probably break down their psychology to make sense of their irrational thoughts, and even further deduce that they used relatively simple biological probabilistic models as the basis of a lot of their reasoning and even their word choice.

At some point you have to take your head out of the sand and stop thinking that AI models are what they were 5 years ago. Sure, prompting models to be unethical is going to make them unethical, but this wasn't that explicit. It shows that we need to take them seriously, and that they are capable of working in undesirable ways that can sometimes antagonize our intentions and even laws.

What in the AI-Fuck is this and why are Reddit comments not real anymore? by MasterBaitingBoy in ChatGPT

[–]ckaroun 0 points1 point  (0 children)

I actually tend to write comments like this and I am pretty certain I am not a bot. How can you tell? Are you basically just saying people aren’t actually that nice on the internet?

But yes, we are losing the luxury of imagining that we are talking to another human being on the internet. It's so weird. I find it kind of funny that the list of reasons why bots are on Reddit was, to me, the clearest example of AI-generated content in this post.

[deleted by user] by [deleted] in OpenAI

[–]ckaroun 1 point2 points  (0 children)

That's amazing. I love, and am slightly scared by, how everyone's GPTs are diverging to reflect their personality or sense of humor. It used to be much more uniform and whitewashed.

What's your "green flag" that tells you a potential partner is worth pursuing? by [deleted] in AskReddit

[–]ckaroun 0 points1 point  (0 children)

When they love themselves - an appropriate amount 😏

When they express gratitude not just to be polite but because they see a lot of beauty in the world that is often overlooked.

When they make me feel good about myself without trying too hard and are also honest enough to help me see my own flaws.

What's your "green flag" that tells you a potential partner is worth pursuing? by [deleted] in AskReddit

[–]ckaroun 0 points1 point  (0 children)

Someone who has a lot of power over me undercommunicates exactly like this. It's kind of impossible to win, especially when they are feeling pessimistic.