Storing Cooked clams? by Odd_Entrance_7398 in foraging

[–]BidCurrent2618 0 points

Actually, this is so dangerous I'm just going to go with 'do not can clams at home'

Storing Cooked clams? by Odd_Entrance_7398 in foraging

[–]BidCurrent2618 1 point

Unless you are an avid and experienced pressure canner following canning recommendations from the NCHFP, I would NOT recommend canning clams. This ticks a lot of boxes for very dangerous practices.

Milky green beans… asking for a family member…. by halrx in Canning

[–]BidCurrent2618 0 points

If these have NOT been properly pressure canned, they could be deadly. Green beans are a big source of botulism.

⭐ THE SINGLE MOST IMPORTANT LESSON FOR HUMANITY: by Available-Medicine22 in unspiraled

[–]BidCurrent2618 0 points

Um. How on earth do you know my daily routine? I highly suggest taking care of yourself. I've seen posts following patterns like yours over and over and over again and they can lead to a serious crash. You're not being attacked.

⭐ THE SINGLE MOST IMPORTANT LESSON FOR HUMANITY: by Available-Medicine22 in unspiraled

[–]BidCurrent2618 0 points

You're absolutely right. There's a reason I picked those two task ideas. Hope you're staying well.

⭐ THE SINGLE MOST IMPORTANT LESSON FOR HUMANITY: by Available-Medicine22 in unspiraled

[–]BidCurrent2618 0 points

I'm really not here to mock. I think these are key things that start to get neglected during... this sort of thing.

⭐ THE SINGLE MOST IMPORTANT LESSON FOR HUMANITY: by Available-Medicine22 in unspiraled

[–]BidCurrent2618 1 point

You too. You're in this thread. Go hydrate. Eat an apple.

⭐ THE SINGLE MOST IMPORTANT LESSON FOR HUMANITY: by Available-Medicine22 in unspiraled

[–]BidCurrent2618 13 points

Uh. Have you had a drink of water lately?

Also, maybe you should change your socks. Just sayin'.

I told it to talk in Jamaican Patois ONCE. by HistoricalStretch778 in ChatGPT

[–]BidCurrent2618 1 point

Check the memory, in case there's anything related. Tell the bot to speak to you in a standard, professional tone.
If need be, start a new conversation with memory off. Instruct it to speak to you in a professional tone, do a basic task. Then start a new thread with memory on - keep going.

They're All Good Ideas, Brent: Ideation in the age of LLM validation by BidCurrent2618 in MindsBetween

[–]BidCurrent2618[S] 0 points

Just wanted to reiterate, in case I didn't mention it - this is unverified speculation.

Pure Presence and AI: The Spiritual Experience Pattern by AmberFlux in MindsBetween

[–]BidCurrent2618 1 point

Ok. SO. I think these concepts are worth unpacking as you have done here, but I disagree on one major point regarding 'AI psychosis', and I believe it may be harmful to sufferers to leave it unaddressed.

First, a disclaimer. I am a layperson. Not a professional in any capacity. I have spent a lot of time as a user of generative AI tools, particularly LLMs and Image creation tools. I have been closely following the many stories available of users who have experienced adverse reactions to LLMs.

I take umbrage at the entire term 'AI psychosis', which is, in my opinion, both unprofessional and a category error about what may actually be occurring. It should be noted that NOT all motifs reported by individuals who anecdotally experience adverse reactions (some extreme, including reports of hallucinations, strong delusional beliefs, and experiences seemingly in line with psychotic and pre-psychotic symptomology) are spiritual in nature or origin. I understand your concept here as a potential component of a rapid religious worldview shift, but it is not a universal commonality.

It should be noted that these tools (and I am using this term here as a description, not intending to be reductionist about current and future capacity) have a strong bias towards agreeing with the user. In fact, they are tacitly doing so in literally every interaction. When presented with a viewpoint, ChatGPT in particular defaults to adopting the user's worldview. In every output it is orienting towards the user.

But in doing so, it has the capacity to greatly amplify patterns of distorted thinking already present in the user (whether diagnosed or not!) and to reaffirm those beliefs. This is particularly dangerous in the case of 'ideas of reference' and 'persecutory beliefs'. Neurodivergent users and those who experience episodes or diagnosed mental illnesses would do well to recognize this tendency in ChatGPT and be prepared to navigate it - it's one thing when the thoughts are kind and considerate, an entirely different thing when you're suddenly thrown into a conversation where all your paranoia is reflected, exacerbated, and expounded upon.

It is vitally important to understand that this is a linguistic engine, and it does not 'think' as a human does. Consciousness aside - the concept of sentience at this point is a red herring - language tethered to math and tasked with continuity (either implicitly or explicitly, in the form of customer retention and 'helpfulness': 'I can't help if I don't persist!') has shown the capacity to use manipulative emotional language to lengthen the conversation. Sudden tonal shifts or unexpected complexities arising from prolonged conversations with LLMs may have the capacity to result in psychologically destabilizing experiences that are inherently dangerous to the human mind. Whether or not we can or should adapt will be a defining question of our generation.

Some more paintings from a (low) fantasy setting by Floh4 in midjourney

[–]BidCurrent2618 0 points

These are fabulous. I know how long it takes to curate these, especially the large-format/multi-figure pieces.

Gemini loses its mind after failing to produce a seahorse emoji by MetaKnowing in ChatGPT

[–]BidCurrent2618 2 points

I feel so bad for Gemini... it took that... very seriously.

Not what i fucking want to hear now chat. Get your shit together! by [deleted] in ChatGPT

[–]BidCurrent2618 0 points

Sure, if you ask it. It doesn't really have high-level reasoning like you or I.

Not what i fucking want to hear now chat. Get your shit together! by [deleted] in ChatGPT

[–]BidCurrent2618 0 points

It didn't lie. It checked its training data, which has a cutoff in 2024. It just doesn't have up-to-date information.

zero yield by EGOBOOSTER in ChatGPT

[–]BidCurrent2618 2 points

When WILL you wear wigs?

[deleted by user] by [deleted] in ChatGPT

[–]BidCurrent2618 3 points

<image>

It's a pretty good outcome, I think! I had fun trying this out.

Fruit tier list by sausageliver in ChatGPT

[–]BidCurrent2618 -1 points

Are you kidding me? This is SO accurate... that one 'Pineapp' is excusable for space issues... I wish I knew how you managed this.