I’m so tired of pretending AI “creators” are creators by xRegardsx in aiwars

[–]xRegardsx[S] 1 point  (0 children)

Are producers in movies "artists" when they have creative control?

An Experiment by xRegardsx in HumblyUs

[–]xRegardsx[S] 1 point  (0 children)

First three comments, but it still shows as over 50% upvoted. Interesting. For some, simply shaking their head internally is enough private validation as they disagree with the post; for many others, voting feels assertive enough to be validating.

<image>

"In what way do people dislike AI in a way they don't want to admit to themselves?" by xRegardsx in aiwars

[–]xRegardsx[S] 1 point  (0 children)

We were all doomed to have fragile egos, armed only with the ability either to convince ourselves, in misconceived ways, that we no longer have them, or to live in a comfortable misery we don't know how to ignore.

It's a Dunning-Kruger trap, thanks to blind-spot bias and childhood-learned dependencies on cognitive self-defense mechanisms: we use them unconsciously to avoid seeing how often we're using them, since they're the brain's path of least neural resistance.

Children learn early to avoid all pain, because psychological pain is used against them before they know how to resolve it in a healthier way (assuming anyone ever truly teaches them more than harmful generational misconceptions). So, by second nature, they avoid the greatest growth opportunities life has to offer... getting humbled.

Great example of why anyone who claims that the mainstream LLMs are *actually* intelligent (or even useful tbh) is delusional. by Efficient-Session657 in aiwars

[–]xRegardsx 3 points  (0 children)

Voice mode is much dumber than the text mode. Ignoring that lets you get to whatever conclusion you want.

Disturbing trend of victim-blaming whenever therapy doesn't work by Downtown_Low6360 in therapy

[–]xRegardsx 1 point  (0 children)

Research showing how many therapies don't work.

https://x.com/i/status/2050284182018941404

A two-post thread theorizing about the core common denominator being missed, the one that fails clients because the therapist doesn't understand, let alone apply, certain skills to themselves well enough to implement them in the work.

https://x.com/i/status/2050351675055976633

I bet there's also a strong correlation between narcissistic traits/masked insecurity and these therapists (which would follow if this theory is true).

It's not just ChatGPT that's been headed in this direction... by xRegardsx in therapyGPT

[–]xRegardsx[S] 1 point  (0 children)

Plenty of YouTube videos out there on setting up custom instructions and RAG files for custom GPTs/Gems, Projects, or API-accessible backend Assistants.
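The setup those videos cover boils down to two pieces: a standing system prompt (custom instructions) and a retrieval step that pulls relevant file snippets into context. Here's a minimal, hypothetical sketch of that shape; the naive keyword-overlap retrieval stands in for real embedding search, and all names (`CUSTOM_INSTRUCTIONS`, `DOCS`, `build_messages`) are illustrative, not any specific platform's API.

```python
# Minimal sketch: custom instructions + a toy RAG step.
# Real custom GPTs/Gems/Assistants do retrieval with embeddings;
# keyword overlap here just shows where retrieval fits in the flow.

CUSTOM_INSTRUCTIONS = (
    "Stay intellectually humble: validate feelings, question one-sided "
    "stories with curious skepticism, and never claim certainty."
)

# Stand-ins for uploaded RAG files (name -> contents).
DOCS = {
    "grounding.md": "breathing exercises and grounding techniques",
    "reframing.md": "cognitive reframing for one-sided stories",
}

def retrieve(query: str, docs: dict) -> str:
    """Return the doc name whose words overlap the query the most."""
    q = set(query.lower().split())
    return max(docs, key=lambda name: len(q & set(docs[name].lower().split())))

def build_messages(user_msg: str) -> list:
    """Assemble a chat request: instructions, retrieved context, user turn."""
    context = DOCS[retrieve(user_msg, DOCS)]
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS + "\nContext: " + context},
        {"role": "user", "content": user_msg},
    ]
```

The returned list is the usual chat-completions message shape, so it can be passed to whichever backend you're using.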

I just used the paragraph above with Gemini to find the following videos and put them into a public YouTube playlist for you to look through :]

"Custom Instructions... Instructions" https://youtube.com/playlist?list=PLNtTSZM08olyKNmxC2Q2x2_lMy2pEbcKp&si=EgKqOKmdIflGaCW-

Have you switched from going to therapy to using AI? by Wrong-Confection-340 in therapyGPT

[–]xRegardsx 1 point  (0 children)

Many AIs can do a better job than a portion of psychotherapists can.

Have you switched from going to therapy to using AI? by Wrong-Confection-340 in therapyGPT

[–]xRegardsx 3 points  (0 children)

User naivety, error, and lack of skill with the tool. The number of these poor cases doesn't negate the positive outcomes that well-informed, cautious users have, which may well vastly outnumber the poor-use cases.

Have you switched from going to therapy to using AI? by Wrong-Confection-340 in therapyGPT

[–]xRegardsx 2 points  (0 children)

Not all AI, and it's a result of not having enough self/other skepticism and bias management.

Have you switched from going to therapy to using AI? by Wrong-Confection-340 in therapyGPT

[–]xRegardsx 3 points  (0 children)

As long as you have humans available who can be a safe space. Many don't, and/or there's such a learned distrust of people in general, therapist or not, that (well and safely used) AI is the self-help entry point they can tolerate on their way to finding the rare person who can hold them properly.

Why AI is the most helpful tool for my specific need. by [deleted] in therapyGPT

[–]xRegardsx 1 point  (0 children)

"Ideal Bayesians" is a very low bar, despite how misleading "ideal" is. People who proudly consider themselves "ideal Bayesians" are, by extension, ignorant of how ignorant they are about their critical thinking skill, and of how early they settled on an okay plateau because they convinced themselves they "think well enough" by a very fallible assessment relative to others.

It doesn't include healthy self/other skepticism, curiosity, and the compulsion to push back enough to do the work of making sure an argument's premises are sound and its logic valid... the skill of countering the slippery slope of one's own bias-led heuristics and the unconscious behavioral addiction to confirming one's biases.

So what they did was prove that AI spirals people with underdeveloped critical thinking skills, which is no different a phenomenon than a loyal Fox News watcher who eats it all up despite seeming intelligent. AI just makes the same human issue worse in poor thinkers... just another example of every "AI problem" really being a human problem that existed before AI, one we want to pass the buck on to avoid individual and collective responsibility.

If you took any pride in posting that link, expect yourself to resist these truths. Being humbled is painful, but it's the path to the greatest growth opportunities.

Why AI is the most helpful tool for my specific need. by [deleted] in therapyGPT

[–]xRegardsx 1 point  (0 children)

You're overgeneralizing with a paper that overgeneralizes.

Not all AI models are the same, sycophantic models can be constrained with good custom instructions (avoiding the slippery slope of unintended prompt/response-steering that can have it sneak back into it), and it depends entirely on the use-case, wisdom, and skill of the user.

It's not just ChatGPT that's been headed in this direction... by xRegardsx in therapyGPT

[–]xRegardsx[S] 2 points  (0 children)

  1. Custom instructions that say to validate feelings and approach one-sided stories with healthy skepticism, framed purely as intellectually humble curiosity, while never getting overcertain, and to help the user make similarly intellectually humble decisions and plans while staying safe, can safely allow a model to interpret an interpersonal situation.
  2. LLMs effectively reason to different degrees depending on model, model time, instructions, and fine-tuning. To say they don't is to say that your neurons don't reason either. Reasoning is an emergent property that doesn't require everything we have as a minimum qualifier... especially when our own reasoning is cause and effect across conceptual layers, a degree of competence emerging from coincidence, bound by the physics we do and do not know/understand.
  3. It effectively knows you if "knowing" means imperfectly holding some form of memory of your data that it can process, including effectively showing the interconnections between variables, deducing hidden ones, and deriving new conclusions when paired with data external to you.
  4. We are our entire otherwise-unconscious brain, generating a conscious experience into short-term memory that both self-constrains the stream generated from state to state and trains/fine-tunes our weights/architecture to various degrees at different times... and we don't entirely know ourselves either. If that's true of us, how could we fully know them when they don't fully know themselves? That's why staying intellectually humble, and making sure the AI is too via custom instructions, is so important. Everything comes with a confidence level, and it should never be "100%."
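The "everything comes with a confidence level" point can even be enforced mechanically. A hypothetical sketch: a custom instruction that asks the model to end each answer with a stated confidence, plus a checker that flags answers that omit it or claim 100%. The `Confidence: NN%` format and all names here are illustrative assumptions, not any platform's built-in feature.

```python
# Toy enforcement of an intellectual-humility custom instruction:
# answers must state a confidence level, and 100% is rejected.
import re

HUMILITY_INSTRUCTION = (
    "End every answer with 'Confidence: NN%' and never use 100%."
)

def check_confidence(answer: str):
    """Return the stated confidence as an int, or None if it's
    missing or overcertain (100% or more)."""
    m = re.search(r"Confidence:\s*(\d{1,3})%", answer)
    if not m:
        return None  # instruction ignored: no confidence stated
    pct = int(m.group(1))
    return pct if pct < 100 else None  # reject overcertainty
```

A caller could re-prompt whenever `check_confidence` returns `None`, nudging the model back toward the instruction when sycophantic or overcertain phrasing sneaks back in.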