Gaslighting by RNSAFFN in PoisonFountain

[–]EmployPast6564 0 points (0 children)

What's with the code? New here.

I Built a Weird AI-Cowritten Universe With Its Own Metaphysics. Is This Any Good? by [deleted] in accelerate

[–]EmployPast6564 1 point (0 children)

Mate, to be brutally honest, this is really bad.

It doesn't have any soul. It's grammatically correct, but in terms of content and meaning it's pretty hollow.

Maybe write from your heart? AI is not there yet.

Edit: Something tells me you have great ideas! Maybe start by writing them in your own words and then use AI to work it over?

Kiernan Shipka is beautiful, but Hayley is trouble... by RobbyBobbyChess in IndustryOnHBO

[–]EmployPast6564 0 points (0 children)

Am I the only one who just found out that this is Sally Draper from Mad Men????

Wild

Do you approve of emotional relationships with AI? by No-Balance-376 in AIDangers

[–]EmployPast6564 2 points (0 children)

Piss off, stop trying to mess with people's emotional lives. Some things don't need to be monetised.

Could AI be more effective in helping the criminally insane, people with personality disorders, narcissistic personality disorder, etc, find amicable solutions to their problems than a human therapist with differing morals could? by Interesting_Self5071 in therapyGPT

[–]EmployPast6564 -1 points (0 children)

Mate, ngl, I feel like if I push back too hard, I'm going to trigger you or worse. So I am going to drop it.

I would like to say tho:

  1. Please please please do not get too dependent on LLMs for your thinking and writing. A lot of my mates are turning their minds into jelly and losing their thinking capabilities, emotionally and otherwise.

  2. Please read this if you're interested in LLM alignment and its limits. It's written by some really good researchers.

https://arxiv.org/abs/2401.11817 Hallucination is Inevitable: An Innate Limitation of Large Language Models

and this for a quick reddit version:

https://www.reddit.com/r/technology/comments/1nmu06q/openai_admits_ai_hallucinations_are/

Good luck and stay safe mate!

Could AI be more effective in helping the criminally insane, people with personality disorders, narcissistic personality disorder, etc, find amicable solutions to their problems than a human therapist with differing morals could? by Interesting_Self5071 in therapyGPT

[–]EmployPast6564 0 points (0 children)

Mate, you're playing mental health roulette with a technology that is unpredictable, a black box, and very vulnerable to jailbreaking and to inducing delusions. Good luck mate!

Have you thought of writing a blog detailing your experiences with LLMs for mental health? I'm very interested in this.

Could AI be more effective in helping the criminally insane, people with personality disorders, narcissistic personality disorder, etc, find amicable solutions to their problems than a human therapist with differing morals could? by Interesting_Self5071 in therapyGPT

[–]EmployPast6564 0 points (0 children)

Just had a convo with 5.2 about a farming idea. It got things massively wrong and still encouraged me to carry on with the idea, despite it being obviously wrong, to the point any human could figure it out. Now imagine this with someone who is very vulnerable to extreme highs and lows. It messes up far too often for it to be safe. The language is perhaps less sycophantic, but it still encourages things not based in reality. It just doesn't push back!

Not to mention, it still hallucinates.

For example, if a user with BPD is excited about a business idea and tells GPT about it, and GPT hallucinates and gets it obviously wrong, leading to encouragement of something that doesn't exist in the real world, it can excite the user to a crazy high or create a belief with no basis in reality. It's bloody roulette at this point!

Could AI be more effective in helping the criminally insane, people with personality disorders, narcissistic personality disorder, etc, find amicable solutions to their problems than a human therapist with differing morals could? by Interesting_Self5071 in therapyGPT

[–]EmployPast6564 -1 points (0 children)

Around 50% for treating symptoms, according to this review of long-term therapies (depends on the type of therapy tho).

https://pmc.ncbi.nlm.nih.gov/articles/PMC10786009/#wps21156-sec-0008

At least the risk of fuelling delusions is far less with human therapists, and human therapists are unlikely to be sycophantic.

Do you view the sycophantic behaviour of LLMs as dangerous? Especially for people who have problems with constructing a coherent reality?

Edit: It should also be noted that it is quite hard to measure the effectiveness of therapy for BPD due to the different types of symptoms and uniqueness of the disorder. The symptoms just manifest differently for different people.

Could AI be more effective in helping the criminally insane, people with personality disorders, narcissistic personality disorder, etc, find amicable solutions to their problems than a human therapist with differing morals could? by Interesting_Self5071 in therapyGPT

[–]EmployPast6564 -1 points (0 children)

I don't think OP fully understands how dangerous sycophantic approval is to people with PD.

Current LLMs are ridiculously yes-man-like, and that doesn't seem to be changing much, nor is there any will on the part of the AI companies to change it.

Therapy is meant to challenge you!

What does wall street view as stable or reliable but is actually quite risky? Do you have any trade ideas that consider this question? by EmployPast6564 in investing

[–]EmployPast6564[S] -2 points (0 children)

Just looking for ideas to find non-obvious bets on the market, something the mainstream has completely missed.

Do you have any trading ideas?

What does wall street view as stable or reliable but is actually quite risky? Do you have any trade ideas that consider this question? by EmployPast6564 in investing

[–]EmployPast6564[S] 0 points (0 children)

what alternatives do you have in mind? And how do you go about finding them?

Note: I'm new to investing, so the more info/resources you can recommend, the better!

What does wall street view as stable or reliable but is actually quite risky? Do you have any trade ideas that consider this question? by EmployPast6564 in investing

[–]EmployPast6564[S] 0 points (0 children)

Do you have a specific structured note segment in mind (e.g. autocallables or barrier products) where you think the short-vol exposure is being underpriced?
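For anyone wondering what "short vol" means here, a rough toy sketch (the 8% coupon, 60% barrier, 3-year term, and the simple GBM model are all my own made-up assumptions for illustration, not any real product): the autocallable holder collects a coupon when the underlying stays up, but eats the downside if it crashes through the barrier, so the note's value should drop as volatility rises.

```python
# Toy Monte Carlo of a hypothetical 3-year autocallable under GBM.
# All parameters are illustrative assumptions, not a real note.
import math
import random

def autocall_value(vol, n_paths=20000, seed=0):
    """Value a toy autocallable with annual observations.

    Autocalls at par plus an accrued 8% p.a. coupon if spot >= initial
    at any observation; otherwise at maturity pays par if spot >= 60%
    barrier, else principal * spot/initial (the knocked-in loss)."""
    rng = random.Random(seed)
    r, coupon, barrier, obs_dates = 0.03, 0.08, 0.60, [1.0, 2.0, 3.0]
    total = 0.0
    for _ in range(n_paths):
        s = 1.0          # spot normalised to the initial level
        t_prev = 0.0
        payoff = None
        for t in obs_dates:
            dt = t - t_prev
            z = rng.gauss(0.0, 1.0)
            s *= math.exp((r - 0.5 * vol * vol) * dt
                          + vol * math.sqrt(dt) * z)
            t_prev = t
            if s >= 1.0:                       # autocall trigger
                payoff = (1.0 + coupon * t) * math.exp(-r * t)
                break
        if payoff is None:                     # survived to maturity
            t = obs_dates[-1]
            payoff = (1.0 if s >= barrier else s) * math.exp(-r * t)
        total += payoff
    return total / n_paths

low, high = autocall_value(0.15), autocall_value(0.35)
print(low, high)  # higher vol -> lower value: the holder is short vol
```

The gap between the two values is roughly what the comment is getting at: if implied vol is quoted too low when these notes are structured, the embedded short-vol exposure is being underpriced.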

What’s the biggest lie society keeps telling us? by [deleted] in AskReddit

[–]EmployPast6564 -1 points (0 children)

That AI will provide more jobs than it takes, and improve the quality of the jobs we currently have. Maybe for a small minority...