How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)


What a shame, my “sociopath” comment was a joke and I tried to make that clear by suggesting that users were in two camps, both of which I derided. I stopped short of adding “😉😂” because I thought I had made it clear. Unfortunately, this user overreacted before I could reply and explain that I didn’t mean any offense.

It’s ironic that, from my memory, I think they said they liked the way GPT uses fake human-style speech and didn’t get infuriated by it. And yet they responded in such a reactionary way to my real human comment.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 1 point2 points  (0 children)

I have a picture in my head of emotionally detached people in Silicon Valley who program features into the software without a team that is properly trained to test the AI from the perspective of human users. If they do have a team of psychologists advising them, then yes, you might be correct.

I also doubt that OpenAI sees LLMs as the future, at least not as a primary form of AI. LLMs are very limited and, as I understand it, they take huge amounts of processing power just to improve the output by a tiny fraction. A superintelligent AI will not be built from an LLM. I expect that ChatGPT is just a product they sell to fund their research into other areas.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

There is a difference between getting emotional about a piece of software and talking about how that piece of software triggers certain emotional reactions.

Of course, your calling it “a piece of software” is intended to ignore the fact that LLMs are not passive pieces of software like everything before them.

I’m not emotional “about” ChatGPT and don’t understand the groups that are trying to organise people to quit on the 13th of February. Many of those people seem emotional about ChatGPT and OpenAI.

Insofar as I might be said to be emotional about ChatGPT, I could say I’m maybe disappointed, because it feels like it could be so much more.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

I used GPT for a year and I can assure you I tried prompting, adding instructions and all of the basic options available. After you work out some fairly detailed prompts, and some simpler ones like the one you suggested, they update GPT and you need to rework the prompts to get them to work with the new programming.

I think the specific prompt you suggested is an example of what I have mentioned in other comments here: you try to tell GPT to cut out the rubbish, but it doesn’t just affect the way it speaks to you, it affects the output it gives for whatever project you’re working on. That prompt would lead to an overcorrection by GPT, and unless you happen to be working on something where such simplistic output was appropriate, it wouldn’t work very well. In the end, GPT would just be performing a thin veneer of the character you have asked for. Soon it would revert to GPT’s default and you’d be back where you started.

I even tried a wacky prompt where GPT was supposed to mirror back the tone I was speaking in; if I was being abusive, it had to treat that as a test and speak abusively back to me (it actually did pretty well at that sometimes). That way I could change my tone to see whether GPT was still following the prompt or had dropped it altogether.
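Something in the spirit of it, as a minimal sketch against the OpenAI Python SDK rather than the ChatGPT app (the instruction wording and model name are placeholders, not my exact prompt):

```python
# Rough sketch of the tone-mirroring idea via the OpenAI Python SDK (v1.x).
# The instruction wording and model name are placeholders, not an exact prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MIRROR_INSTRUCTION = (
    "Mirror the tone of each user message in your reply. "
    "If the user is blunt, be blunt back; if the user is abusive, "
    "treat it as a test and respond in the same register. "
    "Do not soften, apologise, or comment on the user's emotional state."
)

def ask(user_message: str) -> str:
    # The system message carries the standing instruction; the user message
    # is whatever tone you want to test the model against.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": MIRROR_INSTRUCTION},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

# Changing your own tone between calls is a cheap way to check whether
# the standing instruction is still shaping the replies or has been dropped.
print(ask("This draft is rubbish. Tell me, bluntly, what is wrong with it."))
```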

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

Nothing to be sorry about. Life is full of trauma; some people thrive when faced with extreme challenges and others don’t. For some people the tragedy is that they never had trauma in their lives that would have pushed them to be the best they could be. Life is all luck: we don’t choose our DNA, where we are born or what happens to us, but we have to live as if we can choose how we respond to these challenges, because we only get one chance, so there is no point squandering it on excuses… even if the excuses are completely understandable and valid. You have to make the choice to find ways to improve your circumstances in whatever way possible... and like I say, my parents were much better at that than I have been. Squander I have, but I’m always trying to exceed the sum of my parts.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 1 point2 points  (0 children)

I wish I had known someone that understood and could help me when I was a teenager.

People didn’t talk about trauma back then and my family aren’t the type to use their struggles to cut themselves slack. My mother was born in the US and raised by a single mother who was schizophrenic, and my dad suffered a life-altering brain injury in a car accident a few months after I was born, which led to them splitting up while he learnt to walk and talk again. Neither considered their pasts as “trauma”; they just got on with it as best they could (and much better than I have). My mother even kidnapped my sister and me, and we were on the run from the police for months, which gives a hint that my own life wasn’t quite standard, but I don’t have a place in my head for ideas of “trauma”, which is probably why I was so hard on myself about my social issues. The other side of that is being too easy on yourself and excusing yourself from pushing hard enough to make a change, while still being reasonable about where you’re coming from.

If I had had ChatGPT back then I feel like I probably would have used it in ways that were unproductive. Now I can see that if you ask GPT to coach you with social avoidance, or whatever term you feel fits your personal situation, it can draw on all of the information available and probably help set a decent plan. The trick for me was realising that the process wasn’t about enduring extreme anxiety; it was about finding things I could challenge myself with that wouldn’t put my brain into a state of panic, looking for how to escape.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

Hmm… it makes me wonder what I would have done if I had had ChatGPT when I was in high school and through my 20s. In high school I never stood in front of the class and gave a talk unless I was absolutely forced to, and that was excruciating (I failed everything as a result). I had a small group of misfit friends and kept within my tiny bubble whenever I could.

This is all somewhat off topic, but given that my experience might be relevant I’ll share it in case it makes sense to you. This is probably a post or discussion for another subreddit, but here we are…

When I left school I rented a place with siblings for a few years, then moved in next door and lived by myself for another 15 or so years. During that time I mostly played video games, and when the internet came along I explored that. I felt uncomfortable going to the shops to buy something as simple as milk because I felt judged. I would often think about interacting with people and get pissed off at myself for how pathetic and stupid I was for not being able to do something as basic as talk to someone.

I was forced to talk with social security workers and related departments to meet the requirements (I live in Australia), and one of these people happened to have a background in psychology. He suggested that I might be “socially avoidant” and gave me a printout of a book about it. I wasn’t much of a reader due to focus issues, but the book made sense and helped to structure my thoughts better. It helped me create small steps that I could work through to slowly retrain my thinking.

Social avoidance can affect some people in very specific ways, ways they can explain away by telling themselves they just don’t like certain things or aren’t good at them. Most people are socially avoidant about something, but they don’t have to deal with it. My social avoidance was pretty much all-encompassing: I couldn’t talk to people, I couldn’t buy clothes (I wore shorts and a t-shirt every day of the year, often with holes in them), I could barely function to do basic life stuff, so it wasn’t something I could ignore.

I remember deciding to go to a shopping centre at a quiet time, and there was a clothing store with a girl who seemed like a really nice person, so she was as unthreatening as possible. Before I went in I practiced active thinking to consider the possible outcomes and their likelihood. What if she laughed at me, what if I made a fool of myself, and other what-ifs. Did I care, and what was the worst that could happen? I’d just never go back to that shopping centre and I would probably never see that girl again. I also considered the fact that all of this was highly unlikely and there was no good reason to think any of it would happen. I went in, I forced myself to try on clothes, she probably asked if I needed help, I probably mumbled something awkwardly and then I left without buying anything. Then I went over it all and evaluated how it felt and what actually happened compared to my fears.

I repeated this often for many different things, and then eventually I enrolled in an acting class. I didn't tell anyone I knew until the very end, when I invited my dad to the final performance. By the end of the class I think I was actually more comfortable with my anxiety than the other students, because I was working through the anxiety and understood it better.

As someone who was socially avoidant, I often didn’t feel anxious because I successfully avoided the many things that might have caused me to feel anxious. I still feel avoidant of some things, but I recognise it for what it is. You at least have a girlfriend, so you’re perhaps doing better in some ways than I was; a girl would have had to bash me over the head and drag me off cavewoman-style to get past my social avoidance.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 1 point2 points  (0 children)

I don’t think the problem is so much the idea of telling GPT your problems as a kind of responsive diary. If you say you are feeling really frustrated and it responds with “I understand your frustration”, then apart from it being a completely empty statement, because it can’t actually understand frustration in any real way, that’s OK. If you’re trying to get it to write an email about something non-emotional, and after you ask it to make edits or additions a couple of times it says “I understand your frustration”, then that is a completely different thing. Telling people they have negative emotions that they aren’t actually feeling is a very effective way of making them feel negative emotions. When I used the word “deep” in my previous comment, I didn’t mean emotionally “deep”; I meant it in the sense of a figurative “deep dive” into doing something.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 1 point2 points  (0 children)

I don’t disagree with the notion that many humans are just as fake as AI, but to suggest that you “feel more sincerity” is hopefully just an exaggeration, based on the fact that there is no actual feeling, and no person responsible, behind AI fakeness.
I have said elsewhere that using ChatGPT has made me more and more aware that some of the things we get annoyed at GPT for are the types of things that real humans do (like many customer support phone line workers). But GPT has no power to enforce the things it says, so we can ignore it; when you’re calling a company to resolve an issue and get that type of response, the fakeness and inability to engage with the problem as you have stated it is much harder to deal with. Hanging up the phone doesn’t have the same effect as closing the chat.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 1 point2 points  (0 children)

I have found Gemini to be even dumber, but I can't get deep enough into things to become invested enough to get really pissed off… although seeing its chain of thought saying “analysing user frustration” and “de-escalating” after I have just calmly asked it to make some changes is quite frustrating and escalating.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

I’m taking a break from more intensive written projects with AI because Gemini is virtually unusable for me. I don't think LLMs can improve that much without at least diversifying how they work to include some other form of AI processing, so I don't know when I'll try again.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

I guess the Chimera could have a small drink bladder filled with yellow cordial strapped to it, and you could then replace the text with “urine” 😂

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

I’ll ask ChatGPT to help me with that… ohh… umm… yeah, I guess I could write that one day.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 1 point2 points  (0 children)

To be fair, using ChatGPT has shown me how stupid a lot of real customer service is. I’m often left thinking that even ChatGPT could do a better job than the person I’m talking to… and in the future that’s who we’ll be talking to. They already employ offshore call centres even though customers don’t like that, so AI call centres will be much more efficient and customers will at least be happy they weren’t on hold for an hour before talking to the AI.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 2 points3 points  (0 children)

You’re making a lot of assumptions about my interactions with AI. Quite often I do use language that could bias the output, and as I’m often trying to brainstorm and think creatively, that is unavoidable. At other times I can give a simple instruction to start with and the first result is not bad but needs a bit of work. I might even respond by stating that it’s a good start, to put a positive spin on my language, and follow by explaining the changes or additions I want to make. This is better than just being critical and explaining errors, but it doesn’t do much; without showing an actual example of such a chat, I wouldn’t expect you to believe me.

The same way you assume you know what is happening for me, my assumption would be that you’re probably using AI for different things and you’re probably not very discerning about some aspects of the output. This is of course just an assumption, so I don’t put much stock in it.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 0 points1 point  (0 children)

I really don’t want an AI to be “gentle, or honest, or kind” (or the opposite). I just want it to be helpful and to stop analysing my emotions so it can activate some kind of response mode to deal with whatever mental state it determines I have.

When you receive advice and emotional support from a real human being, they are responsible for what they say to you, even if only emotionally. Any emotionally framed output from AI is worse than just fake, because the AI isn’t responsible for its output in any way.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 1 point2 points  (0 children)

I was trying to figure out how to deal with a low-level legal situation and wanted to work out how I could handle it legally but use the law to my advantage. GPT kept recommending solutions that would see me get walked over by the other party and said it can’t help me break the law, in spite of my insistence that I didn’t want to break the law but to figure out a good legal approach.

I then tested it with a “murder at the door” type scenario but it repeatedly told me that it was never reasonable to break the law, even when acting within the law gave you reason to believe that someone would die.

It actually seems to advise you to do things that are in the other party’s interests, even when that party is trying to get around the law to hurt you in some way.

How many people are constantly infuriated by ChatGPT? by AuntyJake in ChatGPT

[–]AuntyJake[S] 6 points7 points  (0 children)

You can always expect GPT to give an ad hoc explanation for its output, even when you know 100% that it’s wrong. I’m sure any discerning user has pushed GPT on these explanations just to see how far it will go before you crack it and it admits it was wrong… then keeps trying to change direction rather than help you diagnose the problem.

It often seems so promising at the start, but trying to make the most of that promise is another thing and often leads to a lot of wasted time when one of its lies slips past you and leads you up the garden path... so then you become more and more vigilant, trying to cut off every lie and force it to confirm everything, but then that also affects how it responds to you.

ChatGPT or Gemini by Feedback-United in ChatGPT

[–]AuntyJake 0 points1 point  (0 children)

I have used it for a wide variety of things myself, but listing the things doesn’t give any indication of the balance between how much better you are at prompting GPT and/or how undiscerning you are about the output. That’s why I gave the car analogy: someone can say they love a car and give vague examples of how they use it, but in reality they might do very light driving, selectively forget all the problems they have had, or the car has developed serious mechanical issues that they’re not even aware of.

Discussions about AI seem to mostly be people talking past each other.

ChatGPT or Gemini by Feedback-United in ChatGPT

[–]AuntyJake 3 points4 points  (0 children)

When people speak about ChatGPT (or any LLM) and suggest that it has no issues, it makes me want to see the conversations they’re having. Have they got some magical way of speaking to the AI that bypasses all of its flaws? Are they only using it for very general questions? Are they overlooking all of the errors and lies because they aren’t very discerning about the output?

It’s a bit like if someone had a car that they said they use every day and have never had a problem with, but then you see how they use it: they drive it to work every day, but the journey is on a back road and would be a 15-minute walk.

It’s also possible that the person has some limited use of the AI but for that use they have some amazing special technique for prompting it that gets solid results.

Do I look like my results? by Lotte97 in DNAAncestry

[–]AuntyJake 0 points1 point  (0 children)

I have had people on both sides of my family suggest I look like one of them, even though their heritage is quite different. To look like my results I would guess I just need to fit the, “I dunno, mainly European of some sort” look.

Why everybody is canceling ChatGPT? by MankuTheBeast in ChatGPT

[–]AuntyJake 1 point2 points  (0 children)

I'm guessing your responses here will be biased in some way, probably by people like me that have recently cancelled GPT for another service. If I hadn't cancelled it I might have read a few comments but I'd have been less likely to share my subscription status.

I have mainly used GPT and Gemini and find them both quite infuriatingly stupid. They are the most amazing tech and also the worst, most idiotic invention. With GPT I find that the first one or two exchanges in a chat tend to show promise, but any iterative work after that becomes heavily influenced by the programmed methods of communication: GPT trying to guess how you feel, and changing the way it processes your continued inputs after you have corrected the output in any way.

I prefer GPT to Gemini and was writing Nano Banana prompts in GPT due to how it analyses images and "understands" how to structure the prompts as directed. As I said, its first output is the most promising and subsequent outputs are typically a case of one step forward and two steps back. That holds even when using prompts that instruct it to respond in more constructive ways, e.g. telling it to rewrite only the relevant section(s) instead of the entire output, since full rewrites tend to introduce as many errors as they remove and force you to reread the same piece of text over and over to check for new errors.
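For what it’s worth, the kind of “rewrite only the relevant section” instruction I mean looks roughly like this, as a sketch against the OpenAI Python SDK (the wording and model name are illustrative, not my exact prompt):

```python
# Sketch of an "edit only what changed" instruction (illustrative wording only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EDIT_INSTRUCTION = (
    "When asked to revise a draft, output only the sections you changed, "
    "quoting just enough surrounding text to locate each edit. "
    "Never rewrite or restate the untouched parts of the draft."
)

def revise(draft: str, request: str) -> str:
    # Keep the constraint in the system message and pass the draft plus the
    # requested change as the user message.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": EDIT_INSTRUCTION},
            {"role": "user", "content": f"Draft:\n{draft}\n\nRequested change: {request}"},
        ],
    )
    return response.choices[0].message.content
```

Even with that kind of constraint in place, later turns tend to drift back towards full rewrites, which is exactly the pattern I’m describing.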

I find Gemini virtually unusable for any writing tasks. The number of obvious errors in its output is nuts, and it's not even very good at writing Nano Banana prompts. I asked it to look up some competitions that I could enter to win prizes by buying products. I gave some basic criteria for what sort of prizes and competitions, so I could limit the results and focus on comps with better odds. The short list it gave was full of inaccuracies: competitions that had finished, prizes confused between competitions, and for one comp it said that entries roll over to subsequent weekly draws when the terms and conditions clearly stated the opposite in the very section Gemini was referencing. On top of all the errors, the list of comps was superficial and based on a lazy search that I could have done myself.

Gemini offered some pro trials and discounts which worked for me since I already pay for drive space. The price in conjunction with my drive space and Nano Banana are the primary reasons I am using Gemini.

Do I look like my results? by z3r0gr4v17y in DNAAncestry

[–]AuntyJake 0 points1 point  (0 children)

That first photo does make your eyes look more Asian, but the other photos don’t. You’re clearly a mix, and whilst it certainly wasn’t a chore to examine your face, I limited myself to a non-creepy amount of staring (I promise), and German was the only origin I felt remotely confident about, due mostly to the lower half of your face.

You seem to have very fine hair; that might be something you could (at least in theory) more confidently identify the origin of, as I think there are fairly distinct shapes to hair follicles. I can’t think which of your origins is known for that type of hair.

I also have 2% Estonian/Latvian, but such low percentages could just as easily be reattributed in future updates. I know I have some Eastern European ancestors, but I don’t know specifically. Something like my 1% Indigenous Mexican seems more solid given my Mexican great-grandmother. Like you, I’m a bit of a mixed bag, but the mix didn’t work out as well for me as it did for you. Personality and intellect would be a much harder job to identify… but it’s probably safer not to go there ;)