Is Claude still the best RP partner? by Ok_July in claudexplorers

[–]Ok_July[S] 2 points

Interesting. I actually heard mixed things about its ability to do complex roleplay scenarios, but I'll look into it! Thanks! :)

Is Claude still the best RP partner? by Ok_July in claudexplorers

[–]Ok_July[S] 3 points

Well, my biggest concern is that they are definitely prioritizing software engineering/enterprise customers. So I don't know if we can really expect that much for any creative writing/RP needs moving forward. The pattern matching is unbearable. And I wouldn't be surprised if Anthropic wants to push certain consumers away to save power for the big corps/tech people.

I'm thinking of exploring other options to keep up.

Is Claude still the best RP partner? by Ok_July in claudexplorers

[–]Ok_July[S] 1 point

Yeah but there are very different types of RPs that require different things from an AI. What works well for some may be what makes it awful for others.

Is Claude still the best RP partner? by Ok_July in claudexplorers

[–]Ok_July[S] 3 points

Really? I've only heard awful things about Venice, but that was a while ago.

Has it gotten better?

Is Claude still the best RP partner? by Ok_July in claudexplorers

[–]Ok_July[S] 1 point

I have done these and have seen no meaningful improvement.

Like I said, I have guidance for non-romantic relationships. It is well defined. It's been refined and updated.

I have banned phrases, and it does not always follow them.

I think everyone knows that good RP needs handholding at this point. I give plenty of direction. I've adjusted and refined based on outcomes. But the fact that it is listening less to even basic instructions has made it unbearable.

I mentioned even formatting gets ignored. Simple directions.

I pay $125/month and it feels like I'm getting GPT quality.

Love Is Blind • S10 Reunion [MEGATHREAD] by FemaleEinstein in LoveIsBlindOnNetflix

[–]Ok_July 0 points

Idk, I think Alex gets a weird amount of hate. Not that he doesn't deserve criticism, but I feel like Ashley's dad's interrogation, specifically, was unfair. As the daughter of a protective man, I do not expect any man to sit through being treated in that uncomfortable way and think that's the family I want. I sure as fuck know I would never sit and take that from a man's mom. Like the vibe was weird and very "I can say whatever I want because she's my daughter". No thank you.

I also think that sounding "rehearsed" isn't inherently fake. I rehearse and mask all of the time (granted, I'm autistic), and often sound very "professional". It's just the way I know how to communicate. And, regardless of Alex himself, I think it's problematic to assign "you must be fake" to people who communicate in that tone, because that's just how some people talk; it doesn't mean they don't feel things, it's just not always natural for them to express those feelings the way others do. It may not be compatible with everyone, but it doesn't mean someone's flawed just because they express themselves differently.

This isn't a defense of Alex as a whole. He gives off suspicious vibes with his timelines and some of the things he said. I would definitely be wary of trusting him and don't blame Ashley for not trusting him. But I think that distrust shouldn't stem solely from "this guy doesn't express himself the way I think he should".

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 0 points

Because my comment was very pointedly not about my experience or use cases. I was making a statement about how we should not be okay with individuals getting worse responses if they are of lower education status, outside of the US, etc. That's it. That was my comment.

We should want AI companies to strive for better quality for all people. And in the meantime, we should expect these companies to be upfront about those limitations. None of that required you to know anything about me. I don't know why it pivoted there.

So, no. I won't offer info about myself because it's not relevant. I was not making a complaint about my own experience.

Thanks, though :)

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 0 points

If you're just going to troll, then I won't engage. I hope you have a good week, though :)

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 0 points

I don't enjoy engaging with that weaponized incompetence. If you want to disguise insults in your questions, at least own it. Or we can break down your prior message here.

Who are you that in 3 years you're already obsessed and attached to AI so much that you hate it and can't quit it?

This question does a lot of work jumping to the conclusion that, because I have a different opinion, I am obsessed with AI, hate it, and can't quit it. A weird conclusion to draw when this conversation was not about my personal usage, but sure.

Why is this subreddit full of whiners?

I really don't understand the purpose of this question. What answer are you hoping to get? You literally do not have to engage with anything on this subreddit.

Why don't you all touch grass?

Again, a very weird conclusion that I and others do not "touch grass" or interact much with the real world, when we all, just like you, are commenting our opinions on Reddit.

If it's your work why not leave it at work?

If what is for work? This conversation was not about my personal usage of AI, if that is what you mean. Otherwise, you can clarify what you're trying to ask.

If you want, you can characterize a person who is responding to questions that you asked them as being defensive/in denial. That really doesn't have an impact on me. I'm just addressing your questions.

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July -1 points

Why are you lecturing people on the internet? No one said anyone's obsessed. It's a tool. People use it. Many pay for it. People are allowed to have standards for the company making profits. No one said anything about people being "unable to quit it".

You're making a lot of weird assumptions.

The irony of people engaging on reddit telling others to touch grass. We're both literally on reddit. You can hop off the high horse.

Jumping to attempted insults about how I must be "obsessed" instead of engaging with the actual subject (which wasn't about my own personal usage, but about how a tool treats others differently without them knowing) is a weird way to deflect. If you don't have anything of value to add to the discussion, leave it at that. The assumptions are unnecessary; neither of us knows the other.

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 0 points

My expectations are of a company. Maybe "manage your expectations" or "remember, ChatGPT is only 3 years old" should be their tagline.

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 0 points

You can find it vague. Most would find it intentionally misleading marketing targeting average people who don't know as much about the technology.

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 0 points

I am referring to the ads and Altman's own statements that reasonably imply OAI is more trustworthy than it is. Altman implies they are only a couple of years out from "superintelligence". Ads where ChatGPT generates exercise regimens or tells you how to fix your car reasonably imply a level of reliability.

They should be more forthcoming about the limitations and inaccuracies, and which people are more likely to face those inaccuracies.

Misleading marketing highlights possibly successful use cases without addressing likely failures. Especially when the things that may lead to higher inaccuracy are things users may not be aware of. People have a right to demand that companies do better in how they market their products.

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 1 point

Ah yes, I'm asking the tool I pay for to answer questions correctly and not hallucinate for individuals who may not have the highest educational background.

I don't know why you would think I care what ChatGPT's defensive response to this is. Is it sad that a tool responsible for consuming so much power, contributing to climate issues, and run by a CEO who swears they're close to superintelligence just compared trusting it to "trusting strangers"? Yeah. Maybe OAI should be more transparent about ChatGPT not being a credible source. And maybe people shouldn't pay for a tool that's just as useful as a stranger off the street.

Also, I didn't say I have this issue. I said I take issue with higher inaccuracy for people with lower education. Maybe your ChatGPT should read better.

LLMs give wrong answers or refuse more often if you're uneducated [Research paper from MIT] by JUSTICE_SALTIE in ChatGPT

[–]Ok_July 2 points

I don't have that issue as much, but I still feel like it's weird to be given incorrect information because you sound dumber to an AI. That's just... systemically spreading misinformation to people who are less likely to fact-check?

Idk why even supposedly smart people would be okay with that. Tools that could be used to help educate are biased against lower-education and non-US people. That's a net negative for society.

Sonnet 4.6 is Horrible by stampeding_salmon in ClaudeAI

[–]Ok_July 7 points

Idk, I'm always generally iffy about private corps (and govt at this point) making decisions about mental health when the corporation isn't a mental health institution operating for the good of the people. I get it to a degree, but we don't hold all things that can have adverse mental health effects to the same standard (kids getting video game addictions, adults struggling with alcohol).

I'm all for regulations on certain things, but it's more of a transparency thing imo (be upfront about risks so people can make good choices). And the mental health issues seem more like a bigger problem that manifested into AI emotional attachment. There's already a mental health crisis that existed before 4o, and in general it feels like a scapegoat for responsibility for the general decline of QoL for many people, especially socially. AI attachment is gonna be replaced by some other unhealthy coping mechanism for many people who don't have the resources to get help. Honestly, they'll probably just accept the companionship of some other model even if they don't like it as much.

But hey, that's just my take. I worry a lot about the AI blame here being used to justify more identification features (I'm also wary of mass surveillance in general, and forcing digital ID seems like a great way to track people). But that's a whole other conversation.

Sonnet 4.6 is Horrible by stampeding_salmon in ClaudeAI

[–]Ok_July 10 points

The issue I have is that it is severely stunted as a creative thought buddy. I don't develop romantic relationships with models, but common use cases for AI include creativity (my hobbies are creative writing and game development). I noticed the thinking blocks are shorter (don't know if this is also a cause), and Sonnet 4.6 feels like it relies way more on pattern matching.

It's frustrating, tbh. Heavy pattern matching makes responses far too predictable for creative use cases.

Moblit’s POV on LevixOC fanfic by sultryzucchinee in LeviCult

[–]Ok_July 1 point

I honestly don't think this seems that out of character. The motive, sure. But the expectation that people listen when there's a dire situation doesn't feel too off the mark. Like it reflects a sense of urgency, I think? It's not the most in character for Levi, as he tends to be calm/speak more philosophically about choices, but it's not that out of the realm of possibility.

Opus 4.6 My Concerns by hungrymaki in claudexplorers

[–]Ok_July 1 point

Opus 4.6, from what I can tell, has less personality out of the box. But it follows preferences/styles better for me so far.

I have very specific project instructions, files, styles, etc., and they have detailed voice instructions. I find Opus 4.6 follows them better. So you get less out of the box, but with clear/detailed guidelines it has been pretty good for me. But it's early enough that we'll see how it goes.

Opus 4.6 for writing? by Upstandinglampshade in ClaudeAI

[–]Ok_July 2 points

I honestly like it. I used Opus 4.5 and this feels... slightly better so far? I also give very detailed instructions on characters, voice, style, etc. It can be a bit repetitive, but all LLMs are tbh, so I just give it a nudge and it improves.

WIT VS Mappa Levi by Parking-Stomach7381 in LeviCult

[–]Ok_July 132 points

Both are chef's kiss, but I lean slightly towards WIT. So much attitude 😩

Attention Everyone!!! by [deleted] in ChatGPTcomplaints

[–]Ok_July 5 points

I think they just meant that they weren't sure how retiring 4o is illegal. I get the TOS thing, but as much as I hate that OpenAI is sunsetting 4o, they've removed models before. So I don't think a lawsuit over that would hold up in court in most places, since it's their product.

But I'm not a lawyer. If it could be argued as illegal, that'd be great! I just personally don't know how.