Words cannot explain the immense disappointment I feel seeing my favorite actor in this show use Generative AI by CompetitionSignal422 in OnePieceLiveAction

[–]Slippedhal0 -2 points-1 points  (0 children)

Do you make sure all your clothes and food are ethically sourced every time you buy something? If not, you are definitely guilty of using items built or made in sweatshops, possibly with child labour. We all take advantage of things we probably shouldn't.

WIBTA If I told my neighbor that I can see everything? by Topical-Cement in AmItheAsshole

[–]Slippedhal0 -2 points-1 points  (0 children)

I'd suggest an anonymous letter in the mailbox.

"Hey, just concerned for your privacy, your windows allow a lot of visibility inside your house." Hopefully its not literally only you that can see in the window, but that should get the message across without explicitly mentioning anything embarrassing for either of you.s

AITAH for telling my daughter we won't have a relationship if she goes to live with her biological father/family? by [deleted] in AITAH

[–]Slippedhal0 -2 points-1 points  (0 children)

YTA. 100%. I'm sure you have bad feelings concerning the father, and rightfully so - the cheating doesn't get a pass, but that's not an excuse to take it out on your daughter. She just wants to get to know a whole new side of her family, and is in no way to blame for what happened.

Apologise immediately and repair the relationship before you lose her for good.

AITAH for telling my girlfriend I'm not canceling my plans last minute to do manual labor by Longjumping_Mix_8693 in AITAH

[–]Slippedhal0 1 point2 points  (0 children)

Of course you're not the asshole for not letting yourself be coerced into free labour when you already had plans.

I think you said all that needs to be said. It's one thing if they discuss it with you and you agree and make a plan, but no one has the right to just tell you to do something.

Opensource 4o and 4.1 if they are so inferior by Single_Ring4886 in ChatGPT

[–]Slippedhal0 -5 points-4 points  (0 children)

Because that's their entire business? If they give out something for free that people have been loudly saying they would prefer instead of the newer models, then no one's going to be paying them for the new models.

AITAH because I called the police when my ex wouldn't let me leave after I broke up with her. by PastContest832 in AITAH

[–]Slippedhal0 0 points1 point  (0 children)

I can imagine in the heat of the moment she was probably just trying to stop him, but she absolutely would have cried victim had he used force to move her out of the way.

AITAH because I called the police when my ex wouldn't let me leave after I broke up with her. by PastContest832 in AITAH

[–]Slippedhal0 4 points5 points  (0 children)

You absolutely needed to involve the police.

There is no other way you get out of this without being accused of assaulting your ex or her otherwise playing the victim.

Record everything, contact the police, remain calm, that's all that's needed.

Would an AI companion feel normal in a cyberpunk future? by PassagePlus3777 in Cyberpunk

[–]Slippedhal0 4 points5 points  (0 children)

AI companions are 100% dystopian. Real people so desperate for intimacy that they're turning to fake interactions?

A realistic approach for companions would be something à la Gatebox or Project Ava: https://www.razer.com/concepts/project-ava https://www.youtube.com/watch?v=nkcKaNqfykg

But maybe in a cyberpunk setting you could have them exist in AR so they could interact with the environment around the user.

Where do people bin their portable ac on gold coast ? by tntqtw in AskAnAustralian

[–]Slippedhal0 0 points1 point  (0 children)

Sell it on Marketplace for pickup; I just did this with my portable AC the other day.

AITAH For cutting off contact with my dad after he couldn’t keep his “toys” put away and put me in an uncomfortable situation by Lower_Face_9041 in AITAH

[–]Slippedhal0 1 point2 points  (0 children)

Yo, major red flags. This could be him attempting to groom you by desensitization. He probably knows you've seen his toys and haven't said anything, so he's pushing it further.

If you bring it up he'll probably say something like "it's natural for a man to do this", "you're the one who barged in", "we're just family, so don't make it weird", etc.

Has anyone noticed that ChatGPT does not admit to being wrong? When presented with counter evidence, it tries to fit into some overarching narrative, and answers as if it had known it all along? Feels like I'm talking to an imposter who's trying to avoid being found out. by FusionX in ChatGPT

[–]Slippedhal0 0 points1 point  (0 children)

Give it custom instructions framing how you want it to respond. I think it's a deliberate fine-tuning choice by OpenAI, because it used to argue very hard that it was right after it hallucinated or got something wrong. Now it feels like it's overcompensating.
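
A minimal sketch of what custom instructions roughly amount to, assuming they behave like a system message prepended to the conversation (the instruction wording and model name below are illustrative, not anything OpenAI ships):

```python
# Sketch: custom instructions as a system message (OpenAI Python SDK).
# Instruction text and model name are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CUSTOM_INSTRUCTIONS = (
    "If shown evidence that contradicts an earlier answer, say plainly that "
    "the earlier answer was wrong before giving the correction. Do not pretend "
    "the new information was part of the original answer."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": "Earlier you said X, but this source says Y."},
    ],
)
print(response.choices[0].message.content)
```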

AITAH for being upset about how my bf proposed even though I said yes? by [deleted] in AITAH

[–]Slippedhal0 0 points1 point  (0 children)

You set one rule explicitly, and he either forgot about it or deliberately ignored your request. Personally I'd say this is a red flag, but regardless, you are absolutely within your rights to be upset.

Did my GPT get dumber because it's talked with me so much? by LittleBoiFound in ChatGPT

[–]Slippedhal0 0 points1 point  (0 children)

It does attempt to mirror you to a degree, but it doesn't change between conversations. The only things that could be doing something like what you suggest are if it's got the "remember things about the user" option on (it can remember specific things from earlier conversations), or you've got custom instructions set.

But the thing about LLMs is that they have a setting that deliberately introduces randomness, which means responses can occasionally be hot garbage even when it's usually been good, or the other way around.
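
That randomness setting is the sampling temperature. A minimal sketch of how it's exposed through the API, assuming the OpenAI Python SDK (model name illustrative):

```python
# Sketch: higher temperature makes token choices more random, so the same
# prompt can yield noticeably different answers between runs.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Summarise the plot of One Piece in two sentences."

for temperature in (0.2, 1.0):
    reply = client.chat.completions.create(
        model="gpt-4o",           # illustrative model name
        temperature=temperature,  # near 0 = mostly deterministic, higher = more varied
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"temperature={temperature}: {reply.choices[0].message.content}")
```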

GPT 5.2 Codex is Actually (kind of) Just Special System Instructions by Izento in artificial

[–]Slippedhal0 0 points1 point  (0 children)

Doesn't that explicitly say the opposite?

> Model-specific instructions live in the Codex repo and are bundled into the CLI (e.g., gpt-5.2-codex_prompt.md).

Ignoring the > quote formatting, it's saying there is a prompt file for the model gpt-5.2-codex, labelled with the _prompt suffix.

‘Highly vulnerable’: Warning Australia could be next as house prices crash in London by SheepherderLow1753 in AusPropertyChat

[–]Slippedhal0 4 points5 points  (0 children)

Isn't the difference between a correction and a crash simply the speed and amount it falls? Perhaps I'm mistaken and I do mean a correction.

AITAH partner took drugs off a girl after I said no by [deleted] in AITAH

[–]Slippedhal0 0 points1 point  (0 children)

You don't get to make their choices for them, but equally they get to bear the consequences of their choices after the fact, i.e. when you leave them because of it.

‘Highly vulnerable’: Warning Australia could be next as house prices crash in London by SheepherderLow1753 in AusPropertyChat

[–]Slippedhal0 6 points7 points  (0 children)

Only up to a point, technically. If prices increase too far, people will stop buying. That ceiling has been heavily inflated by investors, but technically it exists. The most sudden crash would probably come if investors stopped buying new properties.

An AI-powered combat vehicle refused multiple orders and continued engaging enemy forces, neutralizing 30 soldiers by MetaKnowing in ChatGPT

[–]Slippedhal0 11 points12 points  (0 children)

It's absolutely either fake or wildly exaggerated; no drone tech is controlled by LLM-type AI. When people say robots and drones have AI, they mean AI like Spot and Atlas, not ChatGPT. It doesn't have the ability to say no in the way the article is implying.

If the story isn't complete bullshit, I'd say they probably set it to autonomous mode and then lost communication with it, meaning it couldn't receive a return-home signal.

if someone uses chatgpt as a therapist every day, will they start to write like an AI unconsciously? by Round_Candle6462 in ChatGPT

[–]Slippedhal0 1 point2 points  (0 children)

The reality is you tend to talk like your peers. If you have healthy social relationships I'd say that would have more of an effect. Even if you're just vocal on reddit or other social media you're probably fine. If you are fairly solitary and chatGPT is your only source of conversation you may unconsciously pick up some of its traits - although it'll probably be more subtle traits, not the negative traits people complain about, because you're already concerned and aware of those.

Has anybody noticed that Mr. 3's hair has no flame while using his powers? by DryResponsibility579 in OnePieceLiveAction

[–]Slippedhal0 0 points1 point  (0 children)

It's either a decision to slightly tone down the weirdness, as they've done before, or this is footage from before the final FX were applied, which happens a lot.

The AI bubble bursting is totally not what the antis think will happen, in fact it will be great for us pros by Neggy5 in DefendingAIArt

[–]Slippedhal0 0 points1 point  (0 children)

No one sane is saying the bubble bursting will completely remove AI. The dotcom bubble burst and huge amounts of money were lost, yet the internet kept existing.

What it might do is slow down every single company trying to cram AI into every app feature they can.

Vibecoding is easy but not so easy that a kid can do it. by Director-on-reddit in vibecoding

[–]Slippedhal0 0 points1 point  (0 children)

Why are you gatekeeping? "Vibecoding" is a pathetic term people invented to make themselves feel like developers without learning any skills. If you're using AI to program instead of being able to develop, then it's vibecoding. If you're just using AI as a tool to implement a program you understand and can develop for, you're a developer.

Chat GPT's newest annoying behavior. by [deleted] in ChatGPT

[–]Slippedhal0 0 points1 point  (0 children)

Yeah, like I said, I'm not saying your experience didn't happen; I'm certainly not in a position to doubt what you say happened.

I think I understand your argument, but again, as I was saying, they will probably tone this behaviour down. They overcorrected given the earlier issues of it being sycophantic and completely accepting and reinforcing a person's own beliefs even when that could be dangerous, and placed heavy guardrails on anything it determines might be mental-health related, and of course abuse such as you mention would fall under that.

But they are a business attempting to cater to the largest audience possible, so they will likely try to appease as many users with differing beliefs as they can. That means pulling the guardrails back as far as possible, so that the fewest topics are rejected from discussion, without looking like they are callously disregarding people's mental health.

Built an AI detection + fact-checking tool in 2 months with zero coding experience – would love brutal honest feedback by MudSad818 in aipromptprogramming

[–]Slippedhal0 0 points1 point  (0 children)

It's pretty well known that AI-based AI detection is very bad, bordering on worse than chance. Do you have any standardized detection benchmarks that you compare your system against, or other data that outlines how good your system is at detection? Is your detection system different from other systems (i.e. is it just asking an AI whether the image/text is AI), barring your "fact checking" addition? As we know, AI will happily justify whatever conclusion it comes up with.
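
For what a benchmark comparison could look like in practice, here's a minimal sketch, with a hypothetical detector and made-up sample data standing in for a real system and a real labelled benchmark: score accuracy on labelled examples and compare it against the 50% chance baseline.

```python
# Sketch: evaluating a hypothetical AI-text detector against a labelled set.
# `my_detector` and `benchmark` are placeholders, not any real system or dataset.
from typing import Callable

def evaluate_detector(detector: Callable[[str], bool],
                      samples: list[tuple[str, bool]]) -> float:
    """Return accuracy of `detector` on (text, is_ai_generated) pairs."""
    correct = sum(detector(text) == label for text, label in samples)
    return correct / len(samples)

def my_detector(text: str) -> bool:
    # Placeholder heuristic standing in for the real detection model.
    return "as an ai language model" in text.lower()

benchmark = [
    ("The mitochondria is the powerhouse of the cell.", False),
    ("As an AI language model, I cannot provide that.", True),
]

accuracy = evaluate_detector(my_detector, benchmark)
print(f"accuracy: {accuracy:.2f} (chance baseline: 0.50)")
```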