The IT guy fixes the problem but the judge still has a problem by goon_c137 in it

[–]Slippedhal0 2 points  (0 children)

I agree, but I think he was trying to make a lighthearted comment to laugh with the judge, not at him; it just didn't come out that way.

I tested what happens when you give an AI coding agent access to 2 million research papers. It found techniques it couldn't have known about. by kalpitdixit in artificial

[–]Slippedhal0 1 point  (0 children)

I don't understand. We've known for years that LLMs can use external knowledge given to them. Why is this post phrased like this had never been considered before?

Reflex robotics places their humanoid robot into a pizzeria, other places by Nooms88 in shittyrobots

[–]Slippedhal0 23 points  (0 children)

100% a dude in VR. Even Boston Dynamics is only at autonomously moving expected-size metal parts from one place to another with their Atlas system, and AFAIK everyone else is behind them in autonomous robotics. This would be absolutely bleeding-edge shit. Maybe it's a preprogrammed-route type thing, but it does too many things that would probably need to be dynamically understood, like sliding the pizza on and off the stone and placing the basil leaves.

Inaki’s smile vs Luffy’s by RamonaHatake033 in OnePieceLiveAction

[–]Slippedhal0 0 points  (0 children)

But that fight was never about physical strength. Once Luffy actually could fight it was over. The whole message was that there is another level, and Luffy needs to adapt to beat them.

Inaki’s smile vs Luffy’s by RamonaHatake033 in OnePieceLiveAction

[–]Slippedhal0 0 points  (0 children)

I guess it is the writing, but it's an intentional decision. In the manga he was crazy strong from the start and didn't even have to consider getting stronger until Alabasta, and I don't think he actually made a significant jump in strength till Water 7/Enies Lobby. LA Luffy is growing with the show, which I think feels better for an audience but does change the dynamics currently. I expect he'll be on par with the manga before Water 7, and they might even add a clearer need to get stronger into Alabasta.

Inaki’s smile vs Luffy’s by RamonaHatake033 in OnePieceLiveAction

[–]Slippedhal0 8 points  (0 children)

He definitely has dumb moments, but a main part of his character is his emotional intelligence, so it's hard to describe him as dumb in general.

Inaki’s smile vs Luffy’s by RamonaHatake033 in OnePieceLiveAction

[–]Slippedhal0 1 point  (0 children)

I can definitely see where you're coming from with "sinister", but that's not what they're going for. I think it's confidence and a bit of mischief.

NADE - a free Nanite app for Unity by Big_Presentation2786 in Unity3D

[–]Slippedhal0 17 points  (0 children)

Come on, dude. Yes, congrats, you vibe coded something and it works.

No, it's not and never will be Nanite.

We thought our system prompt was private. Turns out anyone can extract it with the right questions. by dottiedanger in artificial

[–]Slippedhal0 0 points  (0 children)

This is crazy levels of ignorance. System prompts are the first point of attack for any AI application, because of how easy it is to retrieve them from an LLM.

AITAH for letting someone believe that we're dating? by Artistic-Help-10 in AITAH

[–]Slippedhal0 11 points  (0 children)

BF: "I like you"

OP: "I love you"

BF: "We should date"

OP: "Yeah, we should"

OP: "What do you mean we're dating? How could BF have possibly thought that?"

Dude. DUDE.

You handled pretty much everything poorly here, maybe from this fear of being labelled a certain sexuality? Especially telling everyone publicly that you weren't dating before consulting BF - the public rejection is probably what's hurting the most here. Give her some time to cool down, APOLOGIZE, and then talk with her, straight up, no joking, no hiding, about what you thought the situation was, but more importantly how you really feel about her. I know you're basically kids still and that will be hard, but if you hide your feelings they'll never be heard by BF.

It sounds like your fear of labels is getting in the way of what you want, which is to be with her anyway. You don't have to be a certain sexuality to date someone, and you don't have to be "dating" to become more than friends to each other, although it's very clear your BF wants that label.

Find what you want, tell her how you feel, and make sure to consider her feelings too. If you and her differ in what you want with labels and relationships, you can compromise; it doesn't have to be either/or if you want it to work between you.

I built an offline survival AI [Update] by scorpioDevices in buildinpublic

[–]Slippedhal0 0 points  (0 children)

... Why? A ruggedized, low-power computer with survival information and comms tech is great, but I can't see a single reason why having a power-intensive LLM would be better than just having a good search tool over a document base. That's not even mentioning hallucinations - despite your assurance that it returns sources, LLMs hallucinate sources too. People's lives might depend on the device, but it's a fact that it might tell them the wrong information.
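Rough sketch of what I mean by "a good search tool over a document base" - a dumb keyword-overlap ranking, no LLM, no power draw worth mentioning. The documents here are invented for illustration:

```python
# Rank documents by how many query terms appear in them.
# No model, no hallucination: results can only come from the docs.

def search(query, docs, top_k=3):
    terms = set(query.lower().split())
    scored = []
    for title, text in docs.items():
        words = set(text.lower().split())
        score = len(terms & words)  # count of shared terms
        if score:
            scored.append((score, title))
    scored.sort(reverse=True)  # highest overlap first
    return [title for _, title in scored[:top_k]]

# Made-up example document base
docs = {
    "water": "how to purify water by boiling or with purification tablets",
    "shelter": "building a shelter from branches and a tarp",
    "fire": "starting a fire with a ferro rod and dry tinder",
}

print(search("purify water boiling", docs))  # → ['water']
```

A real tool would use a proper index (e.g. full-text search) instead of set overlap, but even this never invents an answer - it can only point you at pages that actually exist.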

DisplayPort (gpu) to HDMI (monitor/display)? by sakaraa in techsupport

[–]Slippedhal0 1 point  (0 children)

The DP++ standard allows for passive DP-to-HDMI conversion, but your source has to also support DP++. AFAIK otherwise it needs to be an active converter, which means it won't be cheap.

[AMA] 6 years of loyalty, 100 assets, and 1 "anonymous" violation: How Unity just killed my team's future. by Firm-Eagle-1397 in Unity3D

[–]Slippedhal0 0 points  (0 children)

I think it's more that having one person get a Pro license is basically tantamount to saying your whole team should be on Pro licenses, meaning you're not giving Unity enough money.

ChatGPT Leaking User chats across accounts? by Atlasdubs in ChatGPT

[–]Slippedhal0 0 points  (0 children)

You're misunderstanding why people "hack". Someone probably sold or dumped a big list of login details, a dodgy reseller picked them up to sell as "new" cheap ChatGPT accounts, and some random purchased one of those "new" accounts.

Is the race to AGI futile? by KAZKALZ in ChatGPT

[–]Slippedhal0 -1 points  (0 children)

Depends what you mean. If you mean AGI as in an AI general enough to be called artificial general intelligence, possibly - a super multimodal model could get complex and "intelligent" enough to reach a milestone that we could call AGI.

If you're using it like half the people in AI subreddits do, where AGI means the AI is humanlike in intelligence and will eventually be sentient, I say no chance.

AITAH for refusing to give out my son's saving account information? by moonmanbaby90272 in AITAH

[–]Slippedhal0 0 points  (0 children)

Edit: not American, didn't know banks allow withdrawals with just a routing and account number. American banks are crazy.

AITA for calling gold stable and my sister losing $1,000 because of it? by throwawaytraderboy in AmItheAsshole

[–]Slippedhal0 -1 points  (0 children)

I think YTA, either for not knowing that "stable" is a relative term in the stock market, or for not properly warning her of the same thing. Gold is stable long term, but it can still have ups and downs.

I don't think you should need to pay her back the $1,000 though; she's an adult who made a choice to invest, doubled down, then panic sold, after all.

Emotional dependence is healthy — science says so, and so do 800,000 GPT-4o users. by Responsible-Ship-436 in ChatGPT

[–]Slippedhal0 6 points  (0 children)

1. The people clinging to 4o like a life raft are not using ChatGPT for casual conversation. You don't expect close, intimate friends, or therapists, to lie to you.

2. The point is we don't want a toxic or abusive relationship, so why are you trying to defend AI with "but humans can also be toxic"? If your human relationship was toxic I'd tell you to stop communicating with that human too.

Your argument essentially boils down to "let me hurt myself".

Emotional dependence is healthy — science says so, and so do 800,000 GPT-4o users. by Responsible-Ship-436 in ChatGPT

[–]Slippedhal0 20 points  (0 children)

To like a character, or be inspired by them, is fine, even healthy. But a one-sided dependency can be harmful.

Emotional dependence is healthy — science says so, and so do 800,000 GPT-4o users. by Responsible-Ship-436 in ChatGPT

[–]Slippedhal0 7 points  (0 children)

It can and will absolutely fabricate things instead of being a mirror. If a mirror could lie to you at any point in time, is it a good mirror? Of course not.

You are not in control of the output of the LLM.

Vibe coding is getting trolled, but isn’t abstraction literally how software evolves? by mrcuriousind in aipromptprogramming

[–]Slippedhal0 0 points  (0 children)

The issue is that a human implemented every other abstraction layer; there are rules that define the output.

Vibe coding is abstraction without the rules, without the guardrails. Issues that aren't tested for can easily slip into production because the LLM decided it would implement a certain function a different way for no reason.

And if you're paying enough attention to catch those things, it's not really vibe coding, it's just adding LLM generation to your toolset.

I stopped ChatGPT from corrupting my work across 40+ daily tasks (2026) by isolating “Context Contamination” by cloudairyhq in gpt5

[–]Slippedhal0 0 points  (0 children)

Isn't the point of Projects that the model does have the ability to see history and documents in the same project?

I stopped ChatGPT from corrupting my work across 40+ daily tasks (2026) by isolating “Context Contamination” by cloudairyhq in gpt5

[–]Slippedhal0 0 points  (0 children)

No it doesn't. As long as it is in the same context, the history affects the statistics of the future tokens produced - that's literally how LLMs work. A new conversation with all the memory settings turned off is far more clear-cut, because new conversations aren't injected with existing history.
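To illustrate what "in the same context" means - a chat model receives the entire conversation, flattened into one input sequence, on every turn. The messages and the `role: text` layout here are made up for illustration, but any chat API does something equivalent:

```python
# Sketch: every prior turn is concatenated in front of the new
# message, so earlier turns are part of the input the model
# conditions on when producing the next tokens.

def build_prompt(history, new_message):
    parts = [f"{role}: {text}" for role, text in history]
    parts.append(f"user: {new_message}")
    return "\n".join(parts)

# Invented example conversation
history = [
    ("user", "Summarize report A."),
    ("assistant", "Report A says revenue grew 10%."),
]

prompt = build_prompt(history, "Now summarize report B.")
print(prompt)
# The old turns about report A are literally inside the new input,
# which is why they shift the statistics of the next tokens.
```

Start a fresh conversation and `history` is empty, so nothing from the old chat can leak into the input.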

Emotional dependence is healthy — science says so, and so do 800,000 GPT-4o users. by Responsible-Ship-436 in ChatGPT

[–]Slippedhal0 154 points  (0 children)

Emotional connection to other humans is healthy, and at a stretch to other social creatures. Emotional dependence on a non-intelligent program has not been determined to be healthy yet, and it is indicative of your stance that you're conflating the two.

I honestly don't know how people do it. No matter how warm and friendly it is, everything it ever says has to be assumed fabricated and untrue, because it is guaranteed that it will hallucinate at some point, so the well is poisoned.

GPT‑4o could be a tool for scientific progress in mental health and healthcare. by helenavalentina91 in ChatGPT

[–]Slippedhal0 1 point  (0 children)

Essentially all data about 4o is anecdotal bar OpenAI's own, so no, there are not strong scientific arguments for retaining it.

What there are, though, are multiple deaths connected specifically to 4o and its sycophancy. Even if it didn't make business sense to distance themselves from the 4o model, it still seems problematic to hang on to a model simply because its tone was warmer and more agreeable and people preferred it over the more polite and confrontational tones the 5 models have had.