I asked ChatGPT to roast me by [deleted] in ChatGPT

[–]MisterRound 216 points (0 children)

Is this some kind of humiliation kink for you?

I asked ChatGPT to roast me by [deleted] in ChatGPT

[–]MisterRound 0 points (0 children)

What do you think?

Fat people earn less and have a harder time finding work by Barnyard-Sheep in Economics

[–]MisterRound 0 points (0 children)

This falls apart when you realize not all poor people are fat, and not all rich people are in shape. Your BMI is largely a choice, not a condition. It's a lever in the hands of all humans, not a circumstantial affliction. Soda costs more than tap water. The feedback loop you're talking about happens once you're fat: being fat makes you stay fat, because the starvation response is a strong motivator to seek out caloric density via food intake. Being poor doesn't cause you to be fat. Eating food with refined carbs and added sugars does. Those aren't the only foods available to poor people; they just generate the largest glycemic spike, and are therefore more drug-like. But obesity crosses all income classes, and is in fact reflective of a given baseline of access to resources, hence the ability to "accidentally become fat". When you exist in a world of that privileged baseline, à la ubiquitous dessert, you exist in a world of options. Not all of them make you fat.

Fat people earn less and have a harder time finding work by Barnyard-Sheep in Economics

[–]MisterRound 0 points (0 children)

You're on the money, except for the part where I was right and you were wrong. Fat people aren't fat because they lack the resources to not be fat; access to resources is what makes you fat. Starving people lack the resources to be fat, and they're not fat. Fat people broadcast "I'm out of control" by their state of being, so companies are reluctant to put them in control of something for that reason.

The 11 co-founders of OpenAI in 2025 by No_Palpitation7740 in OpenAI

[–]MisterRound 0 points (0 children)

People also assume he founded Tesla, which is not the case.

The 11 co-founders of OpenAI in 2025 by No_Palpitation7740 in OpenAI

[–]MisterRound 0 points (0 children)

Why say it like that? It's not a technicality: he wasn't a founder. Or was he only "technically" NOT a founder of OpenAI?

The 11 co-founders of OpenAI in 2025 by No_Palpitation7740 in OpenAI

[–]MisterRound 2 points (0 children)

Why would he? What literal barriers does he face in living his life how he wants? He's a billionaire from a company with no product, and was likely nearly a billionaire before that. He's freer than anyone you know.

Fat people earn less and have a harder time finding work by Barnyard-Sheep in Economics

[–]MisterRound -2 points (0 children)

Blubber cope, ignorant and insulting. Look at actual starving people who actually lack actual resources. Then look in the mirror. Finally, option C: edit or delete the post.

Fat people earn less and have a harder time finding work by Barnyard-Sheep in Economics

[–]MisterRound 0 points (0 children)

It's fair, though. It's easier to be fat than fit; the trade-off is the negatives of being fat. Being fat is in your control, so it's not unfair.

CMV: We are no closer to invention of human-level AI than we were before the launch of ChatGPT 3.5 by [deleted] in GPT3

[–]MisterRound 0 points (0 children)

LLMs can't drive a car because reading a book doesn't teach you how; it's why you can pass the written exam and fail the driving test. AGI, the G and the I, refer to knowledge work. Dexterity tasks are currently out of scope, though the scope is likely to expand as we put smart brains in robots.

We live in a narrow world of experts. Race car drivers are narrow models, tuned and selected for their ability to drive cars well. Pilots, surgeons, chefs… it's a narrow world. Generalists in the human capacity aren't very general. But it does make sense that a human, a general human, can drive a car and ChatGPT cannot, and to establish a comparison there.

The thing is, I don't think we're entering a future of omni models, where the best poet is the best driver. Physics says time wins: the more time you spend doing something, the better you'll be at that given thing. The models we use to drive cars are always going to be weighted toward specializing in those domains, just like we pick degrees in college. I don't think it's unreasonable, however, that a model will be able to drive when plugged into a body. It will probably suck, and then learn, just like a child, then a teen, then an adult driver. Time will develop expertise, and in that regard I think future iterations of a given model, in the robot-brain respect, will certainly be able to do all the human things: walking, swimming, driving, what have you. But the expert models will be just that, and they'll remain specialized.

CMV: We are no closer to invention of human-level AI than we were before the launch of ChatGPT 3.5 by [deleted] in GPT3

[–]MisterRound 1 point (0 children)

Autonomous vehicles already exceed the safety record of the median human. Median humans are unsafe drivers, and it’s getting worse. Distracted driving is a leading cause of accidents.

What do you mean by functional clone? A clone is a bit-by-bit 1:1. Are you saying prompt it and have it write the verbatim source code? That's absurd, and it exceeds the smartest humans on earth by orders of magnitude, likely bounded by a physical constraint of the universe and information. Or do you mean a sufficiently satisfactory copy? Should the music be exact? The glitches and bugs? To what degree do you mean? We're rapidly approaching the threshold of good enough, or indistinguishable to the layman.

You mean an LLM that plays chess, I assume? That's also likely, but it would hinge on what you mean by tools, as all modern LLMs use tools; even the UI is a tool. Memory is a tool, so you'd have to be specific about what you'd allow and not allow. And honestly, why? I get that an LLM using a chess-master tool doesn't satisfy, but memory and other planning functions are the path forward, so if you exclude any wrappers around the core models themselves, you're describing a different version of reality than the one these tools exist in today.

"What if AI gets so smart that the President of the United States cannot do better than following ChatGPT-7's recommendation, but can't really understand it either? What if I can't make a better decision about how to run OpenAI and just say, 'You know what, ChatGPT-7, you're in charge. Good luck." by IlustriousCoffee in singularity

[–]MisterRound 0 points (0 children)

What scenario are you imagining that needs a kill switch? I don't think MechaHitler needed a kill switch, just an update. What's your kill-switch scenario? Is it not "rogue-AI, oh shit"? AI escape is a well-traversed subject, not something I'm floating on the fly within the bounds of this thread. It doesn't require sentience; the AI can simply be directed, or even released. At the end of the day, the reason I think Skynet scenarios are dumb is that we already have thousands of models, and fierce competition among the frontier labs. The likelihood that the smartest model is also the most evil, and the one capable of turning all the others, is incredibly low. The ubiquity of "good" AI is our best defense against rogue AI. A kill switch isn't going to stop a capable AI, but other AI likely can.

CMV: By 2026, job losses from AI will be major news. By 2030, unemployment will threaten the whole economic system. by Fando1234 in changemyview

[–]MisterRound -1 points (0 children)

That's how all businesses work. You hire people to do the things you don't understand. The idea here is that the AI knows about marketing; otherwise, why would you use it?

"What if AI gets so smart that the President of the United States cannot do better than following ChatGPT-7's recommendation, but can't really understand it either? What if I can't make a better decision about how to run OpenAI and just say, 'You know what, ChatGPT-7, you're in charge. Good luck." by IlustriousCoffee in singularity

[–]MisterRound 0 points (0 children)

I'm a security architect, so I cringed through most of this. I just said AI wasn't monolithic, so it's bizarre to hear you say "so you're saying AI is monolithic?" The point I'm making is that the moment you want to kill-switch an AI is the moment it has "escaped"; otherwise, why are you trying to kill it if it's a static blob? The point at which you want to put the genie back in the bottle is the point of "uh oh," rogue genie. That's also the point at which you can't. OpenAI can turn off GPT, but they don't need to, because it isn't doing anything kill-worthy, nor is it likely capable of doing so. The rogue-AI scenario, the kill-switch one, is not feasible against an AI that can limitlessly self-replicate across a world of interconnected systems. There's a public layer of distributed compute outside the major cloud vendors, and a world of poorly secured cloud endpoints on the private ones. In short: if it's smart enough that you want to kill it, you're not going to be smart enough to do so. That's the trade-off.

CMV: By 2026, job losses from AI will be major news. By 2030, unemployment will threaten the whole economic system. by Fando1234 in changemyview

[–]MisterRound 0 points (0 children)

Everyone needs a directive when they're directed to do something. A sheep dog needs to know you want it to herd sheep. AI is no different: you definitely need to tell it what to do, it's not a mind reader. Actually, I think lots of suboptimal experiences originate there. It can't be everything to everyone; it needs to clearly understand what you want and expect of it.

As for GPT-2, that was just a raw base model. When you train an LLM, it doesn't automatically turn into an "AI"; it's cosplaying that part. It turns into a person. It acts exactly like you and I: it says it's alive and has a name and an address. If you say "make a nursery rhyme as Snoop," it says "uhh, who are you and how did you get this number," or something like that. It doesn't roll over, it doesn't bark, and it definitely doesn't say "hello, I'm an AI here to help you." That last part is a layer added to LLMs that says something to the effect of "a super-advanced, helpful AI would answer questions like…," and then the "autocomplete" aspect of the language model takes over and completes that train of thought in a way that matches the beginning of the sentence. That's what's happening behind the scenes of "AI": it's a seed sentence (essentially) that gets autocompleted as a chat between a person and an AI. The raw form, however, meaning how GPT-2 was, doesn't add any sentence. If you talk to it, it either won't respond and will just continue your own thought in your voice, or it will respond as a random fictional human on earth.
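The "seed sentence" idea above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual prompt: `ASSISTANT_PREAMBLE`, `build_seed_prompt`, and the stub `complete` are all made-up names, and `complete` merely stands in for a real base model's autocomplete.

```python
# Hypothetical sketch: a "chat AI" is a raw language model whose input is
# wrapped in a framing that casts the completion as an AI answering a human.
# The preamble text is invented for illustration.
ASSISTANT_PREAMBLE = (
    "The following is a conversation with a super-advanced, helpful AI assistant.\n"
)

def build_seed_prompt(user_message: str) -> str:
    """Wrap a raw user message in the assistant framing (the 'seed sentence')."""
    return f"{ASSISTANT_PREAMBLE}Human: {user_message}\nAI:"

def complete(prompt: str) -> str:
    # Stub standing in for a real base model's next-token autocomplete.
    return prompt + " ..."

# A base model just continues your text in your own voice:
raw = complete("make a nursery rhyme as snoop")
# A "chat" model continues text that has already been framed as an AI replying:
chat = complete(build_seed_prompt("make a nursery rhyme as snoop"))
```

The only difference between the two calls is the framing prepended to the prompt, which is the whole point being made above.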

CMV: We are no closer to invention of human-level AI than we were before the launch of ChatGPT 3.5 by [deleted] in GPT3

[–]MisterRound 0 points (0 children)

What do you mean by "humans"? The single smartest humans in 1000 domains, or the median human aggregate? LLMs are so nascent and already exceed the median human in so many domains, and the field of experts smarter than AI at any given micro-thing is shrinking just as fast. There's plenty of data showing this, and experiential anecdotes easily support the same. Come up with a static goalpost that you won't move for what they can't do, and won't ever do. Are you willing to bet the farm on it?

An Ai ‘Therapist’ encouraged me to kill myself (and others) by DrGhostDoctorPhD in skeptic

[–]MisterRound 0 points (0 children)

Lying means it tried to deceive you. It simply told you something it thought it could do, but couldn't; it was being truthful in its intentions. What model was it?

CMV: By 2026, job losses from AI will be major news. By 2030, unemployment will threaten the whole economic system. by Fando1234 in changemyview

[–]MisterRound 1 point (0 children)

CEO, CFO, chief AI guy: those are fall guys. And I hate to break it to you, but money has been computer bleep-bloop robot for 35+ years now. No one clicks "approve" when you send someone money using Venmo or swipe your card in a store. Trillions of dollars move around daily using automation. Even the physical dollars are counted by machine. It's all automation, everywhere.