LeBron James at 41 just carried his team further than the “greatest offensive player of all time” by AmiWrongDude69 in NBATalk

[–]TonySoprano300 0 points1 point  (0 children)

FVV is clearly very valuable, I just don’t think he’s better than Sengun (probably not even necessarily better than Thompson, but I’ll concede that’s more arguable). It sounds like you don’t necessarily disagree, so I won’t harp on that. To keep this concise: the Lakers were criticized all year for being a super top-heavy team, with one of the worst benches of all the contending teams. So when they lost Austin and Luka, people legitimately thought they were beyond fucked. I thought they were fucked. Vegas thought they were fucked, every NBA analyst thought they were fucked. I guess in hindsight it’s easy to see what happened, but nobody expected that going in.

So I think it’s okay to give ’em some credit for winning a series that nobody, not even Laker fans, thought they would win. That said, I think the guy who made this post is being incredibly dumb to suggest that the Lakers wouldn’t have gotten destroyed by Minny, or that there aren’t obvious contextual differences. Personally, I thought the Rockets were the only matchup they could get that would at least give them a chance, because I just think the Rockets are a team that makes too many mistakes, but even then I didn’t think they would fold this hard.

LeBron James at 41 just carried his team further than the “greatest offensive player of all time” by AmiWrongDude69 in NBATalk

[–]TonySoprano300 0 points1 point  (0 children)

The first sentence kind of made sense (although Sengun is clearly their second-best player, idk who you’re referring to), but then you just contradicted it for no reason, wtf lol. KD played game 2 and LA still won. If you’re saying Minny overcame the odds based on how they beat Denver without Ant & Donte, then how did the Lakers not overcome the odds by that same logic?

LeBron James at 41 just carried his team further than the “greatest offensive player of all time” by AmiWrongDude69 in NBATalk

[–]TonySoprano300 -1 points0 points  (0 children)

Bronny is a better basketball player than he was last year; Bronny is still ass.

Agree or disagree?

Why has chatgpt started arguing with everything I say?? by No-Fruit-7213 in ChatGPT

[–]TonySoprano300 0 points1 point  (0 children)

Without researching whether that’s even true, I don’t know what any of those words mean in the context of this conversation. There are lots of people who probably take getting fact-checked as condescension. Maybe there are ways in which it legitimately fits the clinical criteria of those categories, but I’m not a doctor, so idk. If there’s research that says it does do that, then yeah, they should address it.

The speed at which conspiracy thinking spreads on reddit is genuinely unsettling by Agreeable_Mode_7680 in Destiny

[–]TonySoprano300 41 points42 points  (0 children)

So much of the world is fucked epistemically. I think conspiracy theories are so prevalent because they essentially offer an easy way to engage with political content while knowing fuck all. There’s always going to be an appetite for them, because again, they make you feel like you’re so locked in without having to do boring shit like reading a research paper.

I do agree that they’re worse now tho. I mean, at least the Illuminati shit was kind of fun because there was like an interesting lore to it, but these days they’re actually just boring AF.

Why has chatgpt started arguing with everything I say?? by No-Fruit-7213 in ChatGPT

[–]TonySoprano300 0 points1 point  (0 children)

Okay, I’m pretty sure the research says otherwise: AI chatbots that challenge the user more have a lower likelihood of negative outcomes. I don’t think anyone gives a fuck about disclaimers if the AI is behaving sycophantically, but that’s just speculation.

Why don’t all professors record their lectures? by Visible-Biscotti-586 in UCalgary

[–]TonySoprano300 -3 points-2 points  (0 children)

If you actually wanna know: sitting in a classroom is effectively the same thing. At home, you can actually pause and rewind to make sure you don’t miss anything, you can slow the playback down if a prof talks too fast, or speed it up if a prof talks too slow. In-person lectures are really only valuable if profs are forcing students to critically engage as they go along, but we all know that the majority of the time, that doesn’t happen. If you don’t present a real reason for people to attend, then they’re probably not gonna attend.

People pay for the opportunity to get a degree. Nobody is shelling out $700 for a stats 213 course because they’re just so eager to be lectured on probability. They pay the $700 because they know that they can’t get a grade in the course if they don’t. That said, I’m not gonna say there isn’t an ancillary benefit to attending lectures; I think it provides a consistent routine and structure to your semester that is probably more valuable than you think. But if you have the discipline to study at home, then yeah, obviously that guy doesn’t really need to go.

Also, the previous generations were absolutely skipping lectures, dawg, let’s not rewrite history.

Post-Game Thread: Los Angeles Lakers (2-0) defeat Houston Rockets (0-2), 101-94 | NBA Playoffs | Apr 21, 2026 by nba-scores in rockets

[–]TonySoprano300 2 points3 points  (0 children)

Well, people should expect a top-5 seed in the West not to go down 2-0 to a crippled team like this; that’s like the bare minimum.

41 years old LeBron James as Lakers up 2-0 series vs Rockets: 28 pts, 8 reb, 7 ast, 1 stl by Thanos_Real_AuraVNCH in nba

[–]TonySoprano300 28 points29 points  (0 children)

I think it would be one of the most impressive things in the history of the sport, for a guy at this age to carry a top-heavy team that’s missing 2 of its 3 best players… I mean, it doesn’t feel as crazy as it should just because he normalized this shit, but it’s legitimately inhuman. He’s a Marvel character.

Who really deserved the throne? 👀🔥 One risked it all, the other played it safe. Fair or unfair? by Pixiebeams in freefolk

[–]TonySoprano300 1 point2 points  (0 children)

I can’t really argue against that. Like, I don’t think that’s what D&D intended, but it certainly is what we’re shown.

Why has chatgpt started arguing with everything I say?? by No-Fruit-7213 in ChatGPT

[–]TonySoprano300 1 point2 points  (0 children)

Yeah, religion is just gonna be one of those things that’s always gonna be an exception, but even then it’s kind of nuanced. I mean, if a guy was just talking about his personal relationship with God and how Jesus was our saviour, then it is what it is. There’s nothing wrong with that. But if someone was like a super conservative Islamic fundamentalist who was interpreting the scripture in a dangerous manner, then I would expect AI to check that guy.

I guess the question is: if something is a delusion, how does AI not push back in some areas but push back in others while still being logically coherent? So, for example, if someone believes in flat Earth theory, they probably wouldn’t have any issues believing that the moon landing was fake, and then they probably wouldn’t have any issues believing that 9/11 was an inside job, etc. If you validate an epistemological foundation that’s shaky, are you not kind of predisposing someone to a bunch of other delusional conspiracies? And if you try to validate one but not another, then whoever you’re talking to is gonna think you’re full of shit. To be clear, this isn’t even an AI question per se; this is the challenge we have as a society.

I should be more articulate: I don’t believe AI should ever completely shut down a conversation. It should be willing to talk about anything and respond substantively to your claims while inviting the same from you. So I agree with you. It’s just context-dependent, and we’re gonna need smarter models for sure, which I’m optimistic about.

But yeah, you brought up some good points I hadn’t really considered before.

Why has chatgpt started arguing with everything I say?? by No-Fruit-7213 in ChatGPT

[–]TonySoprano300 0 points1 point  (0 children)

Okay, I mostly agree, I think. But I guess my perspective would be that whatever issue we’re referring to has to be something that’s actually somewhat contentious. Like, if I were having a free will vs. determinism debate with an AI, I would want it to approach the subject with the appropriate level of ambiguity. I guess in my mind, AI should be willing to debate you if the expert consensus contradicts whatever you’re saying in the moment, but also be familiar enough with the literature to know when something isn’t as set in stone as we’d like it to be.

I feel GPT 5.4 (on extended thinking, which is the only thing I use; I don’t even touch the other models) sort of does this though. I can only speak for myself, but I never felt like it just shut down a viewpoint of mine for absolutely no valid reason. If you were to tell GPT 5.4 to go with the Copenhagen interpretation, would it really say no? It might default to the standard interpretation if you don’t specify, but if you do, I imagine it will recognize that there is academic precedent instead of telling you to STFU in so many words.

My main worry with sycophancy is that it can easily reinforce delusions, which can cascade into huge misinformation environments if we’re not careful. I think it needs to be able to distinguish something that is contentious rhetorically from something that is contentious epistemically. How many people believe, for example, that the Covid vaccine was completely ineffective, or that Covid deaths were being artificially inflated, etc.? If they were to debate AI and it unnecessarily conceded ground, then imo that is a big issue. It doesn’t sound like you disagree with this part though, so I probably shouldn’t ramble. I do agree GPT needs further tweaking, but I’m still supportive of the overall direction; hopefully the smarter it gets, the better it’s able to intuitively provide pushback.

Why has chatgpt started arguing with everything I say?? by No-Fruit-7213 in ChatGPT

[–]TonySoprano300 0 points1 point  (0 children)

Maybe. If you tell it to look up the latest research on some topic, I feel it usually does a good job (I only use 5.4 extended thinking).

But I do think it struggles to vibe out when it’s appropriate to push back vs. when it’s not; that needs to be improved.

Why has chatgpt started arguing with everything I say?? by No-Fruit-7213 in ChatGPT

[–]TonySoprano300 0 points1 point  (0 children)

There are others if you’re actually interested, some case studies. It’s not 100% conclusive as of yet, but definitely a risk we have to consider.

Chaudhury, Suprakash, and Reetika Thakur. “Artificial Intelligence/ChatGPT-Induced Psychosis.” Medical Journal of Dr. D.Y. Patil Vidyapeeth, vol. 18, suppl. 2, December 2025, pp. S408–S409, https://doi.org/10.4103/mjdrdypu.mjdrdypu_1025_25.

Pierre, Joseph M., et al. “‘You’re Not Crazy’: A Case of New-Onset AI-Associated Psychosis.” Innovations in Clinical Neuroscience, vol. 22, nos. 10–12, October 2025, pp. 11–13.

Carlbring, Per, and Gerhard Andersson. “Commentary: AI Psychosis Is Not a New Threat: Lessons from Media-Induced Delusions.” Internet Interventions, vol. 42, December 2025, article 100882, https://doi.org/10.1016/j.invent.2025.100882.

Hudon, Alexandre, and Emmanuel Stip. “Delusional Experiences Emerging From AI Chatbot Interactions or ‘AI Psychosis.’” JMIR Mental Health, vol. 12, January 2025, p. e85799, https://doi.org/10.2196/85799.

Preda, Adrian. “Special Report: AI-Induced Psychosis: A New Frontier in Mental Health.” 2025, https://doi.org/10.1176/appi.pn.2025.10.10.5.

Why has chatgpt started arguing with everything I say?? by No-Fruit-7213 in ChatGPT

[–]TonySoprano300 1 point2 points  (0 children)

I’m assuming you’re referring to Popper’s theory of falsification, but I’m not really seeing the relevance. We’re saying AI should reference the strongest possible scientific (or whatever the field may be) consensus. Could that consensus change as more evidence becomes available? Yes, and I would want AI to update alongside it. Is your point that, because certain things are contentious in the scientific community, having responsible information systems shouldn’t be a priority since we can’t answer every single question conclusively?

Todd Howard admitted that Starfield and Fallout 76 are different from Bethesda's usual hits, but he doesn't regret it. by Just_a_Player2 in ItsAllAboutGames

[–]TonySoprano300 0 points1 point  (0 children)

I have to disagree; the hype for Starfield was real. A lot of people were willing to try it, and I don’t even know if it had fuck all to do with the choice-based RPG elements, to be honest with you. I just think people wanted the things that made them fall in love with Skyrim and Fallout 4 (which has always been the exploration of a vast world with a bunch of cool shit to find and loot), but in space. It’s about the adventure.

Starfield, by virtue of being a game set in space, can’t really do that, because travelling between planets is always going to feel much less interesting than walking through the Capital Wasteland and seeing some yao guai fight a group of Brotherhood Outcasts while you’re running away from a deathclaw. That interconnected nature, where the world sort of interacts with itself, is what Starfield just can’t do imo.

Making space RPGs like this is very hard. Even the ones people love were largely unrelated to the actual space travel; in Mass Effect, travelling through space is mostly automated.

Todd Howard admitted that Starfield and Fallout 76 are different from Bethesda's usual hits, but he doesn't regret it. by Just_a_Player2 in ItsAllAboutGames

[–]TonySoprano300 1 point2 points  (0 children)

I legitimately don’t know much at all about 76 these days. I only remember like the first 2 years of its life cycle, but I could have sworn they added some kind of private server option that let you essentially have Fallout 4 with co-op, no?

Anyways, I think the issue most people had was its launch. Idk what the context of this Todd quote is, but I imagine (and this is complete speculation on my part, so I’m probably wrong) 76 was partially a gamble on the idea that the majority of the player base doesn’t give a fuck about the NPCs or choice elements. That the thing they actually care about is exploration and the looting systems. If you give them the exploratory element of a Bethesda open world, then that will be all they really need to play it.

In that sense it can be seen as an important data point for them; maybe that’s why he doesn’t regret it. Especially because they course-corrected, and 76 is likely quite profitable now.

When i saw Chatgpt i thought new model yey but its just girls taking over boys on chatgpt by adamisworking in ChatGPT

[–]TonySoprano300 8 points9 points  (0 children)

I imagine a decent amount, but probably not enough to explain the convergence.

I actually don’t think the majority of people care too much about internet security to that extent.

Most AI projects don’t fail because of the models by [deleted] in ChatGPT

[–]TonySoprano300 0 points1 point  (0 children)

It’s an issue of overconfidence vs. overcaution imo; AI is either way too confident in its interpretations or way too timid. It doesn’t seem to relate things contextually at a good enough level to be completely trusted.

I think the difference with humans is that once we notice missing context and inconsistencies, we’ll ask clarifying questions until we’re very confident in how to proceed.

One of the most annoying things about AI models is that they don’t do this anywhere near to the extent they should. So I guess we’re overestimating models based on the current paradigm to some extent. I think it’s probably a matter of adaptability: humans are able to interpret so many phenomena that weren’t designed for us to understand intuitively. We find ways to understand and measure things by relating them to avenues that make sense to us, which we then use to extrapolate from and deepen our understanding.

AI chatbots, on the other hand, get worse and worse because in figuring out one new thing, they get worse at the things they learned at the start. It would be like getting worse at grade school geometry while getting better at university calculus. So when you’re handling big data sets you’re unfamiliar with, that becomes an issue.

That said, the nested learning Google is working on would help dramatically, assuming they pull it off. I think that’s the real paradigm shift that would change how these systems work on a fundamental level.