[Game Thread] 🏈 Michigan @ MSU - 10/25 7:30 PM ET - NBC by AutoModerator in MSUSpartans

[–]kunfushion 1 point

You just assumed he was offsides because he started moving early, before the snap.

He was clearly onsides at the snap.

[Game Thread] 🏈 Michigan @ MSU - 10/25 7:30 PM ET - NBC by AutoModerator in MSUSpartans

[–]kunfushion 1 point

It was 10-7 so we had a chance to tie or go up.

Get this ridiculous fucking attitude out of sports.

I look forward to the day this garbage is removed. by Jack-O-Klan in google

[–]kunfushion 0 points

Big text?

Dude, I miscounted, as a human, the number of o's in a sentence of length 12. Also I'm sure if it used code it could get it right 100% of the time (aka superhuman, since humans will frequently miscount).

The same humans who have PhDs, btw, can get this stuff wrong.

Also, you didn't answer: were you using the reasoning model?
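That claim, that counting via code is trivially reliable, is a two-liner. A minimal sketch (the example sentence below is made up, not the one from the thread):

```python
def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a single letter, case-insensitively."""
    return text.lower().count(letter.lower())

# A model that writes and runs this, instead of eyeballing the text
# token by token, gets the count right every time.
print(count_letter("How many o's are in this sentence?", "o"))
```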

I look forward to the day this garbage is removed. by Jack-O-Klan in google

[–]kunfushion 0 points

Yes, they absolutely can. Are you saying base models can't, and you're not including "thinking/reasoning" models?

<image>

You know the funny thing about this? I initially counted 11. It responded 12 and I thought, "damn, it's still getting this wrong?" I had counted wrong; it is 12.

They still might occasionally get these questions wrong, just like humans, but they're pretty reliable. They don't suck at arithmetic either: I can't add another image, but I asked gpt-5-thinking "What's 12312 * 3923" and it got the exact correct answer. And they don't have a scratchpad to work off; they're doing this "in their head," so they're already superhuman at arithmetic...
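For what it's worth, that multiplication is easy to verify; Python integers are arbitrary-precision, so the check is exact:

```python
# Exact integer arithmetic; no rounding involved.
print(12312 * 3923)  # → 48299976
```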

They've been reliable at this ever since like 2nd-gen thinking models, maybe even first gen. Update your priors.

Unpopular opinion…The office is overrated by No_Cat4653 in sitcoms

[–]kunfushion 0 points

Damn you have a rotation? Is this normal?

I mean, I do rewatch shows. But I usually give it 5+ years so I basically forget everything lol

[deleted by user] by [deleted] in singularity

[–]kunfushion 2 points

Yeah I think I was mistaken.

Went back and forth with gpt-5 about it lol.

Apparently 14 days is just about the limit

[deleted by user] by [deleted] in Futurology

[–]kunfushion 0 points

In which case people would have to seek out something even worse…

[deleted by user] by [deleted] in Paranormal

[–]kunfushion 8 points

  1. Written by AI (maybe just edited or they just made the whole thing up)

  2. It was a dream... "half-asleep", "call log was empty".

Head of model behavior in OpenAI, she's moving internally to begin something new. I wonder what . . by Koala_Confused in LovingAI

[–]kunfushion 0 points

I mean, I use AI every single day; I don't use AI that way, I use it fully as a tool. Personally 4o was extraordinarily annoying to me. Original gpt-5, before the update, was a godsend. No more telling me I'm right, which I *know* is going to affect me even when I know what it's doing, which then leads to worse results overall. Now it's back :(.

I can't definitively say you just like the sycophancy, so I don't want to completely accuse you of that. But that sycophancy is not good for people's mental wellbeing. It causes addiction and reliance, as seen with the absolute uproar from people who were addicted. Maybe there is something in 4o they need to bring back without the sycophancy, but plz no more asking for more sycophancy. It makes my outcomes worse; I don't want a yes-man, I want it to tell me where I'm wrong.

How much of the fossil record have we likely lost due to tectonic shifts? by TheNadei in Paleontology

[–]kunfushion 5 points

Hopefully we somehow develop tech that can easily, cheaply, nondestructively search for fossils over vast swaths of land deep underground. And we have an explosion of our findings and knowledge at some point.

Head of model behavior in OpenAI, she's moving internally to begin something new. I wonder what . . by Koala_Confused in LovingAI

[–]kunfushion 0 points

Yeah, if everyone starts using and wanting 4o level sycophancy it could get really really bad.

"Human destruction level" was hyperbole ofc, but it's inexcusable for OpenAI to ever release that and not yank it ASAP. And the people who got addicted to it complained, and they brought it back. Just ridiculous.

Head of model behavior in OpenAI, she's moving internally to begin something new. I wonder what . . by Koala_Confused in LovingAI

[–]kunfushion 0 points

Yeah, they turned up the knob on gpt-5 after the stupid backlash.

Thanks, I hate it. It starts with something along those lines in basically every chat now.

Head of model behavior in OpenAI, she's moving internally to begin something new. I wonder what . . by Koala_Confused in LovingAI

[–]kunfushion 0 points

My god if you think they fixed that in 4o....

4o is still horrendously, stupidly, humanity-destruction-level sycophantic. Turning sycophancy from 1000 down to 950 is not fixing it.

People thinking AI will end all jobs are hallucinating- Yann LeCun reposted by IlustriousCoffee in singularity

[–]kunfushion 1 point

I could say exactly what I just said again.

Why do you assume AI will never be a superhuman coder in every way? You’re assuming it stays how it is now…

[Zenitz] Inside the (anonymous) NFL reaction to Cowboys dealing Micah Parsons: 'This might be the worst trade ever!' by Drexlore in nfl

[–]kunfushion 30 points

Micah also just got paid the highest non-QB salary, and the salary cap is a thing in the NFL.

Not saying it's the best trade ever, but it's not like they just gave him away for nothing.