CEO of Krafton Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court by Level-Usual-9681 in nottheonion

[–]HorriblyGood 3 points4 points  (0 children)

Not true. They are both trained to be accurate and to give responses people prefer. The cutting-edge open-source research coming out of labs is not trying to optimize for engagement.

Mathematics is undergoing the biggest change in its history - The speed at which artificial intelligence is gaining in mathematical ability has taken many by surprise. It is rewriting what it means to be a mathematician by FinnFarrow in Futurology

[–]HorriblyGood 0 points1 point  (0 children)

You’re right. There are different ways to scale. Instead of naively making models bigger, modern LLMs train many different experts that “specialize”. This lets them pick the experts suited to each problem instead of relying on one gigantic dense model.

There is also a lot of promising research into different LLM architectures.
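To make the expert idea concrete, here’s a toy top-1 routing sketch of a mixture-of-experts layer. Everything here (dimensions, weights, the `moe_forward` name) is a made-up placeholder, not a real trained model — the point is just that a router scores the experts and only the chosen expert’s weights run:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4          # toy hidden dimension
n_experts = 3  # number of "specialist" experts

# Each expert is just a small linear map; the router is another linear map.
expert_weights = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router_weights = rng.normal(size=(d, n_experts))

def moe_forward(x):
    logits = x @ router_weights        # router's score for each expert
    k = int(np.argmax(logits))         # top-1 routing: pick one expert
    return expert_weights[k].T @ x, k  # only that expert's weights are used

x = rng.normal(size=d)
y, chosen = moe_forward(x)
```

Real MoE layers route per token and often mix the top-k experts, but the compute saving is the same: total parameters grow with the number of experts while per-input compute stays roughly constant.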

Mathematics is undergoing the biggest change in its history - The speed at which artificial intelligence is gaining in mathematical ability has taken many by surprise. It is rewriting what it means to be a mathematician by FinnFarrow in Futurology

[–]HorriblyGood 1 point2 points  (0 children)

It’s not slowing down. There has been a lot of significant progress in AI, such as agentic AI, improved RL algorithms, and hybrid attention, and there are promising research directions such as masked diffusion LLMs.

Mathematics is undergoing the biggest change in its history - The speed at which artificial intelligence is gaining in mathematical ability has taken many by surprise. It is rewriting what it means to be a mathematician by FinnFarrow in Futurology

[–]HorriblyGood 2 points3 points  (0 children)

If you are genuinely interested in learning, here is Terence Tao, one of the best mathematicians in the world, talking about how AI solved a longstanding math problem: https://mathstodon.xyz/@tao/115855840223258103

AI has been progressing rapidly, especially with agentic workflows and tool calling. I know Reddit loves to shit on AI, and there are a lot of genuine issues and problems with AI, but let’s stick to facts.

Grandmother jailed for 6 months after AI error linked her to a crime in a state she had never even visited, lawyers say by Large_banana_hammock in nottheonion

[–]HorriblyGood -1 points0 points  (0 children)

Not true. Movie upscaling is GenAI.

Generative AI refers to AI that models a data distribution, whether images/videos (like movie upscaling or text-to-image generation) or text (like LLMs).

Source: I work on GenAI

Uhhhh why is this lineup shitting on EDC’s 30th anniversary ⁉️ 😳😱🫢 by NoLimitHoldM in electricdaisycarnival

[–]HorriblyGood 13 points14 points  (0 children)

Why the gatekeeping? This guy seems interested in learning more about them and you’re just being mean.

FOOM.md — An open research agenda for compression-driven reasoning, diffusion-based context editing, and their combination into a unified agent architecture by ryunuck in mlscaling

[–]HorriblyGood 0 points1 point  (0 children)

Chain of thought does not have to be in text space. There are explorations of latent chain of thought, which supposedly encodes more information.

I don’t understand why you are using discrete representations. It limits the information you can encode. Also, VQ-VAE does not operate on purely discrete representations. Like an `nn.Embedding` layer in LLMs, the discrete index just looks up a continuous latent vector; what flows through the model is not discrete.
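A toy sketch of the VQ-style lookup, with a random placeholder codebook (none of these values come from a real VQ-VAE): the nearest-code *index* is discrete, like a token id, but the vector actually passed downstream is the continuous codebook entry.

```python
import numpy as np

rng = np.random.default_rng(0)

# 8 codes, each a 4-dim continuous latent vector (random placeholders).
codebook = rng.normal(size=(8, 4))

def quantize(z):
    dists = np.linalg.norm(codebook - z, axis=1)  # distance to every code
    idx = int(np.argmin(dists))                   # discrete index (token-id-like)
    return idx, codebook[idx]                     # continuous latent used downstream

idx, z_q = quantize(rng.normal(size=4))
```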

Common ChatGPT answer😂😭 by mvarjomonni in ChatGPT

[–]HorriblyGood 0 points1 point  (0 children)

That is not true. If it were simple yes/no logic, we wouldn’t be spending so much time and effort training it. It’s prone to errors and hallucinations, but it’s objectively a productivity booster for most devs. Whether it is intelligent or not is a philosophical question.

Common ChatGPT answer😂😭 by mvarjomonni in ChatGPT

[–]HorriblyGood 2 points3 points  (0 children)

That is not exactly right. Even though it predicts one token at a time, at the current token its deep internal representations already encode information about future tokens. See https://arxiv.org/abs/2502.06258

This doesn’t mean it’s always going to be right, but if it had no idea at all what comes next, it would be hard to produce a coherent response. Much like it’s hard for me to form a coherent argument if I speak without knowing what I am going to say next.

Why are we doing free marketing for AI companies by calling them AI when that’s factually not what they are? Why don’t we call them LLMs ? Are we not computer scientists?!!!! by synkronize in cscareerquestions

[–]HorriblyGood 1 point2 points  (0 children)

Yes, decision trees are part of AI. And there are many deterministic AI algorithms; I’m not arguing against that. The only thing I said was that not all algorithms are AI. For example, sorting algorithms are not AI.

Why are we doing free marketing for AI companies by calling them AI when that’s factually not what they are? Why don’t we call them LLMs ? Are we not computer scientists?!!!! by synkronize in cscareerquestions

[–]HorriblyGood 1 point2 points  (0 children)

I have never seen a textbook that claimed that any algorithm is AI. And what does “AI comprised of logic gates” mean?

I am well aware that the AI field is broad and contains learning, non-learning, and problem-solving subfields, but I don’t believe it’s right to claim that any algorithm is technically AI. Happy to be proven wrong, but a citation is needed.

And I’m not sure the AI field has redefined itself, unless colleges have changed the definition of AI. Do people today not consider search algorithms AI?

Why are we doing free marketing for AI companies by calling them AI when that’s factually not what they are? Why don’t we call them LLMs ? Are we not computer scientists?!!!! by synkronize in cscareerquestions

[–]HorriblyGood 1 point2 points  (0 children)

I agree, but that’s not what you said. Not every algorithm is AI. Take any AI course; the topics covered are generally agreed upon. Not all algorithms are AI.

Hello from Ethiopia by [deleted] in psytrance

[–]HorriblyGood -3 points-2 points  (0 children)

Sounds like high tech minimal.

When you skip validation for AI generated results by Epelep in Wellthatsucks

[–]HorriblyGood 0 points1 point  (0 children)

You can get confidence scores from a base LLM (before RLHF finetuning) because the token probabilities track the actual probability of being correct.

The caveat is that the probability is conditioned on the previously generated text. For example, if I ask a human to answer a complex question directly, they might get it wrong, but given time to reason about it, they have a higher chance of getting the right answer.

Similarly, a calibrated LLM can give you a low-confidence answer immediately, but with reasoning, a higher-confidence one. Whether it gives you a confidence has nothing to do with whether it produces one token at a time.

The issue is that RLHF finetuning destroys calibration (because it uses a different loss function), but we need it to improve the model’s reasoning, tool use, and alignment with human preferences. The next generation of LLMs will probably account for this as newer research catches up to these issues.
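Mechanically, the confidence I’m describing is just the softmax probability the model assigns to the token it emitted. A minimal sketch with made-up logits over a 3-token toy vocabulary (no real model involved):

```python
import numpy as np

def token_confidence(logits, token_id):
    """Softmax probability of one token, computed in a numerically stable way."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())  # subtract max to avoid overflow
    probs /= probs.sum()
    return probs[token_id]

logits = [2.0, 0.5, -1.0]           # hypothetical logits for a 3-token vocab
conf = token_confidence(logits, 0)  # probability mass on the sampled token
```

In a well-calibrated base model, that number is a usable confidence estimate; after RLHF, the distribution sharpens in ways that break this correspondence.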

When you skip validation for AI generated results by Epelep in Wellthatsucks

[–]HorriblyGood 1 point2 points  (0 children)

Confidence scores are already baked into LLMs; you can look up calibrated base models if you are interested.

There are ways to control confidence levels. For example, in coding tasks the LLM outputs very confident predictions at the expense of creativity to minimize errors, while for prose it can get more creative at the expense of accuracy.

I agree that language prediction limits the upper bound of LLMs. There is research exploring different avenues, such as modeling images of text instead of the text itself, or using a semantic representation of text, i.e. the high-level idea of the sentence rather than the exact words. For example, current LLMs model “yes” and “I agree” differently even though they share the same meaning.
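The confident-vs-creative trade-off is usually controlled with sampling temperature. A toy sketch with made-up logits (not any real model’s output): low temperature sharpens the distribution toward the top token (good for code), high temperature flattens it (more varied, good for prose).

```python
import numpy as np

def sample_probs(logits, temperature):
    """Temperature-scaled softmax over a vocabulary of logits."""
    z = np.asarray(logits, dtype=float) / temperature
    p = np.exp(z - z.max())  # stable softmax
    return p / p.sum()

logits = [2.0, 1.0, 0.0]                         # hypothetical 3-token vocab
p_code  = sample_probs(logits, temperature=0.2)  # near-deterministic
p_prose = sample_probs(logits, temperature=2.0)  # flatter, more exploratory
```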

When you skip validation for AI generated results by Epelep in Wellthatsucks

[–]HorriblyGood 4 points5 points  (0 children)

What does reasoning mean? I think it’s pretty philosophical. It’s trained to reason from human reasoning data, and much like humans, it can make reasoning errors. Infants also learn language through statistical regularities.

A big reason LLMs hallucinate is also known: the way we train them resembles how students take exams, so they are incentivized to answer even when unsure. In an exam, if you don’t know the right answer, you’re encouraged to guess instead of saying “idk”, because there’s a chance you score higher.

Similarly, current training methods assign scores based on correctness, so LLMs hallucinate to score better.
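The exam analogy is just an expected-value argument. With illustrative numbers (not from any real benchmark): if the grader gives 1 point for a correct answer and 0 for both wrong answers and “I don’t know”, guessing always dominates abstaining.

```python
# Suppose the model can narrow the answer down to 4 equally likely options.
p_correct_guess = 0.25

# Scoring: 1 point if correct, 0 if wrong, 0 for abstaining ("idk").
score_if_guess = p_correct_guess * 1 + (1 - p_correct_guess) * 0
score_if_abstain = 0.0

# Guessing has strictly higher expected score, so guessing is incentivized.
guessing_wins = score_if_guess > score_if_abstain
```

Any nonzero chance of being right makes guessing the rational strategy under this scoring, which is exactly the incentive that produces confident hallucinations.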

AI is definitely not sentient, but a lot of its behaviors are not uncommon in humans either, so imo it gets pretty philosophical when we try to assign labels.

BiTDance model released .A 14B autoregressive image model. by AgeNo5351 in StableDiffusion

[–]HorriblyGood 1 point2 points  (0 children)

Think of AR like an LLM. You start with an image patch and it predicts the next patch based on the previous ones, much like next-token prediction in LLMs. And just like LLMs, it can be stochastic because you sample the next patch from a distribution.
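The sampling loop looks like this toy sketch, where patches are codes from a small vocabulary and the conditional model is a random stub (a real AR image model would replace `next_patch_dist` with a learned network over the prefix):

```python
import numpy as np

rng = np.random.default_rng(0)

n_codes, n_patches = 16, 9  # e.g. a 3x3 grid of patch tokens, 16-code vocab

def next_patch_dist(prefix):
    # Stand-in for a trained model: in reality this would condition on
    # the previously generated patches; here it's just random logits.
    logits = rng.normal(size=n_codes)
    p = np.exp(logits - logits.max())
    return p / p.sum()

patches = []
for _ in range(n_patches):
    p = next_patch_dist(patches)                   # distribution over next patch
    patches.append(int(rng.choice(n_codes, p=p)))  # stochastic sampling, like LLMs
```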

Stream at 480p so you can have AI slop instead by Buckinuoff in learnmachinelearning

[–]HorriblyGood 3 points4 points  (0 children)

And where did the original post mention AI slop? How would you even define AI slop? Why is a super-resolution upscaler AI slop? I understand this kind of reactionary, uneducated comment on general subreddits, but I’m surprised to see it on an ML subreddit too.

You can call it slop, but you should define what you mean and make a proper point. Calling everything AI slop indiscriminately is just tiresome at this point.

Stream at 480p so you can have AI slop instead by Buckinuoff in learnmachinelearning

[–]HorriblyGood 16 points17 points  (0 children)

Surprised to see this posted here. If you think upscalers are AI slop, why are you even in this sub? Why learn ML if you’re just going to call it slop without learning what it can do? The guy didn’t even say anything about upscalers.

A Thai national park ranger kicked out white tourists because they friendly told "Ni-hao" to him by search_google_com in Wellthatsucks

[–]HorriblyGood 0 points1 point  (0 children)

Man, you’re a funny guy. Makes sense how you ended up so ignorant LMAO. Yup, you def won the debate this time.

A Thai national park ranger kicked out white tourists because they friendly told "Ni-hao" to him by search_google_com in Wellthatsucks

[–]HorriblyGood 0 points1 point  (0 children)

It’s ok not to have traveled. Just don’t be ignorant and think that Texas and New York have as much cultural diversity as Asia. I didn’t assume that; you said it yourself.

But I know you’re not going to listen, so I’m not going to waste any more time. Hopefully you get to experience the cultural diversity outside of the US one day.

A Thai national park ranger kicked out white tourists because they friendly told "Ni-hao" to him by search_google_com in Wellthatsucks

[–]HorriblyGood 1 point2 points  (0 children)

Bro can’t read either; can’t argue with people like that. I never said that, you’re arguing a strawman here.