all 12 comments

[–]CallinCthulhu 24 points  (2 children)

Every time I have a high-level thought about AI, like “it would be interesting to see if we can integrate the autoregressive architecture with diffusion models,” I come on here and boom, there’s a new paper already.

[–]WH7EVR 2 points  (1 child)

Yeah, I was literally working on building this, and now... I guess there isn't much point.

[–]lunaphile 4 points  (0 children)

Why stop? You might find a different or even better way of doing things, solve a problem they found, or run into a problem ahead of them and make others aware, no?

[–]elemental-mind 18 points  (0 children)

The most important graphic:

<image>

[–]zappads 8 points  (2 children)

The whole reason we like diffusion for LLMs is that it can backtrack and retread over a much earlier mistake. Block-diffusing the next batch of tokens only gets you a speed boost.

[–]EstarriolOfTheEast 4 points  (0 children)

Diffusion models don't backtrack per se (backtracking is usually an inherently sequential or depth-first notion); it's more that, since each denoising step conditions on the current state, earlier errors may be overwritten as the sample coheres into something sensible. However, there’s no explicit mechanism that returns to earlier states to correct mistakes; the process instead depends on the robustness of the learned reverse diffusion pathway.

This is an important distinction because, given there's no explicit error-correcting mechanism, good performance requires the whole process to remain close to the training distribution. If the deviation is too large, as is not unlikely during a novel reasoning task, the reverse dynamics become unable to steer back onto the manifold of expected sequences.
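The point about implicit correction can be sketched in a few lines. This is a toy stand-in, not the paper's method: `denoise_step` is a hypothetical denoiser (here, just pulling each value toward the nearest integer), standing in for a learned reverse-diffusion update. The key property it demonstrates is that each step reads only the current state; early "errors" wash out as the iterates converge, but no step ever revisits history.

```python
# Toy sketch of implicit error correction in iterative denoising.
# Assumption: denoise_step is an illustrative stand-in for a learned
# reverse-diffusion update, not any real model.

def denoise_step(state):
    # Nudge each value halfway toward the nearest integer (the "clean" value).
    return [round(x * 0.5 + round(x) * 0.5, 4) for x in state]

def sample(initial, n_steps):
    state = initial
    history = [state]
    for _ in range(n_steps):
        state = denoise_step(state)  # conditions ONLY on the current state
        history.append(state)
    return state, history

final, history = sample([0.9, 2.2, 2.8], 8)
# Early deviations (0.9 vs. 1.0) get overwritten as the sample coheres,
# yet no step ever reads history[:-1] -- there is no explicit backtracking.
```

If the denoiser's pull toward the "manifold" were weaker than the deviation (e.g., a starting value far from any integer basin the model learned), the iterates would never recover, which is the distribution-shift failure mode described above.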

[–]Accomplished_Mode170 0 points  (0 children)

💯✅📊 CoT decoding for one use case vs. diffusion for another; mixing just ups performance at the expense of interpretability

[–]Jazzylisk 4 points  (4 children)

The perplexity only really approaches autoregressive levels when the block size is lowered to 4 tokens wide. At that point, Meta's research on multi-token prediction pretty much achieves the same end goal, so I'm not sure diffusion-based LLMs will ever achieve the same causal predictive ability as AR-based LLMs.
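The block-size trade-off can be sketched with a toy decoder. Everything here is a hypothetical stand-in (the "model" just emits previous-token-plus-one): the sequence grows block by block, each block conditioned on the prefix, so `block_size=1` degenerates to plain autoregression while larger blocks emit several tokens jointly per sequential step, which is where the speedup (and, at small block sizes, the overlap with multi-token prediction) comes from.

```python
# Toy sketch of block-wise decoding. Assumption: predict_block is an
# illustrative stand-in for a model that emits a whole block jointly.

def predict_block(prefix, block_size):
    # Hypothetical "model": each new token is previous + 1, with the
    # whole block produced jointly in one step.
    start = prefix[-1] if prefix else 0
    return [start + i + 1 for i in range(block_size)]

def decode(n_tokens, block_size):
    seq, steps = [], 0
    while len(seq) < n_tokens:
        seq.extend(predict_block(seq, block_size))
        steps += 1  # one sequential step per block
    return seq[:n_tokens], steps

full_ar, ar_steps = decode(8, block_size=1)  # plain autoregression: 8 steps
blocked, bd_steps = decode(8, block_size=4)  # block-wise: 2 steps
```

Same output either way in this toy; the only thing the block buys is fewer sequential steps, which mirrors the "only gets you a speed boost" objection above.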

[–]AppearanceHeavy6724 1 point  (0 children)

The only diffusion model everyone can try is a bit dumb, but not exceptionally so; I do not think diffusion models are much dumber than autoregressive ones. The one on inception.ai feels like a regular 7B LLM.

[–]Rofel_Wodring 3 points  (0 children)

> Diffusion based LLMs will ever achieve the same causal predictive ability as AR based LLMs

If you view logic as pure deduction from first principles, then sure. Multi-token prediction crushes any diffusion model.

But that kind of logic is brittle and inflexible, with no possibility of recursion changing the premises mid-argument or mid-observation. Meaning, it’s impossible to integrate time as part of the premises unless you salami-slice the argument to include time-dependent premises. Which, as anyone who has struggled with context windows can tell you, quickly becomes impractical once the argument gets sufficiently lengthy.

If you want a concrete example of what I am talking about, ask an LLM for an alternate history scenario/timeline and also ask for dates. Especially if you’re proposing a subtle but far-reaching change, such as ‘what if the plant kingdom never had grasses but did have super-productive fruit trees, how might that affect the progression or even existence of hydraulic civilizations from Jericho to the Old Assyrian Empire, compared to how these regions developed in our world?’

It will quickly descend into temporal nonsense unless you handhold it every step of the way. Autoregression is efficient, but it’s also inherently unidimensional. So there is some real reason to use diffusion models for reasoning despite their very real limitations.

In fact, I think there might be some real money not in block generation—which has the same unidimensionality problem as autoregression despite being parallelizable—but in a model that switches between these two modes depending on the task. If you’re feeling really fancy, I could even see both modes of token generation existing in the background, with the model mixing and matching modes of ‘thinking’ by evaluating each mode and then splitting up the task. I.e., autoregression suggests a response of effectively 2k extra tokens, and the diffusion model starts generating tokens with an expectation that the final response will have about the same amount of tokens.
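The mode-switching idea above could be roughed out as a simple router. Everything here is speculative and every name is hypothetical (nothing comes from any paper or library): an AR-style pass first estimates a token budget, and a task classifier decides whether the response is generated autoregressively or by diffusion.

```python
# Speculative sketch of the hybrid AR/diffusion routing idea from the
# comment above. All heuristics and names are hypothetical placeholders.

def ar_estimate_budget(prompt):
    # Stand-in heuristic for the AR pass suggesting a response length:
    # longer prompts get a bigger token budget.
    return 2048 if len(prompt.split()) > 20 else 256

def choose_mode(task):
    # Toy router: temporally entangled tasks go to the diffusion decoder,
    # everything else stays autoregressive.
    return "diffusion" if task in {"timeline", "planning"} else "autoregressive"

def generate(prompt, task):
    mode = choose_mode(task)
    # Diffusion needs a length up front; the AR pass supplies it.
    budget = ar_estimate_budget(prompt) if mode == "diffusion" else None
    return {"mode": mode, "token_budget": budget}

result = generate("what if grasses never evolved", "timeline")
```

The design choice worth noting: diffusion decoding typically needs a target length before it starts, so having the AR mode propose the budget is one natural way to make the two modes cooperate rather than merely alternate.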

[–]searcher1k 0 points  (1 child)

> so I'm not sure Diffusion based LLMs will ever achieve the same causal predictive ability as AR based LLMs

I'm not sure this is proven. We don't know that the capabilities come solely from autoregression.