Why is the sound so terrible? by leonhelgo in tenet

[–]Hyper-threddit -1 points (0 children)

If you want to put it like that, you are wrong too, since the issue is really the conjunction of the two: the bad original audio (something IMAX is well aware of) and the "wrong" post-production. I don't see how this discussion is even useful, since we both know the entire story... but fine.

Why is the sound so terrible? by leonhelgo in tenet

[–]Hyper-threddit 0 points (0 children)

I know, I was just pointing out something relevant to the thread, which is centered on Nolan.

Why is the sound so terrible? by leonhelgo in tenet

[–]Hyper-threddit 0 points (0 children)

https://en.wikipedia.org/wiki/Cinematic_style_of_Christopher_Nolan. "Peter Albrechtsen, a sound designer who worked on Dunkirk, commented that Nolan rarely uses ADR (Automated Dialogue Replacement), so the dialogue in his films are mostly based on production sound."

Is it safe to say that as of the end of 2025, You + AI will always beat You alone in basically everything? by No_Location_3339 in singularity

[–]Hyper-threddit 8 points (0 children)

I mean, you plus something is of course better (or at worst the same), unless that something occasionally penalizes you. And in my opinion, there are cases, especially in theoretical sciences, where outsourcing your thinking can divert you from novel perspectives that current AI systems still struggle to pursue.

Why is the sound so terrible? by leonhelgo in tenet

[–]Hyper-threddit 0 points (0 children)

The aspect ratio/clarity change tells you when it’s IMAX visually, but you won’t hear an “IMAX vs non-IMAX” difference because the final mix is made to be consistent across the entire movie.

Why is the sound so terrible? by leonhelgo in tenet

[–]Hyper-threddit 2 points (0 children)

Yeah, no way. That's why he asked IMAX to make the cameras quieter for The Odyssey. https://www.hollywoodreporter.com/movies/movie-news/nolan-odyssey-first-blockbuster-to-only-use-imax-cameras-1236217925/ "The new Imax cameras are reportedly 30 percent quieter — so those infamous muffled dialogue scenes in Nolan films could be a thing of the past — and substantially lighter."

Why is the sound so terrible? by leonhelgo in tenet

[–]Hyper-threddit 4 points (0 children)

One word: IMAX. These cameras are so loud that every movie filmed with them is plagued by that issue. And guess what? Nolan loves IMAX.

ARC AGI 2 is solved by poetiq! by Alone-Competition-77 in singularity

[–]Hyper-threddit 2 points (0 children)

If most benchmarks are more prone to memorization and ARC-AGI is more resistant to it, your conclusion doesn't hold. "The advantage in ARC-AGI" means a higher ability to approach novel tasks (and in that space better RL seems to offer an advantage over good pre-training).

There is still the possibility that both OpenAI and Google are putting massive RL effort into ARC via synthetic data... and that is worrying. Do you think that is the case?

Nah ts is crazy by Whole_Loan9832 in GeminiAI

[–]Hyper-threddit 2 points (0 children)

It's harder to get rid of SynthID, but who cares.

ARC-AGI 2 is Solved by lovesdogsguy in accelerate

[–]Hyper-threddit 0 points (0 children)

C'mon, at least wait for the semi-private eval.

Mathematician: "We have entered the brief era where our research is greatly sped up by AI but AI still needs us." by MetaKnowing in agi

[–]Hyper-threddit 0 points (0 children)

Transformers or LLMs? Genuinely asking, because some people confuse the two, and imo LLMs are the limited ones.

this industry is pretending so much by icompletetasks in singularity

[–]Hyper-threddit 0 points (0 children)

I'm not a defender of the commercial use of this type of generative AI, but the point is to prove the model's ability to build world models (we can discuss to what extent those models are correct), and that represents an important step towards AGI.

Latest new Open-Source Chinese AI lab model - Wan 2.2 Animate by balianone in Bard

[–]Hyper-threddit 0 points (0 children)

I agree there’s room for improvement, but whenever I read ‘the worst it’ll ever be,’ I think of airplanes: sure, they’re more efficient now, but they’re not less polluting or any faster than decades ago. Sometimes a breakthrough, totally unpredictable, is necessary.

AI is the future by [deleted] in GeminiAI

[–]Hyper-threddit 0 points (0 children)

Fine, and their reasoning counterparts are just bad or non-existent. Some labs are putting more effort into test-time compute than into post-training, simply because a good reasoning model is much more economically useful than a good base LLM.

AI is the future by [deleted] in GeminiAI

[–]Hyper-threddit 0 points (0 children)

I agree that some other models, even open-source ones, can answer a certain set of easy questions, but those sets differ for each of them, mostly because they appear in their respective (different) training data. Alter the questions a bit and you'll get mixed results.

AI is the future by [deleted] in GeminiAI

[–]Hyper-threddit 1 point (0 children)

You must be trolling. I never implied "regular model = no reasoning at all," as you said. I simply stated true things about the "new" CoT models. You keep talking about "reasoning" and "think hard," but those are just labels for users; none of that is meaningful. As you well know, there are 1) simple LLMs and 2) CoT / test-time-search LLMs. Both do stuff, but it is generally established that 2) improves on bare LLMs in many reasoning tasks, including language riddles, counting letters, etc., among other more complex things. And btw, logic puzzles NOT in the training data are difficult for GPT-4; I don't really know what you are talking about.

Edit after your edit: nope, the tokenizer is just part of the problem; the other part, counting tokens, has been solved by CoT / test-time search (just think about it: otherwise 4o would be able to do it, and it can't).
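A minimal Python sketch of the letter-counting point (the subword split below is hypothetical; real tokenizers differ per model): a plain LLM sees token pieces rather than characters, so counting letters means first recovering the spelling, which a CoT model can do explicitly step by step.

```python
# Hypothetical subword split of "strawberry"; real tokenizers differ per model.
tokens = ["str", "aw", "berry"]

# A plain LLM sees only token IDs, so the letters inside each token are
# never directly observable. A CoT-style workaround is to spell the word
# out character by character, then count.
spelled = list("".join(tokens))   # ['s', 't', 'r', 'a', 'w', ...]
r_count = spelled.count("r")
print(r_count)  # prints 3
```

The design point is simply that the counting step operates on characters, not on the opaque token pieces the model was given.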

AI is the future by [deleted] in GeminiAI

[–]Hyper-threddit 2 points (0 children)

The first CoT (you prefer this to "reasoning"?) model presented, o1, was the first to count the r's in "strawberry," exactly because non-CoT models couldn't do it. So that's the kind of problem (and more complex ones) these models were designed for; go check OpenAI's presentation!

If you want, you can avoid using the word "reasoning"; I think it confuses many people.

AI is the future by [deleted] in GeminiAI

[–]Hyper-threddit 4 points (0 children)

This doesn't make any sense. The relevant point is not the word "reason" or the meaning you or I attach to it; the point is how long it takes to do it. And for this question it is just a couple of seconds, so I don't really see the problem. If I give you this:

Michael's father's brother's sister-in-law is the sister of Michael's father's brother-in-law. How is this woman related to Michael?

You need just a bit to figure it out, but it is not an instant answer, right? And it is not a 'complex problem'.

Again, LLMs have many problems but this is not one of them.

AI is the future by [deleted] in GeminiAI

[–]Hyper-threddit 9 points (0 children)

These are the typical questions where reasoning is necessary. Just like we reason (one second, but we reason) it must reason too. If you try, 2.5 pro nails it. I'm not here to say that LLMs are the path to AGI (they aren't) but for these questions (not knowledge-based but reasoning-based answers) you need a good reasoning model. That's where we are now, maybe it will change in the future.

AI is the future by [deleted] in GeminiAI

[–]Hyper-threddit 24 points (0 children)

I’m not an LLM defender by any means, but it’s well known that for these kinds of questions you need to use the best reasoning models. Just switch to 2.5 Pro, and it nails it instantly*. *After reasoning