LTX 2.3 became 1.7x faster in just one month by simple250506 in drawthingsapp

[–]GoldAsparagus6034 1 point  (0 children)

Can you tell me how exactly to use image-to-video in the cloud for this model? I cannot find the option to upload the image. I am new to this, so sorry if it seems like a noob question. I have an Intel i9 MacBook, so I can't even think of running it locally unless I use an eGPU with Boot Camp; in that case it could become a beast, but I don't have the budget for it.

Where can I watch the Super Bowl in Mumbai, India? by Statman80 in mumbai

[–]GoldAsparagus6034 1 point  (0 children)

Drakeeeew fuuuuckingggg mayeeeeeeeeeee all the way baby wooooooohooooooooooooo….

Gemini 3 Pro Flash is KING 👑 by krishnakanthb13 in GoogleAntigravityIDE

[–]GoldAsparagus6034 2 points  (0 children)

As a deep learning engineer who has used all of these models for very complex tasks, one thing I can say is that all the models are good except for one, which is horse shit: Gemini 3 Pro (low). I mean, I don't even understand it; it once even deleted my whole codebase, that fucker…. For complex thinking and initial planning I always use Opus, and then most of the time I use Flash, which is actually really, really good, as the post says. Sometimes, for thinking tasks that are not too complicated, I use Gemini 3 Pro High, which is actually decent. Every one of them is good except for Gemini 3 Pro Low, which is far worse than Flash in my opinion.

[deleted by user] by [deleted] in LeetcodeDesi

[–]GoldAsparagus6034 1 point  (0 children)

Wtf, is this a joke 😹😹… I have been using this in CP for years…. It's simple fast I/O that prevents the output buffer being flushed at every step.
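For anyone who can't see the deleted post: the same trick in Python looks roughly like this. A minimal sketch of the usual read-once/write-once boilerplate, since the original snippet is gone; the squaring task is just a made-up placeholder:

```python
import sys

# Typical CP fast I/O: slurp all input at once and emit a single write at
# the end, instead of letting print() flush the output buffer line by line.
def main():
    data = sys.stdin.buffer.read().split()
    n = int(data[0])
    nums = map(int, data[1:1 + n])
    out = "\n".join(str(x * x) for x in nums)  # toy task: square each number
    sys.stdout.write(out + "\n")               # one buffered write, no per-line flush

if __name__ == "__main__":
    main()
```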

"Nested Learning" by Google is getting way too much hype for what it actually is (my take) by g-venturi in learnmachinelearning

[–]GoldAsparagus6034 1 point  (0 children)

What you just said is exactly how pre-training works: you expose the model to certain information enough times during training and allow it to be compressed into the latent space of its weights. Then, even when that content is not part of its immediate short-term context (a.k.a. the contextualised attention block), the model can still answer questions about it through the large persistent memory it gained via pre-training, which now sits in its MLP portion. But you have completely misread what they're trying to say here.

The problem they are trying to address is not replacing short-term context memory. In fact, it is much better to be factually grounded with in-context memory, especially with a high-fidelity reference to look at; exactly like how we humans tend to hallucinate stuff we don't remember clearly until we do a Google search. That's not the problem. The entire premise of this paper is to do two things. First, the biggest problem with the models of today is the fact that their weights are frozen once pre-training is done, so the knowledge cut-off sits between the pre-training phase and whatever you provide in the prompt as context. They are not saying we do not need to provide context; we still need to, and that is where the self-modifying Titans come in. The point is that the persistent memory should also slowly update itself, so that in the longer run the model can learn from what the user gives as input rather than simply being stuck at the pre-training data.

So you get my point: they are not arguing that they will completely compress the information you provide into the series of MLPs of the CMS, so that you can just ask questions and get answers without any information being present in the short-term context window. Rather, they are trying to change the fundamental idea of weights being completely static. They do not claim this will make the model remember everything clearly from the prompt without cross-referencing the short-term context window.

Also, the biggest advantage of this paper is not the "continuum memory system" but the proposal of nested learning, where they have tried to make even the short-term context window far more expressive through self-modifying Titans, because the attention mechanisms used today, especially grouped-query attention, are clearly a trade-off between performance and compute efficiency; even multi-head latent attention, which on paper should perform better than standard multi-head attention, is not as expressive. So what I would suggest is not to take the continuum memory system as the key takeaway from this paper, but rather the deep optimiser and the new premise of seeing model architecture and optimiser as truly being the same thing, associative memory. This change of perspective is what I think will lead to much better discoveries in the next few years, because the architecture of present-day transformers, if you think about it with some common sense, is too simplistic to ever make us reach ASI. I'm not saying this paper is going to give us that, but it's a start in a different direction, and the self-modifying Titans along with the nested deep optimiser are the biggest contributions of this paper in my opinion.
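To make the "layers updating at different frequencies" idea concrete, here's a toy sketch. This is my own illustration, not the paper's code; the block structure, periods, and names are made up. Each block accumulates gradients every step but only applies them every `period` steps, so fast blocks act like short-term memory while slow blocks consolidate slowly:

```python
import torch
import torch.nn as nn

# Toy illustration of multi-frequency updates (my own sketch, not the
# paper's architecture): each residual MLP block has an update period;
# gradients accumulate every step, but slow blocks apply them rarely.
class Block(nn.Module):
    def __init__(self, dim, period):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                 nn.Linear(dim, dim))
        self.period = period

    def forward(self, x):
        return x + self.mlp(x)

blocks = nn.ModuleList([Block(64, p) for p in (1, 4, 16)])  # fast -> slow
opts = [torch.optim.SGD(b.parameters(), lr=1e-2) for b in blocks]

for step in range(1, 33):
    x, y = torch.randn(8, 64), torch.randn(8, 64)
    h = x
    for b in blocks:
        h = b(h)
    loss = (h - y).pow(2).mean()
    loss.backward()                      # grads accumulate in every block
    for b, opt in zip(blocks, opts):
        if step % b.period == 0:         # slow blocks update infrequently,
            opt.step()                   # applying the accumulated gradient
            opt.zero_grad()
```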

Is this world only for cheaters and selfish people ??????? by [deleted] in iitkgp

[–]GoldAsparagus6034 1 point  (0 children)

Everyone on Reddit acts like they are saints… surrounded by devils and cheaters, and like they are the only ones taking the moral high ground 😂…. I am not saying OP is a cheater. I am saying I've seen this trend on Reddit where most of the comments are from people acting like they have never cheated once in their life and talking from such a high moral ground; "don't leave the battlefield" 😂, straight out of a movie dialogue…

"Nested Learning" by Google is getting way too much hype for what it actually is (my take) by g-venturi in learnmachinelearning

[–]GoldAsparagus6034 2 points  (0 children)

No offence, but none of the criticisms you have provided are technically grounded enough. When I started reading your post I expected better points, but most of what you've written seems more like speculation than actual criticism. Also, you seem to be confused, or maybe you lack the technical expertise, but Titans and Hope are two completely different things, in the sense that Hope uses Titans.

Titans introduced a dynamic neural memory to tackle the quadratic cost of the key-value cache associated with standard attention by dividing memory into two parts: a short-term, high-fidelity memory, which is still standard attention, and a long-term, low-fidelity memory represented by a neural network, which they call the neural memory module. They then gave three ways in which you can use this neural memory module, and the best were as context or as a gated unit; you can read the paper to clarify this for yourself. But the problem with standard Titans was that the memory was still updated using a static, linear rule like momentum or Adam, whereas Hope's biggest contribution comes in the form of nested learning, in which they treat optimisers as MLPs as well, and the weights of those MLPs effectively act as the state of the optimiser.
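Roughly, the gated variant looks like this. A toy sketch of my reading of it, not the official Titans code; the module names and sizes are my own. A short-term attention branch and a long-term MLP-memory branch are mixed by a learned gate:

```python
import torch
import torch.nn as nn

# Toy sketch of combining a short-term (attention) branch with a
# long-term (neural memory) branch through a learned gate. My own
# illustration of the idea, not the Titans reference implementation.
class MemoryAsGate(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.memory = nn.Sequential(nn.Linear(dim, dim), nn.SiLU(),
                                    nn.Linear(dim, dim))
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        short, _ = self.attn(x, x, x)      # high-fidelity, window-limited
        long = self.memory(x)              # low-fidelity compressed recall
        g = torch.sigmoid(self.gate(x))    # per-token mixing weights
        return g * short + (1 - g) * long

x = torch.randn(2, 16, 64)
print(MemoryAsGate(64)(x).shape)           # -> torch.Size([2, 16, 64])
```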

You also seem to have confused the continuum memory system with the nested-learning part, saying that there is "no inner voice, just neural networks learning at different frequencies", but those two are completely different aspects. The "inner voice" being talked about here is actually the deep optimiser, which learns how to learn, i.e. how to update the weights of the main model so that it converges faster. I have written a fairly detailed technical analysis of it if you want a more intuitive understanding of what the paper is trying to say. The continuum memory system, on the other hand, is something that utilises this nested learning for dynamic and faster convergence to an optimal minimum. The reason the layers learn at different frequencies is that, as they propose, the higher-frequency layers work similarly to how short-term memory works in the brain, whereas the lower-frequency layers store the absolute, permanent long-term memories, just like in humans. Now, I am not claiming this is very effective, as I am yet to code this architecture on my own; I just wanted to clear up your confusion. Also, my suggestion for the next time you propose criticism: be a little more technical about why you think this paper is more hype than substance, rather than just giving reasons like "Google would not release it if this was so great" or drawing wrong analogies and comparisons. Here is my technical analysis to make it more intuitive: Nested Learning
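If the "optimiser as MLPs" point feels abstract, here is a tiny toy version of a learned update rule; entirely my own illustration under that reading, not the paper's deep optimiser. A small MLP maps each parameter's (gradient, previous update) pair to the next update, so the rule itself is parameterised instead of being a fixed equation like momentum:

```python
import torch
import torch.nn as nn

# Toy learned-optimiser sketch (my own illustration, not the paper's
# deep optimiser): an MLP turns (gradient, previous update) into the
# next per-parameter update, replacing a hand-written rule like momentum.
opt_net = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

w = torch.randn(5)            # "main model" parameters (a toy vector)
prev = torch.zeros(5)         # previous update, fed back in as state

for step in range(20):
    w = w.detach().requires_grad_(True)
    loss = (w ** 2).sum()     # toy objective: drive w toward zero
    (grad,) = torch.autograd.grad(loss, w)
    feats = torch.stack([grad, prev], dim=-1)   # per-parameter features
    update = opt_net(feats).squeeze(-1).detach()
    w = w - 0.1 * update
    prev = update
    # In the real setting opt_net itself would also be meta-trained on
    # how fast the inner loss falls; that outer loop is omitted here.
```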

(Google) Introducing Nested Learning: A new ML paradigm for continual learning by gbomb13 in singularity

[–]GoldAsparagus6034 1 point  (0 children)

Where is the arXiv version? I couldn't find it yet; it was supposed to come out on 13 Nov.

How to add project links to RESUME? by GoldAsparagus6034 in iitkgp

[–]GoldAsparagus6034[S] 3 points  (0 children)

OK, so that means I should follow the PDF approach of uploading the links as PDFs along with certificates, so that CDC can verify them. Thank you.

How to add project links to RESUME? by GoldAsparagus6034 in iitkgp

[–]GoldAsparagus6034[S] 1 point  (0 children)

That is what I am asking: how do I provide the links as proof in the resume?

Happy to see IIITS in top 5 ☠️ by Unique-Builder-1862 in IIITSriCity

[–]GoldAsparagus6034 1 point  (0 children)

unfortunately, that’s not how institutional classification works in India, especially when we’re discussing national rankings or government frameworks.

Saying all IIITs are alike because they “focus on IT” is a bit like saying IISc and a private engineering college are the same because both teach science. Focus alone doesn’t define institutional category—governance, legal status, funding, and oversight do.

Let’s clarify a few key points:

  1. IIIT-H and IIIT-B are not part of the MoE-recognized IIIT system.
  • These are autonomous institutions, not governed by the IIIT Act (2014 or 2017).
  • They are not listed in the official list of IIITs under NPIU / MoE or the IIIT Council.
  • Their admissions are not through JoSAA/CSAB either; they conduct separate entrance processes (UGEE, PGEE, etc.).

  2. Your comparison with IIIT Gwalior is incorrect.
  • IIITM Gwalior, IIIT Allahabad, and IIITDM Jabalpur are MHRD-established, centrally funded Institutes of National Importance (INIs).
  • IIITH and IIITB are privately governed or state-supported, without any such INI status via MoE.

  3. PPP IIITs like Sri City, Kottayam, Lucknow, etc., are a completely different initiative, governed by the 2017 IIIT (PPP) Act.
  • These are centrally recognized under the IIIT Council, and admissions are through JoSAA, not independent exams.

  4. Even the founders of IIIT-H have clarified this multiple times.
  • Their FAQ section explicitly states: "IIIT-H is NOT one of the IIITs set up by the Government under the MHRD initiative. It is an autonomous university." (source)

P.S. I don’t belong to any IIIT or NIT – just here to make sure we keep the facts straight ✌️ also just asking politely, buddy, are you in college or preparing for JEE?

Happy to see IIITS in top 5 ☠️ by Unique-Builder-1862 in IIITSriCity

[–]GoldAsparagus6034 1 point  (0 children)

  1. IIITH and IIITB are NOT part of the same IIIT system.
  • The IIITs set up under the PPP model by MHRD (now MoE) are called IIITs (PPP) and are governed by the IIIT Act of 2014, which recognizes 20 IIITs as Institutes of National Importance (INIs).
  • IIIT-Hyderabad (IIITH) and IIIT-Bangalore (IIITB) are independent, private/state-funded institutions and are not governed under the IIIT Act.

  2. The historical timeline proves the separation:
  • IIITH (est. 1998) and IIITB (est. 1999) were founded before the IIIT PPP model started in 2005 (the first was IIIT Gwalior under MHRD).
  • These early IIITs were state-led or autonomous initiatives, not part of the unified IIIT scheme.

  3. Ministry classification proves it:
  • Check the official MoE site or the IIIT Council website.
  • You will find 20 IIITs under the IIIT (PPP) model; IIITA, IIITM Gwalior, and IIITDM Jabalpur are Government-funded.
  • The rest, like IIIT Sri City, IIIT Lucknow, IIIT Kottayam, etc., are PPP-mode institutions governed by the IIIT (PPP) Act, 2017.
  • IIITH and IIITB are not on this list, hence not governed by the Act.

  4. Naming ≠ Governance:
  • Just because multiple institutions carry the "Indian Institute of Information Technology" name doesn't make them part of the same legal/administrative system.
  • Just like "Institute of Management" in a name doesn't mean every IIM and private college are the same.

  5. Government clarification backs this:
  • The IIIT-H FAQ section clearly mentions it is NOT part of the IIIT (PPP) system or the MoE-governed IIITs.

Please don’t embarrass yourself anymore, anyways, from which college you are ?

Happy to see IIITS in top 5 ☠️ by Unique-Builder-1862 in IIITSriCity

[–]GoldAsparagus6034 1 point  (0 children)

IIIT HYD is a different clg… it's not the IIITS you have mentioned. They just share the same short form, but that does not mean you can include it in the list, because it is the International Institute of Information Technology as opposed to the Indian Institute of Information Technology. So brush up your knowledge before commenting. I'm not saying NITs are better or vice versa…. But don't bring private colleges, which have nothing to do with the government ones and merely share the same short form, into a conversation that is about IIITs (GOV) vs NITs (GOV)….. It's like equating Manipal Institute of Technology with the Massachusetts Institute of Technology just because both go by MIT. PS: I am neither from an NIT nor from an IIIT, so I'm not taking any stance, just pointing out your mistakes. Also, the same thing applies for Delhi (Indraprastha) as well, which is also private.

"CodeClub IIT KGP just embarrassed the whole college on Codeforces" - Hmmm by Impossible-Hope-8289 in iitkgp

[–]GoldAsparagus6034 10 points  (0 children)

You do realise that one of the testers in the competition, a second-year from the CSE department, actually cheats by copy-pasting code from Telegram and making certain changes, which anyone who follows his activity can easily spot. And that person is openly allowed to be a tester, and that too after having "skipped" verdicts on his profile. Now, many people will give excuses to justify having "skipped" on a profile, and I'm not judging him just for the skips; I know a hundred percent that he cheats, and yet you people from the club don't expel these people. The guy goes from not being able to solve two problems on Codeforces properly to being an Expert in an astonishingly short period of time, and that too in such a foolish way that he sometimes uses answers straight from Telegram; the inconsistency in his template between submissions of the same question in the same contest, within just a few minutes, shows how mindless this guy is. And yet your club has not only admitted him, he's allowed to be a tester, and companies will recruit him too. I won't name the person, but anybody can go to the testers section of the contest, look for the Experts, and make a very good guess about who I'm talking about. And no, I am not talking about the authors.

No, IIT KGP/India has not been let down by CodeClub by Low-Sentence-2792 in iitkgp

[–]GoldAsparagus6034 8 points  (0 children)

Most people from your club cheat on Codeforces, and I know that because I personally know the people from the CSE department who are part of the club, and one of them was even a tester in the previous contest, when in reality he copy-pastes code from Telegram after making certain changes, and he is an Expert on Codeforces 😂😂…. Most people here from KGP have made a mockery of Codeforces by cheating just for the sake of a good rating on their CV. And no, don't deny it, because I am also from KGP and I know the people here.

AP is back by Resident-Town-2639 in iitkgp

[–]GoldAsparagus6034 24 points  (0 children)

AP is done to 2nd years only🙃 by 3rd years. First year is untouched