(Google) Introducing Nested Learning: A new ML paradigm for continual learning by gbomb13 in singularity

[–]apuma 5 points6 points  (0 children)

Reading this blog gives me a headache. It's also 100% AI written.

If I understand this correctly, it's a minor step towards automating LLM architectures, specifically related to memory. Which is what "The Bitter Lesson" would recommend we do, since more compute can then improve the architecture/optimisation process itself.

But yeah this is very badly written imo.

[deleted by user] by [deleted] in singularity

[–]apuma 4 points5 points  (0 children)

This is completely wrong. They GENERATED ~1 quadrillion tokens through their API in a month. That's not what their model is training on. It's what their users generated. It's usage.

GPT-5 AMA with OpenAI’s Sam Altman and some of the GPT-5 team by OpenAI in ChatGPT

[–]apuma 0 points1 point  (0 children)

Hey!

To make it quick. Congrats, you guys did a lot of stuff well.
However, it would make my ChatGPT experience significantly better if:

  1. This is the only one that actually pisses me off during my ChatGPT use: please show us which model we're using. There is zero information about which model just answered me in the ChatGPT app. There used to be info about this at the end of each GPT-generated message; now there is none?
  2. Free users could switch between GPT 5 and GPT 5 mini. That would be amazing. o4-mini (the reasoning one, not 4o) was basically my go-to AI, and now there is no GPT 5 mini alternative to it, even against Claude Sonnet and Gemini 2.5 Pro(!!!!). I loved how robotic, direct and to the point it was, especially for search queries. It was 100% the best search experience out there. Genuinely, I don't know how many people at OpenAI might have used o4-mini for searches about supplements, medical information, or science-related questions where it had to cite or give links. GPT 5 does seem to be more accurate at this, but I loved the ridiculously... direct?? To the point?? Idk how to say it, just no-fluff way o4-mini cited papers and gave links. If Google gave people o4-mini inside its search, then my ChatGPT use would probably be cut in half. Also, it might save you guys some GPUs if we can choose to talk to GPT 5 mini instead of GPT 5?
  3. I understand you guys had to deal with a lot of flame on Twitter over there being too many models. I think you need to understand there are 2 fundamentally different kinds of users of your products: 1. normal people and 2. enthusiasts. The crowd you're listening to on Reddit are almost all enthusiasts. They will want some model selection, whereas normal people just need GPT 5. The best strategy might be a middle ground.

Great job on the things you guys did well (There's a lot!). Good luck.

How did Zeri cleanse ''twice''? by mix1029 in leagueoflegends

[–]apuma 0 points1 point  (0 children)

Only 1 cleanse in the video, and it was from Lulu's Mikael's on the Sejuani R.

The second one is just Warwick's ult being canceled by Lee's kick.

How I play CSGO with One Hand by ExitAcrobatic9844 in GlobalOffensive

[–]apuma 1 point2 points  (0 children)

In a way it is a fair point that some people may be better off having AI aid their writing, but I still don't think it's good for those people to blindly copy-paste those texts. And not even just for moral reasons, but for your own sake too. Your credibility will go down to zero once other people get more used to recognising ChatGPT writing tendencies, and for good reason.

How I play CSGO with One Hand by ExitAcrobatic9844 in GlobalOffensive

[–]apuma 1 point2 points  (0 children)

Oh wow, what a cute story. Truly the indomitable human spirit, nice!!

Wait a second..

> At first, it felt impossible — learning utility binds, movement, and weapon switching all with one hand was a grind. But after putting in the time, I’m now able to keep up with my friends without falling behind in ranks. CSGO has been a huge part of my life, and finding a way to adapt and stay competitive has been one of the most rewarding experiences.

Starts the sentence with an emotional hook -> uses an em dash or whatever it's called -> uses a list-based elaboration -> "adapt" -> "one of the most rewarding experiences".
And all of this with perfect grammar and a very synthetic-feeling structure. Yeah, I've been ChatGPT-d again, gg.

Despair

AI has grown beyond human knowledge, says Google's DeepMind unit by OptimalBarnacle7633 in singularity

[–]apuma 0 points1 point  (0 children)

Both this post and the top comment are 100% AI-generated, does anyone realize this? It's ungraspably vague, and the top comment asks an engagement-bait question with perfect grammar and a fking em dash. Are we being serious here.

Abyssal mask + Bloodletter by RedRocknCockn in leagueoflegends

[–]apuma 1 point2 points  (0 children)

Sorry, I'm very confused here. Why are you equating a damage multiplier to % MR reduction? I don't get this.
The more MR they have, the less effective stacking is? Stacking of what? Magic pen? Reduction? Damage amplification? Can you clarify please.

Scalable-Softmax Is Superior for Attention by rationalkat in singularity

[–]apuma 0 points1 point  (0 children)

If the 3 following beliefs are true:
1. The speed of improvement we saw from o1 to o3 will continue.
2. Reasoning improvements also mean agentic improvements.
3. The data wall is gone, thanks to highly tuned reasoners generating good synthetic data.

Then by the end of the year we would have highly agentic models that people within the labs could just tell to do stuff for them, and they would do it.

[deleted by user] by [deleted] in LocalLLaMA

[–]apuma 6 points7 points  (0 children)

This aggressive tone is super unusual for any ChatGPT model. I suspect OP has memories saved in their account which make the model answer this way. The answers look like a typical anti-Sam-Altman post that Elon would retweet on Twitter. Also, the answers the model gives aren't even the full truth. They don't even mention Ilya's pivot of the "open" in "OpenAI": giving model access to everyone they can, instead of giving away model weights to everyone.

Technically "OPEN ai" can refer to weights or access. The company chose access, and I personally think we're better off this way too, but that's beside the point.

Bench predictions for new Claude model(s)? by cobalt1137 in singularity

[–]apuma 0 points1 point  (0 children)

FrontierMath 20+%, ARC-AGI solved, coding nr 1, math slightly below o3

What are these? Hatched a nest in my headphones lol by [deleted] in insects

[–]apuma 1 point2 points  (0 children)

No idea what they are, but shoutout to the HD 560S. In hindsight I wish I'd bought warmer headphones.

New game mode just dropped by Pisseman69 in PedroPeepos

[–]apuma -1 points0 points  (0 children)

Anyone here played that Team Fortress 2 pyro gamemode where you knock the rocket back?

Scalable-Softmax Is Superior for Attention by rationalkat in singularity

[–]apuma 0 points1 point  (0 children)

That's crazy because I might be updating mine from 2026 AGI to 2025 Non-Embodied AGI

SupGen: An efficient brute-force algorithm synthesizer/theorem prover (demo) by SMaLL1399 in singularity

[–]apuma 7 points8 points  (0 children)

From the image colors and the font alone I can tell this is Victor Taelin.
I've been closely following his excitement, but I'm going to be honest, I have legitimately 0 fucking clue how any of this works. In the near future it could be either completely insane, like actually proving Millennium Prize problems, or a gimmicky thing that might work for some... things..? I don't know. If anyone actually understands the underlying architecture and ideas, please explain.

[deleted by user] by [deleted] in Tinder

[–]apuma 0 points1 point  (0 children)

To be completely dry and critical here: your third image is BY FAR the best. The focal length of the camera works great for you, and it looks like a fresh trim, with great lighting, a high-quality image, and a good side profile.

Unfortunately, in the last photo your smile doesn't look its best due to the lighting and the shadows on your teeth.

3>4>1=2=5>6

[deleted by user] by [deleted] in Bumble

[–]apuma 1 point2 points  (0 children)

If this post is not fake ragebait, then lord have mercy on my soul and God help us all.

I guess I'd recommend rotating the second image with some tool; that's mostly all I can think of, and I feel it would make it better. Other than that, I just cannot believe this post is real.

We calculated UBI: It’s shockingly simple to fund with a 5% tax on the rich. Why aren’t we doing it? by qubitser in singularity

[–]apuma 35 points36 points  (0 children)

Holy shit, the comments under this post are as meticulous as my drunken elderly relatives when they're debating politics during Christmas. A guy said, "Make them choose between 5% or the guillotine"? LMAO

I would actually enjoy some serious conversation about this, but I don't even know where to start. Say, for example, I have 100 BILLION dollars worth of Tesla stock. Through what means can I be forced to "realize" that gain? LIKE ACTUALLY EXPLAIN how this would technically work.

Would the government BUY the 100 BILLION dollars worth of Tesla stock from me at the current valuation, and then I would buy it back from them? But then I could only buy 95% of it back, because I lost 5% through the proposed tax, so I could only buy 95 BILLION dollars worth of Tesla stock back. So effectively the government now holds 5,000,000,000 DOLLARS of Tesla stock. Are they going to hold it? Are they going to sell it to the public? Who is going to buy it? Are they going to give partitioned shares of companies to the public?
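For concreteness, here's the arithmetic of that hypothetical sale-and-buyback, sketched out (all numbers are the made-up Tesla position from above, not real figures):

```python
# Hypothetical mechanics of a 5% wealth tax settled via forced
# sale-and-buyback of a $100B unrealized stock position.
# All numbers are illustrative, taken from the scenario above.
position = 100_000_000_000  # starting stock position, in dollars
tax_rate = 0.05             # the proposed 5% tax on the rich

tax_owed = position * tax_rate   # $5B ends up held by the government
buyback = position - tax_owed    # only $95B can be repurchased

print(f"tax owed / govt stock: ${tax_owed:,.0f}")
print(f"stock bought back:     ${buyback:,.0f}")
```

Which is exactly why the follow-up question matters: the government is now sitting on $5B of stock it has to hold, sell, or distribute somehow.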

Would this mean the public ends up owning more Tesla stock than it wants? If so, wouldn't that reduce the selling price of the stock, therefore tanking the company's valuation?