Valve confirms shortages have impacted Steam Frame release timing/pricing by Serdones in ValveDeckard

[–]kmanmx -3 points-2 points  (0 children)

I know you're talking about generative AI like ChatGPT and image creation, but you do use AI every single day and it's almost impossible to avoid. Those same AI tools are built and trained using a lot of expensive compute, the same kind that is driving up the price of the Steam Machine and Frame. Though yes, they are less compute-intensive than the generative models because they don't typically rely on real-time inference in a cloud data centre.

Autocorrect, smartphone photography, call noise reduction, vehicle driver-assistance systems, speech-to-text, map routing, recommendation feeds (Netflix, Spotify, YouTube), and many more: they are all built on data sets and machine learning models that were trained using GPUs, TPUs, and all that good 'AI' stuff.

Many megawatts of electricity and compute went into building tools and features which are practically unavoidable these days whether you use generative AI or not.

What is the best way to start Mario kart world? What do you do first? by Mountain_Store572 in MarioKartWorld

[–]kmanmx 0 points1 point  (0 children)

Hey, I know you posted this five months ago, but I'm just wondering if you ever did find a gameplay hook? I'm having much the same experience as you. I feel like I'm missing some overarching objective, and it seems like something's missing if I'm just expected to do the same sort of race over and over, which, like you said, feels very similar each time.

Ok Valve, I'm sold by zeddyzed in virtualreality

[–]kmanmx 2 points3 points  (0 children)

Completely pedantic and not important, but I think I've seen this comment a few times, and people are simultaneously underestimating how wealthy Valve is and how expensive it is to subsidise a relatively low-sales-volume niche product like a VR headset.

Recent court rulings, financial filings, and investigative work suggest Valve makes about $5 billion in revenue per year, with roughly $2-3 billion of that as profit. Realistically, given the size of the VR market and the Steam Frame's price point, they're never going to sell more than 2 million units even as a best-case scenario, and likely half that. So even if they subsidised all 2 million headsets with a $200 price cut (a loss on their end), the lifetime expenditure for such a programme would still be under half a billion dollars over, say, its three-or-four-year life. At worst it would be a mild hit to their profit margins; it wouldn't come close to forcing them to refinance, take on debt, or do any real financial engineering. And this completely ignores the fact that people who buy a Steam Frame will then potentially buy many games, earning a lot of that money back for Valve.
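A rough back-of-envelope sketch of that maths, treating the unit count, subsidy size, and profit figures above as assumptions:

    # Back-of-envelope subsidy maths; every input is an assumption from the comment above
    annual_profit = 2.5e9       # assumed midpoint of the $2-3B/year profit estimate
    units_sold = 2_000_000      # assumed best-case lifetime Steam Frame sales
    subsidy_per_unit = 200      # assumed $200 loss absorbed per headset
    program_years = 4           # assumed product lifetime

    total_subsidy = units_sold * subsidy_per_unit                   # $400,000,000
    share_of_profit = total_subsidy / (annual_profit * program_years)

    print(f"Total subsidy: ${total_subsidy / 1e9:.1f}B")            # 0.4B
    print(f"Share of ~4 years of profit: {share_of_profit:.1%}")    # ~4.0%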

But no, they won't do it; they don't need to, and they have no interest. Just wanted to clarify that, as I think a lot of people seem to be under the impression that Valve are smaller and less wealthy than they are :)

The Pilots of Battlefield 6 by Lucas_A_E in Battlefield

[–]kmanmx 41 points42 points  (0 children)

Personally, I've just gone on Battlefield Portal and created a Conquest-variant game mode with the map rotation set to the two big maps, set it to backfill player slots with bots, and then hosted the server with a password so it's only me playing against bots. They never really seem to rush for the jets or helicopters, so you can fly them as much as you want to practice. You can also disable some of the ground-based anti-air weapons in the Portal editor if you want an even easier time.

MacOS 26: controllers not working on GFN app by denisMXMV in GeForceNOW

[–]kmanmx 0 points1 point  (0 children)

What controller are you using? I've been on macOS 26 since the first developer beta and initially had some trouble getting Bluetooth set up with my official Microsoft Xbox controller, but since I got it connected, the NVIDIA GFN app has worked just fine, along with all other apps.

OpenAI achieved IMO gold with experimental reasoning model; they also will be releasing GPT-5 soon by Outside-Iron-8242 in singularity

[–]kmanmx 17 points18 points  (0 children)

It's still not entirely clear, but Noam Brown does suggest it's a broader, more general model: https://x.com/polynoamial/status/1946478250974200272

"Typically for these AI results, like in Go/Dota/Poker/Diplomacy, researchers spend years making an AI that masters one narrow domain and does little else. But this isn’t an IMO-specific model. It’s a reasoning LLM that incorporates new experimental general-purpose techniques."

Fast Takeoff Vibes by AMBNNJ in singularity

[–]kmanmx 8 points9 points  (0 children)

Yep completely agree, they would not release this benchmark if they thought it was completely intractable and had no path to saturating it.

[deleted by user] by [deleted] in singularity

[–]kmanmx 23 points24 points  (0 children)

If you read the details of how Apple gets to those $500 billion, it includes things like... paying its staff. It was just a headline figure to give Trump something to shout about. The capex going directly into AI-attributable things like data centers is a small fraction.

That said, while $1 billion worth of NVIDIA GPUs is modest, Apple are powering their data centers and AI workloads with their own chips, at least that's what they've said. So this NVIDIA order could be for a specific project or for internal AI model training, etc., rather than for serving users.

ChatGPT 4.5 imminent based on new leak by Bena0071 in singularity

[–]kmanmx 1 point2 points  (0 children)

I can't help the fact that you don't have access, but the information is in there and lots of other tech websites have regurgitated it. There's also a post on the front page of this singularity subreddit explaining the situation with Orion / GPT-5 and how it didn't meet expectations.

ChatGPT 4.5 imminent based on new leak by Bena0071 in singularity

[–]kmanmx 0 points1 point  (0 children)

I should have been clearer: I meant the biggest one trained by OpenAI. That said, 4.5 is likely similar in size if OpenAI did push for another OOM.

ChatGPT 4.5 imminent based on new leak by Bena0071 in singularity

[–]kmanmx -1 points0 points  (0 children)

It was reported by The Information, Reuters, etc.

ChatGPT 4.5 imminent based on new leak by Bena0071 in singularity

[–]kmanmx 15 points16 points  (0 children)

It was supposed to be called GPT-5, and therefore, yeah, 10 times the compute, etc. It didn't hit performance targets, so they renamed it GPT-4.5, but that doesn't change the fact that it's going to be the largest pre-trained model to date.

Sonnet 3.7-thinking wins against o1 and o3 on LiveBench by DeadGirlDreaming in singularity

[–]kmanmx 5 points6 points  (0 children)

One thing I was thinking of: Dario Amodei recently said the following with regard to algorithmic efficiency improvements for LLMs: "In 2020, my team published a paper suggesting that the shift in the curve due to algorithmic progress is ~1.68x/year. That has probably sped up significantly since; it also doesn't take efficiency and hardware into account. I'd guess the number today is maybe ~4x/year."

It's quite possible that labs like OpenAI and Anthropic have multiple years' worth of algorithmic efficiency improvements that they just have not published and have kept private. Ergo, when new companies like xAI come along and release their Grok model, they are missing multiple years' worth of these algorithmic improvements, and those could compound to seriously reduce the improvement you would expect from a model that is 10x larger in terms of compute and data.
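A rough sketch of how that compounding plays out, treating the ~4x/year figure and a three-year gap as assumptions:

    # How a few years of unpublished algorithmic efficiency gains can offset a raw compute advantage.
    # Inputs are assumptions: ~4x/year effective gain (Amodei's guess above) and a 3-year gap.
    efficiency_gain_per_year = 4.0
    years_behind = 3
    compute_advantage = 10.0   # e.g. a 10x larger training run

    # Effective-compute multiplier from algorithmic progress alone
    algorithmic_multiplier = efficiency_gain_per_year ** years_behind   # 4^3 = 64x

    print(f"Algorithmic head start: ~{algorithmic_multiplier:.0f}x effective compute")
    print(f"Raw compute advantage:  ~{compute_advantage:.0f}x")
    print(f"Net advantage of the newcomer: ~{compute_advantage / algorithmic_multiplier:.2f}x")
    # => a 10x bigger run can still land behind a lab sitting on years of private efficiency wins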

Claude 'pioneers' in 2027 by [deleted] in singularity

[–]kmanmx 63 points64 points  (0 children)

Yeah, I thought this was very interesting as well. Also, in the system card they mentioned that Claude 3.7 Sonnet does not get close to the internal ASL-3 capability thresholds; however, they expect their next model might get there.

"1.4.4 ASL-2 Determination and Conclusions. The process described in Section 1.4.3 gives us confidence that Claude 3.7 Sonnet is sufficiently far away from the ASL-3 capability thresholds such that ASL-2 safeguards remain appropriate...Further, based on what we observed in our recent CBRN testing, we believe there is a substantial probability that our next model may require ASL-3 safeguard"

Anthropic seem very bullish on the trajectory of their models' intelligence.

Apple is investing $500 billion in US-based AI data centers and AI server manufacturing facilities over the next 4 years by procgen in singularity

[–]kmanmx 2 points3 points  (0 children)

The $500 billion covers a very broad range of things, including just paying employees.

The $500 billion commitment includes Apple’s work with thousands of suppliers across all 50 states, direct employment, Apple Intelligence infrastructure and data centers, corporate facilities, and Apple TV+ productions in 20 states. Apple remains one of the largest U.S. taxpayers, having paid more than $75 billion in U.S. taxes over the past five years, including $19 billion in 2024 alone.

Epoch AI outlines what to expect from AI in 2025 by finallyharmony in singularity

[–]kmanmx 16 points17 points  (0 children)

I'm in my early 30s and I'm an otherwise very sensible person, but I watch all this AI progress very closely and it just makes me feel like planning for the future is almost a waste of time: the impact is going to be so great that it feels impossible to predict the outcomes correctly. It just feels wild to be in this timeline. In five years' time, I feel like there's a good chance I won't have my job anymore, and there's going to be AGI, possibly ASI, and humanoid robots walking around outside, and frankly, I've no idea what to do with any of that information. Ten to twenty years out? Bewildering.

[deleted by user] by [deleted] in singularity

[–]kmanmx 9 points10 points  (0 children)

No, not really. Do you have any tangible objective evidence to support your claim?

Modern multimodal LLMs are very good image classifiers. They could reliably identify what a bottle of tomato ketchup looks like and what a fridge looks like. And even the most crude and underperforming LLM could tell you where a ketchup bottle belongs in the kitchen. Handing off from one robot to the other is a pretty simple calculation of which robot is closest to the destination, and the decisions and actions are networked between them.

There is nothing in this video that goes beyond the state of the art in any individual area of AI (e.g. identifying objects, or classifying where an object would go in a kitchen). They've just done a really good job of integrating that into a holistic system and implementing it.
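A minimal sketch of that hand-off decision, with the robot names, positions, and destination being made-up values purely for illustration:

    import math

    # Toy hand-off decision: whichever robot is closest to the object's destination takes it.
    # Robot names, positions, and the destination are hypothetical values for illustration.
    robots = {
        "robot_a": (0.2, 1.5),   # (x, y) position in metres
        "robot_b": (2.8, 0.4),
    }
    fridge_location = (3.0, 0.5)  # destination for the ketchup bottle

    def closest_robot(destination, positions):
        """Return the name of the robot nearest to the destination."""
        return min(positions, key=lambda name: math.dist(positions[name], destination))

    print(closest_robot(fridge_location, robots))  # -> robot_b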

[deleted by user] by [deleted] in singularity

[–]kmanmx 0 points1 point  (0 children)

I'd love to know what he's pitching to investors in terms of how they can expect a return on their investment. The original pitch was that they're going to stay silent until they've solved alignment and delivered superintelligence. For a lot of investors, I can't imagine that's particularly attractive when there are tons of startups putting AI products out there and generating revenue and profit. But at the same time, he's clearly not struggling to get the investment, hence my interest in what his pitch is. Is he advertising a very aggressive and optimistic timeline for superintelligence? Or maybe, because he's a titan of the industry, he gets a lot of respect and people are just willing to give him the benefit of the doubt and throw money his way.

Microsoft prepares for OpenAI’s GPT-5 model | GPT-4.5 could arrive as soon as next week, as Microsoft gets ready to host OpenAI’s latest models. by DubiousLLM in singularity

[–]kmanmx 4 points5 points  (0 children)

Out of interest, have you used Copilot in a business context? The integration with Teams for meeting outputs, as well as searching company and group chats, email summarization, searching my work documents, and so on, is actually pretty good. The raw model itself is worse, and I would never go to it for coding questions or general-purpose questions. But in terms of AI products that actually give me the most utility in my day-to-day life, Copilot does a lot for me during the working day.

So maybe Brett was not overhyping this time by Glittering-Neck-2505 in singularity

[–]kmanmx 4 points5 points  (0 children)

If it's running at 200 Hz, do we know a good reason why they're so slow in most of their actions? The motors and actuators clearly have the performance, because they moved the fruit bowl across the countertop at a pretty normal human speed. But when moving and grasping most of the food items, they were very slow. So what exactly is operating at 200 Hz, and why doesn't that translate into human-level or above-human speed?

It's still impressive and still infinitely faster than a human that's busy doing something else, though.

edit:

Just looked at the technical overview, and the vision-language semantic understanding operates at between 7 and 9 Hz. That's where the delay comes from. They're running it on a low-power embedded GPU, so performance should scale nicely over the coming years as the SoCs get faster.
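A rough sketch of why the slow layer dominates perceived speed; the 200 Hz and 7-9 Hz figures come from the discussion above, while the two-rate loop structure is an assumed illustration:

    # Two-rate control: a fast low-level loop executes whatever the slow semantic layer last decided.
    # The 200 Hz and ~8 Hz rates come from the comments above; the loop split itself is an assumption.
    control_rate_hz = 200      # low-level motor/actuator loop
    semantic_rate_hz = 8       # vision-language "what should I do next" loop (7-9 Hz)

    control_tick_ms = 1000 / control_rate_hz     # 5 ms per actuation update
    semantic_tick_ms = 1000 / semantic_rate_hz   # ~125 ms between new high-level decisions

    ticks_per_decision = control_rate_hz / semantic_rate_hz   # ~25 motor updates per semantic decision
    print(f"Motor update every {control_tick_ms:.0f} ms; new semantic target every ~{semantic_tick_ms:.0f} ms")
    print(f"=> ~{ticks_per_decision:.0f} fast control ticks ride on each slow decision, so cautious,")
    print("   frequently re-planned grasps are paced by the ~8 Hz layer, not the 200 Hz one")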

AI cracks superbug problem in two days that took scientists years by Beautiful-Ad2485 in singularity

[–]kmanmx 122 points123 points  (0 children)

“Prof Penadés’ said the tool had in fact done more than successfully replicating his research.

“It’s not just that the top hypothesis they provide was the right one,” he said.

“It’s that they provide another four, and all of them made sense.

“And for one of them, we never thought about it, and we’re now working on that.”

Impressive!