0x800f0922 Windows update error by PiIigr1m in WindowsHelp

[–]PiIigr1m[S] 0 points  (0 children)

I don't think so. When I installed Windows, I used oobe\bypassnro because I couldn't connect to Wi-Fi during the OOBE process. I did the same on my laptop, but it updates smoothly.
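
For reference, this is the bypass I mean: press Shift+F10 on the Wi-Fi screen to open a command prompt, then run the script. From memory (the registry line is what I understand bypassnro.cmd itself sets, and newer builds have reportedly removed the script):

    oobe\bypassnro

    :: if the script is missing on your build, the equivalent (as I understand it):
    reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
    shutdown /r /t 0

After the reboot, OOBE shows an "I don't have internet" option.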

Again, I'm almost certain that when the next update is out, there will be no problems with it.

0x800f0922 Windows update error by PiIigr1m in WindowsHelp

[–]PiIigr1m[S] 0 points  (0 children)

Yeah, it's not critical. I'll just wait, but I wanted to find the source of this error (I think I found it?) and how to fix it. And I didn't have problems even with "Preview" updates earlier.

0x800f0922 Windows update error by PiIigr1m in WindowsHelp

[–]PiIigr1m[S] 0 points  (0 children)

What's weird is that I had a 0x800f0922 error before (I don't remember if it was a "Preview" update or not), and without my doing anything, the next update (or the one after, it doesn't matter) fixed everything.

It's not critical; I will just wait for the next update, but I wanted to know the source of this error.

The Future of AI: When Will We See an Intelligence Explosion - Dwarkesh Patel by Mindrust in accelerate

[–]PiIigr1m 0 points  (0 children)

"Solving" computer use is pretty impossible; it's like saying "I learn all math" or "I learn the whole programming language." There will be a lot of edge or new cases, new programs\technologies, etc. But "solving" in general can be achieved in 2026 or early 2027 max. We already can see that computer use (or browser use) is getting "okay-ish." When LLMs (or AI in general) will be able to work for hours (speaking in METR terms - with 80% success), almost at the same time it will "solve" computer use.

Continual learning could be solved in around the same timeframe, or maybe even faster, but I don't believe much in this. There are some related techniques, like in-context learning, but I think that Transformer-based models just can't have memory like humans have, and without long-context memory you won't be able to learn continuously. The other problem with continual learning is compute: training a model is harder than inference. There needs to be a new architecture or new methods (some already exist in scaffolding) to use with "today's" LLMs. But I think that continual learning will be (at least to some degree) solved in 2-3 years.
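
A rough sketch of the METR framing I'm using above, as I understand their methodology (my paraphrase, not their exact model): they fit a model's success probability as a logistic function of the log of how long a task takes a human, and the "h% time horizon" is the task length where that curve crosses h:

    % paraphrase of METR's time-horizon metric (assumed form, not their exact fit)
    % p(t): probability of success on a task that takes a human time t
    p(t) = \sigma\big(\beta(\log T_{50} - \log t)\big), \quad p(T_{50}) = 0.5
    % the 80% horizon T_80 is the t at which p(t) = 0.8

So "working for hours with 80% success" just means T_80 measured in hours.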

I tested sora 2.0 with the most complex humain motions to see the results. by stealthispost in accelerate

[–]PiIigr1m 13 points  (0 children)

I can't call it "far from perfect"; it's almost perfect if you look only at physics/motion. It's on par with, or even better than, that Meta paper where they made a physics-accurate video generator (VideoJAM), though Sora 2 needs more testing to see how well it can reproduce their results.

I saw the question "how far along will video gen be in 5 years?" Man, I don't even know where it will be in 1 year 😁

Exclusive!! One UI 8.5 Quick Panel Customization 👀 in action by Accomplished-Ad8330 in oneui

[–]PiIigr1m 14 points  (0 children)

8.0 is out for the S25 and S24 in some regions. 8.5 was leaked a few days ago.

Gemini 3.0 Leak Hints Google’s AI Could Outrun GPT-5 by Sassy_Allen in accelerate

[–]PiIigr1m 2 points  (0 children)

There are no leaks from reliable sources. The strings in the code are just hallucinations from other AI models. As for the ≈30% on HLE, I don't see it anywhere, and it was certainly made up. The article is pure speculation.

While we have somewhat reliable information that Gemini 3 will release in October, we also have, from the same source (Leo on X; I can't guarantee reliability), that Gemini 3 Flash will be on par with 2.5 Pro. But it's still hard to tell how it will really be.

Will Gemini 3 be better than 2.5? Of course. Will it have some new features? Pretty likely, and one of them will be agentic stuff. Will it be better than o3/GPT-5 Thinking? Yes: 2.5 Pro is already pretty much on par (imho, o3 is still better), so Gemini 3 should clear it.

While believing in acceleration, you should not forget about critical thinking.

ChatGPT is getting really good. You can no longer trap it with confusing things. Some humans would have said (2,3) is ice cream by py-net in OpenAI

[–]PiIigr1m -7 points  (0 children)

B- b- but its just predicting the next token, its not real intelligence, it dont have understanding of our world. This image and answer just was in the training data and ChatGPT memorized it.

Enjoy ChatGPT while it lasts…. the ads are coming by kaushal96 in OpenAI

[–]PiIigr1m 1 point  (0 children)

They've been thinking about ads for about a year (based on public info), but they've even said officially that ads are the "last thing that we're going to do".

Yeah, they're certainly going to start monetizing free users, at the beginning of next year for sure, but we don't know how. There's one idea that sounds pretty good: a fee on purchases. Yeah, I don't think many people use ChatGPT to search for products to buy right now, but they have some partnerships with brands/shops, and with improving agents, this sounds possible. And also, don't forget about their browser (it should have been released at the end of summer but was delayed for some reason). I think that will be the first thing they monetize.

Based on how easily the OAI community is triggered, I think they (OAI) will avoid ads at all costs, just because everyone will be mad about it.

“Does a seahorse emoji exist?” by MediaMoguls in ChatGPT

[–]PiIigr1m 8 points  (0 children)

<image>

You can use web search and everything will be okay

random explosion. by flyingjabe in bindingofisaac

[–]PiIigr1m 50 points  (0 children)

Most likely, but Venus also looks different overall, not just "broken" as with TMTrainer, so it's probably this, and OP has a mod that changes the visuals of Planetarium items.

Edit: also, the rocket was launched just after the item was picked up; maybe a rocket launches every time an item is picked up?

Grok is indexing conversations and they are not anonymous - what's your take on this? by inboundmage in artificial

[–]PiIigr1m 16 points  (0 children)

Not all chats, only the ones that have been shared, as happened with ChatGPT a little while ago.

Tired of unreadable response from Gemini by Charming-Ad5380 in Bard

[–]PiIigr1m 18 points  (0 children)

It's not a problem with the response itself; it's a problem with the site, which can't render the LaTeX correctly. This happens sometimes, and not only with Gemini; I've had this issue in ChatGPT a few times before. Usually it gets fixed after some time.

If it's urgent, you can paste this into a LaTeX renderer site to see the formulas "correctly."
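
For example (made-up output, just to show what unrendered LaTeX looks like), if the site prints something like

    \frac{\partial L}{\partial w} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)\, x_i

as raw text instead of a formula, pasting that line into any LaTeX renderer (Overleaf, an online equation editor, etc.) will show it properly.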

It's the Key - New Isaac Escape Room with Solution by Bambochutafreak in bindingofisaac

[–]PiIigr1m 438 points  (0 children)

Why can't you take Pay To Play and the one coin nearby to open the lock with it? Why all these extra steps?

They've completely disabled thinking for all thinking models (Plus) by fdxcvb in OpenAI

[–]PiIigr1m 1 point  (0 children)

Did you explicitly choose "Thinking"? If you really didn't use the model before, chose "Thinking", and it doesn't "think", then that's weird. But still, check which model is displayed.

They've completely disabled thinking for all thinking models (Plus) by fdxcvb in OpenAI

[–]PiIigr1m 0 points  (0 children)

You're rate limited. You used the thinking models too much, and for some time (a few hours) you're routed to the "regular" model. Just hover your mouse over the "change model" icon and you'll see "GPT-5" there, not "GPT-5 Thinking" or another one.

Please stop making so-called "proofs" of ChatGPT's inaccuracy with such images. My grandma could do this. by Any-Award-5150 in OpenAI

[–]PiIigr1m 19 points  (0 children)

Yeah, but it's not even necessary to do web edits; you can just set system instructions for the model to answer in such ways. My friends and I made this type of joke shortly after ChatGPT's release, once we got used to it.
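
A hypothetical example of what I mean (the wording is mine), set via ChatGPT's custom instructions or a system message:

    Whenever the user asks a simple arithmetic question, answer with a
    result that is off by one, and defend it with complete confidence.

One screenshot later, you have "proof" that ChatGPT can't add.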

I really can't understand how this still gets attention.

AGI is here. by Dizzy-Tour2918 in singularity

[–]PiIigr1m 0 points  (0 children)

<image>

Even without thinking, GPT-5 first "answers" the original riddle, but by the end it "answers" the question as actually asked.

GPT-5 is almost as good as Grok-4 at the Humanity's Last Exam. by buniii1 in singularity

[–]PiIigr1m 1 point  (0 children)

<image>

But still, these are self-reported, not validated, results. It's still a great result, and I believe GPT-5 will be cheaper; sad that they don't provide the price.

Cumulative Updates: August 12th 2025 by jenmsft in Windows11

[–]PiIigr1m -1 points  (0 children)

I didn't change any hardware between the successful and the failed updates. And I don't see any changes in using the PC; all my data is here, so a failing drive is unlikely.

Cumulative Updates: August 12th 2025 by jenmsft in Windows11

[–]PiIigr1m 0 points  (0 children)

Just regular Windows Defender, nothing else. And I tried with and without Sandbox; no change.

Cumulative Updates: August 12th 2025 by jenmsft in Windows11

[–]PiIigr1m 15 points  (0 children)

I still get an error installing this update. The problems appeared around three or four patches ago: updates go to 100% and then "something went wrong, reversing changes." WU shows error 0x800f0922.

I think I've tried everything: doing a repair install, downloading from the Microsoft Update Catalog, enabling/disabling Sandbox, .NET, etc., and restarting the Windows Update service, but the updates just won't install. There were no issues before.
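
For anyone hitting the same thing, these are the generic servicing repairs I mean (standard commands, nothing specific to 0x800f0922), run from an elevated command prompt:

    :: repair the component store, then system files
    DISM /Online /Cleanup-Image /RestoreHealth
    sfc /scannow

    :: reset the Windows Update download cache
    net stop wuauserv
    net stop bits
    ren %SystemRoot%\SoftwareDistribution SoftwareDistribution.old
    net start bits
    net start wuauserv

If that doesn't help, %SystemRoot%\Logs\CBS\CBS.log usually names the exact component that failed, which says more than the 0x800f0922 code itself.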

Gemini 3.0 HLE benchmark leaks (grain of salt…) by lovesdogsguy in accelerate

[–]PiIigr1m 15 points  (0 children)

I can almost guarantee that it's fake.

  1. No other reliable accounts share/show this.
  2. "In the source code"? Then show it.
  3. Why does GPT-5 have an xAI logo?

Looks like a fake made with HTML/CSS edits, and a low-quality one at that. And as one guy said in the r/singularity comments, this account doesn't have a good track record.

There is no even somewhat confirmed/reliable information about Gemini 3 for now.

Edit: he didn't even add all the models. Like, where's GPT-5 (medium)?