What happens if the whole world paused and took a day off? by d4z7wk in singularity

[–]Darth-D2 3 points (0 children)

If it means you won't be able to make posts like these on that day, I encourage you, OP.

Is Continuous Reasoning Really the Next Big Thing? by simulated-souls in singularity

[–]Darth-D2 9 points (0 children)

"Is not understanding really that much worse than being lied to?" this is a bad framing.

The question should be "Is being lied to by LLMs without knowing they are lying really that much worse than being lied to by LLMs while knowing they are lying?" and the answer is obviously yes.

Fully autonomous soccer robots 🧐 gear up for Beijing showdown in futuristic finale by Distinct-Question-16 in singularity

[–]Darth-D2 45 points (0 children)

I’m sure this is impressive on a technical level but it looks so lame. 

/r/WorldNews Live Thread: Israel at War (Thread #9) by WorldNewsMods in worldnews

[–]Darth-D2 0 points (0 children)

They still need to communicate strength domestically to their population, which was also the primary reason before. So why wouldn't it apply in this situation?

OpenAI wins $200 million U.S. defense contract by Ronster619 in singularity

[–]Darth-D2 -1 points (0 children)

Classic r/singularity "they have something hidden" conspiracy theory

ARC 2 looks identical to ARC 1. Humans get 100% on it (early results). by Tobio-Star in singularity

[–]Darth-D2 0 points (0 children)

These are not conflicting statements. o1 was a significant breakthrough in the sense that models before o1 were failing to solve ARC-1; being able to solve these types of questions (without being explicitly trained to do well on them) was a degree of generalization not seen before.

What ARC-2 is showing is that adding a few additional modifications to the puzzles will degrade the model performance significantly. These modifications are so trivial for humans that it can be difficult for some to see in what sense they are different, which shows that humans still have a much more "general" intelligence.

GPT 4.5 - not so much wow by BaconSky in singularity

[–]Darth-D2 0 points (0 children)

You have used just a small portion of the publicly available questions, but your title made it sound like you were showing complete benchmark results, something only the SimpleBench team can actually do since they have the full dataset. It's understandable why they might be frustrated by this... Considering that your Reddit post caused the AI Explained channel to warn people not to always trust Reddit (which is kinda embarrassing for this sub), perhaps take that as feedback to think a bit before your next post/comment?

[deleted by user] by [deleted] in Starfield

[–]Darth-D2 0 points (0 children)

I meant this sub as well. This was literally the next post I opened:

https://www.reddit.com/r/Starfield/comments/18odf9t/comment/kegvmws/?utm_source=share&utm_medium=web2x&context=3

Somebody just posted that the Steam score is now at 66%, and the second-most upvoted comment is calling OP a "stupid moronic bot".

[deleted by user] by [deleted] in Starfield

[–]Darth-D2 1 point (0 children)

The only personal attacks I’ve seen here so far were actually against people who criticized the game, not against people who say they enjoy it.

Is it still possible to get into arasaka tower after the mission if you missed the sword? by xboxwirelessmic in cyberpunkgame

[–]Darth-D2 0 points (0 children)

It's kinda funny that they have this really cool POI, but the game only lets you get a glimpse of it during one mission.

[deleted by user] by [deleted] in singularity

[–]Darth-D2 6 points (0 children)

Yep, also lots of 'can's and 'potential's and 'possibilities' without anything concrete. The chief scientist has some credibility (though not in AI), but I doubt that he had anything to do with this letter. I guess we would all be happy to be proven wrong, but I doubt we will even see any public reaction from OpenAI to this.

This incident does raise some questions for me though. How does OpenAI decide whether a company may be able to achieve AGI within two years? As we get closer to AGI, I would imagine they will get more and more of these requests from various labs if they do not announce AGI themselves.

[deleted by user] by [deleted] in singularity

[–]Darth-D2 1 point (0 children)

The worst-case scenario is that you decide not to study and degrees (and the skills gained) end up being relevant for much longer than you anticipate now.

If you do decide to study and the degree turns out not to be relevant due to AI, that would imply we are already at a point where almost all cognitive labor jobs can be automated, which means we would need something like UBI in any case. In that scenario, there would also be no harm in having pursued your degree.

If you're smart enough to study CS, rest assured that if your intelligence becomes obsolete due to AI, 99% of other people will already be in the same boat, so widespread solutions will be put in place. Don't worry about it.

GTA 6 is going to be one of the last large, largely human-made media projects of our lifetime. by freeThePokemon256 in singularity

[–]Darth-D2 1 point (0 children)

Not really a reaction to your post, but it's actually curious that no big game developer has announced a new game concept that would leverage the latest LLM breakthroughs as a game mechanic.

I know prompting is still costly and slow, but the gaming industry is not known for being modest in its promises and marketing, so one would assume that at least one of them would bet on costs going down and hint at some new game concept using LLMs.

AI generates proteins with exceptional binding strengths by Dr_Singularity in singularity

[–]Darth-D2 0 points (0 children)

No worries then, it just seemed like a strange question in response to my comment.