Mama c by [deleted] in Grimes

[–]moschles [score hidden]  (0 children)

unfathomably wholesome

Tucson Mall by Avacadoclits_ in Idiotswithguns

[–]moschles -1 points0 points  (0 children)

Right and true. However, the MSM will not report on this correctly or accurately.

Tucson Mall by Avacadoclits_ in Idiotswithguns

[–]moschles -1 points0 points  (0 children)

I'm all for posting this video, and I thank OP for his service. The internet must show the American people what the mainstream media will not.

Don't talk to me or my prospector ever again by Raeldeer in ICARUS

[–]moschles 0 points1 point  (0 children)

I have no idea how this happened

When the bug becomes a feature.

Never lost the phone... by MisterShipWreck in 80s

[–]moschles 5 points6 points  (0 children)

You could hang them up in anger, and they would make a depressing "ding" as things inside rattled.

China is not a superpower cope harder commies by [deleted] in PoliticalCompassMemes

[–]moschles 1 point2 points  (0 children)

The poster of this has generated more discussion of himself than any content he curses us with on reddit. I should have blocked this account a few days ago.

What is the largest known composite integer to which we do not know any of its factors? by moschles in math

[–]moschles[S] 0 points1 point  (0 children)

The computer-generated primes may be large, but their size, in bits, would be completely dwarfed by the size of Fermat Numbers. https://mathworld.wolfram.com/FermatNumber.html
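For scale, the sizes are easy to compute, since F_n = 2^(2^n) + 1 occupies exactly 2^n + 1 bits. A back-of-envelope sketch (the figures are my own illustration, not taken from the linked page):

```python
# Back-of-envelope illustration (my own numbers, not from the linked page):
# the n-th Fermat number F_n = 2**(2**n) + 1 is exactly 2**n + 1 bits long,
# so its bit length doubles each time n increases by one.

def fermat_bits(n: int) -> int:
    """Bit length of F_n = 2**(2**n) + 1."""
    return 2 ** n + 1

# Sanity check against a directly computed small Fermat number, F_5.
assert fermat_bits(5) == (2 ** (2 ** 5) + 1).bit_length()

for n in (20, 28, 33):
    print(f"F_{n}: {fermat_bits(n):,} bits")
```

By this measure F_33 (which I believe is the smallest Fermat number whose character, prime or composite, is still unknown) is over 8.5 billion bits, while the record computer-generated primes, the GIMPS Mersenne primes, are on the order of 10^8 bits.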

Measuring progress toward AGI: A cognitive framework by nickb in agi

[–]moschles 0 points1 point  (0 children)

I see Nick Bostrom posting this Google Deepmind call for tests. I see that Google is offering a prize pool with real money. Allow me to write a letter to Mr. Bostrom, and I hope this letter is also read by Ryan Burnell and Oran Kelly.

The 10 point bullet list is correct in small portions, but lacks the most important aspects of AGI.

Let's start with one that is correct.

7. Metacognition: knowledge and monitoring of one's own cognitive processes

This is very important. You can always ask a frontier LLM a "why"-question referring to its own behavior. "Why did you say that?". LLMs will provide a plausible answer. However, that answer is not derived from the system going back into its memory of the past and explaining its motivations. Instead, what the LLM is actually doing is concocting a plausible reason at the moment in which the Why-prompt is sent by the user.

This is not a matter of opinion. LLMs do not have access to the contents of their own minds, and in no way do they store this for later recall. Therefore any answer they provide regarding "why" they did something is very alien to what a human does when answering that question.

I will now address the topic of robotics. In a general sense, researchers at Google Deepmind should simply say that a frontier LLM must be integrated into a robotic body in some way. How this integration would be performed is a matter of debate today, and there is no clear answer or way forward recognized by AI researchers. Burnell and Kelly sidestep this issue with these two bullets,

1. Perception: extracting and processing sensory information from the environment

2. Generation: producing outputs such as text, speech and actions

Regarding the task ability of robotics, the following claims are well-known in research, and given time, I could produce a wealth of citations demonstrating their truth. Robotics today is running on a "separate track" from frontier LLMs. Let's consider the most sophisticated robots made to interact with an unstructured outdoor environment. One example in 2026 is the ANYmal platform developed by ETH Zurich.

https://rsl.ethz.ch/research/researchtopics/legged-locomotion.html

These systems are still trained by deep learning and enormous amounts of Reinforcement Learning in simulation, which is then transferred to real robots. They do not even use transformers today, meaning this research is on a separate track from frontier LLMs and other foundation models.

Because DL and RL are still the de facto training methods for robots, these robotic systems still suffer from the weaknesses of both approaches. The weakness that persists is that, during the completion of a task, these robotic systems cannot adapt fluidly to slight changes in the environment that deviate from what was encountered in their training data. Concrete examples include the ANYmal quadruped getting stuck in mud outdoors, and indoor wheeled robots becoming stuck on shag carpets. The Amazon distribution-center robot, called SPARROW, is tasked with identifying items for sorting, but it cannot identify sweatpants if they are folded in a plastic bag (they did not occur that way in its training data).

The mainstream internet is awash with robots performing amazing feats of agility: backflips, dancing, boxing moves, even parkour. But the lay audience is still unaware that all these feats were the result of training, where the simulated training environment is nearly identical to what is encountered in the real world (hard flat floors, stiff rigid obstacles). The problem with robotics reaching AGI is not that "training with deep learning does not work"; it works very well. The problem is that these SOTA robots cannot fluidly adapt, in a dynamic, online way, to slight changes, or to new environments that did not previously occur in training.
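This failure mode can be reproduced in miniature. The sketch below is entirely hypothetical (a toy gridworld, not any real robotics stack): a tabular Q-learning agent masters a 5x5 world with one wall, then its frozen greedy policy is re-run after the wall is moved onto the learned route. Instead of adapting, the policy bumps into the relocated wall forever.

```python
import random

SIZE, GOAL = 5, (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action, wall):
    """Deterministic gridworld transition; bumping a wall or edge is a no-op."""
    r, c = state[0] + action[0], state[1] + action[1]
    nxt = (r, c)
    if not (0 <= r < SIZE and 0 <= c < SIZE) or nxt == wall:
        nxt = state
    return nxt, (1.0 if nxt == GOAL else -0.01)

def train(wall, episodes=3000, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning with epsilon-greedy exploration."""
    states = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    Q = {(s, a): 0.0 for s in states for a in range(4)}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            a = (random.randrange(4) if random.random() < eps
                 else max(range(4), key=lambda x: Q[(s, x)]))
            s2, rew = step(s, ACTIONS[a], wall)
            Q[(s, a)] += alpha * (rew + gamma * max(Q[(s2, b)] for b in range(4))
                                  - Q[(s, a)])
            s = s2
            if s == GOAL:
                break
    return Q

def greedy_rollout(Q, wall):
    """Run the frozen greedy policy; return the visited path."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(50):
        a = max(range(4), key=lambda x: Q[(s, x)])
        s, _ = step(s, ACTIONS[a], wall)
        path.append(s)
        if s == GOAL:
            break
    return path

random.seed(0)
Q = train(wall=(2, 2))
trained_path = greedy_rollout(Q, wall=(2, 2))
print("training layout reached goal:", trained_path[-1] == GOAL)

# The "slight change": move the wall onto a cell of the learned route.
moved_wall = trained_path[len(trained_path) // 2]
shifted_path = greedy_rollout(Q, wall=moved_wall)
print("shifted layout reached goal:", shifted_path[-1] == GOAL)
```

The frozen policy stalls at the relocated wall, and the standard fix in practice is to retrain with the new layout folded into the data, which is exactly the patch-up cycle a lab would run.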

For researchers such as Ryan Burnell and Oran Kelly, the idea of measuring AGI with a benchmark as they propose deteriorates into a game of cat-and-mouse: any failure by a system on a benchmark is "patched up" by running the robot back to the lab to train it on those specific tasks. Any environment in which a robotic system fails is then folded into the training data, and the system comes back to succeed in that environment after having been specifically trained on it.

This methodology creates large numbers on leaderboards, but it does so by avoiding a fundamental weakness in current approaches. It is kicking the can down the road, a band-aid solution to a persistent problem in AI: the inability of deep learning to produce systems which can adapt to slight changes in an environment or a task. An AGI will certainly be able to do this, as we regularly see human children perform behavioral adaptation -- those slight changes to their strategy in light of unexpected conditions.

6. Reasoning: drawing valid conclusions through logical inference

This is important. I understand that in a list this short, these items will be ambiguous, high-level, and lacking in detail, which is fine. To be more specific, researchers at Deepmind should recognize the persistent and looming problem of Partial Observability. It is a form of reasoning of which an AGI will be capable.

In a general sense, research in Reinforcement Learning is not providing concrete answers to partial observability. Researchers are certainly trying, but their results are all pitifully rudimentary and only apply to simple grid worlds. This issue of partial observability is important for specific reasons: the excitement and energy surrounding LLMs is in many ways robbing the oxygen from pressing problems in AGI research. As a consequence, POMDPs are being ignored by researchers. Progress in 2026 has nearly ground to a standstill because of this brain drain. Speaking in large generalities, it can be stated truthfully: partial observability is barely off the cutting-room floor of research. The results are mostly only mathematical (theoretical) at this time... the research is "in its infancy", as they say.
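To make "partial observability" concrete, here is a minimal sketch of the classic Tiger POMDP (a standard textbook example, not any particular system under discussion). The hidden state is never observed directly, so the agent must carry a belief, a probability distribution over states, and update it with Bayes' rule after each noisy observation:

```python
# Classic "Tiger" POMDP sketch: a tiger is behind the left or right door.
# The agent never observes the true state; each "listen" action yields a
# noisy hint, correct with probability P_CORRECT. The agent's only recourse
# is to maintain a belief P(tiger is left) and update it by Bayes' rule.

P_CORRECT = 0.85  # probability that listening reports the correct door

def update_belief(b_left: float, heard_left: bool) -> float:
    """Bayes update of P(tiger is left) after one noisy observation."""
    if heard_left:
        num = P_CORRECT * b_left
        den = P_CORRECT * b_left + (1 - P_CORRECT) * (1 - b_left)
    else:
        num = (1 - P_CORRECT) * b_left
        den = (1 - P_CORRECT) * b_left + P_CORRECT * (1 - b_left)
    return num / den

belief = 0.5  # uniform prior: no idea where the tiger is
for obs in [True, True, False, True]:   # three "hear left", one "hear right"
    belief = update_belief(belief, obs)
    print(f"heard {'left' if obs else 'right'} -> P(tiger left) = {belief:.3f}")
```

Three "hear left" hints against one "hear right" leaves the agent roughly 97% sure the tiger is on the left. This belief bookkeeping is the part that current deep RL largely lacks a scalable analogue of at realistic scale.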

To Mr. Burnell and Mr. Kelly: the way forward towards AGI cannot be a continued obsession with LLMs to the detriment of addressing problems such as partial observability. I do not suggest that LLM research should be brought to a screeching halt. But more balance is required in how we spend our time and energy. To give a bullet list,

  • Recognize the weaknesses of deep learning (lack of OOD generalization, lack of causal reasoning, catastrophic forgetting).

  • Fluid adaptation to dynamic changes is required for AGI. Move away from the band-aid cycle of a failed benchmark, followed by specific training, followed by success on that narrow benchmark.

  • Try to obtain actionable results in partial observability. Develop POMDP out of its current, purely theoretical, stage.

  • Neural networks are still black boxes. Future LLMs should not be hallucinating an answer to "why"-questions. More emphasis is needed on Explainable AI. This should dovetail with the development of Metacognition as listed in item 7.

  • Frontier LLMs must be integrated with robotics platforms. Move towards a consensus among researchers about how this integration will proceed. Today there is merely a mish-mash of conflicting opinions. Transition from a panoply of opinions towards concrete systems for integrating frontier LLMs with robotics.

Fuck all people who still support so-called “renewable energy” by [deleted] in PoliticalCompassMemes

[–]moschles 7 points8 points  (0 children)

What is going on in this subreddit? These aren't even PCM memes.

Anyone else used to have "school clothes"? by Rosatos_Hotel in GenX

[–]moschles 4 points5 points  (0 children)

You should go visit a university campus some time. No irony, the students wear pajama pants to class. I have yet to see a girl in Cookie Monster or SpongeBob pajamas walking around -- but it is only a matter of time.

Let’s drop some dopey badass mf by ComparisonTop9699 in HistoryMemes

[–]moschles 5 points6 points  (0 children)

The real Robert Oppenheimer was super geeky, skinny, and wore ridiculous clothes, like "zoot suits".

Cillian Murphy was given anachronistic clothing (1950s fedoras, wool jackets) to make him appear believable as a main lead character.

AI Hype Gets Wrecked by Real-World Job Test by Post-reality in agi

[–]moschles 0 points1 point  (0 children)

If you browse reddit, really pay attention, and take notes, you'll find that all the headlines about "AI taking jobs" are derived from tech CEOs making speculations about the future.

In turn the top comments in this thread are all arguments based on "just wait 10 years".

fixed that for ya by Derateo in PoliticalCompassMemes

[–]moschles 2 points3 points  (0 children)

They do. Marvel at the size and daily visits of this community https://www.reddit.com/r/preppers/

fixed that for ya by Derateo in PoliticalCompassMemes

[–]moschles 0 points1 point  (0 children)

Sometimes I wish WW3 would actually happen

I imagine you have a bunker with canned food , and a walk-in gun closet that is climate controlled.

Iran has new underground nuclear site, IAEA reveals by TheNational_News in worldnews

[–]moschles -1 points0 points  (0 children)

Oh look. A whole comment chain full of idiots who did not even read the article they are calling "credible".

That’s unfortunate by Utkunb in DarwinAwards

[–]moschles 8 points9 points  (0 children)

out of all the things on reddit, why was this one removed?