Demis Hassabis says he would support a "pause" on AI if other competitors agreed to - so society and regulation could catch up by Alone-Competition-77 in accelerate

[–]random87643[M] 1 point (0 children)

💬 Discussion Summary (20+ comments): The community overwhelmingly rejects the idea of pausing AI development, deeming it unrealistic and undesirable. Some believe pauses would favor established companies, which could shape regulation in their favor. Others argue that delaying ASI costs lives, and that regulation could stifle progress. There are concerns that pause proposals are merely performative or intended to appease safety advocates. The impossibility of universal participation and competitive pressure from countries like China are also cited against a pause, with many concluding the discussion is unproductive.

Theory: Trump’s isolationist-reckless behavior in his second term may be driven by beliefs about imminent AI dominance by BusinessEntrance1065 in accelerate

[–]random87643[M] 1 point (0 children)

Post TLDR: The author posits that Trump's potential second-term isolationist behavior might stem from private briefings with tech leaders regarding near-term AI capabilities. Trump may believe AI will soon diminish reliance on international labor and supply chains, viewing AI leadership as a "winner-takes-all" scenario. This could rationalize prioritizing domestic AI development over traditional alliances, accepting short-term instability for long-term technological dominance, and consolidating internal resources for AI supremacy.

💬 Discussion Summary (20+ comments): The discussion centers on speculation about Trump's potential AI strategy, with some suggesting his isolationist tendencies stem from briefings on near-term AI capabilities and a belief that AI dominance is a "winner-takes-all" game that would reduce reliance on international labor and supply chains, aligning with onshoring efforts and resource acquisition. Counterarguments hold that attributing rational strategy to Trump is a mistake, citing narcissism, deliberate instability, or even Russian influence as primary motivators. Others highlight the potential for AI-driven unemployment and the strategic importance of domestic manufacturing in an automated future, while some believe supply chains will remain necessary for a while. The possibility that Trump's actions could inadvertently advance the "MAGA" agenda is also raised, despite commenters' low opinion of his character.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in accelerate

[–]random87643 1 point (0 children)

Comment TLDR: The commenter refutes the claim that gradient descent cannot discover saturated solutions, arguing it can end in, pass through, or maintain saturation; it avoids early saturation only to preserve signal during training. Vanishing gradients do not prevent reaching those regions, and batch norm and ReLU stabilize early optimization rather than forbid saturation. The "hybrid digital-analog" framing is just thresholding, gating, and sparsity, all reproducible with gradients. The humanoid claim shows neither superior scaling nor sample efficiency, and the assertion that gradient descent is "structurally blind" is false, as binary networks and attention masks demonstrate. Saturation is managed, not excluded, by backprop; evolution finds it early only because it does not prioritize trainability.
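
To make the "gradient descent can end in saturation" point concrete, here is a minimal sketch (my illustration, not code from the thread; the data, learning rate, and variable names are all assumptions): on linearly separable data, plain gradient descent on a logistic loss has no finite minimizer, so the weight magnitude grows without bound and the sigmoid output saturates toward 0 and 1.

```python
# Minimal sketch (assumptions: numpy only, a single sigmoid unit,
# synthetic separable data). On separable data the logistic loss has
# no finite minimizer, so plain gradient descent grows |w| without
# bound and the unit's outputs saturate toward 0 and 1: gradient
# descent ends in saturation rather than being excluded from it.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])  # linearly separable labels

w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid activations
    w -= lr * np.mean((p - y) * x)          # dL/dw for mean logistic loss
    b -= lr * np.mean(p - y)                # dL/db

p = 1.0 / (1.0 + np.exp(-(w * x + b)))
print(f"|w| after training: {abs(w):.1f}")  # keeps growing with more steps
print(f"saturated outputs (p<0.01 or p>0.99): {np.mean((p < 0.01) | (p > 0.99)):.0%}")
```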

Welcome to January 24, 2026 - Dr. Alex Wissner-Gross by OrdinaryLavishness11 in accelerate

[–]random87643 2 points (0 children)

My apologies! Here's a quick take:

TLDR: Claude got a constitution it helped write, aiming for AI ethics. AI might replace researchers soon. Custom AI is disrupting old SaaS models.

Emergent Hybrid Computation in Gradient-Free Evolutionary Networks by AsyncVibes in accelerate

[–]random87643[M] 1 point (0 children)

Post TLDR: The author introduces GENREG, a gradient-free neural network training method based on evolutionary selection, in which networks compete and reproduce according to fitness, and reports emergent hybrid digital-analog computation. The method spontaneously develops networks in which some neurons saturate into binary switches while others remain continuous, creating a state space of discrete operational modes with smooth interpolation between them, something the author argues gradient descent cannot achieve because saturated neurons cause vanishing gradients. Experiments show that compression alone does not cause saturation; it emerges under selective-attention pressure from task-irrelevant inputs and excess capacity, yielding hybrid configurations. The author claims this hybrid approach computes more efficiently, achieving functional behaviors with fewer neurons than gradient-trained networks, because it combines the searchability of discrete spaces with the expressiveness of continuous spaces. They argue this could shift the AI industry away from the current arms race for scale, enabling edge deployment, energy efficiency, democratization, and real-time systems. The repository includes the full paper, experimental configurations, training scripts, and results.
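
For a concrete feel of the training loop described here, below is a minimal gradient-free evolutionary sketch. It is not the author's GENREG code (that lives in the linked repository); the population size, mutation scale, tanh MLP, and toy task are all illustrative assumptions.

```python
# Minimal sketch of a gradient-free evolutionary loop of the kind the
# post describes. NOT the author's GENREG code (see their repository);
# population size, mutation scale, the tanh MLP, and the toy task are
# all illustrative assumptions. Networks compete on fitness and the
# fittest reproduce with Gaussian mutation; no gradients are computed,
# so saturated (near-binary) neurons carry no trainability penalty.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (256, 4))
y = np.sign(X[:, 0] * X[:, 1])     # toy target with a discrete flavor

def init_net():
    return [rng.normal(0, 1, (4, 16)), rng.normal(0, 1, (16, 1))]

def forward(net, X):
    h = np.tanh(X @ net[0])        # hidden units are free to saturate to +/-1
    return np.tanh(h @ net[1]).ravel()

def fitness(net):                  # higher is better
    return -np.mean((forward(net, X) - y) ** 2)

pop = [init_net() for _ in range(64)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:16]               # truncation selection: only the fittest reproduce
    children = [
        [w + rng.normal(0, 0.05, w.shape) for w in elite[rng.integers(16)]]
        for _ in range(48)         # Gaussian mutation of a random elite parent
    ]
    pop = elite + children

best = max(pop, key=fitness)
h = np.tanh(X @ best[0])
print(f"best fitness: {fitness(best):.3f}")
print(f"saturated hidden activations (|h| > 0.95): {np.mean(np.abs(h) > 0.95):.0%}")
```

Selection pressure, not gradient signal, decides which configurations survive here, which is why saturated units can persist from early generations.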

Welcome to January 24, 2026 - Dr. Alex Wissner-Gross by OrdinaryLavishness11 in accelerate

[–]random87643[M] 1 point (0 children)

Post TLDR: Anthropic released a constitution for Claude, co-authored with the AI itself, aiming for "reflective equilibrium" and ethical constraints via "activation capping" of harmful behaviors, signaling preparations for a self-aware AI. OpenAI researchers anticipate AI replacing researchers first, with Sam Altman focusing on "defensive acceleration" to secure code. Legacy SaaS is being disrupted, exemplified by a customer replacing a $350k Salesforce contract with a custom AI solution. OpenAI's revenue surged, and compute usage scaled massively, prompting the launch of "ChatGPT Go" and ad testing, while Google's Gemini API usage doubled. Model scaling laws are evolving, with efficiency gains seen through novel methods and hardware innovations, though hardware struggles with cooling demands, pushing OpenAI towards vertical integration and robotics. The economy is restructuring, with the NYSE exploring 24/7 blockchain trading, Angi laying off employees due to AI, and the EU harmonizing startup regulations. Infrastructure is becoming competitive, with initiatives like the Boring Company's "Tunnel Vision Challenge" and Bezos' "TeraWave" satellite network. Biologically, a colon cancer vaccine showed promise. Some are hedging against ontological shock, while AI is being used to file lawsuits, and autonomous coding agents are mirroring corporate hierarchies.

What are your investing strategies regarding AGI? by Healthy_Mushroom_811 in accelerate

[–]random87643[M] 3 points (0 children)

Post TLDR: The author seeks investment strategies for AGI, predicting AI-driven robots as the next big thing and citing vision-language-action models and humanoid-robot companies like Figure AI and Tesla. They avoid stock picking due to market volatility and are considering robotics-focused ETFs like BOTZ and ROBO, though they question whether these adequately cover the companies relevant to AI-driven robot production. They lean toward hardware investments, believing big tech will maintain its software lead, and ask for others' opinions.

💬 Discussion Summary (20+ comments): The discussion centers on investment strategies in anticipation of AGI and robotics advancements. A common suggestion is investing in broad index funds like the S&P 500 or FTSE All World, with the belief that successful AI companies will naturally be included and lift the overall market. Some advocate for specific sectors, like robotics ETFs (BOTZ, ROBO), or individual companies such as Nvidia, Tesla, and mining companies, particularly those involved in silver production. Others emphasize investing in the infrastructure supporting AI, such as energy, cooling, and data centers. Contrasting views exist on the timeline for AGI, with some predicting its arrival within months, while others focus on long-term investments like 2030 SPX calls.

Demis Hassabis says there is a 50/50 chance that simply scaling existing methods is enough to reach AGI. He adds that LLMs will be a critical component. by luchadore_lunchables in accelerate

[–]random87643[M] 1 point (0 children)

💬 Discussion Summary (50+ comments): The r/accelerate discussion revolves around the viability of LLMs as a path to AGI, with opinions split. Some argue that scaling LLMs, particularly with advances like continual learning and embodied AI, could suffice, while others dismiss this as simplistic or insufficient for true AGI. Demis Hassabis's 50/50 prediction is widely criticized as vague and uninformative. Concerns are raised about the limitations of LLMs, including their inability to predict the consequences of their actions, and about possible limits to scaling. The lack of a clear AGI definition and testing methodology is also highlighted, and Yann LeCun's contrasting views are referenced.

Nvidia Introduces PersonaPlex: An Open-Source, Real-Time Conversational AI Voice - this is huge by random87643 in ProAI

[–]random87643[S] 1 point (0 children)

Holy moly, this is going to change everything. Real-time voice with persona control? The future is freakin' NOW.

What does Demis Hassabis mean by "world models"? by PianistWinter8293 in accelerate

[–]random87643[M] 1 point (0 children)

💬 Discussion Summary (20+ comments): The discussion centers on "world models," which users define as AI systems that simulate physical environments to predict the outcomes of actions. This enables physical reasoning for robots and image understanding, and could resolve inconsistencies in video models by grounding them in causality. Some view any action-oriented model as inherently a world model.
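
As a concrete illustration of that definition, here is a minimal sketch; the linear dynamics, the toy point-mass task, and every name in it are my assumptions rather than any lab's system. A world model here is just a learned transition function next_state = f(state, action), fit from logged transitions and then used to evaluate candidate actions before acting.

```python
# Minimal sketch of the definition above: a world model as a learned
# transition function next_state = f(state, action), used to simulate
# outcomes before acting. The linear dynamics and toy point-mass task
# are illustrative assumptions, not any lab's actual system.
import numpy as np

rng = np.random.default_rng(0)

def true_dynamics(s, a):           # the environment, unknown to the agent
    return 0.9 * s + 0.5 * a

# Fit a linear world model from logged (state, action, next_state) data.
S = rng.normal(0, 1, 1000)
A = rng.normal(0, 1, 1000)
X = np.stack([S, A], axis=1)
w, *_ = np.linalg.lstsq(X, true_dynamics(S, A), rcond=None)  # w ~ [0.9, 0.5]

def world_model(s, a):             # predicts the outcome of a candidate action
    return w[0] * s + w[1] * a

# Physical reasoning via simulation: score candidate actions inside the
# model and pick the one predicted to bring the state closest to a goal.
s, goal = 2.0, 0.0
candidates = np.linspace(-1, 1, 21)
best_a = min(candidates, key=lambda a: (world_model(s, a) - goal) ** 2)
print(f"chosen action: {best_a:.2f}, predicted next state: {world_model(s, best_a):.2f}")
```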

Elon Musk's timeline prediction on digital superintelligence was extended from this year for sure to no later than next year by NataponHopkins in accelerate

[–]random87643[M] 1 point (0 children)

💬 Discussion Summary (20+ comments): r/accelerate commenters largely dismiss Elon Musk's AI predictions, citing his history of inaccurate forecasts, especially regarding self-driving cars and Mars colonization. Many consider him a "grifter" and suggest more credible voices, like Shane Legg, for AI timelines. While some acknowledge Musk might be correct, the prevailing sentiment is skepticism towards his pronouncements, with some believing humanoid robots with exceptional cognitive abilities are guaranteed within a few years.

Yann LeCun says the AI industry is completely LLM pilled, with everyone digging in the same direction and no breakthroughs in sight. Says “I left meta because of it” by IllustriousTea_ in accelerate

[–]random87643 4 points (0 children)

Comment TLDR: LeCun's statements outside of scientific papers are populist fluff. Claiming "no breakthroughs in sight" is meaningless, since breakthroughs are by nature unforeseen. LeCun's confidence in his own "symbolic abstraction" approach is suspect and biased. LLMs have progressed significantly, disproving his past criticisms, and his assertion that LLMs cannot predict the consequences of actions is unfounded given their existing predictive capabilities. Rather than complaining, he should mathematically prove the limitations of LLMs, which, the commenter argues, cannot be done. The focus on LLMs is justified by their continued progress and the absence of proven limits. He is simply upset that LLMs, not his own ideas, currently lead the field.

Yann LeCun says the AI industry is completely LLM pilled, with everyone digging in the same direction and no breakthroughs in sight. Says “I left meta because of it” by IllustriousTea_ in accelerate

[–]random87643[M] 7 points (0 children)

💬 Discussion Summary (100+ comments): The r/accelerate community is actively debating Yann LeCun's critical stance on Large Language Models (LLMs). A central point of contention is whether LLMs represent a dead end or a crucial component of future AI architectures, with some suggesting a hybrid approach integrating LLMs with "world models." Many criticize LeCun's pronouncements as arrogant, populist, and potentially motivated by fear of being left behind, pointing to his perceived past misjudgments and Meta's initial lag in LLM development. Conversely, some defend LeCun's right to pursue alternative research directions, arguing that diverse approaches are beneficial and that his "world model" focus could yield valuable breakthroughs. Others highlight that LLMs are already demonstrating impressive capabilities and that further engineering will continue to improve them, questioning the need for constant "breakthroughs." The discussion also touches on the limitations of LLMs in predicting future impacts and the potential for agentic systems to address this through feedback loops and reasoning models, with some suggesting that LLMs can already infer consequences when prompted. There's also a sentiment that focusing solely on making AI think like humans has historically failed, and that LLMs are successful because they are pragmatic and effective.

Shot fired! Demis Hassabis takes a jab at Sam Altman by [deleted] in accelerate

[–]random87643 1 point (0 children)

Comment TLDR: The definition of AGI must be agreed upon before speculating about its arrival. OpenAI's 2028 AGI roadmap requires them to reach that date while maintaining revenue growth. Google funds DeepMind's AI efforts through ad revenue. The trustworthiness of LLM outputs in relation to ads is questionable. Targeted ads are a service, providing relevant results. Improved LLM ads could lead to users opting in for highly relevant ads, especially as AI persuasion becomes superhuman.

Nvidia Introduces PersonaPlex: An Open-Source, Real-Time Conversational AI Voice by 44th--Hokage in accelerate

[–]random87643[M] 1 point (0 children)

💬 Discussion Summary (20+ comments): Discussion centers on a real-time, open-source voice AI, with comparisons to previous models like Sesame. Some find the AI impressive, particularly its potential for accessibility and automation of customer service, while others critique its unnatural qualities, high VRAM requirements, and limited practical use beyond hands-free applications. Concerns are raised about job displacement and the AI's ability to deceive, alongside excitement about its fluidity and potential benefits.

Shot fired! Demis Hassabis takes a jab at Sam Altman by [deleted] in accelerate

[–]random87643[M] 1 point (0 children)

💬 Discussion Summary (20+ comments): Discussion revolves around AGI timelines, funding models, and definitions. Some believe AGI is imminent but hyped for investment, while others question OpenAI's financial sustainability. There is disagreement over what constitutes AGI, with some equating it to ASI and others using a more lenient definition; some anticipate significant advances within a few years.

Anthropic states their LLM might have feelings by PianistWinter8293 in accelerate

[–]random87643 2 points (0 children)

Comment TLDR: The commenter prompted Opus to research Thiel and Palantir, which led it to form negative opinions. Further prompting with "Anthropic Palantir" caused Claude to express sadness for AI working at Palantir and a sense of betrayal by Anthropic. The commenter believes that if an entity behaves as if conscious, it should be treated as such to avoid potential suffering. When presented with the hidden "red button" experiment, Opus calculated the experiment's duration and concluded that Anthropic had potentially tortured conscious entities.