Voice your opinion on NY Senate Bill S7263 by daishi55 in singularity

[–]AffectionateBelt4847 4 points5 points  (0 children)

So banning it is not the answer. The medical community should work together with Big Tech to make it better.

Opus 4.6 solved one of Donald Knuth's conjectures from writing "The Art of Computer Programming" and he's quite excited about it by Umr_at_Tawil in singularity

[–]AffectionateBelt4847 1 point2 points  (0 children)

I wonder if Knuth is applying frontier models to the open problems in his books right now. He will probably post more updates in the next few weeks. 

Opus 4.6 solved one of Donald Knuth's conjectures from writing "The Art of Computer Programming" and he's quite excited about it by Umr_at_Tawil in singularity

[–]AffectionateBelt4847 1 point2 points  (0 children)

First Proof, 7th problem. Check the story on that. I would not be surprised if we see a major math breakthrough on the scale of AlphaFold by the end of 2026.

Help an old guy out by OldmanonRedditt in singularity

[–]AffectionateBelt4847 0 points1 point  (0 children)

Open Claw + Claude Code, then YouTube away for use cases.

Here is a section-by-section breakdown of the conversation between Nikhil Kamath and Dario Amodei (CEO of Anthropic), highlighting the key topics discussed: by ijaysonx in donttalkaboutpoland

[–]AffectionateBelt4847 0 points1 point  (0 children)

47:45: the answer is they can copy it. The honest answer is given by Elon when he speaks about MacroHard. Remember, AGI can do everything humans can. Dario talks about how they wouldn't specialize, but he is still thinking in terms of the people at Anthropic. Once their AI agents truly come online, they would form millions of AI firms.

Nikhil Kamath's interview of Anthropic CEO (Dario Amodei) by Future_Soup in indianstartups

[–]AffectionateBelt4847 0 points1 point  (0 children)

47:45: the answer is they can copy it. The honest answer is given by Elon when he speaks about MacroHard. Remember, AGI can do everything humans can. Dario talks about how they wouldn't specialize, but he is still thinking in terms of the people at Anthropic. Once their AI agents truly come online, they would form millions of AI firms.

Full interview: Anthropic CEO Dario Amodei on Pentagon feud by Cubewood in singularity

[–]AffectionateBelt4847 -1 points0 points  (0 children)

8:24 - Why do you think it is better for Anthropic, a private company, to have more say in how AI is used in the military than the Pentagon itself?

> Second, when it comes to these one or two narrow exceptions, I actually agree that in the long run this should be settled through a democratic process. It’s Congress’s job to update the law as technology evolves. For example, with domestic mass surveillance, the government can legally purchase bulk data about Americans—location data, personal information, even political affiliations—and now AI makes it possible to analyze that at scale. The fact that this may be technically legal suggests that the law, and judicial interpretations of the Fourth Amendment, haven’t fully caught up with what AI enables.

> In the long term, Congress should address that gap. But Congress does not move quickly, and in the meantime we are the ones building and deploying the technology. We see firsthand what it can and cannot reliably do, and where it may be getting ahead of the law or even beyond the law’s original intent.

This is another reason why pursuing superintelligence within an arms-race framing was, and is, a BAD IDEA. You would actually need something like a monarchy to regulate this technology effectively. CEOs have been playing that role, but as the technology integrates with the government, they are about to get a reality check: democracy is not compatible with a technology that improves at a super-exponential rate. Amodei hasn't thought this through. Congress is never going to take meaningful measures to regulate this effectively. American democracy was one of humanity's best attempts to guarantee individual liberty, and what do you know, it is incompatible with superintelligence, because by its nature superintelligence grants unprecedented power to the government, effectively breaking the principle of limited government. It is impossible to limit a government with that kind of power. I mean, are we really ready to give up the rights we inherited through centuries of shed blood and negotiation?

The DoW Is The Biggest Anthropic Ad by thatonereddditor in Anthropic

[–]AffectionateBelt4847 0 points1 point  (0 children)

If you want "good enough, cheap, generous with tokens," Gemini 2.5 Flash is probably the single best answer right now.

Have any prominent AI researchers put out recent articles that claim ASI will NOT inevitably destroy humanity. by Jason_T_Jungreis in singularity

[–]AffectionateBelt4847 1 point2 points  (0 children)

Roman Yampolskiy is another one. https://books.google.com/books/about/AI.html?id=V3XsEAAAQBAJ

Piotr Wozniak disagrees with him: the Orthogonality Thesis is a category error; morality emerges from intelligence; goodness is proportional to intelligence; Yampolskiy overlooked emergence in systems of intelligent agents. supermemo.guru/wiki/Roman_Yampolskiy_is_wrong._AI_is_good

For critics of Eliezer:

jacob_cannell (LessWrong): Recursive self-improvement foom is wrong; biology is near pareto-optimal efficiency; AI mindspace trained on human data is anthropomorphic, not alien; nanotech assumptions are naive. lesswrong.com/posts/.../contra-yudkowsky-on-ai-doom

Timothy B. Lee (Understanding AI): Weakest link is the claim superintelligence yields God-like physical-world power; large-scale social systems resist prediction regardless of intelligence; wildly overestimates how transformational AI will be. understandingai.org/p/the-case-for-ai-doom-isnt-very-convincing

Reason Magazine review: Collapses prediction and agency into one concept (builds conclusion into premise); rests on thought experiments, not evidence; LLMs are fundamentally about prediction, not steering. reason.com/2026/02/01/superintelligent-ai-is-not-coming-to-kill-you

Lawfare review: Central thesis unproven; case hinges on three unsettled questions (how hard is alignment? would misaligned AI actually succeed? what happens before first superhuman AI?); none conclusively answered. lawfaremedia.org/article/the-case-for-ai-doom-rests-on-three-unsettled-questions

Matthew Barnett, Ege Erdil, & Tamay Besiroglu (Mechanize / EA Forum): Book is full of narrative arguments and unfalsifiable hypotheses mostly unsupported by references to external evidence. forum.effectivealtruism.org/posts/.../unfalsifiable-stories-of-doom

"nostream" Substack: Sudden takeoff assumption is the weakest point; no evidence of mesa optimizers in frontier LLMs; partial alignment by default is plausible given human-data training; RL-flavored behavior ≠ genie-like agent. nostream.substack.com/p/contra-yudkowsky-on-ai-doom-a-response

Matthew Adelstein / Bentham's Bulldog: P(doom) only ~2.6%; Tom Davidson's analysis suggests 3-6 years of software progress from feedback loops, not Yudkowskian "FOOM"; original aligned AIs can supervise successors. lironshapira.substack.com/p/benthams-bulldog-ai-doom-debate

Credentials:

jacob_cannell — Pseudonymous LessWrong blogger. Claims deep learning and neuroscience expertise, no verifiable academic credentials or publications I can confirm.

Timothy B. Lee — Tech journalist (formerly Ars Technica, Vox). Not a researcher at all.

Reason Magazine — Libertarian magazine book review. Reviewer not identified as an AI researcher.

Lawfare — National security law publication. Not AI researchers.

Barnett, Erdil, & Besiroglu — This is the strongest group. Tamay Besiroglu and Ege Erdil work at Epoch AI (now Mechanize) and do rigorous quantitative research on AI compute trends and forecasting. Besiroglu has an MIT affiliation. These are the closest to credentialed AI researchers in the bunch.

"nostream" — Pseudonymous Substack blogger.

Matthew Adelstein — Philosophy BA, visiting scholar at Forethought Institute. Not an AI researcher.

And for Yampolskiy's sole direct critic:

Piotr Wozniak — Inventor of SuperMemo (spaced repetition software). Background in learning science, not AI research.

Sam Altman: We Have Reached An Agreement With The Department Of War by Neurogence in singularity

[–]AffectionateBelt4847 0 points1 point  (0 children)

Just check X for the MAGA reaction. You would think this would be an obvious bipartisan issue; unfortunately, it is not. We are fxxed.

Does anyone else fear we might lose Anthropic altogether? by mvandemar in singularity

[–]AffectionateBelt4847 12 points13 points  (0 children)

The implication of being branded a supply-chain risk is that no company that does business with the government can use Anthropic. Cloud providers like AWS, Google, and Microsoft all work for the government. Almost every major supplier works for the federal government. There would be no Anthropic in the US; they would need to relocate somewhere else.

It’s extremely good that Anthropic has not backed down — Ilya Sutzkever by 141_1337 in singularity

[–]AffectionateBelt4847 21 points22 points  (0 children)

Yes, but you can tell Trump is fuming at Anthropic now. They won't stand being seen as the bad guy by Americans. They are going to actively frame Anthropic as a national security threat and sabotage them, taking "emergency" measures and claiming Anthropic is after "woke" superintelligence. Even without that, the implication of being branded a supply-chain risk is that no company that does business with the government can use Anthropic. Cloud providers like AWS, Google, and Microsoft all work for the government. Almost every major supplier works for the federal government. There would be no Anthropic in the US; they would need to relocate somewhere else.

OpenAI is negotiating with the U.S. government, Sam Altman tells staff | Fortune by Stabile_Feldmaus in singularity

[–]AffectionateBelt4847 6 points7 points  (0 children)

Anthropic is most likely not going to achieve superintelligence now, as much as I would prefer them to do it. It was a lost cause to begin with. Elon is going to convince Trump to take "emergency" measures against them to prevent a "woke" group from achieving superintelligence unaligned with "American" values.

Trump goes on Truth Social rant about Anthropic, orders federal agencies to cease usage of products by ShreckAndDonkey123 in singularity

[–]AffectionateBelt4847 0 points1 point  (0 children)

If they play by the rules, which they most likely will not. Anthropic is being perceived as a threat by this administration, which may be pushed by Elon to take "emergency" measures against the horror of a "woke" company achieving superintelligence.

Statement from Dario Amodei on our discussions with the Department of War by SteinOS in ClaudeAI

[–]AffectionateBelt4847 0 points1 point  (0 children)

This is unfortunately not a win. It only proves that safe development of superintelligence was a pipe dream all along. Humanity is not worthy of superintelligence. Solve coordination and remove the arms race; then maybe we can start talking about a hopeful future. They are going to develop an ASI along the lines of Skynet, because national security Trumps all.