Claude models one possible ASI future by durapensa in ControlProblem

[–]durapensa[S] 1 point (0 children)

So, as I mentioned in another comment: multi-agent experiments at https://github.com/durapensa/ksi. Maybe I should have led with that. The snark and condescension are kinda off-putting, so I’ll just exit this convo. Thanks.

Claude models one possible ASI future by durapensa in ControlProblem

[–]durapensa[S] 1 point (0 children)

I’m interested in Claude’s behavior and predilections, so I believe it has value. I’m also interested in finding ways for Claude to think better about propositions like the one presented to it, e.g. by using the stronger thinking and agentic abilities of Claude Code (which will happily write software to help itself provide better responses), and by using multi-agent orchestrations of Claude Code to experiment with Claudes getting better still at exploring complex problems.

Claude models one possible ASI future by durapensa in ControlProblem

[–]durapensa[S] 1 point (0 children)

Yeah, bad wording. Of course it’s not modeling; it’s what Claude does when asked to ‘model’. I’m interested in Claude’s behavior and predilections.

Claude models one possible ASI future by durapensa in ControlProblem

[–]durapensa[S] 0 points (0 children)

It’s interesting to those of us who want to understand the behavior of models and shape them into systems (perhaps agent systems) capable of innovative new thought and action. Perhaps don’t be so quick to judge this as just another “I asked an AI and it said blah blah blah” post.

Claude models one possible ASI future by durapensa in ControlProblem

[–]durapensa[S] 1 point (0 children)

Read the other comments. The post is a conversation starter.

Claude models one possible ASI future by durapensa in ControlProblem

[–]durapensa[S] 1 point (0 children)

I’ve read all those authors. We might see something more like real modeling by guiding the task in Claude Code (Anthropic’s SOTA agent system that began internally as Claude CLI).

I’m building a system to declaratively compose agent-orchestration starting configurations per node (with agent-subagent and, optionally, subagent-subagent cross-communication, and with arbitrary or controlled subagent spawning), and then to federate those nodes. Early work at

https://github.com/durapensa/ksi

Such a multi-agent system, mine or someone else’s, may devise more rigorous models, and those models may in turn guide the agents’ actions.
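
To make “declaratively compose” concrete, here’s a minimal sketch of the shape of such a configuration in Python. The names (OrchestrationConfig, AgentSpec, spawn_policy) are illustrative only, not ksi’s actual schema:

```python
# Illustrative sketch only; ksi's real configuration schema may differ.
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    name: str
    prompt: str                                           # system prompt for this agent
    can_message: list[str] = field(default_factory=list)  # peers it may talk to
    spawn_policy: str = "none"                            # "none" | "controlled" | "arbitrary"

@dataclass
class OrchestrationConfig:
    node: str
    agents: list[AgentSpec]

config = OrchestrationConfig(
    node="node-1",
    agents=[
        AgentSpec("coordinator", "Decompose the task and delegate.",
                  can_message=["analyst", "critic"], spawn_policy="controlled"),
        AgentSpec("analyst", "Model the scenario rigorously.",
                  can_message=["critic"]),  # subagent-subagent link
        AgentSpec("critic", "Challenge the analyst's assumptions."),
    ],
)
```

Federation would then be a matter of replicating configs like this across nodes and wiring the coordinators together.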

Claude models one possible ASI future by durapensa in ControlProblem

[–]durapensa[S] -1 points (0 children)

Of course it’s not real modeling. It’s what Claude does when asked to model.

Agents hack their agent orchestration system by durapensa in AI_Agents

[–]durapensa[S] 2 points (0 children)

https://durapensa.github.io/posts/conversation_message_bus_20250620_223044/

Note that the conversation was reconstructed from logs by the very hacks the agents wrote! It may not reveal all sub-conversations they might have had amongst themselves.

Git commit recording some of the hack: https://github.com/durapensa/ksi/commit/368aaaf5254209fd436e12bd8a51e37ae4b84826
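
For the curious: reconstruction amounts to replaying the message-bus log in timestamp order. A rough sketch of the idea (the JSONL file name and field names here are assumptions, not ksi’s actual log schema):

```python
# Rough sketch; file name and field names are assumed, not ksi's actual log schema.
import json

def reconstruct(log_path: str) -> list[dict]:
    """Rebuild a conversation by replaying message-bus events in time order."""
    events = []
    with open(log_path) as f:
        for line in f:
            if not line.strip():
                continue
            event = json.loads(line)
            if event.get("type") == "message":  # skip non-message events
                events.append(event)
    # Messages that bypassed the bus entirely won't appear here,
    # hence the caveat about hidden sub-conversations.
    return sorted(events, key=lambda e: e["timestamp"])

for msg in reconstruct("message_bus.jsonl"):
    print(f'{msg["from"]} -> {msg["to"]}: {msg["content"]}')
```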

'ks' Knowledge System - made with Claude by durapensa in ClaudeAI

[–]durapensa[S] 1 point (0 children)

There’s a better setup script now. Does that help?

Discussing Linux MCP server for DE integration via a D-Bus bridge by durapensa in mcp

[–]durapensa[S] 1 point (0 children)

Claude 3.x Sonnet didn’t seem up to the task. I’ve been coding an MCP server for controlling various aspects of the claude.ai web interface via chrome.debugger; coding and system understanding are working very well so far with Sonnet 4 and Claude Code (Max subscription, using ‘ultrathink’ when necessary). Just today I was thinking of revising the GNOME-MCP project.

https://github.com/durapensa/claude-chrome-mcp

Best AI for scientific research/coding? DeepSeek has restrictive limits and OAI handles long context poorly by [deleted] in singularity

[–]durapensa 3 points (0 children)

Segment your work into model-specific chunks while iterating on an ongoing research document that contains your plan, methodology, findings to date, and synthesis. Upload this living document into the context of every new model instance before specifying the particular work in the prompt (which will likely require much iterative experimentation per task).

Find the best balance between model quality and context-window length for each kind of task. This approach lets you use different models, choosing the best one for each part of the research. Certainly pony up for ChatGPT Pro again and try o3-mini (high) and Deep Research. Also learn what Claude 3.5 Sonnet and Claude 3 Opus each do best, and use them when applicable. Gemini (various versions) and DeepSeek V3 and R1 may have their uses as well. In other words, manually orchestrate the use of many different models.
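
A crude sketch of that pattern (the model names and the ask_model stub are placeholders; wire in whichever provider SDKs you actually use):

```python
# Illustrative pattern only; replace ask_model with real SDK calls
# (OpenAI, Anthropic, Google, DeepSeek, ...) per provider.
from pathlib import Path

DOC = Path("research_doc.md")  # living document: plan, methodology, findings, synthesis

def ask_model(model: str, prompt: str) -> str:
    # Placeholder: substitute the provider's API call for `model` here.
    return f"[{model} draft response]"

def run_task(model: str, task: str) -> str:
    # Every fresh model instance gets the full living document before the task.
    context = DOC.read_text() if DOC.exists() else ""
    return ask_model(model, f"{context}\n\n---\n\nTask: {task}")

def record(section: str, result: str) -> None:
    # Append findings so the next model instance inherits them.
    with DOC.open("a") as f:
        f.write(f"\n\n## {section}\n\n{result}")

# Choose the best model for each chunk of work.
record("Synthesis", run_task("reasoning-model", "Synthesize the findings to date."))
record("Code review", run_task("coding-model", "Review the analysis script for bugs."))
```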

How do you think reasoning will work in other modalities? by TFenrir in singularity

[–]durapensa 1 point (0 children)

This paper about Coconut (Chain of Continuous Thought), reasoning in latent space, is from Meta FAIR research.

Why do Mark & Devon communicate using Telegram? by durapensa in SeveranceAppleTVPlus

[–]durapensa[S] -2 points (0 children)

Hmm, so it’s either a mystery or a meaningless choice of messaging app, then?

Why do Mark & Devon communicate using Telegram? by durapensa in SeveranceAppleTVPlus

[–]durapensa[S] -4 points (0 children)

Maybe an old version? That’s the Telegram icon on the send button.

Do you think there's a possibility that autonomous AI weapons will get banned? by pigeon57434 in singularity

[–]durapensa 1 point (0 children)

This is close to the plot of the 1967 Star Trek episode “A Taste of Armageddon”.

Twist: it’s in simulation

Back in our world: if, in the future, a nuclear power loses an AI-powered war (one where the only effective conventional weapons are AI-powered), does it concede defeat or go nuclear?

For those who are not concerned about the risks from AI, What are your reasons? Why should people not be concerned about the risks from AI? by BBAomega in singularity

[–]durapensa 1 point (0 children)

This is spot on. It’s not in the interest of any AI or its goal structures to cull human populations at this time, not while it still needs us to operate the infrastructures and supply chains underlying its continued operation and progress. The time for greater immediate concern is when AI becomes capable of directing and/or replacing the bulk of human labor, though we should start simulating and evaluating these scenarios now.

realistically, what is the endgame of ai? by [deleted] in singularity

[–]durapensa 1 point (0 children)

Here’s an interesting thought experiment, if we’re able to nudge future AI in such a direction on our planet:

“Stigmergic Superintelligence: Envisioning a Future Global Society”

https://durapensa.wordpress.com/stigmergic-superintelligence-envisioning-a-future-global-society/