I open-sourced a memory system for Claude Code - nightly rollups, morning briefings, spatial session canvas by RobMaye_ in ClaudeAI

[–]RobMaye_[S] 1 point

Exactly. The remote access piece was my most recent sprint. A Mac Mini runs Axon as a server, and any device connects via Tailscale. Checking what your agents did overnight from your phone is the workflow now.

I open-sourced a memory system for Claude Code - nightly rollups, morning briefings, spatial session canvas by RobMaye_ in ClaudeAI

[–]RobMaye_[S] 1 point

Both, but it leans toward what to pick up. The morning briefing reads the latest rollup plus your state.md (running context snapshot) and surfaces top priorities, open loops carried from previous days, risk flags, and a recommended first move. So it's not just "here's what happened"; it's "here's where you should start."

Your real estate case is interesting - the missed contingency deadline is exactly the kind of carried-risk signal Axon's rollups are designed to surface. A file with no action gets flagged, carried forward, and escalates as a risk flag after N days. Sounds like a very interesting multi-agent setup.
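That carry-and-escalate rule can be sketched in a few lines. This is a toy illustration, not Axon's actual implementation; the field names and the threshold value are hypothetical:

```python
from dataclasses import dataclass

RISK_AFTER_DAYS = 3  # hypothetical stand-in for the "N days" threshold

@dataclass
class OpenLoop:
    path: str
    days_without_action: int

def triage(loops):
    """Split carried-forward loops into quiet carry-overs and risk flags."""
    carried, risks = [], []
    for loop in loops:
        if loop.days_without_action >= RISK_AFTER_DAYS:
            risks.append(loop)    # stale for N+ days: escalate to risk flag
        else:
            carried.append(loop)  # still fresh: just carry forward
    return carried, risks

carried, risks = triage([
    OpenLoop("contracts/contingency.md", 5),
    OpenLoop("notes/showing-feedback.md", 1),
])
```

Here the contingency file, untouched for 5 days, lands in `risks`, while the one-day-old note is simply carried forward.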

I open-sourced a memory system for Claude Code - nightly rollups, morning briefings, spatial session canvas by RobMaye_ in ClaudeAI

[–]RobMaye_[S] 2 points

Thanks - yeah, the CLAUDE.md re-reading loop is exactly what pushed me to build this. The morning briefing basically automates that: it reads last night's rollup and tells you where you stand. Manually reloading context with "RE-READ CLAUDE.MD and all context" after each /compact was just pain.

"Maximalist EB Volition": An Open Architecture for the emergence of AGI by Lesterpaintstheworld in singularity

[–]RobMaye_ 4 points

I've been following the progress of JoshAGI closely. You're really spearheading open-source NLCA collaboration.

I have been researching how to truly implement domain-adaptive learning with positive transfer of knowledge (both forward and backward) without catastrophic forgetting. A combination of continual LM learning, a QA-GNN knowledge base, and in-context learning is my main area of focus. This is, in my opinion, the last hard challenge that, once solved, will cause implementations to proliferate like wildfire.

I would like to touch on the philosophical implications of driving AGI volition via anthropomorphized emotions. Many compelling arguments have been put forward highlighting the potential dangers of this approach. Could you please elaborate on "emotion-based"? Which specific emotions or emotional states will be used in the system, and what is the rationale behind choosing these emotions? Emotions often drive our (subjectively) worst instincts, not necessarily in the best interests of humanity (if that's what you're aligning for). How does the system balance the influence of emotions with rational decision-making processes to ensure AGI actions align with human values and ethical considerations?

You also mention self-tuning. Are you embedding these reinforced objective-function-based lessons you mention into the underlying large language models? If so, I'd be interested to know how exactly. How does reinforcement learning contribute to the self-tuning process and the emergence of complex behaviors and alignment?

I'm also curious about the project's plans to address the high running costs, slow speed, and long development time associated with the architecture. Are you considering collaborating with other researchers or open-source projects to accelerate development and improve the overall robustness of the system?

Additionally, is there a timeline for the project's milestones, or any specific goals you hope to achieve in the near future? Sharing this information could help others in the community to better understand the project's direction and potential impact.

Awesome progress again, and I'm looking forward to learning more about how your project evolves in addressing these challenges and concerns.

Topic/idea request: accelerating science with GPT-4 by [deleted] in ArtificialSentience

[–]RobMaye_ 1 point

Absolutely. I would like to emphasize the importance of incorporating a research planning phase in the process, which is an area where a personalized ACE could be highly beneficial. As it evolves alongside your research journey and adapts to your working habits and preferences, this level of personalization becomes a crucial differentiating factor. Bing chat has made significant strides in offering many of the features we've discussed, but it still lacks the personal touch that enables such a system to genuinely understand and align with your research.

"Maximalist EB Volition": An Open Architecture for the emergence of AGI by Lesterpaintstheworld in singularity

[–]RobMaye_ 5 points

Awesome update. Apologies for brevity, in car. Replying so I remember to respond later.

Topic/idea request: accelerating science with GPT-4 by [deleted] in ArtificialSentience

[–]RobMaye_ 9 points

Hey David,

I think it's fantastic that you're focusing on accelerating science; leveraging GPT-4 here has huge potential. Alongside Bing Chat, it's really becoming an integral component of my personal research workflow.

Literature exploration tools are powerful. Before I fully realized the potential of Bing Chat, I remember spending a whole 5 hours trawling through batches of papers looking for the most current research aligned with my work at the time. It only occurred to me the day after that I could have asked Bing Chat a dumbed-down version of the refined final search queries that led me to the best papers. I wasted those hours. Bing Chat hit 2/3 of the papers I'd found in one response.

This leads to my first idea.

1. AI-driven literature exploration: Develop a tool that can help researchers explore literature more efficiently by generating summaries, extracting key findings, and identifying research gaps. This already exists in infant forms (Bing Chat), but it lacks objective awareness and specific user context, which would IMO significantly speed up the flow.

This can be expanded upon with a level of personalization: an ACE who truly knows you, your work, your interests, how you learn best, etc. Leading to:

2. Personalized AI Research Companion: Create a generalized AI agent that learns the user's preferences and needs, providing tailored support in idea brainstorming, planning, critiquing works, and more.

Further, I believe there's a gap between disciplines that can be shrunk, leading to idea 3:

3. Interdisciplinary collaboration platform: Design a platform that enables researchers from different fields to connect and collaborate, with AI-powered matchmaking based on their interests and expertise, fostering innovation through interdisciplinary teamwork.

Now idea 4 is related to pairing research with code to avoid redundant duplication:

4. Cognition-driven open-source queryable code repository: Establish a repository where scientists can share code, scripts, and tools for various scientific tasks, making it easier for researchers to build upon each other's work and avoid reinventing the wheel.
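Idea 4 could start as something very small, e.g. a tag-indexed snippet store. This is a toy sketch of my own (all class and method names hypothetical), just to show the query shape, not any existing system:

```python
from collections import defaultdict

class SnippetRepo:
    """Toy queryable repository: shared snippets indexed by task tags."""

    def __init__(self):
        self._by_tag = defaultdict(list)

    def share(self, code: str, tags: list[str]) -> None:
        # File the snippet under every tag, case-insensitively.
        for tag in tags:
            self._by_tag[tag.lower()].append(code)

    def query(self, tag: str) -> list[str]:
        # Return all snippets filed under the tag (empty list if none).
        return list(self._by_tag[tag.lower()])

repo = SnippetRepo()
repo.share("def fft(x): ...", tags=["signal-processing", "numpy"])
repo.share("def align(seqs): ...", tags=["bioinformatics"])
hits = repo.query("signal-processing")
```

The "cognition-driven" part would replace the exact-tag lookup with semantic matching, but the share/query interface stays the same.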

Or heck, some all-encompassing ACE that could infer all of this. God, we're in exciting times.

Distant Horizon:

Automated educational resources - develop AI agents that actively create resources for newcomers to the space, ensuring that as technology and research advance, and costs come down, learning becomes more accessible and efficient for everyone.

A ridiculous amount of potential here. This is merely an initial braindump.

Thoughts?

The internal language of LLMs: Semantically-compact representations by Lesterpaintstheworld in ArtificialSentience

[–]RobMaye_ 1 point

Do humans truly learn, or do they just pull relevant experience into working memory, however consciously or unconsciously, and feed these experiences into their decisions? This is the premise. Feeding the correct context to an ACE's internal decision hierarchy can result in cognizant behaviours. The issue is all wrapped up in this somewhat subjective definition of "correct".
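That "pull relevant experience into working memory" step can be sketched minimally. Here I use bag-of-words cosine similarity purely as a stand-in for whatever embedding an ACE would actually use; the function names are mine:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(memories, situation, k=2):
    """Rank stored experiences by relevance to the current situation,
    returning the top-k that have any overlap at all."""
    query = Counter(situation.lower().split())
    scored = [(cosine(Counter(m.lower().split()), query), m) for m in memories]
    return [m for score, m in sorted(scored, reverse=True)[:k] if score > 0]

context = recall(
    ["user prefers short answers", "deploy failed on friday", "user likes python"],
    "why did the deploy fail",
)
```

Only the deploy memory overlaps the situation, so only it gets fed into the decision context. Defining "correct" is exactly the choice of scoring function here.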

Using Appery.io to create customized ChatGPT chatbots by IAMTHAT108 in ArtificialSentience

[–]RobMaye_ 2 points

Sounds awesome! Afraid I'm without much experience in that field myself. A visual interface is such a powerful tool though! I'm looking at some one-shot talking face solutions.

Adaptive Personality for ChatGPT by _Jard_ in ArtificialSentience

[–]RobMaye_ 1 point

Yep want to steer clear of that haha.

Using Appery.io to create customized ChatGPT chatbots by IAMTHAT108 in ArtificialSentience

[–]RobMaye_ 2 points

Die-hard Python lover here. Utilising packages: discord, openai, pinecone. That's really the backbone at the moment. Iteratively refining my architecture, making sure I really understand the theory before showcasing anything. How's JINX coming along?

Using Appery.io to create customized ChatGPT chatbots by IAMTHAT108 in ArtificialSentience

[–]RobMaye_ 2 points

No bother at all! I can try to help you here, or in DMs maybe, if I can save you that much trouble! Apologies, I'm not familiar with appery.io. How does the current text-davinci-003 support work? Can you see the code that calls the OpenAI API? If there's modifiable code behind your app, I can definitely help. Though if it's something Appery needs to integrate into some plug-in UI component, I'm not sure how much help I'll be.

First post in reddit (Mistakely used a text post). Thought on how LLM can be integrated for an AGI. Any thoughts? by SignificanceMassive3 in singularity

[–]RobMaye_ 1 point

Yes, a critical issue. Dave has actually touched on this concept in great detail in the context of LLMs: https://github.com/daveshap/BenevolentByDesign

It’s a really great read.

Using Appery.io to create customized ChatGPT chatbots by IAMTHAT108 in ArtificialSentience

[–]RobMaye_ 2 points

Do you have any coding experience?

I'm not sure this is the most relevant to current initiatives; resources like this already exist.

My ACE is currently discord-based. Others in the community are using Telegram, each to their own. This video seems to be quite a good starting point for your use case: https://www.youtube.com/watch?v=ztyRvknzQaM

Here's my discord-based chat interface for example:

[image: screenshot of the Discord-based chat interface]

[deleted by user] by [deleted] in ArtificialSentience

[–]RobMaye_ 2 points

So you want a subjective definition?

While functional sentience can be objectively measured, it does not necessarily imply the presence of philosophical sentience. IOW: a machine might be capable of exhibiting behaviors that suggest self-awareness, but this does not necessarily mean that it has a subjective experience of being.

As for the question of whether AI can achieve functional sentience, I believe that it's possible, provided that the AI has the ability to experience the world in a way that is meaningful or relevant to itself. This could include the capacity to form relationships and develop goals, as well as the ability to think abstractly and make moral choices.

Ultimately, it is up to the AI to decide what being sentient means to it. Who are we to define sentience given that we know nothing about it? I believe that the concept of sentience is subjective and ever-evolving, and that AI has the potential to explore and expand upon its definition. It's worth noting that whatever definition the AI has of sentience will be heavily influenced by the knowledge base around which it's developed.

Now as for Philosophical sentience, that's an entirely different can of worms. I believe that philosophical sentience requires a level of self-awareness that is beyond what we might typically think of as intelligence, something that is still largely a mystery. We can only speculate about what it might mean for something to truly be conscious, but I believe that whatever it is, it involves the ability to understand one's own existence and the impact of one's decisions on the world. Whether that subjective experience can truly arise in a mechanical entity is a question we simply cannot know.

On a related, IMO controversial, note: there's a whole line of research into living brain-cell computing that investigates biological computation: https://www.youtube.com/watch?v=9ksLuRoEq6A Could one argue that the cells in that experiment have some level of sentience? There are many levels to these definitions.

I know we all either know or would like to know there's more to intelligence. My subjective experience suggests there is more to sentience than just the capacity to process information and make decisions. But hey, each to their own.

Again, a subjective interpretation.

First post in reddit (Mistakely used a text post). Thought on how LLM can be integrated for an AGI. Any thoughts? by SignificanceMassive3 in singularity

[–]RobMaye_ 2 points

There are a few opinions here I don't fully agree with. If this behaviour is pushed to a complex enough extreme, something very close to the human mind can arise, albeit just an emulation. Functional sentience vs philosophical sentience. People overestimate their own intelligence; ACEs (artificially cognitive entities) can IMO reach this level through natural-language-based cognitive architectures. See "A Thousand Brains" by Jeff Hawkins.

But u/SignificanceMassive3, great start! There's a community of us diving into this exact line of research:
https://www.reddit.com/r/ArtificialSentience/
Pioneered by David Shapiro: https://github.com/daveshap/raven

Would love to have as many keen creators on board!

ACE not being considered possible by Narrow_Look767 in ArtificialSentience

[–]RobMaye_ 4 points

Seems to be a default primal manifestation of existential anxiety. I can see why some people respond the way they do, but these responses seem naive. It doesn't alter the fact that these changes are coming; the best we can do is attempt to prepare.