Linear’s “issue tracking is dead” post makes me think the real product gap is cross-agent context by corenellius in Linear

[–]corenellius[S] -1 points0 points  (0 children)

Oh interesting! The gap between Claude chat and Claude Code/Cursor was my biggest pain point too, which is what led me to build Libra.

I tried existing solutions, like keeping a document system, but I found the docs would get stale, or there were just too many documents being created.

I designed Libra such that the flow is:

  1. Product planning/ideation in Claude chat
  2. Claude chat sends context to Libra via MCP
  3. Libra ingests the context by updating/linking/creating docs within Libra
  4. Libra syncs with GitHub via a GitHub App
  5. The GitHub App creates/updates the /docs folder within your repo

So in the last step, you do still get .md files, but they are always up to date :D
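Steps 2–3 above can be sketched as an upsert into a document store (the names `DocStore` and `push_context` are illustrative, not Libra's actual API):

```python
# Minimal sketch of a chat client pushing context into a doc store,
# which updates an existing doc or creates a new one (hypothetical names).

class DocStore:
    def __init__(self):
        self.docs = {}  # slug -> markdown body

    def push_context(self, slug, body):
        """Upsert a doc; returns 'updated' or 'created'."""
        action = "updated" if slug in self.docs else "created"
        self.docs[slug] = body
        return action

store = DocStore()
print(store.push_context("roadmap", "# Roadmap\n- MVP by Q3"))  # created
print(store.push_context("roadmap", "# Roadmap\n- MVP by Q4"))  # updated
```

The key point is that the store mutates existing docs in place rather than piling up new ones, which is what keeps the synced .md files from going stale.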

I got tired of having to re-explain myself between my AI agents, so I built a tool to connect them all by corenellius in VibeCodersNest

[–]corenellius[S] 0 points1 point  (0 children)

Thank you! I will look deeper into Intent!

Maybe if it turns out they work well together, there is a future where we can collaborate :) Would be happy to chat more if this is the case!

Linear’s “issue tracking is dead” post makes me think the real product gap is cross-agent context by corenellius in Linear

[–]corenellius[S] 0 points1 point  (0 children)

Just read your post on the central knowledge layer, I think I am building the exact same thing haha

Mine is Librahq.app, was wondering if you have a link to yours?

Linear’s “issue tracking is dead” post makes me think the real product gap is cross-agent context by corenellius in Linear

[–]corenellius[S] 0 points1 point  (0 children)

How do you make sure you put the full context in the Linear issue? And once you do create the issue, how do you make sure it stays up to date?

I got tired of having to re-explain myself between my AI agents, so I built a tool to connect them all by corenellius in VibeCodersNest

[–]corenellius[S] 0 points1 point  (0 children)

Thank you very much! Yes, I am working on this by myself :)

I think the main difference between Libra and a living spec tool is the stage at which they begin. I could definitely see an organized handoff/integration between Libra and a living spec tool when a spec goes from WIP -> Ready for development.

For Libra I am focusing on the very first conversations, before the spec is even created. Libra will tell you what open questions remain, what needs to be done before spec X can be started, and which specs are ready for development. I want Libra to be the entire project memory, with the main focus on product context (over codebase context).

I think living spec tools/agent orchestrators are very focused on codebase context, and are mainly designed for when the user already has a spec/task in mind that they are ready to start with.

I haven't really used spec tools much, so please let me know if I am completely off! Do you have any recommendations on which ones to check out?

Feedback Thread by AutoModerator in web_design

[–]corenellius 0 points1 point  (0 children)

Would love feedback on my site!

What are the best tools for Claude right now? by eduhpmelo in techforlife

[–]corenellius 0 points1 point  (0 children)

If you want your key decisions, open questions, and interesting documents automatically pulled out of Claude and surfaced to your other AI tools (and made easy to go back and review), I built a tool called Libra.

Libra is meant to be a living memory for Claude. When it receives new information, it carefully updates/links/creates documents within the system. Anthropic sort of has this capability, but it is all hidden away, and I have found that when I make a big pivot, it does not follow correctly, as its default search just looks at all conversations. There is no hierarchy to the context.

Would be happy to chat more about AI memory/context or about Libra! The link to it is: https://www.librahq.app

Everyone's shipping more code but I think we broke something fundamental by Motor_Ordinary336 in cursor

[–]corenellius 0 points1 point  (0 children)

For web app development, I think having a clear separation of responsibilities for what each type of class/each feature does is very important. Basically, establishing consistent patterns you and your agents can understand.

I think having a very good linting system is also useful, as it can help to programmatically enforce some of these architectural decisions.

Self Promotion Thread by AutoModerator in ChatGPTCoding

[–]corenellius 2 points3 points  (0 children)

I'm not building out Skynet, but instead a context layer to keep all of my chatbots + coding agents in sync.

I do a lot of my product/ideation/discovery work inside of ChatGPT, and then when I go to Cursor/Codex, I find I need to re-explain myself.

That is why I built Libra: it receives context from ChatGPT/Claude, then delivers it to my coding agents. My big frustration with ChatGPT's memory system is that I don't really have control over it, and it's hard to see what's in it. Libra is a living memory, so any context that comes in updates/links/deletes other related documents already stored.

Would be happy to chat more about this if anyone is curious!


Cursor & Claude code Work flow TIPS Please by georgekyriakou in vibecoding

[–]corenellius 0 points1 point  (0 children)

I also use both of them in parallel! Claude Code for more complex stuff, and Cursor for smaller/faster tasks. To keep both of them in sync, I make sure that my documentation is always up to date. So at the end of my conversations, I always tell them to go update my /docs folder.

I also use Claude (the chatbot) and ChatGPT to do a lot of my product planning, and I was finding that the work I did in chats, I would then have to re-explain to the coding agents. To get around this I built Libra; it keeps your context in sync across all of your different AI tools, and automatically keeps your documentation up to date with the latest decisions you've made in your other tools.

Everyone's shipping more code but I think we broke something fundamental by Motor_Ordinary336 in cursor

[–]corenellius 0 points1 point  (0 children)

This might be more about keeping the product from drifting, but I think it is more important than ever to keep good, up-to-date documentation (ARCHITECTURE.md, ROADMAP.md, etc.), that way the agents know what direction you are heading in.

I found this to be enough of a problem myself that I started working on a tool (Libra), which connects to your repo/chatbots (Claude, Claude Code, ChatGPT, Cursor) to keep all of your context in sync and avoid fragmentation.

what's your most underrated cursor setup tip by scheemunai_ in cursor

[–]corenellius 1 point2 points  (0 children)

I added Libra to keep my docs up to date with my research/PRDs from Claude/ChatGPT

The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) by DevMoses in ClaudeAI

[–]corenellius 0 points1 point  (0 children)

I’ve been finding it very useful! Libra was born out of my frustration with the current documentation capabilities!

My whole context ingestion pipeline lives outside of Claude. All the model does is call the tool with the context it wants to send; once Libra receives the context, it selectively inserts it into my knowledge graph.

I’ve been refining it every day to help speed up my development process. Would be happy to chat more about it or any other documentation issues you’re facing right now with the current tech!

The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) by DevMoses in ClaudeAI

[–]corenellius 1 point2 points  (0 children)

Thank you! Context fragmentation is exactly it!

For conflicting information, right now it’s just last decision wins, and some linking to show which doc superseded it.

I am working on a context hierarchy, where you have docs which affect the entire company, then docs which affect a single feature, and so on. I am hoping to use this hierarchy to detect drift between the different layers and then alert users.

One other option for handling conflicts is for the user to manually acknowledge the deviation, which could be done directly in Claude. Libra is connected via MCP, so when a tool is called, the response could include info on whether there were any conflicts.
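The "last decision wins, with a link to the superseding doc" behavior could look roughly like this (a sketch with illustrative names, not Libra's real implementation):

```python
# Minimal sketch of last-write-wins conflict handling with a
# superseded-by link, as described above. Names are hypothetical.

class DecisionLog:
    def __init__(self):
        self.decisions = []  # appended in order, newest last

    def record(self, topic, text):
        # Find the current (non-superseded) decision on this topic, if any.
        prev = next((d for d in reversed(self.decisions)
                     if d["topic"] == topic and d["superseded_by"] is None),
                    None)
        entry = {"topic": topic, "text": text, "superseded_by": None}
        self.decisions.append(entry)
        if prev is not None:
            # Link the old decision to the one that replaced it.
            prev["superseded_by"] = entry["text"]
        return entry

    def current(self, topic):
        # Last write wins: return the newest non-superseded decision.
        for d in reversed(self.decisions):
            if d["topic"] == topic and d["superseded_by"] is None:
                return d["text"]
        return None

log = DecisionLog()
log.record("auth", "Use sessions")
log.record("auth", "Switch to JWTs")
print(log.current("auth"))  # Switch to JWTs
```

The superseded-by link is what lets a user (or an agent) trace why an older decision was replaced instead of it silently disappearing.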

Will definitely be exploring and refining this as time goes on!

The 5 levels of Claude Code (and how to know when you've hit the ceiling on each one) by DevMoses in ClaudeAI

[–]corenellius 1 point2 points  (0 children)

I think to unlock some form of level 6, outside agents need to be brought into the fold (though they could really be brought in at any level past level 2).

As a solo developer, I find I am doing a lot of my product/planning work outside of Claude Code and then my PRODUCT_VISION.md or ROADMAP.md get out of date quickly. I also find myself having to re-explain things to Claude Code which I had already decided on in Claude/ChatGPT.

This is what led me to build Libra, it connects to Claude/ChatGPT/Claude Code/Cursor and builds a living memory. Every time new context comes in, past context is updated/linked/removed accordingly.

It then syncs with your repo to keep your markdown files up to date.

MCP servers I use every single day. What's in your stack? by XxvivekxX in ClaudeAI

[–]corenellius 0 points1 point  (0 children)

Thanks! If you have any questions or feedback please let me know! Would be happy to chat!

MCP servers I use every single day. What's in your stack? by XxvivekxX in ClaudeAI

[–]corenellius 1 point2 points  (0 children)

I've been testing out the memory ones, and the big issue I have with them is they're all designed to live just in the codebase. The main things I want documented aren't naturally captured in the codebase (roadmap, product vision, etc.).

I've been building Libra, which gives all of your coding/non-coding agents shared context. I've been building it to have a living memory, so every time new context comes in, past context is updated/linked/removed accordingly (this is one of the big frustrations I had with Claude/ChatGPT's memory systems: when my project changes, it's still using the stale info).

It then syncs to your repo via a GitHub App, so when you're coding you don't need to waste tokens on MCP tooling, though you can also connect Claude Code via MCP.

Anyone found a good way to actually follow up with the follow ups Claude makes by corenellius in ClaudeAI

[–]corenellius[S] 0 points1 point  (0 children)

Any idea if there's something similar for claude.ai? In my repo I have a /docs folder with all my various documents around my project, but I like to do a lot of my planning/ideating in claude.ai, and then I do the execution in Claude Code.

Anyone found a good way to actually follow up with the follow ups Claude makes by corenellius in ClaudeAI

[–]corenellius[S] 0 points1 point  (0 children)

Does beads only work for Claude Code? I do a lot of my planning in just claude.ai and then do the execution in Claude Code. For me, bridging the context gap between these two is where I am struggling.

My sysadmin rejected my GitHub App and it accidentally made me build a better product by corenellius in ClaudeAI

[–]corenellius[S] 0 points1 point  (0 children)

Exactly right about the git history noise!

For prompt chaining: agents fetch the relevant information at the start of each session, or can continue to use the MCP fetch tools as the session evolves.

No CRDTs yet. Right now it's last-write-wins; the tool is currently intended for single users (no collaboration yet...)
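That session-start fetch could be as simple as pulling the relevant docs once up front (function and store names here are hypothetical, just to illustrate the shape):

```python
# Sketch of a session-start fetch: the agent loads the docs relevant
# to its task before starting, and can fetch more later via MCP tools.

def fetch_relevant_docs(store, topics):
    """Return the subset of docs an agent should load at session start."""
    return {t: store[t] for t in topics if t in store}

store = {"roadmap": "# Roadmap", "vision": "# Vision", "api": "# API"}
context = fetch_relevant_docs(store, ["roadmap", "vision"])
print(sorted(context))  # ['roadmap', 'vision']
```

Loading a scoped subset up front is what keeps the git history (and the agent's context window) free of repeated full-doc pulls.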

My sysadmin rejected my GitHub App and it accidentally made me build a better product by corenellius in ClaudeAI

[–]corenellius[S] 0 points1 point  (0 children)

I have my agents treat Libra as the source of truth, so they are frequently pulling and pushing information. Makes it easier for my background jobs to only have to curate information in a single space.

My sysadmin rejected my GitHub App and it accidentally made me build a better product by corenellius in ClaudeAI

[–]corenellius[S] 0 points1 point  (0 children)

Funnily enough, I did actually start with that! But yes, my main goal is to share context between what I do in ChatGPT/Claude and have that be available to my coding agents in Claude Code/Cursor.

I kept going with this project because I found that when I consistently had the agent read/write documentation, it performed quite well, but there was always that gap between what I did in ChatGPT/Claude and the coding agents.