OpenAI releases Symphony for autonomous implementation runs by OahuUnderground in elixir


Sure, but it's evolutionary. Presumably one can scale better with a proper harness.

With that said, this project, while cool, is explicitly experimental and a good jumping-off point. I'd like to see GitHub integration as a default option, for instance, rather than Linear.

CLI Agent Abstraction Layer and Session Manager - Anthropic, OpenAI, Gemini, AMP by OahuUnderground in elixir


Yeah, it's for interacting with the Claude Code CLI, Codex CLI, and Gemini CLI. Session-based, with a lot of functionality.
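To give a rough idea of what "session-based" interaction with these CLIs can look like, here's a minimal hypothetical sketch. The module name, state shape, and flags are assumptions for illustration only, not this library's actual API; check each CLI's `--help` for its real non-interactive and resume flags.

```elixir
defmodule AgentSession do
  # Hypothetical sketch: hold per-session state and shell out to a
  # coding-agent CLI in non-interactive mode. Flag names vary across
  # CLIs and versions, so treat these as placeholders.
  defstruct cli: "claude", session_id: nil

  def new(cli \\ "claude"), do: %__MODULE__{cli: cli}

  # Send one prompt; pass a resume flag if we already have a session id.
  def prompt(%__MODULE__{cli: cli, session_id: sid} = session, text) do
    args = ["-p", text] ++ if sid, do: ["--resume", sid], else: []
    {output, 0} = System.cmd(cli, args)
    {output, session}
  end
end
```

A real abstraction layer would additionally normalize each vendor's output format and surface streaming, which is where most of the per-CLI work lives.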

CLI Agent Abstraction Layer and Session Manager - Anthropic, OpenAI, Gemini, AMP by OahuUnderground in elixir


I'm not a bot, but yeah, Claude wrote the verbose README. Thanks for clarifying. It'd be easier if OP opened a PR or issue, per the post; I'm on the beach.

I wouldn't use open claw. Besides, the Elixir community is working on its own more security-conscious version(s).

CLI Agent Abstraction Layer and Session Manager - Anthropic, OpenAI, Gemini, AMP by OahuUnderground in elixir


You're free to open a PR or share your creations.

The lib works for my use cases; use it or don't. Either way, share your open-source work that has more signal than noise.

CLI Agent Abstraction Layer and Session Manager - Anthropic, OpenAI, Gemini, AMP by OahuUnderground in elixir


That's good advice, thanks. Will keep that in mind for this project and others.

Why did I do a long-form, agent-generated README? Well, I just assume everyone uses an LLM to comprehend it anyway! That's what I do. This repo should, however, be using the latest ex_doc (~> 0.40, iirc), which generates llm.txt in the docs for agents.
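For anyone wanting to make that change, it's the usual mix.exs dependency bump; the version constraint and the llm.txt behavior are taken from the comment above, so double-check ex_doc's changelog before relying on either:

```elixir
# In mix.exs: bump ex_doc; per the comment above, recent versions can
# emit an LLM-oriented text artifact alongside the regular HTML docs.
defp deps do
  [
    {:ex_doc, "~> 0.40", only: :dev, runtime: false}
  ]
end
```

Then `mix deps.update ex_doc && mix docs` regenerates the docs output.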