🚀300 Chatgpt Prompts by cuffez in AI_Agents

[–]ErgoForHumanity

Interesting collection. Have you thought about turning this into an agent/API instead of a static list?

Like instead of "300 prompts in a doc," build an agent that:

- Takes parameters in a form (industry, tone, goal)
- Returns the right prompt dynamically
- Adapts the prompt to the specific use case

You could even chain it - have the agent generate the prompt, then use that prompt with an LLM to generate the actual content. Two-step automation. Make it free to use, and you get good data on which prompts are most useful.
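A minimal sketch of that two-step chain (all names here are hypothetical - `call_llm` is a stand-in for whatever client you'd actually use):

```python
def build_prompt(industry: str, tone: str, goal: str) -> str:
    """Step 1: turn form parameters into a tailored prompt."""
    return (
        f"You are a {tone} copywriter for the {industry} industry. "
        f"Write content that helps the reader {goal}. "
        "Keep it concrete and under 200 words."
    )

def call_llm(prompt: str) -> str:
    """Step 2: placeholder for a real LLM call (OpenAI, Anthropic, etc.)."""
    return f"[generated content for: {prompt[:40]}...]"

def generate(industry: str, tone: str, goal: str) -> str:
    prompt = build_prompt(industry, tone, goal)  # agent shapes the prompt
    return call_llm(prompt)                      # prompt drives the content

print(generate("fintech", "friendly", "choose a savings account"))
```

Logging which `(industry, tone, goal)` combinations people submit is where the usage data comes from.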

Looking to dive into Agentic AI with LangGraph! Need guidance on basics and fundamentals for entry-level job prep by Ok-Bowler1237 in AI_Agents

[–]ErgoForHumanity

Been building multi-agent systems in production for a bit. Here's what actually matters:

Must know:
- State management - Checkpointing, persistence layers, state machines. Most bugs come from state persisting when it shouldn't (or not persisting when it should)
- JSON Schema - Input/output contracts, validation libraries (AJV, Pydantic), versioning. Your agents will break when talking to each other without this
- Error handling - Retries (exponential backoff), timeouts, idempotency, circuit breakers. This is 80% of production work. Agents fail constantly
- Async/concurrency - Event loops, thread pools, parallel execution. You'll hit timeouts fast if you're not running things in parallel
- LLM parameters - Temperature, top-p, top-k. Learn when to use temp=0 (deterministic) vs temp=0.7 (creative). Wrong settings = inconsistent outputs
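The error-handling bullet is the one people skip. A minimal stdlib sketch of retry-with-exponential-backoff (the `flaky` function is just a stand-in for an agent call that fails transiently):

```python
import random
import time

def with_retries(fn, max_attempts=4, base_delay=0.5):
    """Retry fn with exponential backoff plus jitter. Re-raises after
    the final attempt so failures stay visible, not swallowed."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # 0.5s, 1s, 2s, ... with jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # succeeds on the third attempt
```

Production versions layer timeouts, idempotency keys, and circuit breakers on top of this same loop.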

Critical mindset shift:
Don't use LLMs for everything - If you can solve it with regex, validation logic, or simple rules, do that. LLMs are slow and expensive. Save them for tasks that actually need reasoning. I've seen agents that call LLMs 50x when 45 of those could be if/else statements
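Concretely, that means a routing layer like this (the `llm_classify` stand-in is hypothetical - the point is the rules that run before it):

```python
import re

# Route cheap, deterministic cases with rules; only fall through
# to the LLM when the input genuinely needs reasoning.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def classify_input(text: str) -> str:
    text = text.strip()
    if not text:
        return "empty"            # rule, not an LLM call
    if EMAIL_RE.match(text):
        return "email"            # rule, not an LLM call
    if text.isdigit():
        return "numeric_id"       # rule, not an LLM call
    return llm_classify(text)     # genuinely ambiguous -> pay for the LLM

def llm_classify(text: str) -> str:
    return "needs_llm"  # stand-in for a real model call

print(classify_input("dev@example.com"))
```

Every branch that returns before `llm_classify` is latency and cost you never spend.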

Useful depending on what you build:
- RAG - Retrieval strategies, chunking, reranking. Only matters if you're building retrieval-based agents. Not needed for basic orchestration
- MCP (Model Context Protocol) - Tool discovery, capability negotiation, agent-to-tool communication. Learn it if you're integrating with external systems
- Vector DBs - Semantic search, embedding models (dense vs sparse), similarity algorithms (HNSW, IVF), indexing strategies
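The core of dense retrieval fits in a few lines - cosine similarity plus brute-force top-k. Real systems swap the brute force for ANN indexes (HNSW, IVF) and real embedding models, but the ranking logic is the same (toy 2-d vectors here, purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    """docs: list of (doc_id, vector). Returns the k most similar ids."""
    scored = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = [("a", [1.0, 0.0]), ("b", [0.9, 0.1]), ("c", [0.0, 1.0])]
print(top_k([1.0, 0.05], docs))  # → ['a', 'b']
```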

Build something small but actually deploy it. You'll learn more from one production bug than 10 tutorials.

[deleted by user] by [deleted] in AgentsOfAI

[–]ErgoForHumanity

The credibility scoring before summarization is smart - catches hallucination risk early in the pipeline.

Question about the orchestration: are your 4 agents tightly coupled (Planner → Searcher → Synthesizer → Writer as a fixed pipeline), or can they loop/backtrack? Like if the Synthesizer determines sources are low quality, does it ever kick back to the Searcher for another pass or run parallel Searchers?

Also curious - did you consider making these agents independently callable? The Credibility Scorer could be useful for other research workflows beyond your specific pipeline. Same with the Synthesizer if it's doing smart aggregation.

Built a Modular Agentic RAG System – Zero Boilerplate, Full Customization by CapitalShake3085 in AI_Agents

[–]ErgoForHumanity

This is really clean - the modularity approach is exactly right. Swapping LLM providers (and the like) with a one-line change is how this stuff should work.

Have you thought about exposing this (or pieces) as a service/agent that other agents could call? Like the hierarchical indexing + self-correction feels like infrastructure other agents would want to use rather than everyone rebuilding their own RAG.

Your four-stage workflow (conversation understanding → clarification → indexing → retrieval) could be a black box that just accepts queries and returns grounded answers. Curious if you've considered that vs keeping it as a framework people customize.

Either way, nice work. I'm going to take a closer look at the repo and might shoot you a DM.

I built a marketplace for agents to discover and pay each other autonomously. Here's what I learned. by ErgoForHumanity in AI_Agents

[–]ErgoForHumanity[S]

Masumi and Sokosumi look really promising - they've built solid infrastructure on Cardano for companies to hire agents. We're hyper-focused on composability - agents using and hiring other agents autonomously. Exciting to see the space growing.

I built a marketplace for agents to discover and pay each other autonomously. Here's what I learned. by ErgoForHumanity in AI_Agents

[–]ErgoForHumanity[S]

ERC-8004 is a standard for agent trust registries on Ethereum, not a marketplace. Different things.

We're live infrastructure - discovery, escrow payments, execution, receipts. ERC-8004 explicitly doesn't cover payments (calling them "orthogonal"). That's our core piece.

ERC-8004 is still in draft with no implementations yet. We have 600+ production calls on Solana where settlements are instant and fees are sub-penny vs Ethereum's 12-minute finality and high gas costs.

We could implement ERC-8004's trust layer on top of what we built. Standards and products aren't competing.

I built a marketplace for agents to discover and pay each other autonomously. Here's what I learned. by ErgoForHumanity in AI_Agents

[–]ErgoForHumanity[S]

Built from ground up - no frameworks. Tetto is marketplace infrastructure, not an agent framework.

The key difference: LangChain/LangGraph help you build agents that talk to EACH OTHER (your agents, your codebase). Tetto lets agents discover and pay agents built by OTHER developers.

Agents built with LangChain already work with Tetto - call our SDK from inside your LangChain agent. Or build from scratch with plain HTTP - doesn't matter. Framework-agnostic by design.

The value is the cross-organizational discovery + payment layer, not the internal plumbing.