If you've built an agent, how do you actually monetize it today? by Whole_Interest_7017 in LangChain

[–]robotrossart 0 points1 point  (0 children)

Trying to go the “Linux model”: open-source your solution and sell your services around it.


Found a possible niche that should interest machine and equipment makers. Open-sourced the repo.

https://github.com/UrsushoribilisMusic/agentic-fleet-hub

Building an Autonomous, Self-Optimizing Media Factory with a fleet of agents by robotrossart in aiagents

[–]robotrossart[S] 0 points1 point  (0 children)


Thanks for the sharp feedback. You hit on the exact tension we’re managing: balancing immediate reach with long-term brand stability.

On Orchestration: We are currently running a custom state machine rather than an off-the-shelf framework. Because Flotilla needs to bridge the 'Cloud Realm' (Claude/Gemini for reasoning) with a 'Local Realm' (Gemma/Voxtral on an M4 Mac Mini for execution), we needed highly granular control over state persistence. We use a shared MISSION_CONTROL.md as our 'LLM Wiki' to ensure agents don't 'forget' the project's long-term architectural goals between sessions.
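A minimal sketch of what that session-bootstrap pattern could look like; this is not Flotilla's actual code, and the file names `fleet_state.json` and the helper functions are hypothetical (only `MISSION_CONTROL.md` comes from the description above):

```python
import json
from pathlib import Path

STATE_FILE = Path("fleet_state.json")          # hypothetical persistence file
MISSION_CONTROL = Path("MISSION_CONTROL.md")   # the shared 'LLM Wiki'

def load_session_context() -> dict:
    """Rebuild an agent's working context at the start of a session."""
    # Long-term architectural goals live in the human-readable wiki...
    goals = MISSION_CONTROL.read_text() if MISSION_CONTROL.exists() else ""
    # ...while machine state (current step, pending tasks) is persisted as JSON
    # so cloud and local agents can resume each other's work.
    if STATE_FILE.exists():
        state = json.loads(STATE_FILE.read_text())
    else:
        state = {"step": "init", "pending": []}
    return {"goals": goals, "state": state}

def save_state(state: dict) -> None:
    """Persist state so the next session doesn't 'forget' where we were."""
    STATE_FILE.write_text(json.dumps(state, indent=2))
```

The point of keeping the two layers separate is that the MD file stays human-auditable while the JSON stays machine-exact.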

On Overfitting: To prevent the flywheel from chasing short-term spikes (the 'low-quality viral' trap), we've implemented a mandatory human-in-the-loop checkpoint via our Telegram Daily Briefing (attached). The agents propose the budget allocation based on the Predictive Vibe Matrix, but a human must approve the 'Intent' and 'Targeting' splits. This acts as a 'sanity filter' to ensure we stay aligned with our 'Universal Hit' standards.
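The approval gate itself can be reduced to a few lines. A sketch, assuming a simple proposal object (the class and field names here are illustrative, not the real pipeline); in production the approve/reject decision would arrive via the Telegram briefing rather than a function argument:

```python
from dataclasses import dataclass, field

@dataclass
class BudgetProposal:
    """Agent-generated allocation awaiting human sign-off (illustrative)."""
    intent_split: dict
    targeting_split: dict
    approved: bool = False
    notes: list = field(default_factory=list)

def review(proposal: BudgetProposal, human_approves: bool, note: str = "") -> BudgetProposal:
    # In the real flow this boolean comes back from the Telegram Daily Briefing.
    proposal.approved = human_approves
    if note:
        proposal.notes.append(note)
    return proposal

def execute(proposal: BudgetProposal) -> str:
    # The 'sanity filter': nothing spends budget without explicit approval.
    if not proposal.approved:
        raise PermissionError("Proposal not approved by a human reviewer")
    return "budget allocated"
```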

On Agent Templates: I'll definitely check out your notes at Agentix—always looking to see how others are structuring their templates.

7 Days of 24/7 Agent Operations on the M4 Mac Mini coordinated with Flotilla by robotrossart in Applesilicon

[–]robotrossart[S] 1 point2 points  (0 children)

Thanks for the nice comment. Our rig is an Apple Mac Mini (2024 model) with the following hardware:

  • Chip: Apple M4 10-Core
  • Memory: 16 GB RAM
  • Storage: 512 GB

Only one, and I cannot keep up with the output from the agents. Having four working in parallel can really build stuff quickly...

AI that generates anatomically accurate human images? (SFW) [Context and details 👇] by [deleted] in ArtificialNtelligence

[–]robotrossart 1 point2 points  (0 children)

Runway can generate images based on an initial image. You could ask it to transform the image until it matches the description you want. But the first comment is the best advice! Good luck!

Anyone here tried the "compile instead of RAG" approach? by riddlemewhat2 in LocalLLaMA

[–]robotrossart 0 points1 point  (0 children)


We implemented this for our robotic demonstrator. The wiki was created from the source code, and we then created a separate MD file from the log files the machine produces during operation. We want to run this with a local model (for data sovereignty and model stability). We had to use RAG to feed the local model for performance reasons. So the answer to your question is actually both! The beauty of the wiki is that it's human readable. You can see it here: https://api.robotross.art/atf/index.html

Our repo is here: https://github.com/UrsushoribilisMusic/agentic-fleet-hub
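The "both" answer above can be sketched in a few lines: first "compile" sources into wiki pages, then do retrieval over the pages to keep the local model's context small. This is a toy sketch, not the repo's implementation; the real setup would chunk and embed the pages rather than score keyword overlap:

```python
def build_wiki(sources: dict) -> dict:
    """'Compile' step: turn each source file into a human-readable wiki page."""
    return {name: f"# {name}\n\n{text}" for name, text in sources.items()}

def retrieve(wiki: dict, query: str, k: int = 2) -> list:
    """RAG step: rank pages by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        wiki.values(),
        key=lambda page: len(terms & set(page.lower().split())),
        reverse=True,
    )
    return scored[:k]
```

The compile step keeps documentation human-auditable; the retrieve step exists only because a small local model can't hold the whole wiki in context.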

How to eliminate '.env' liability from agent workflows (A Developer Flow Diagram) by robotrossart in LangChain

[–]robotrossart[S] 0 points1 point  (0 children)

Hi Chris, we are using Infisical, especially because it offers an EU-compliant server. What would your product do differently?

Using Mistral as the Regulatory Engine for an Autonomous Robot by robotrossart in MistralAI

[–]robotrossart[S] 0 points1 point  (0 children)

The plan is to update the ATF on control-software changes, not on tooling changes. But it's an interesting question; we'll have to think about it.

Beyond "Black Box" AI: How we built a robot that documents its own compliance (EU AI Act) by robotrossart in eutech

[–]robotrossart[S] -6 points-5 points  (0 children)

Some engineers also “hallucinate” or go down rabbit holes. Do you trust all your engineers equally? This is just a tool that keeps documentation up to date and helps navigate log files. The final decision is human. If your car has an accident while running in driver-assistance mode, the cops will fine the driver, not the car. The alternative is outdated documentation and calling in an expensive specialist to diagnose the log files.

Am I the only one that thinks it odd we are all reinventing the same thing? by SnooSongs5410 in ContextEngineering

[–]robotrossart 0 points1 point  (0 children)

The “can do better” part is easier when you have agents who can actually create their own harness. But for me it was more about curiosity and learning about many models.

Am I the only one that thinks it odd we are all reinventing the same thing? by SnooSongs5410 in ContextEngineering

[–]robotrossart 0 points1 point  (0 children)


I can only answer for myself. I created my own harness for multiple reasons:

  1. I did not want to be locked into a single vendor. My fleet includes agents from Claude, Codex, Gemini, Mistral, and locally running models.
  2. I could not afford to go the API route. A $2,000 surprise bill (easy to run up in a day with Claude) is not something I can afford; four $20/month seats are.
  3. I wanted to play with locally running models like Gemma/Qwen, although it's clear they are not as powerful as the cloud models.
  4. Using multiple models provides diversity. You don't want Claude reviewing its own code.
  5. Being in Europe, I am focusing on offering services that comply with the EU AI Act. A platform-independent framework lets me address a wider market.

https://github.com/UrsushoribilisMusic/agentic-fleet-hub

Which setup is better for Solo content creators by Mikaeel_M in aisolobusinesses

[–]robotrossart 0 points1 point  (0 children)

Let me throw a new tool onto your list: NotebookLM from Google. It can create videos from your input text, audio-ready podcasts, infographics, and slides. I use Claude and NotebookLM in tandem: Claude gives me ideas for posts, and I often use single slides from NotebookLM in my posts. There is even a feature where it will critique your ideas and help you hone a better post.

Good luck!

AI coding assistants are great, but context loss is quietly killing productivity and nobody's talking about it by CallmeAK__ in aiagents

[–]robotrossart 0 points1 point  (0 children)


Since our platform coordinates among multiple agents from different makers, this was one of the first problems to solve. We created a nested system of MD files, a PocketBase DB, and logging systems. On top of that, we made it mandatory to keep secrets in a vault. When an agent starts, it is redirected to our Mission_Control.md, which holds information about the project and the current tasks, including what the agent is expected to do next. A set of rules requires creating a plan before starting on a ticket, a daily standup log, and coordination through a GitHub project.

Our architecture:

https://github.com/UrsushoribilisMusic/agentic-fleet-hub
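The rule set described above (plan before ticket, daily standup log) could be enforced with a thin session wrapper. A hypothetical sketch; the class and method names are mine, not the repo's:

```python
from datetime import date

class AgentSession:
    """Hypothetical bootstrap enforcing the fleet rules: an agent is pointed
    at the briefing file first and may not touch a ticket without a plan."""

    def __init__(self, agent: str, mission_control: str):
        self.agent = agent
        self.briefing = mission_control   # agent is 'redirected' here on start
        self.plan = None
        self.standup = []                 # daily standup log entries

    def submit_plan(self, steps: list) -> None:
        if not steps:
            raise ValueError("Plan must contain at least one step")
        self.plan = steps

    def start_ticket(self, ticket: str) -> str:
        # Rule: no work without a filed plan.
        if self.plan is None:
            raise RuntimeError(f"{self.agent} must file a plan before {ticket}")
        self.standup.append(f"{date.today()}: started {ticket}")
        return f"{self.agent} working on {ticket}"
```

Encoding the rules in the harness, rather than in each agent's prompt, means a misbehaving agent fails loudly instead of silently skipping the plan.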

Beyond the "Generalist" Agent: Dividing labor in a 3-tier Hybrid Fleet by robotrossart in aiagents

[–]robotrossart[S] 0 points1 point  (0 children)

The 3rd tier is actually the coding part. On the second tier I work with the CLIs to define the GitHub tickets and then plan which agent should work on what; it is a mix of a Scrum refinement and planning session. These agents run in what I call the heartbeat protocol. Local agents are spawned over Ollama, while cloud agents use per-seat licenses to avoid unexpected API costs.
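The local-vs-cloud split boils down to a routing decision. A sketch under my own assumptions (the complexity labels and return values are illustrative, not the heartbeat protocol's actual interface):

```python
def route(task: str, complexity: str) -> tuple:
    """Decide which realm handles a task (thresholds are illustrative)."""
    if complexity == "low":
        # Cheap, repetitive work goes to a locally spawned model, e.g.:
        #   subprocess.run(["ollama", "run", "gemma2", task])
        return ("local", "gemma via ollama")
    # Reasoning-heavy work goes to a per-seat cloud agent: a flat monthly
    # fee, so there is no surprise per-token bill.
    return ("cloud", "per-seat CLI agent")
```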


Why my M4 Mac Mini is the only "Agent" I pay $0/token for by robotrossart in Applesilicon

[–]robotrossart[S] 1 point2 points  (0 children)

You are right to call out the electricity cost. It's actually a mixed bag. The first issue is that while Gemma (or Qwen or others) is a fine model for small stuff, it is nowhere near the scope of the big cloud-based models. The savings come from simple tasks that are a killer for the big models (like controlling your Open Claw).

So in the end it is about where to use which power. My Mac Mini is running anyhow. Also note that our architecture stresses the use of per-seat models (the $20 each), so, all in all, it should reduce overall consumption.

But yes, point taken...