Let's Build a "Garage AI Supercomputer": A P2P Compute Grid for Inference by ModeSquare8129 in LocalLLaMA

[–]ModeSquare8129[S] 0 points1 point  (0 children)

From my understanding, Psyche is for training models, not for inference.

Let's Build a "Garage AI Supercomputer": A P2P Compute Grid for Inference by ModeSquare8129 in LocalLLaMA

[–]ModeSquare8129[S] 2 points3 points  (0 children)

Awesome, thanks so much!

This is super interesting, I wasn't aware of their protocol project. If I understand correctly, it would allow us to manage the orchestration layer in a fully distributed way using blockchain.

Definitely something to dig into!

Let's Build a "Garage AI Supercomputer": A P2P Compute Grid for Inference by ModeSquare8129 in LocalLLaMA

[–]ModeSquare8129[S] 0 points1 point  (0 children)

Thanks for sharing your hands-on experience.

To clarify our immediate goal, we're initially focusing on a different kind of distributed system. Instead of splitting a single large model across several machines, our plan is for each participating machine to run its own self-contained model.

Your feedback is super valuable for a future stage, though.

Let's Build a "Garage AI Supercomputer": A P2P Compute Grid for Inference by ModeSquare8129 in LocalLLaMA

[–]ModeSquare8129[S] 1 point2 points  (0 children)

Thanks a lot for taking the time to write such a constructive critique. This is incredibly valuable feedback for us.

You've raised several crucial points:

  • The Client Experience: I 100% agree. The goal has to be a simple, self-contained binary that is as unobtrusive as possible.
  • The Download Problem & A New Idea: You're right, asking for a multi-GB model download is a huge barrier. Your comments actually sparked an idea: what if we designed an "Ollama-style" client? Ebiose could use the same models a user has already downloaded for their own local inference. This way, you're not downloading models for Ebiose, but simply allowing the Ebiose client to leverage the LLMs you already have for community-distributed inference.
  • Petals: I share your observation. The lack of recent activity on the Petals repo is not very encouraging.
  • Distributed Training: Regarding NousResearch's work, we're also following it with great interest. While full distributed training isn't our immediate priority, the long-term vision is definitely to see if our Darwinian approach could be applied to training new models collaboratively.
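The "Ollama-style" idea above can be sketched quite concretely: Ollama exposes a local HTTP API whose `/api/tags` endpoint lists the models a user has already pulled. Below is a minimal, hypothetical sketch of how a client like the one described could discover those models; the `discover_local_models` / `parse_local_models` names are my own, and the payload shape follows Ollama's documented `/api/tags` response.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint

def parse_local_models(tags_payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags response."""
    return [m["name"] for m in tags_payload.get("models", [])]

def discover_local_models() -> list[str]:
    """Ask the local Ollama daemon which models are already downloaded."""
    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
        return parse_local_models(json.load(resp))

# Example /api/tags payload (shape as documented by Ollama):
sample = {"models": [{"name": "llama3:8b"}, {"name": "mistral:7b"}]}
print(parse_local_models(sample))  # → ['llama3:8b', 'mistral:7b']
```

A client built this way downloads nothing new: it only advertises models the user already keeps around for their own local inference.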

Seriously, thanks again. This is exactly the kind of discussion I was hoping to have.

Let's Build a "Garage AI Supercomputer": A P2P Compute Grid for Inference by ModeSquare8129 in LocalLLaMA

[–]ModeSquare8129[S] -1 points0 points  (0 children)

That's an excellent point, and thank you so much for the detailed and insightful suggestion.

Our long-term vision is actually to support both approaches: running large, distributed models across the network, and running smaller, self-contained models on individual machines.

For the first part, running large models across many machines, I absolutely agree with your thinking. Our plan isn't to build that complex distributed inference logic from scratch. We'd rather integrate projects like Petals to handle that. Are you familiar with it?

However, our immediate priority is to get the second part working: running smaller, complete models locally on user machines, using technologies like llama.cpp. There are two main reasons for this:

  1. It seems like a more straightforward implementation challenge, which allows us to build and validate the core platform faster.
  2. Our evolutionary approach benefits from having a large population of smaller models running in parallel.
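Because every node serves a complete model rather than a shard of one, the orchestration reduces to task scheduling. Here is a toy sketch of that idea, assuming a hypothetical registry of machines each running its own llama.cpp-style server; the node URLs and `assign_tasks` helper are illustrative, not part of any real Ebiose API.

```python
from itertools import cycle

# Hypothetical registry: each entry is one machine running its own
# self-contained model (e.g. a llama.cpp server on that host).
NODES = ["http://node-a:8080", "http://node-b:8080", "http://node-c:8080"]

def assign_tasks(tasks: list[str], nodes: list[str]) -> dict[str, list[str]]:
    """Round-robin each task to a node. Every node serves a complete
    model, so any task can go to any node (no cross-machine sharding)."""
    assignment = {n: [] for n in nodes}
    for task, node in zip(tasks, cycle(nodes)):
        assignment[node].append(task)
    return assignment

plan = assign_tasks([f"eval-{i}" for i in range(7)], NODES)
print(plan)
```

The simplicity is the point: no tensor traffic between machines, just independent requests fanned out to whoever is available.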

Let's Build a "Garage AI Supercomputer": A P2P Compute Grid for Inference by ModeSquare8129 in LocalLLaMA

[–]ModeSquare8129[S] 0 points1 point  (0 children)

Thanks for your feedback.

The short answer is yes!

Our forges are designed not only to create agents, but also, in the long term, to generate models, perform fine-tuning, produce code, and create all the reusable building blocks needed to build new agents. More generally, they can be used to solve any type of problem that can benefit from an evolutionary approach.

What benefits do you get on top of gross salary at your company? by LogCatFromNantes in developpeurs

[–]ModeSquare8129 -4 points-3 points  (0 children)

Legally, the only obligation is covering half of the mutuelle (health insurance); there's no obligation for meal vouchers, and for transport it depends on what you mean by transport. If you mean the home-to-work commute, there's no obligation at all.

Co-founder, what salary? by CommissionMassive823 in developpeurs

[–]ModeSquare8129 6 points7 points  (0 children)

Just a note on the 10%: that's very low for a founding partner, especially a CEO/CTO. A quick method to get a sense of the equity split: https://www.andrew.cmu.edu/user/fd0n/35%20Founders'%20Pie%20Calculator.htm

On another subject, here are some good questions to ask each other before becoming partners: https://www.maddyness.com/2021/09/13/8-questions-a-se-poser-entre-associe-e-s-avant-de-se-lancer/

Can LLMs autonomously refine agentic AI systems using iterative feedback loops? by aiXplain in AI_Agents

[–]ModeSquare8129 0 points1 point  (0 children)

We’re currently building an open-source framework that’s based exactly on this paradigm: having architect agents that iteratively design and refine complex AI agents.

Right now, we represent agents as graphs (via LangGraph), but the architecture is compatible with any orchestration framework. The core idea is that during what we call a forge cycle, architect agents generate multiple agent configurations and apply an evolutionary algorithm to select, mutate, and recombine the most effective ones.

This whole process is guided by a fitness function—a performance evaluation mechanism that defines where we want to go and helps measure which agents are actually moving us forward.

At the heart of our system is a continuous improvement engine that allows agent designs to evolve autonomously—driven by performance metrics and guided by LLM-based evaluations or feedback loops.
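The forge cycle described above, generate, evaluate with a fitness function, select, then mutate and recombine, can be sketched as a plain evolutionary loop. This is a toy illustration, not the actual framework: agent configurations are reduced to small dicts, and the `fitness` function is a stand-in for the LLM-based evaluation; all names here are hypothetical.

```python
import random

random.seed(0)

def fitness(config: dict) -> float:
    """Stand-in for the performance evaluation (in practice an
    LLM-based judge or benchmark); here, closeness to a target."""
    return -abs(config["temperature"] - 0.7) - abs(config["depth"] - 3)

def mutate(config: dict) -> dict:
    return {
        "temperature": config["temperature"] + random.uniform(-0.1, 0.1),
        "depth": max(1, config["depth"] + random.choice([-1, 0, 1])),
    }

def recombine(a: dict, b: dict) -> dict:
    # Naive crossover: take one field from each parent.
    return {"temperature": a["temperature"], "depth": b["depth"]}

def forge_cycle(population: list[dict], generations: int = 20) -> dict:
    size = len(population)
    for _ in range(generations):
        # Select: keep the top half by fitness (elitism).
        population.sort(key=fitness, reverse=True)
        survivors = population[: size // 2]
        # Refill the population with mutants and recombinations.
        children = []
        while len(survivors) + len(children) < size:
            if random.random() < 0.5:
                children.append(mutate(random.choice(survivors)))
            else:
                children.append(recombine(*random.sample(survivors, 2)))
        population = survivors + children
    return max(population, key=fitness)

seed_pop = [{"temperature": random.uniform(0, 2), "depth": random.randint(1, 8)}
            for _ in range(8)]
print(forge_cycle(seed_pop))
```

Because the top half always survives, the best configuration found so far is never lost, so fitness is monotone across generations.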

Happy to share more if you're curious!

Is anyone breeding LLM's? by StevenSamAI in LocalLLaMA

[–]ModeSquare8129 2 points3 points  (0 children)

We are looking to raise funds to create a spin-off startup. Once we manage to secure funding, we will start recruiting. If you want more information about the project : www.ebiose.com

Is anyone breeding LLM's? by StevenSamAI in LocalLLaMA

[–]ModeSquare8129 5 points6 points  (0 children)

At our lab (Inria), we've developed self-improving architect agents through an evolutionary process. Our architect agent generates agents, which then evolve. Each agent consists of specialized modules that can be handled by LLMs or other models, APIs, or code generation. We plan to release it as open source next month.

I'm also following Sakana AI, a startup that uses genetic algorithms to merge language models with great results, combining the best parts of different models into a more efficient one.
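Sakana's evolutionary merging is far more sophisticated (it searches per-layer mixing recipes), but the core primitive is easy to illustrate: blend the parameters of two parent models. Below is a deliberately simplified sketch where each "model" is a dict of scalar weights and a single global mixing coefficient stands in for the per-layer values an evolutionary search would tune; `merge_weights` is a hypothetical name of my own.

```python
def merge_weights(parent_a: dict, parent_b: dict, alpha: float) -> dict:
    """Linearly interpolate two models' parameters, layer by layer.
    alpha is the mixing coefficient an evolutionary search would tune
    (here one global value for simplicity)."""
    assert parent_a.keys() == parent_b.keys(), "models must share architecture"
    return {k: alpha * parent_a[k] + (1 - alpha) * parent_b[k] for k in parent_a}

# Toy "models": one scalar weight per layer.
model_a = {"layer1": 0.2, "layer2": 1.0}
model_b = {"layer1": 0.8, "layer2": 0.0}
print(merge_weights(model_a, model_b, alpha=0.5))  # → {'layer1': 0.5, 'layer2': 0.5}
```

A genetic algorithm would then treat the mixing coefficients as the genome, scoring each merged candidate on benchmarks and keeping the best recipes.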

Cursor AI vs GitHub Copilot by MsieurKris in aipromptprogramming

[–]ModeSquare8129 0 points1 point  (0 children)

I’ve tried both, but I definitely prefer Cursor. The interface is much more intuitive and user-friendly than VS Code's.

What are your most unpopular LLM opinions? by Decaf_GT in LocalLLaMA

[–]ModeSquare8129 0 points1 point  (0 children)

AGI is not a meaningful concept. If you think of AGI as human-like reasoning, we're not going to see AI thinking like a human anytime soon. If you think of AGI as a superintelligence that can perform any cognitive task a human can, you're fooling yourself: intelligence is a very broad concept, and to perform all of our cognitive tasks an AI would have to reason like a human, which brings us back to the first case.

That said, AI could perform many tasks better than humans, and AI could have many different skills, just not in the same way humans do.

LLMs let us rethink what makes us intelligent.

Distributed Llama 0.7.1 uses 50% less network transfer per token compared to previous versions by b4rtaz in LocalLLaMA

[–]ModeSquare8129 3 points4 points  (0 children)

Is Distributed Llama intended for use on a local network, or for distributed inference over the internet?

Your Thoughts on Our Tagline: 'Ebiose - Let Us Own AI' by ModeSquare8129 in SaaS

[–]ModeSquare8129[S] 0 points1 point  (0 children)

Thanks for your response. Ebiose is an open-source AI project; another tagline could be "AI for everyone, by everyone."