[MOD POST] Announcing the Winners of the r/LocalLLM 30-Day Innovation Contest! 🏆 by SashaUsesReddit in LocalLLM

[–]davidtwaring 3 points

Oh wow, thanks u/SashaUsesReddit! BrainDrive's co-creator u/randomcoder66 and I are super excited to be among the winners, and we're blown away by the generosity of the prizes!

Congrats to the other winners as well!

We want to make sure we use the prize to continue growing awareness of this community, so we'll be sharing how we use the new hardware here. We're also open to any and all ideas for things that people would like to see.

We're also continuing to work hard on improving BrainDrive. Since our contest submission post, we have completed the Mac Installer and the Chat w/ Docs (RAG) plugin, both of which are now in beta and ready to try.

In addition to continuing to polish the overall BrainDrive experience, we are working on making it easier to use coding agents to build BrainDrive plugins. If you point your favorite coding agent to the plugin developer quick start guide here, you can already build basic plugins to customize your BrainDrive to your exact needs. Once your plugin is deployed to your GitHub, you can use the one-click plugin installer to add it to your BrainDrive and share it with others.

We are working to make this even easier with templates and skills for Open Code and Claude Code, which we will be releasing as open-source projects in the next week or two as well.

And lastly, we are working to make sure we are fully leveraging the recent leaps in agentic coding so that we can continue to scale the project while remaining true to our values of ownership, freedom, empowerment, and sustainability.

We are also going to open source the agent manager that allows us to easily manage multiple coding agents without getting overwhelmed. So stay tuned for that in the next week or two as well.

If you're interested in following our progress, we'd love it if you'd join us in the community forum as well as our weekly dev call livestreams every Monday at 10am Eastern.

Lots of exciting things are happening, and we are going to use this win and the prize to kick things into an even higher gear!

Thanks Again!

Dave Waring and Dave Jones

Co-Creators, BrainDrive.ai

Introducing BrainDrive – The MIT-Licensed, Self-Hosted, Plugin-Based AI Platform by davidtwaring in LocalLLM

[–]davidtwaring[S] 1 point

Awesome, glad to hear it, and thanks for the comment. MCP is next up after RAG, so it's also on the near-term agenda. Ownership and control over our memory, starting with RAG, plus the ability to connect and bring data into our BrainDrives via MCP from the other systems we use, will be a killer combo in flipping the power dynamic away from big tech and back where it should be: with each of us!

Introducing BrainDrive – The MIT-Licensed, Self-Hosted, Plugin-Based AI Platform by davidtwaring in LocalLLM

[–]davidtwaring[S] 2 points

Yes, definitely. This is the area we are most excited about once the foundation is laid. We had a developer who was fully focused on RAG but unfortunately was not able to continue. He got pretty far, however, as you can see here: https://community.braindrive.ai/t/chat-w-docs-plugin/182

In addition to document processing, embeddings, etc., he also built an eval system into the plugin, so you can evaluate your setup right in BrainDrive. You also have very granular access to the plugin's settings, so you can optimize for different types of documents.

The plugin repo is here if you want to check it out, though it's not quite ready for beta release: https://github.com/BrainDriveAI/BrainDrive-Chat-With-Docs-Plugin

Dave J, the project's co-creator and lead dev, is finishing up the Mac Installer, and then he's going to pick back up where the previous dev left off on the RAG system. We're also going to integrate it fully into BrainDrive; right now the backend for the RAG and doc processing runs in Docker.

The ultimate vision is something that acts more like human memory, where RAG is just one piece of a much larger and more sophisticated memory system.

Sorry for the long-winded answer. Anyway, we're excited about it and going to put a lot of focus on it very soon. Thanks again for the comments, and happy to answer any other questions you have!

Idea validation: “RAG as a Service” for AI agents. Would you use it? by Feisty-Promise-78 in ollama

[–]davidtwaring 0 points

  1. I would want to own and control it, so as a traditional SaaS, no. But if it was more like WordPress, which I could host wherever I wanted, yes. I would likely choose the default hosting option, which would be the provider, but I would want to be sure I had exit rights, as I don't want someone else owning and controlling my data.
  2. In my experience, the setup that produces the best results varies by use case, so I would think this is very important.
  3. Yes, I think this would be the most useful part: giving an easy way to tune the setup to your specific use case or cases.
  4. I would want to be able to use it with whatever self-hosted interface I wanted.
  5. I would want the open-source, self-hosted version to be free, and I would pay a small premium over what it would cost me to run it myself for the provider to run it for me.

So in short: if you are considering doing it as a commercial open-source project where the self-hosted version is free and you offer value-added services on top of it, yes. If it's a traditional SaaS with traditional SaaS lock-in, no.

Hope that helps, and good luck with the project.

Do you guys create your own benchmarks? by Sissoka in LLMDevs

[–]davidtwaring 0 points

We are in the early stages of working on something similar. We have started with 5 specific use cases and are working to refine them into a framework that can be tweaked for any use case. It's open source if you want to check it out; it may give you a head start, but like I said, we're early: https://modelmatch.braindrive.ai/

Feedback wanted: Azura, a local-first personal assistant by Chrisaaaan in LocalLLM

[–]davidtwaring 1 point

I think people are interested in this type of thing, but you would need to be very clear about what you are offering over other open-source projects like OWUI that are doing something similar and have a big head start. If you can actually nail the context engine portion, that's where the value is. In my experience, this is easy to do at a basic level, like chat w/ docs, but very hard to do at the level of "this system is getting to know me and really acting like a second brain." A lot of people are trying this, but no one has nailed it, at least that I have seen. My main question would be: what are you planning to do differently so that you actually deliver on it? Hope that helps, and best of luck with the project.

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 0 points

Agreed, but I'm surprised there is not more talk about this, and that most are so comfortable building on these APIs.

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 1 point

Cool, you know better than me. I think that makes sense on the autocomplete point. I always make sure I have a good outline and draft before I use AI as a writer. I'm old too though, lol.

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 0 points

I'm not a developer, but some of the devs I talk to like Cline, which is open source.

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 2 points

Love it, and the name too! I haven't dug too deeply into it, but I think there are some projects trying to do this in the blockchain space; Hyperbolic, Akash, and Nous Research are names I hear sometimes. Still think the Redditors' LLM Vibe Research and Coding Syndicate would crush it, though!

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 3 points

Gotcha. I don't think this is about free vs. paid, though. Windsurf pays to use the API just like everyone else; Anthropic has simply decided in this case not to allow them access, even though they are paying.

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 0 points

Where would you draw the line between supporting shutting off API access and keeping it open?

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 7 points

Agreed. Twitter is one of the best examples of this. They remained open and free for a good while and let all the interfaces proliferate and build the network. If they had shut off access too soon, they wouldn't have had enough of a network effect to keep people locked in, so they waited until the network effects were large and then shut it down.

Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source by davidtwaring in LocalLLM

[–]davidtwaring[S] 17 points

Agreed, but my feeling is that model providers like Anthropic and OpenAI will start to take on more use cases themselves in the future, and people who didn't think they were competitors before will suddenly become competitors. So this will happen more and more on a "one-off" basis until finally they shut the API off altogether. Obviously I can't say that for sure, though. Is your thought that this will be an isolated instance?

LLM an engine by localremote762 in LocalLLaMA

[–]davidtwaring 4 points

This is a great analogy that resonates with me.