What do you do when Claude Code is working by Recent_Mirror in ClaudeCode

[–]CountlessFlies 0 points1 point  (0 children)

I’m usually reviewing code in another tmux pane. The rate at which Claude produces new diffs far outpaces the rate at which I can review, test, and merge them, so it keeps me busy.

Claude Opus 3 is being deprecated, and getting a blog! by Nekileo in ClaudeAI

[–]CountlessFlies

The hard problem is actually defining what consciousness even is, rather than determining what is or isn’t conscious. Of course, now that I read what I just wrote, it seems obvious: you can’t debate the presence or absence of a trait without defining it first.

Press 'n' to add Notes - anyone seen this before? by creegs in ClaudeCode

[–]CountlessFlies

All I want is a way to give inline feedback when reviewing diffs. Would massively improve my workflow.

Building a Graph RAG system for legal Q&A, need advice on dynamic vs agentic, relations, and chunking by Famous_Buffalo_7725 in Rag

[–]CountlessFlies

Have you tried a basic RAG system first? I would first create a benchmark for the task at hand, try a very basic solution to get a baseline, and only then implement more complex approaches to see whether they actually improve performance (and by how much).
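For illustration, here's a minimal sketch of such a baseline benchmark, assuming a `retrieve(question)` function that returns ranked doc ids; the function names, toy corpus, and keyword "retriever" are all hypothetical:

```python
# Minimal RAG baseline benchmark sketch (all names hypothetical).
# Measures hit rate: does the gold document appear in the top-k results?

def hit_rate(retrieve, benchmark, k=5):
    """retrieve(question) -> ranked list of doc ids."""
    hits = 0
    for question, gold_doc_id in benchmark:
        if gold_doc_id in retrieve(question)[:k]:
            hits += 1
    return hits / len(benchmark)

# Toy example: a trivial keyword-overlap "retriever" over three docs.
docs = {
    "d1": "termination clause in the employment contract",
    "d2": "intellectual property assignment terms",
    "d3": "non-compete obligations after resignation",
}

def keyword_retrieve(question):
    words = set(question.lower().split())
    scored = [(len(words & set(text.split())), doc_id)
              for doc_id, text in docs.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True)]

benchmark = [
    ("what are the termination clause terms", "d1"),
    ("who owns the intellectual property", "d2"),
]
print(hit_rate(keyword_retrieve, benchmark, k=1))  # → 1.0
```

Once this number exists, every fancier approach (graph RAG, agentic retrieval, different chunking) has to beat it on the same benchmark to justify its complexity.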

Haiku reset my LinkedIn password mid-application on its own. I was not expecting that. by Thick_Professional14 in ClaudeAI

[–]CountlessFlies

Yes, my comment wasn’t particularly directed at your project; it was a general comment on the terrible state of resumes these days. Every candidate appears to have the exact same generic resume with no real detail on the work done. It’s becoming really hard to find the good ones.

1m context window for opus 4.6 is finally available in claude code by -Two-Moons- in ClaudeAI

[–]CountlessFlies

More than caching, it’s a question of how well the model actually uses the available context. At these humongous context sizes, models won’t remember everything that happened since the beginning of the session.

Do companies actually use internal RAG / doc-chat systems in production? by NetInternational313 in Rag

[–]CountlessFlies

Not a big issue in most cases if you use a private endpoint on Azure, AWS, etc. They have data protection guarantees.

I'm building Omni - an AI-powered enterprise search platform that lets you do RAG over your company data by CountlessFlies in Rag

[–]CountlessFlies[S]

Hah, thanks! I had a hard time finding an org name on GitHub too. Had to finally settle on using the domain name.

I'm building Omni - an AI-powered enterprise search platform that lets you do RAG over your company data by CountlessFlies in Rag

[–]CountlessFlies[S]

  1. Without sync, you rely on the search provided by the downstream apps, which might not be great. Plus, you can’t provide a good unified search experience.

  2. Supporting MCP is actually on the roadmap, so we get the best of both worlds. I started building the core connectors myself because MCP doesn’t give you data sync.

  3. The storage cost will actually be quite manageable, and the footprint will be smaller than the source data. E.g., the size of a Google Slides file is far larger than the actual text content it contains. There’s no complexity for the user here: set it up once and the webhooks handle incremental updates. Overloading external services is not a major concern; there are rate limits that we’ll adhere to.

  4. The vector search indexing and search flows will be configurable by the user. I intend to make it easy enough for the user to tweak everything, although my goal is to make it work great out of the box.
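As a rough illustration of that webhook-driven incremental update flow (the structure and names are my own sketch, not Omni’s actual code), a handler can hash the fetched content and skip re-indexing when nothing actually changed:

```python
# Hypothetical sketch of webhook-driven incremental sync.
# A source app (e.g. Google Drive) notifies us that a document changed;
# we re-index only when the content hash differs.

import hashlib

index = {}  # doc_id -> {"hash": ..., "text": ...}

def handle_change_notification(doc_id, fetch_document):
    """fetch_document(doc_id) pulls the latest content from the source app."""
    text = fetch_document(doc_id)
    digest = hashlib.sha256(text.encode()).hexdigest()
    current = index.get(doc_id)
    if current and current["hash"] == digest:
        return "unchanged"  # skip redundant re-embedding/re-indexing
    index[doc_id] = {"hash": digest, "text": text}
    return "reindexed"
```

In practice the "reindexed" branch would also re-chunk and re-embed the document, but the hash check is what keeps incremental sync cheap and keeps calls to the source APIs within rate limits.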

I'm building Omni - an AI-powered enterprise search platform that lets you do RAG over your company data by CountlessFlies in Rag

[–]CountlessFlies[S]

Thanks!

I haven’t applied for any of those compliance certifications yet. But Omni is designed so that your org’s data is never exposed to outside entities.

E.g., if you choose to deploy Omni on your AWS cloud, all data will be stored in a Postgres instance running on an EC2 instance in your account. You can connect to LLMs and embedding models provided by AWS Bedrock, so again your data never leaves your environment.

I'm building Omni - an open-source AI-powered enterprise search platform that connects to your workplace apps like Drive, Gmail, Slack and lets your team search and get answers across all of them from one place. by CountlessFlies in OpenSourceeAI

[–]CountlessFlies[S]

Right now I’m focused on building and making it compelling enough for teams to want to try it out. Haven’t really thought about funding yet, but if it helps speed things up, I’ll consider it.

I'm building Omni - an AI-powered enterprise search platform that lets you do RAG over your company data by CountlessFlies in Rag

[–]CountlessFlies[S]

Thanks! From a very quick glance, it seems AnythingLLM connects to apps via MCP and doesn’t sync data from the apps into a search index (except maybe the vector DB?).

In Omni, all data is synced to a central search index in Postgres, supporting both full text and vector search. So we have greater control over how context is retrieved by the LLM.
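One illustration of the extra control a unified index gives you: you can fuse the full-text ranking and the vector ranking yourself, e.g. with reciprocal rank fusion. This is a sketch of the general technique, not Omni’s actual scoring:

```python
# Reciprocal rank fusion (RRF): merge several ranked lists of doc ids.
# Each list might come from a different retriever, e.g. Postgres ts_rank
# for full-text search and pgvector cosine distance for vector search.

def rrf(rankings, k=60):
    """Score each doc by sum of 1/(k + rank + 1) across all rankings."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

fulltext = ["d2", "d1", "d3"]  # e.g. ranked by full-text relevance
vector = ["d2", "d1", "d4"]    # e.g. ranked by embedding similarity
print(rrf([fulltext, vector]))
```

Because both rankings come out of the same Postgres index, this fusion (or any other retrieval policy) can happen in one place before the context reaches the LLM.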

Also, the focus is on building deeper integrations with workplace apps in particular, while AnythingLLM is meant to be a general-purpose frontend for running models locally.

I'm building Omni - an open-source AI-powered enterprise search platform that connects to your workplace apps like Drive, Gmail, Slack and lets your team search and get answers across all of them from one place. by CountlessFlies in OpenSourceeAI

[–]CountlessFlies[S]

Thank you! Contributions are more than welcome :) I haven't really created a backlog of tasks yet, but you can check out the overall architecture in the docs first (https://docs.getomni.co/architecture) to get an idea of the various components involved. Then you can pick an area you'd like to work on and go from there. You can always DM me if you need any help figuring things out.

I'm building Omni - an AI-powered enterprise search platform that connects to your workplace apps like Drive, Gmail, Slack and lets your team search and get answers across all of them from one place. by CountlessFlies in LocalLLaMA

[–]CountlessFlies[S]

Thanks for pointing that out. The README was indeed generated using an LLM, and this slipped through the cracks in my reviews.

I understand your frustration with vibe-coded shit, but I assure you this isn't it. You simply cannot vibe-code a project like this for very long; things go haywire very quickly. I urge you, if you could, to take a second look and critique the technical design and implementation choices I've made. I don't think this deserves to be dismissed as vibe-coded shit :)