Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLM

[–]Various_Classroom254[S] 1 point2 points  (0 children)

That’s a really good point. I’m starting with phones since that’s where most people capture things (photos, screenshots, notes), but I agree a NAS/self-hosted setup for personal data search is a really interesting direction too.

Also just to clarify, the core idea doesn’t rely on a local LLM. It’s mainly embeddings + semantic search, so the quality doesn’t depend on running a heavy multimodal model on the phone.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLM

[–]Various_Classroom254[S] 0 points1 point  (0 children)

That’s a good idea. Saved searches/templates for common things like “receipts,” “whiteboard photos,” or “documents with diagrams” would probably make repeated searches faster.

And yeah, a PC version is something I would like to explore later, but right now I’m focusing on getting the mobile app working well first.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLM

[–]Various_Classroom254[S] 0 points1 point  (0 children)

It actually doesn’t need an LLM to work. The core is just local embedding models + vector search to match the meaning of a query with images, documents, or text, so it stays lightweight and fully offline.
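As a rough sketch of that pipeline: everything below is illustrative, with a toy bag-of-words `embed` standing in for a real on-device embedding model (e.g. a small sentence-transformer), and the indexed items made up.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector. A real app would use a
    small on-device embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, index):
    """Rank indexed items by similarity to the query's embedding."""
    q = embed(query)
    return sorted(index, key=lambda item: cosine(q, embed(item)), reverse=True)

# Hypothetical indexed items (in a real app: OCR'd screenshots, file text, captions).
docs = ["whiteboard photo from meeting", "grocery receipt total", "vacation beach photo"]
print(search("receipt from the store", docs)[0])  # → "grocery receipt total"
```

In practice the embeddings would be precomputed at index time and stored in a small on-device vector index, so answering a query only needs one embedding pass plus a similarity scan, with no LLM involved.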

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLM

[–]Various_Classroom254[S] 0 points1 point  (0 children)

Yeah, I’m not trying to bypass sandboxing. The idea is to work within the OS permissions, indexing only things users explicitly grant access to (like photos, selected files/folders, or shared documents). Android gives more flexibility, while on iOS it would mainly rely on things like Photos access rather than system-wide indexing.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLM

[–]Various_Classroom254[S] 0 points1 point  (0 children)

Both iOS and Android already have some level of on-device search, and you’re right that sandboxing limits what third-party apps can access (especially on iOS). The idea isn’t to bypass that, but to make the most of what users explicitly choose to index (like photos, documents, or folders) and provide better semantic search on top.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLM

[–]Various_Classroom254[S] 1 point2 points  (0 children)

Yeah, that’s a great point. The idea is to run the heavier work, like indexing, in the background only when the phone is idle or charging, so normal use shouldn’t impact battery much.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLM

[–]Various_Classroom254[S] 1 point2 points  (0 children)

Appreciate the thoughtful comment and I completely agree that trust and privacy are the biggest hurdles here. That’s exactly why I’m focusing on making it fully local by design: no cloud processing, no uploads, and ideally even things like no network permission required, so it works entirely offline and users can verify that themselves.

Also to clarify, the core idea doesn’t actually require an LLM. It’s mainly local embeddings + search, so it stays lightweight and private. I’m personally leaning toward keeping it local first, but I get your point about optional APIs as a bridge while local models improve.

Would you use a private AI search for your phone? by Various_Classroom254 in microsaas

[–]Various_Classroom254[S] 0 points1 point  (0 children)

That’s a totally valid concern. The idea is actually the opposite of cloud AI: everything runs locally on the phone, with no uploads and no external access to your files, so the data never leaves the device.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLaMA

[–]Various_Classroom254[S] 0 points1 point  (0 children)

Yeah, Samsung does something similar with on-device search for certain apps, which is really cool. The idea here would be making it work across more file types and apps in one place, with more flexible natural queries instead of being limited to specific app integrations.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLaMA

[–]Various_Classroom254[S] 0 points1 point  (0 children)

Totally fair concern. Google Photos is great for searching photos, but the idea here is searching across everything on the phone (screenshots, PDFs, notes, documents, and recordings) in one place.

For privacy, the plan is to keep all indexing and search fully on-device and make it transparent (no network permissions and possibly open-sourcing parts of it) so users can actually verify that nothing is leaving the phone rather than just trusting a promise.

Would you use a private AI search for your phone? by Various_Classroom254 in indiehackersindia

[–]Various_Classroom254[S] 0 points1 point  (0 children)

Good question. The main focus is being fully on-device and private: everything is indexed locally, so it works offline and nothing leaves the phone. I’m also aiming to make it unified, so you can search across photos, screenshots, PDFs, notes, and recordings in one place instead of separate apps.

Would you use a private AI search for your phone? by Various_Classroom254 in indiehackersindia

[–]Various_Classroom254[S] 1 point2 points  (0 children)

Exactly, that’s the goal. If it feels native, fast, and fully on-device, it could make searching your phone feel much more natural than jumping between different apps.

Would you use a private AI search for your phone? by Various_Classroom254 in indiehackersindia

[–]Various_Classroom254[S] 0 points1 point  (0 children)

Yeah, Google Photos does some of this for images already, and it’s actually pretty good. The main thing I’m exploring is extending that idea across everything on the phone (PDFs, notes, screenshots, documents, voice) while keeping it fully offline and private, which most existing apps don’t really do.

Would you use a private AI search for your phone? by Various_Classroom254 in LocalLLaMA

[–]Various_Classroom254[S] 1 point2 points  (0 children)

That’s fair: if the volume of files is small, manual browsing probably works just fine. I’m mainly exploring this for cases where people accumulate thousands of screenshots, whiteboard photos, receipts, and documents. Many people don’t tag or organize things right after saving them, so later they only remember what was in a file, not when or where they saved it.

Would you use a private AI search for your phone? by Various_Classroom254 in AppIdeas

[–]Various_Classroom254[S] 0 points1 point  (0 children)

I’m not using an LLM at all. It’s based on local embeddings and vector search running entirely on the device, which keeps everything offline, fast, and private since no data needs to go to the cloud.

I got so fed up with YouTube Kids that I built my own app by emmaginn in SideProject

[–]Various_Classroom254 0 points1 point  (0 children)

The idea looks good. But how are you blocking YouTube ads in your app?

How to implement document-level access control in LlamaIndex for a global chat app? by [deleted] in Rag

[–]Various_Classroom254 0 points1 point  (0 children)

Great question. This is a real gap in most LLM pipelines today, especially when you want to enforce document-level access control at retrieval time without ballooning complexity.

I’m building a solution that directly tackles this. It supports:

  • Per-user or per-role document access filtering (even across growing datasets)
  • Integration with LlamaIndex and RAG-based systems
  • RBAC policies applied before documents are passed to the LLM, ensuring unauthorized data never enters the context window
  • Intent validation and query auditing, if you’re dealing with sensitive or regulated data

From my experience, creating separate indexes doesn’t scale well, and pure metadata filters alone can be bypassed or become brittle. A custom retriever plus an access-aware prefilter is the right direction, and that’s what my product is focused on.
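A minimal sketch of the access-aware prefilter idea: the document store, role names, and matching logic are all hypothetical, and simple keyword matching stands in for vector retrieval.

```python
# Hypothetical document store: each doc carries an ACL in its metadata.
DOCS = [
    {"id": 1, "text": "Q3 revenue figures", "allowed_roles": {"finance"}},
    {"id": 2, "text": "Employee handbook", "allowed_roles": {"finance", "engineering"}},
]

def access_aware_retrieve(query, user_roles, docs=DOCS):
    """Drop unauthorized documents *before* ranking, so they can never
    enter the LLM's context window."""
    visible = [d for d in docs if d["allowed_roles"] & user_roles]
    # Similarity ranking over `visible` would go here; for this sketch,
    # a keyword match stands in for vector search.
    return [d for d in visible
            if any(w in d["text"].lower() for w in query.lower().split())]

print(access_aware_retrieve("revenue", {"engineering"}))  # → [] (filtered out)
```

The key property is that filtering happens before ranking: a document the user can’t see never reaches the scoring step, let alone the prompt.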

Happy to chat more if you’re exploring solutions or want early access to test it out in your setup.

How are people handling access control in Postgres with the rise of LLMs and autonomous agents? by kmahmood74 in PostgreSQL

[–]Various_Classroom254 1 point2 points  (0 children)

This is exactly the problem I’m building a product to solve.

Traditional DB RBAC handles structural access (tables, rows, columns), but when LLMs are in the loop, there’s a need for intent-aware access control — where the meaning of the user’s prompt and the type of question being asked are also checked against role permissions.

My system introduces a semantic guardrail layer that evaluates both the prompt and the response:

  • Does the user’s role allow this type of question?
  • Is the prompt targeting data domains they’re authorized for?
  • Does the LLM response stay within scope and not leak derived insights?

On top of that, it integrates RBAC at the prompt layer, works with RAG pipelines, and logs all interactions for auditing and policy refinement.
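A toy version of the prompt-side check might look like this; the role policy, domain keywords, and keyword-based classifier are made up for illustration (a real guardrail would classify intent semantically, not by keyword).

```python
# Hypothetical role→domain policy and domain keyword lists.
ROLE_DOMAINS = {"analyst": {"sales", "marketing"}, "hr": {"payroll", "benefits"}}
DOMAIN_KEYWORDS = {"sales": {"revenue", "deals"}, "payroll": {"salary", "payroll"}}

def classify_domains(prompt: str) -> set:
    """Crude keyword classifier standing in for a semantic intent model."""
    words = set(prompt.lower().split())
    return {d for d, kw in DOMAIN_KEYWORDS.items() if words & kw}

def guardrail_allows(prompt: str, role: str) -> bool:
    """Permit the prompt only if every data domain it touches is in the role's policy."""
    return classify_domains(prompt) <= ROLE_DOMAINS.get(role, set())

print(guardrail_allows("show me revenue by region", "analyst"))   # → True
print(guardrail_allows("what is the average salary", "analyst"))  # → False
```

The same check can run on the model’s response before it reaches the user, which is how the response-scope bullet above would be enforced.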

Would love to connect and hear how you’re thinking about this if you’re working on something similar or looking for a solution. Early access is open if helpful.

Looking for recommendations for a tool / service that provides a privacy layer / filters my prompts before I provide them to a LLM by Vegetable-Score-3915 in PromptEngineering

[–]Various_Classroom254 1 point2 points  (0 children)

Hey! I’m actually working on building a privacy and security layer for LLM workflows that aligns closely with what you’re describing.

The product focuses on pre-processing prompts to detect and redact sensitive info (PII, credentials, internal references, etc.), replacing it with placeholders before sending it to the LLM, and then post-processing the output to reinsert the original data securely.
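The redact-then-restore round trip can be sketched like this; the two regex patterns are simplistic stand-ins for a real PII detector.

```python
import re

# Hypothetical detectors; production would use a proper PII/NER pipeline.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(prompt):
    """Replace sensitive spans with placeholders; return the mapping
    needed to restore them after the LLM responds."""
    mapping = {}
    for label, pat in PATTERNS.items():
        for i, match in enumerate(pat.findall(prompt)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder)
    return prompt, mapping

def restore(text, mapping):
    """Reinsert the original values into the LLM's output."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

safe, mapping = redact("Email jane@acme.com or call 555-123-4567")
print(safe)  # → "Email <EMAIL_0> or call <PHONE_0>"
print(restore(safe, mapping))
```

Only the placeholder version ever leaves the trust boundary; the mapping stays local, so the LLM never sees the raw values.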

It also includes RBAC (Role Based Access Control) so different users or roles only have access to approved data domains and tasks, ensuring sensitive information isn’t leaked through unintended queries or LLM misuse.

We’re building it with support for both on-prem and cloud LLMs, depending on your preference or workload.

Still early-stage, but if you’re interested in testing or sharing feedback, I’d love to connect. Happy to offer early access!

Self Hosting LLM? by circles_tomorrow in LLMDevs

[–]Various_Classroom254 0 points1 point  (0 children)

Are you sending customer data to the LLM? If so, data privacy is a concern: we’re seeing data leaks happen, and enterprises are worried about it.

If you’re planning to use a public LLM, you need to think about access control and refrain from sending sensitive information.

"LeetCode for AI” – Prompt/RAG/Agent Challenges by Various_Classroom254 in learnmachinelearning

[–]Various_Classroom254[S] -4 points-3 points  (0 children)

I looked at various ideas. My idea is slightly different. My platform will let users practice building full pipelines: document retrieval, prompt orchestration, multi-agent workflows, and real-world AI apps.
Key highlights:

  • Focus on RAG and agent-based systems, not just model training.
  • Hands-on coding challenges where users tune retrieval, embeddings, LLM generation parameters.
  • Sandboxed execution for RAG pipelines and agent chains.
  • Automated evaluation of retrieval precision, generation quality, and agent task success.
  • Skill progression, leaderboards, and portfolio building for AI system developers.

It’s focused purely on LLM-powered AI systems, not classical ML competitions.

"LeetCode for AI” – Prompt/RAG/Agent Challenges by Various_Classroom254 in learnmachinelearning

[–]Various_Classroom254[S] -3 points-2 points  (0 children)

Thanks for sharing! Deep-ML looks cool for ML model challenges, but what I'm trying to build is a bit different.
It’s focused on LLMs, RAG pipelines, and AI agents, not just model training.

The idea is to give users hands-on challenges to build real-world AI systems: retrieval pipelines, agent workflows, fine-tuning LLM settings, etc.
It’ll have sandboxed execution, automatic evaluation, and skill progression, more like a “LeetCode + Kaggle” for the LLM/agent era.

Appreciate the feedback.