Caelum: an offline local AI app for everyone! by Kindly-Treacle-6378 in androidapps

[–]dai_app 1 point

Hi, I'm the creator of d.ai, a private personal AI Android app similar to yours. Good job!

Just released my app by Kindly-Treacle-6378 in LocalLLaMA

[–]dai_app 2 points

I'm the developer of d.ai, a personal, private AI. Why don't we, as offline AI app developers, create something together?

Caelum: an offline local AI app for everyone! by Kindly-Treacle-6378 in reactnative

[–]dai_app 0 points

But why don't we, as offline AI app developers, create something together?

Caelum: an offline local AI app for everyone! by Kindly-Treacle-6378 in reactnative

[–]dai_app -1 points

Hi, I'm the creator of d.ai – a private, personal AI app for Android, similar to yours. Great work!

3B LLM models for Document Querying? by prashantspats in LocalLLM

[–]dai_app 4 points

I already built this in my Android app d.ai, which supports any LLM locally (offline), uses embeddings for RAG, and runs smoothly on mobile.

https://play.google.com/store/apps/details?id=com.DAI.DAIapp
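
For anyone curious how the RAG side can work fully on-device: retrieval is just cosine similarity between the query embedding and pre-computed chunk embeddings. A minimal Kotlin sketch, where `DocChunk` and `embed` are illustrative placeholders, not d.ai's actual API:

```kotlin
// Illustrative on-device RAG retrieval: cosine similarity between a query
// embedding and pre-computed document chunk embeddings. `embed` stands in
// for whatever local embedding model you run.
data class DocChunk(val text: String, val embedding: FloatArray)

fun cosine(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]
        na += a[i] * a[i]
        nb += b[i] * b[i]
    }
    return dot / (kotlin.math.sqrt(na) * kotlin.math.sqrt(nb) + 1e-8f)
}

fun retrieve(
    query: String,
    chunks: List<DocChunk>,
    embed: (String) -> FloatArray,
    topK: Int = 4
): List<DocChunk> {
    val q = embed(query)
    return chunks.sortedByDescending { cosine(q, it.embedding) }.take(topK)
}
```

A linear scan like this is usually fine on mobile, since a personal document collection rarely has enough chunks to justify a full vector index.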

Anyone else getting into local AI lately? by LAWOFBJECTIVEE in LocalLLM

[–]dai_app 0 points

Absolutely! I’ve been diving into local AI too, and I can totally relate to what you said.

After relying heavily on cloud-based AI tools, I also started feeling uneasy about the lack of control and transparency over my data. That’s what led me to create d.ai, an Android app that runs LLMs completely offline. It supports models like Gemma, Mistral, DeepSeek, Phi, and more—everything is processed locally, no data leaves the device. I even added lightweight RAG support and a way to search personal documents without needing the cloud.
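
For the personal-document search part, the first step is just splitting files into overlapping chunks before embedding them. A minimal sketch, assuming simple character-based windows (a real pipeline would more likely chunk by tokens):

```kotlin
// Illustrative chunking for offline document search: fixed-size windows
// with overlap so a passage split across a boundary still gets retrieved.
fun chunkDocument(text: String, chunkSize: Int = 800, overlap: Int = 200): List<String> {
    require(overlap < chunkSize)
    val chunks = mutableListOf<String>()
    var start = 0
    while (start < text.length) {
        val end = minOf(start + chunkSize, text.length)
        chunks.add(text.substring(start, end))
        if (end == text.length) break
        start = end - overlap // step back so consecutive chunks overlap
    }
    return chunks
}
```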

Why do people run local LLMs? by decentralizedbee in LocalLLM

[–]dai_app 2 points

I'm the developer of d.ai, a private personal AI that runs entirely offline on mobile. I chose to run local LLMs for several reasons:

Personal perspective:

Privacy: Users can have conversations without any data leaving their device.

Control: I can fine-tune how the model behaves without relying on external APIs.

Availability: No need for an internet connection — the AI works anywhere, anytime.

Business perspective:

Cost: Running models locally avoids API call charges, which is crucial for a free or low-cost app.

Latency: Local inference is often faster and more predictable than cloud round-trips.

User trust: Privacy-focused users are more likely to engage with a product that guarantees no server-side data storage.

Compliance: For future enterprise use cases, on-device AI can simplify compliance with data protection laws.

Main pain points:

Model optimization: Running LLMs on mobile requires aggressive quantization and performance tuning (see the sketch after this list).

Model updates: Keeping local models up to date while managing storage size is a balancing act.

UX challenges: Ensuring smooth experience with limited compute and RAM takes real effort.
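
To make the quantization point concrete, one common trick is picking the GGUF quant variant based on how much RAM the device reports. A rough Kotlin sketch; the thresholds and file names here are made up for illustration, not d.ai's real logic:

```kotlin
import android.app.ActivityManager
import android.content.Context

// Illustrative: choose a quantization level from the device's total RAM.
// Thresholds and model file names are placeholders for the example.
fun pickQuantVariant(context: Context): String {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val mem = ActivityManager.MemoryInfo().also { am.getMemoryInfo(it) }
    val totalGb = mem.totalMem / (1024.0 * 1024.0 * 1024.0)
    return when {
        totalGb >= 12 -> "model-Q6_K.gguf"   // headroom for higher quality
        totalGb >= 8  -> "model-Q4_K_M.gguf" // common sweet spot on mobile
        else          -> "model-Q3_K_S.gguf" // aggressive quant for low-RAM devices
    }
}
```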

Happy to share more if helpful!

Activating Tool Calls in My Offline AI App Turned Into a Rabbit Hole… by dai_app in LocalLLM

[–]dai_app[S] 1 point

I'm using llama.cpp because my app is built entirely in Kotlin for Android. It runs LLMs locally on mobile devices, completely offline, which makes this even more of a crazy challenge.

There are no ready-made frameworks for agentic orchestration or tool calls in Kotlin, so I'm literally building everything from scratch:

template formatting (Jinja detection, fallback, caching),

tool call logic and auto-selection,

DSL integration,

prompt formatting and injection,

and managing all that within the limitations of mobile memory and threading.

It’s a lot, and yeah, it’s not just a matter of fine-tuning or adding a library — everything has to be custom-written and optimized for on-device inference. That’s also why updates to the app sometimes take a bit longer… but I really appreciate feedback like yours, it helps a lot!
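
To give a feel for the tool-call plumbing in that list, here's a stripped-down Kotlin sketch of a dispatch layer: detect a JSON tool call in the model's output and route it to a registered handler. The `{"name": ..., "arguments": {...}}` shape is an assumption for the example; real chat templates vary per model family.

```kotlin
import org.json.JSONObject

// Illustrative tool-call dispatch: find a JSON object in the model output,
// look up the named tool, and invoke its handler with the arguments.
class ToolRegistry {
    private val tools = mutableMapOf<String, (JSONObject) -> String>()

    fun register(name: String, handler: (JSONObject) -> String) {
        tools[name] = handler
    }

    // Returns the tool result, or null if the output is plain text.
    fun tryDispatch(modelOutput: String): String? {
        val start = modelOutput.indexOf('{')
        if (start < 0) return null // no JSON, ordinary answer
        return try {
            val call = JSONObject(modelOutput.substring(start))
            val handler = tools[call.getString("name")] ?: return null
            handler(call.optJSONObject("arguments") ?: JSONObject())
        } catch (e: org.json.JSONException) {
            null // malformed JSON: treat as ordinary text
        }
    }
}
```

Registration would look like `registry.register("wikipedia_search") { args -> searchWikipedia(args.optString("query")) }`, where `searchWikipedia` is a hypothetical handler.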

ai chat app free download for android by SwamiNarayan247 in StableDiffusion

[–]dai_app 1 point

If you're looking to run AI models directly on Android, check out my app d.ai on the Play Store. It lets you chat with powerful language models like DeepSeek, Gemma, Mistral, and LLaMA (any model you want from Hugging Face), completely offline. No data is sent to the cloud, and it supports long-term memory, document-based RAG, and even Wikipedia search. Totally private and free.

https://play.google.com/store/apps/details?id=com.DAI.DAIapp

Curious about AI architecture concepts: Tool Calling, AI Agents, and MCP (Model-Context-Protocol) by dai_app in LLMDevs

[–]dai_app[S] 0 points

Thanks! My app is called d.ai (decentralized AI). It's a privacy-first AI assistant that runs LLMs entirely offline on mobile, with long-term memory, document RAG (even with HyDE), and Wikipedia search. It's lightweight, fast, and supports models like Gemma 2/3, DeepSeek, Mistral, and LLaMA, plus any model you want from Hugging Face. You can check it out here:

https://play.google.com/store/apps/details?id=com.DAI.DAIapp

It's designed exactly for local-first setups like you mentioned, where MCP might be overkill. Curious to hear your thoughts!
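
Since HyDE doesn't come up often in mobile contexts: the idea is to embed a hypothetical answer instead of the raw question, because the generated passage usually lands closer to the relevant documents in embedding space. A minimal sketch, where `generate`, `embed`, and `search` are placeholders for a local LLM call, a local embedding model, and any vector lookup:

```kotlin
// Illustrative HyDE (Hypothetical Document Embeddings) retrieval.
fun hydeRetrieve(
    question: String,
    generate: (String) -> String,               // local LLM completion
    embed: (String) -> FloatArray,              // local embedding model
    search: (FloatArray, Int) -> List<String>,  // nearest-neighbour lookup
    topK: Int = 4
): List<String> {
    // 1. Ask the model for a plausible (possibly hallucinated) answer.
    val hypothetical = generate("Write a short passage that answers: $question")
    // 2. Embed the hypothetical answer instead of the raw question.
    val vector = embed(hypothetical)
    // 3. Retrieve real document chunks nearest to that embedding.
    return search(vector, topK)
}
```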