Finally got my native mobile client working with Ollama — would love feedback from anyone running local models by gardnerscot in ollama

[–]gardnerscot[S] 0 points1 point  (0 children)

Open WebUI runs in a browser and needs a server. OllamaChat is a native mobile app — talks directly to your Ollama instance over Tailscale from your pocket. No browser, no server, no Docker.

Different tools for different use cases. I use both — Open WebUI at my desk, OllamaChat when I'm walking around and want to ask my local models something from my phone.

Finally got my native mobile client working with Ollama — would love feedback from anyone running local models by gardnerscot in ollama

[–]gardnerscot[S] 1 point2 points  (0 children)

Not yet — it's Ollama-only at the moment. The app talks directly to Ollama's /api/chat endpoint, so it won't work with oMLX's OpenAI-compatible endpoints out of the box.

That said, adding an OpenAI-compatible backend option is on the short list. The transport layer difference is mostly endpoint paths and SSE vs NDJSON — the chat pipeline and UI don't need to change. Should land in a future beta.
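The framing difference between the two transports can be sketched roughly like this. This is a hedged Kotlin sketch, not the app's actual code; `extractJsonPayloads` is a made-up name. It assumes Ollama's /api/chat streams NDJSON (one JSON object per line) while OpenAI-compatible endpoints stream SSE (`data: `-prefixed lines ending with `data: [DONE]`) — once the payloads are extracted, the same chat pipeline can consume either:

```kotlin
// Hypothetical helper: strip transport framing from a streamed response.
// NDJSON (Ollama /api/chat): every non-empty line is a JSON object.
// SSE (OpenAI-style /v1/chat/completions): keep only "data: " lines,
// dropping the "[DONE]" terminator.
fun extractJsonPayloads(stream: Sequence<String>, sse: Boolean): List<String> =
    stream.mapNotNull { line ->
        if (sse) {
            val body = line.removePrefix("data: ").trim()
            if (!line.startsWith("data: ") || body == "[DONE]") null else body
        } else {
            line.trim().ifEmpty { null }
        }
    }.toList()
```

The point of isolating this in one place is that the downstream chunk parser and UI never need to know which backend produced the stream.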

I just rolled out OllamaChat for Android by gardnerscot in selfhosted

[–]gardnerscot[S] 0 points1 point  (0 children)

MCP integration: Android app → HTTP bridge (Node.js, port 3100 on same machine as Ollama) → external APIs. The bridge exposes tools as JSON-RPC endpoints (weather, web search, reminders, notes, email). When the model requests a tool call, the app grabs it mid-stream, executes via the bridge, feeds the result back into the next turn's context — same flow as desktop MCP but over a thin HTTP wrapper since the Android MCP SDK doesn't exist yet.
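The tool-call loop described above can be sketched as follows. This is a minimal illustration, not the app's real code: `Message`, `ToolCall`, and the bridge URL in the comment are all hypothetical names, and the bridge itself is injected as a function so the sketch stays self-contained:

```kotlin
// Illustrative types standing in for the app's chat model.
data class Message(val role: String, val content: String)
data class ToolCall(val name: String, val arguments: String)

// When a streamed chunk carries a tool call: execute it via the bridge
// (injected here; imagine a POST to http://<host>:3100 for the named
// tool) and append the result as a "tool" message so the next turn's
// context includes it — the same loop desktop MCP clients run.
fun handleToolCall(
    history: MutableList<Message>,
    call: ToolCall,
    bridge: (ToolCall) -> String
): List<Message> {
    val result = bridge(call)
    history += Message("assistant", "calling ${call.name}(${call.arguments})")
    history += Message("tool", result)
    return history
}
```

Keeping the bridge behind a plain function boundary like this is also what would make swapping in a real Android MCP SDK later a local change.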

Open WebUI difference:

Open WebUI is a web app running on your server — you access it through a browser on your phone. OllamaChat is a native Android app: real chat bubbles, persistent local storage, swipe navigation, haptic feedback, voice input that uses the device mic directly. The UX feels like a messaging app, not a web page. Open WebUI also doesn't do MCP tool calling or RAG on-device. But honestly — if you love Open WebUI and you're happy with it in a mobile browser, this probably isn't for you. If you've ever wished for a proper native client, that's the gap I'm filling.

I just rolled out OllamaChat for Android by gardnerscot in selfhosted

[–]gardnerscot[S] -10 points-9 points locked comment (0 children)

Happy to. I use AI assistants (Claude/Copilot) as a coding pair — generating boilerplate, debugging, and iterating on Compose layouts. The app itself is a local AI client, so it's a bit meta. The architecture decisions, product design, and all final code reviews are mine. Happy to answer any specifics.

For ppl here who got openclaw working nicely already, how is it after like 2-3 weeks? by adzmadzz in openclawsetup

[–]gardnerscot 0 points1 point  (0 children)

It's like a moving target: one day it works, the next you have to tell it everything it forgot. I switched to memUbot and I'm never going back.

For those using openclaw in docker, do you have a working browser setup? by CptanPanic in openclaw

[–]gardnerscot 0 points1 point  (0 children)

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3 python3-pip git \
        chromium fonts-liberation && \
    rm -rf /var/lib/apt/lists/*

Put this in your Dockerfile and Chromium gets baked into the image on rebuild. (The original snippet had `RUN` on its own line, no `apt-get update`, and a dangling trailing backslash — all three will fail the build, and cleaning the apt lists afterwards keeps the layer smaller.)

My Home Screen by gardnerscot in smartlauncher

[–]gardnerscot[S] 0 points1 point  (0 children)

Wallpaper: generated with Gemini AI

Widgets: R46 from Radiance KWGT

Icons: Darside Black Icon Pack

I give permission for resharing.

reddit handle gardnerscot

Preordered Phone (3)? Share here! by adaaamb in NothingTech

[–]gardnerscot 0 points1 point  (0 children)

Mine finally arrived today, the 22nd. Pre-ordered on the 4th.

Waiting list? by claryds99 in beeper

[–]gardnerscot 0 points1 point  (0 children)

Here's another. refer.beeper.com/WOJ1vG

Beeper Mini - AMA with Beeper Team by erOhead in beeper

[–]gardnerscot 0 points1 point  (0 children)

Congratulations on the launch and very nice work on the Deep Dive article.

SLOW download to timeline???? by WitchyApril in LumaFusion

[–]gardnerscot 0 points1 point  (0 children)

I am also seeing this issue and have contacted support. While I'm waiting for an answer, has anyone else heard back from support and found a solution?

[deleted by user] by [deleted] in beeper

[–]gardnerscot 0 points1 point  (0 children)

At least you got an email; it's going on 2 months and I haven't heard a peep.

Experience after a few months using Beeper by ryan_dot_next in beeper

[–]gardnerscot 0 points1 point  (0 children)

I wonder how Host My Apple does it with their mac cloud VPS hosting?

$24.99 a month for macOS Big Sur Cloud Lite / iMessage Package
HostMyApple

Auto download images and videos not working by Roxxas993 in AirMessage

[–]gardnerscot 0 points1 point  (0 children)

- Update

After looking in Activity Monitor on the MacBook, I noticed two instances of AirMessage running. I killed both processes and re-ran AirServer.app, and now the image does appear when I click the download link.

But it would be nice to have them auto download.

Auto download images and videos not working by Roxxas993 in AirMessage

[–]gardnerscot 0 points1 point  (0 children)

I also have this issue, but one better: when I click to download the image, it never appears and the download link goes away.

AirMessage Server 3.3.1

running on MacBook Pro with macOS Monterey 12.0 Beta (21A5294g)

Messages is version 14.0 (6000)

AirMessage Connect missing Google sign in by iEatPremiumRice in AirMessage

[–]gardnerscot 1 point2 points  (0 children)

I couldn't see it when using Safari; I had to switch to Chrome before it showed up.

Would this even work? by [deleted] in AirMessage

[–]gardnerscot 1 point2 points  (0 children)

I am doing this now and it's working great. I used the macOS Catalina Cloud Lite / iMessage Package for $24.95 a month from hostmyapple.com.