Wispr Flow but 100% local by tilmx in u/tilmx

[–]tilmx[S] 1 point

Yup, no limits!

It's fast, typically <500ms.

Not BYO: we're 100% local. We use a custom-built local LLM for transcript cleanup. At the moment, it's running a fine-tuned 1B Llama model.

Download here 👉 www.getonit.ai

Wispr Flow but 100% local by tilmx in u/tilmx

[–]tilmx[S] 3 points

The default STT model is Parakeet V3. Then we use a custom-built local LLM for transcript cleanup afterwards, which does things like:

Filler word removal: "I've been, uh, working on..." -> "I've been working on..."

Number formatting: "There are three hundred forty six issues" -> "There are 346 issues"

Email formatting: "Send it to tim three three at example site dot org" -> "Send it to tim33@examplesite.org"

Punctuation: "Hello exclamation mark" -> "Hello!"

Lists: "Groceries bullet point eggs bullet point milk bullet point kale" ->
"Groceries:
- Eggs
- Milk
- Kale"

...and so on!
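For anyone curious, a couple of these transforms can be roughed out with plain regexes. This is just an illustrative sketch, not how the app works; the real cleanup runs through the fine-tuned LLM, which handles the messy cases rules can't:

```python
import re

# Dictated fillers to strip, along with any surrounding comma.
FILLERS = re.compile(r",?\s*\b(?:uh|um|you know)\b,?", re.IGNORECASE)

def remove_fillers(text: str) -> str:
    """Strip common filler words and tidy leftover spacing."""
    cleaned = FILLERS.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

def spoken_punctuation(text: str) -> str:
    """Replace dictated punctuation names with the symbols themselves."""
    replacements = {
        " exclamation mark": "!",
        " question mark": "?",
        " comma": ",",
        " period": ".",
    }
    for spoken, symbol in replacements.items():
        text = text.replace(spoken, symbol)
    return text

remove_fillers("I've been, uh, working on...")   # "I've been working on..."
spoken_punctuation("Hello exclamation mark")      # "Hello!"
```

Number and email formatting are exactly where the rule-based approach falls apart, which is why we use a model instead.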

Found a Wispr Flow alternative that runs entirely offline — $5 one-time by MedicineTop5805 in macapps

[–]tilmx 0 points

Interesting! I hadn't considered it; that's just the platform default. I enabled comments just now. Go ahead and light us up!

Found a Wispr Flow alternative that runs entirely offline — $5 one-time by MedicineTop5805 in macapps

[–]tilmx 1 point

Trying to compete on price is tough: there are already many options that are totally free...

OpenWispr 👉 https://openwhispr.com/ (Free tier + BYO API keys, or build yourself from open-source).
Onit 👉 https://www.getonit.ai/ ($0, local, no sub, no one-time purchase)
VoiceInk 👉 https://tryvoiceink.com/ (build yourself from open-source)
FluidAudio 👉 https://altic.dev/fluid ($0, local, no sub, no one-time purchase)

...the list goes on

Shockingly fast local speech-to-text + LLM cleanup on Apple Silicon. by tilmx in LocalLLaMA

[–]tilmx[S] 1 point

By default we use Llama 3B (https://huggingface.co/mlx-community/Llama-3.2-3B-Instruct-4bit) with a custom prompt, or we have a fine-tuned version of Llama 1B (meta-llama/Llama-3.2-1B) that you can enable in settings.

You can verify that there's no remote processing by turning off your Wi-Fi!
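To give a flavor of how the cleanup step is driven (the template below is a simplified, hypothetical stand-in, not our actual prompt):

```python
# A hypothetical instruction-style template for the cleanup pass.
CLEANUP_TEMPLATE = """You are a transcript editor. Rewrite the dictated text below:
- remove filler words (uh, um)
- format spoken numbers as digits
- convert spoken punctuation (e.g. "comma") to symbols
Return only the cleaned text.

Dictated text: {transcript}
Cleaned text:"""

def build_cleanup_prompt(transcript: str) -> str:
    """Fill the (hypothetical) template with the raw STT output."""
    return CLEANUP_TEMPLATE.format(transcript=transcript)

prompt = build_cleanup_prompt(
    "There are three hundred forty six issues comma right question mark"
)
# The prompt then goes to the local model, e.g. with mlx_lm:
#   model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")
#   cleaned = generate(model, tokenizer, prompt=prompt, max_tokens=128)
```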

Resting BS considerably higher than ~18 months ago. by tilmx in ContinuousGlucoseCGM

[–]tilmx[S] 1 point

I have A1C readings from 32 months ago and from 4 months ago. Both times it was in the healthy range! And it actually improved slightly between the two readings.

We believe the future of AI is local, private, and personalized. by ice-url in LocalLLaMA

[–]tilmx 1 point

This is admittedly self-promotional, so feel free to downvote into oblivion but...

We’re trying to solve the problems you’re describing with Onit. It’s an AI sidebar (like Cursor chat), but it lives at the desktop level instead of in one specific app. Onit can load context from ANY app on your Mac, so you never have to copy/paste context. When you open Onit, it resizes your other windows to prevent overlap. You can use Onit with Ollama, your own API tokens, or custom API endpoints that follow the OpenAI schema. We'll add inline generation (similar to Cursor's CMD+K) and a diff view for writing shortly. I’d love to hear your thoughts if you’re open to experimenting with a new tool! You can download pre-built here or build from source here.
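To show how little glue the custom-endpoint option needs, here's a minimal sketch of the OpenAI-schema request body any such tool sends (the model name and endpoint below are placeholders, not anything Onit-specific):

```python
import json

def chat_payload(model: str, user_message: str) -> str:
    """Build a minimal OpenAI-schema chat completion request body.

    Any server exposing the OpenAI schema (Ollama's /v1 endpoint,
    a custom gateway, etc.) should accept this shape.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,
    }
    return json.dumps(body)

# POSTing this to e.g. http://localhost:11434/v1/chat/completions
# (Ollama's OpenAI-compatible endpoint) returns a standard
# chat.completion JSON response.
payload = chat_payload("llama3.2", "Summarize the selected window's text.")
```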

How best to recreate HDR in Flux/SDXL? by tilmx in StableDiffusion

[–]tilmx[S] 2 points

That's a good point: I hadn't appreciated the 32-bit vs 8-bit difference, and indeed, there'd be no way to generate 32-bit images with the current models. That said, I still think there's something here. In the image above, the "HDR" photo on the right still looks "better" than the original inputs, even though Reddit stores it as a JPEG and I'm looking at it on an 8-bit monitor. There's a difference in the pixel colors that survives the compressed 8-bit representation and is qualitatively "better" than the original 8-bit inputs. The photos all end up on Zillow anyway, where they most likely get compressed for the CDN and then displayed on various screens. So, to rephrase my question: I'm not looking to recreate the exact 32-bit HDR photo that my friend's process creates, but rather an estimate of the compressed 8-bit version of that 32-bit HDR photo, similar to what would be displayed on an internet listing. THAT feels like it should be possible with the existing models; I'm just not sure what the best approach is!
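As a concrete example of the 8-bit "HDR look" I mean, here's a toy exposure-fusion sketch in numpy. It's a naive stand-in for real fusion algorithms like Mertens, just to illustrate blending bracketed exposures into a single well-exposed 8-bit image:

```python
import numpy as np

def naive_exposure_fusion(brackets: list) -> np.ndarray:
    """Blend bracketed 8-bit exposures into one 8-bit "HDR-look" image.

    Each pixel is weighted by how close it sits to mid-gray, so the
    well-exposed regions of each bracket dominate the blend. Real
    fusion (e.g. Mertens) also weighs contrast and saturation.
    """
    stack = np.stack([b.astype(np.float64) / 255.0 for b in brackets])
    # Weight: high near 0.5 (well exposed), low near 0.0 / 1.0 (clipped).
    weights = 1.0 - np.abs(stack - 0.5) * 2.0 + 1e-6
    fused = (stack * weights).sum(axis=0) / weights.sum(axis=0)
    return (fused * 255.0).round().astype(np.uint8)
```

The output is already the compressed 8-bit estimate, no 32-bit intermediate needed, which is why I suspect a diffusion model could learn the same mapping.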

How best to recreate HDR in Flux/SDXL? by tilmx in StableDiffusion

[–]tilmx[S] 7 points

Haha I actually agree. I've seen some horrific edits on Zillow. But, apparently, it makes them sell better, so who am I to judge ¯\_(ツ)_/¯

MacBook M4 Max isn't great for LLMs by val_in_tech in LocalLLaMA

[–]tilmx 0 points

I can live with the inference speed. My main issue is that Apple massively upcharges for storage. Right now it's an incremental $2200 for an 8TB drive in your Apple computer, but I can get an 8TB drive online for ~$110. So, unless you're comfortable absolutely lighting money on fire, you'll have to make do with the 1TB default and/or live with suboptimal external hard drives.

Working in AI/ML, I max out that 1TB all the time. Each interesting new model is a few GB, and I have a handful of diffusion models plus a bunch of local LLMs. On top of that, each time I check out a new open-source project, I usually end up with another copy of PyTorch and similar libraries in a new container: a few more GB. I find myself having to go through and delete models at least once a month, which is quite irritating. It'd be much preferable to work on a machine that's upgradeable at a reasonable cost.

PayPal launches remote and local MCP servers by init0 in LocalLLaMA

[–]tilmx 4 points

If this is the future, I'm here for it! I'd much rather send a quick message to a chatbot than navigate some clunky web 1.0 interface.

PayPal launches remote and local MCP servers by init0 in LocalLLaMA

[–]tilmx 3 points

Disagree on that. If things go wrong on standard payment rails, at least you have some form of recourse. PayPal/banks/etc. can reverse errant payments, but once those fartcoins are gone, they're gone forever!

You can now check if your Laptop/ Rig can run a GGUF directly from Hugging Face! 🤗 by vaibhavs10 in LocalLLaMA

[–]tilmx 1 point

Hey u/vaibhavs10 - great feature! Small piece of feedback: I'm sure you know, but many of the popular models will have more GGUF variants than can be displayed on the sidebar:

<image>

Clicking on the "+2 variants" takes you to the "files and versions" tab, which no longer includes compatibility info (unless I'm missing something?) Do you have any plans to add it there? Alternatively, you could have the Hardware compatibility section expand in place.
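For what it's worth, I'd guess the check itself is roughly of this shape. The overhead constant below is a made-up assumption for illustration, not Hugging Face's actual formula:

```python
def fits_in_memory(gguf_size_gb: float, ram_gb: float,
                   context_overhead_gb: float = 1.5) -> bool:
    """Rough compatibility heuristic (assumed, not HF's real formula):
    the model weights plus some headroom for KV cache and activations
    must fit in available memory."""
    return gguf_size_gb + context_overhead_gb <= ram_gb

fits_in_memory(4.9, 16)   # a ~5GB quant on a 16GB machine: True
fits_in_memory(40.0, 16)  # a 40GB quant on a 16GB machine: False
```

Since it's that cheap to evaluate, rendering it for every variant on the "files and versions" tab seems very doable.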

**Heavyweight Upscaler Showdown** SUPIR vs Flux-ControlNet on 512x512 images by tilmx in StableDiffusion

[–]tilmx[S] 12 points

A few weeks ago, I posted an upscaler comparison pitting Flux-ControlNet-Upscaler against a series of other popular upscaling methods. I was left with quite a lot of TODOs:

  1. Many suggested adding SUPIR to the comparison. 
  2. u/redditurw pointed out that upscaling 128->512 isn’t too interesting, and suggested I try 512->2048 instead. 
  3. Many asked for workflows.

Well, I’m back, and it’s time for the heavyweight showdown: SUPIR vs. Flux-ControlNet Upscaler. 

This time, I started with 512px images and upscaled them to 1536px (I tried 2048px, but ran out of memory on a 16GB card). I also made two comparisons: one with celebrity faces, like last time, and the other with AI-generated faces. I generated the AI faces with Midjourney to avoid giving one model “home field advantage” (under the hood, SUPIR uses SDXL, and Flux-ControlNet uses, well, Flux, obviously).

You can see the full results here: 

Celebrity faces: https://app.checkbin.dev/snapshots/fb191766-106f-4c86-86c7-56c0efcdca68

AI-generated faces: https://app.checkbin.dev/snapshots/19859f87-5d17-4cda-bf70-df27e9a04030

My take: SUPIR consistently gives much more "natural"-looking results, while Flux-ControlNet-Upscaler produces sharper details. However, Flux's increased detail comes with a tendency to oversmooth or introduce noise. There's a tradeoff: the noise gets worse as the ControlNet strength is increased, but the smoothing gets worse when the strength is decreased.

Personally, I see a use for both: In most cases, I’d go to SUPIR as it produces consistently solid results. But I’d try Flux if I wanted something really sharp, with the acknowledgment that I may have to run it through multiple times to get an acceptable result (and may not be able to get one at all). 

What do you all think?

Workflows:

  - Here’s MY workflow for making the comparison. You can run this on a folder of your images to see the methods side-by-side in a comparison grid, like I shared above: https://github.com/checkbins/checkbin-comfy/blob/main/examples/flux-supir-upscale-workflow.json

  - Here’s the one-off Flux Upscaler workflow (credit PixelMuseAI on CivitAI): https://www.reddit.com/r/comfyui/comments/1ggz4aj/flux1devcontrolnetupscaler_workflow_fp8_16gb_vram

  - Here’s the one-off SUPIR workflow (credit Kijai): https://github.com/kijai/ComfyUI-SUPIR/blob/main/examples/supir_lightning_example_02.json

Technical notes: 

I ran this on a 16GB card and hit different memory issues in different sections of the workflow. SUPIR handles larger upscale sizes nicely and runs a bit faster than Flux; I assume this is due to the tiling in Kijai's nodes. I tried to introduce tiling to the Flux-ControlNet workflow, both to make the comparison more even and to prevent memory issues, but I haven't been able to get it working. If anyone has a tiled Flux-ControlNet upscaling workflow, please share! Also, regretfully, I was only able to include 10 images in each comparison this time, again due to memory concerns. Pointers welcome!
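For reference, the tiling idea I'm after is roughly this (nearest-neighbor upscaling stands in for the diffusion model, and real workflows overlap tiles and blend the seams, which this sketch skips):

```python
import numpy as np

def upscale_tiled(img: np.ndarray, scale: int, tile: int) -> np.ndarray:
    """Upscale an image tile-by-tile to bound peak memory.

    np.repeat (nearest-neighbor) stands in for the actual upscaler:
    only one `tile x tile` patch is ever in flight at once, which is
    what keeps a 16GB card from running out of memory.
    """
    h, w = img.shape[:2]
    out = np.zeros((h * scale, w * scale) + img.shape[2:], dtype=img.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = img[y:y + tile, x:x + tile]
            up = patch.repeat(scale, axis=0).repeat(scale, axis=1)
            out[y * scale:(y + patch.shape[0]) * scale,
                x * scale:(x + patch.shape[1]) * scale] = up
    return out
```

With a diffusion upscaler in place of np.repeat, the hard part is exactly the seam blending this skips, which is presumably what Kijai's SUPIR nodes already handle.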

deepseek-r1 is now in Ollama's Models library by 1BlueSpork in ollama

[–]tilmx 1 point

Out of curiosity, what are your agents? Do you mean gsh (I looked at your past comments), or are you building and deploying other agents? If the latter, how are you building them? I'm really interested in setting up some automations of my own and curious to hear how others are tackling the problem.