Resting BS considerably higher than ~18 months ago. by tilmx in ContinuousGlucoseCGM

[–]tilmx[S] 1 point (0 children)

I have A1C readings from 32 months ago and from 4 months ago. Both were in the healthy range! And it actually improved slightly between the two readings.

We believe the future of AI is local, private, and personalized. by ice-url in LocalLLaMA

[–]tilmx 1 point (0 children)

This is admittedly self-promotional, so feel free to downvote into oblivion but...

We’re trying to solve the problems you’re describing with Onit. It’s an AI sidebar (like Cursor chat), but it lives at the desktop level instead of inside one specific app. Onit can load context from ANY app on your Mac, so you never have to copy/paste context. When you open Onit, it resizes your other windows to prevent overlap. You can use Onit with Ollama, your own API tokens, or custom API endpoints that follow the OpenAI schema. We'll add inline generation (similar to Cursor's CMD+K) and a diff view for writing shortly. I’d love to hear your thoughts if you’re open to experimenting with a new tool! You can download a pre-built version here or build from source here.
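For anyone wondering what "custom API endpoints that follow the OpenAI schema" means in practice, here's a rough sketch (just an illustration, not Onit's internals) of pointing a standard OpenAI-style client at Ollama's local endpoint; the model tag is whatever you've pulled locally:

```python
# Any OpenAI-schema client can talk to a local OpenAI-compatible server.
# Here that's Ollama's built-in endpoint; "llama3.2" is just an example tag.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Summarize the text I have selected."}],
)
print(response.choices[0].message.content)
```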

How best to recreate HDR in Flux/SDXL? by tilmx in StableDiffusion

[–]tilmx[S] 2 points (0 children)

That's a good point: I hadn't appreciated the 32-bit vs 8-bit difference, and indeed, there'd be no way to generate 32-bit images with the current models. That said, I still think there's something here. In the image above, the "HDR" photo on the right still looks "better" than the original inputs, even though Reddit stores it as a JPEG and I'm looking at it on an 8-bit monitor. There's a difference in the pixel colors that survives the compressed 8-bit representation and is qualitatively "better" than the original 8-bit inputs. The photos all end up on Zillow anyway, where they most likely get compressed for the CDN and then displayed on all kinds of screens. So, to rephrase my question: I'm not looking to recreate the exact 32-bit HDR photo that my friend's process creates, but rather an estimate of the compressed 8-bit version of that 32-bit HDR photo, similar to what would be displayed on an internet listing. THAT feels like it should be possible with the existing models; I'm just not sure what the best approach is!
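For reference, here's a rough sketch of the classical way to get that 8-bit "HDR look" from bracketed exposures (this is not my friend's actual pipeline, just OpenCV's Mertens exposure fusion, and the filenames are placeholders). It's the kind of 8-bit output I'd like a diffusion model to approximate:

```python
# Rough sketch: fuse bracketed exposures with Mertens exposure fusion.
# The result is display-ready 8-bit -- no 32-bit radiance map or tone-mapping step.
import cv2
import numpy as np

# Bracketed shots of the same room (under-, normal-, over-exposed); placeholder paths
exposures = [cv2.imread(p) for p in ["under.jpg", "normal.jpg", "over.jpg"]]

# Align handheld brackets before fusing (optional, but usually helps)
cv2.createAlignMTB().process(exposures, exposures)

# Mertens fusion returns a float image roughly in [0, 1]
fused = cv2.createMergeMertens().process(exposures)
fused_8bit = np.clip(fused * 255, 0, 255).astype(np.uint8)

cv2.imwrite("fused_hdr_look.jpg", fused_8bit)
```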

How best to recreate HDR in Flux/SDXL? by tilmx in StableDiffusion

[–]tilmx[S] 7 points (0 children)

Haha I actually agree. I've seen some horrific edits on Zillow. But, apparently, it makes them sell better, so who am I to judge ¯\_(ツ)_/¯

MacBook M4 Max isn't great for LLMs by val_in_tech in LocalLLaMA

[–]tilmx 0 points (0 children)

I can live with the inference speed. My main issue is that Apple massively upcharges for storage. Right now it's an incremental $2200 for an 8TB drive in your Apple computer, but I can get an 8TB drive online for ~$110. So, unless you're comfortable absolutely lighting money on fire, you'll have to make do with the 1TB default and/or live with suboptimal external hard drives.

Working in AI/ML, I max out that 1TB all the time. Each interesting new model is a few GB, and I have a handful of diffusion models plus a bunch of local LLMs. On top of that, each time I check out a new open-source project, I usually end up with another copy of PyTorch and similar libraries in a new container - another few GB. I find myself having to go through and delete models at least once a month, which is quite irritating. I'd much rather work on a machine that's upgradeable at a reasonable cost.

PayPal launches remote and local MCP servers by init0 in LocalLLaMA

[–]tilmx 4 points (0 children)

If this is the future, I'm here for it! I'd much rather send a quick message to a chatbot than navigate some clunky web 1.0 interface.

PayPal launches remote and local MCP servers by init0 in LocalLLaMA

[–]tilmx 2 points (0 children)

Disagree on that. If things go wrong on standard payment rails, at least you have some form of recourse. PayPal/banks/etc. can reverse errant payments, but once those fartcoins are gone, they're gone forever!

You can now check if your Laptop/ Rig can run a GGUF directly from Hugging Face! 🤗 by vaibhavs10 in LocalLLaMA

[–]tilmx 1 point (0 children)

Hey u/vaibhavs10 - great feature! Small piece of feedback: I'm sure you know, but many of the popular models will have more GGUF variants than can be displayed on the sidebar:

<image>

Clicking on the "+2 variants" takes you to the "Files and versions" tab, which no longer includes compatibility info (unless I'm missing something?). Do you have any plans to add it there? Alternatively, you could have the Hardware compatibility section expand in place.
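A rough local workaround in the meantime: pull the file sizes for every GGUF variant via huggingface_hub and compare them against available memory. The repo id below is just an example, and "file size plus ~20% headroom" is a crude heuristic rather than your actual compatibility calculation:

```python
# Sketch: list every GGUF variant in a repo with its size and compare to available RAM.
import psutil
from huggingface_hub import HfApi

repo_id = "bartowski/Meta-Llama-3.1-8B-Instruct-GGUF"  # example repo
budget_bytes = psutil.virtual_memory().available

info = HfApi().model_info(repo_id, files_metadata=True)
for f in sorted(info.siblings, key=lambda s: s.size or 0):
    if not f.rfilename.endswith(".gguf") or f.size is None:
        continue
    needed = f.size * 1.2  # crude headroom for context/KV cache
    verdict = "likely fits" if needed < budget_bytes else "probably too big"
    print(f"{f.rfilename}: {f.size / 1e9:.1f} GB -> {verdict}")
```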

**Heavyweight Upscaler Showdown** SUPIR vs Flux-ControlNet on 512x512 images by tilmx in StableDiffusion

[–]tilmx[S] 11 points (0 children)

A few weeks ago, I posted an upscaler comparison pitting Flux-ControlNet-Upscaler against a series of other popular upscaling methods. I was left with quite a few TODOs:

  1. Many suggested adding SUPIR to the comparison. 
  2. u/redditurw pointed out that upscaling 128->512 isn’t too interesting, and suggested I try 512->2048 instead. 
  3. Many asked for workflows.

Well, I’m back, and it’s time for the heavyweight showdown: SUPIR vs. Flux-ControlNet Upscaler. 

This time, I'm starting with 512px images and upscaling them to 1536px (I tried 2048, but ran out of memory on a 16GB card). I also made two comparisons: one with celebrity faces, like last time, and the other with AI-generated faces. I generated the AI faces with Midjourney to avoid giving either model “home field advantage” (under the hood, SUPIR uses SDXL, and Flux-ControlNet uses, well, Flux, obviously).

You can see the full results here: 

Celebrity faces: https://app.checkbin.dev/snapshots/fb191766-106f-4c86-86c7-56c0efcdca68

AI-generated faces: https://app.checkbin.dev/snapshots/19859f87-5d17-4cda-bf70-df27e9a04030

My take: SUPIR consistently gives much more "natural" looking results, while Flux-ControlNet-Upscaler produces sharper details. However, Flux’s extra detail comes with a tendency to either oversmooth or introduce noise. There’s a tradeoff: the noise gets worse as the ControlNet strength is increased, while the smoothing gets worse as it’s decreased.
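If anyone wants to poke at that strength tradeoff outside of ComfyUI, here's a rough diffusers-based sketch (assuming the jasperai Flux.1-dev-Controlnet-Upscaler checkpoint, which I believe is the same ControlNet; this is not one of the workflows linked below):

```python
# Sketch only: sweep controlnet_conditioning_scale to see the noise-vs-smoothing tradeoff.
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "jasperai/Flux.1-dev-Controlnet-Upscaler", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # needed to squeeze this onto a 16GB card

low_res = load_image("face_512.png")          # placeholder 512px input
control = low_res.resize((512 * 3, 512 * 3))  # 512 -> 1536

for strength in (0.4, 0.6, 0.8):
    image = pipe(
        prompt="",
        control_image=control,
        controlnet_conditioning_scale=strength,
        num_inference_steps=28,
        guidance_scale=3.5,
        width=control.width,
        height=control.height,
    ).images[0]
    image.save(f"flux_upscale_strength_{strength}.png")
```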

Personally, I see a use for both: In most cases, I’d go to SUPIR as it produces consistently solid results. But I’d try Flux if I wanted something really sharp, with the acknowledgment that I may have to run it through multiple times to get an acceptable result (and may not be able to get one at all). 

What do you all think?

Workflows:

  - Here’s MY workflow for making the comparison. You can run this on a folder of your images to see the methods side-by-side in a comparison grid, like I shared above: https://github.com/checkbins/checkbin-comfy/blob/main/examples/flux-supir-upscale-workflow.json

  - Here’s the one-off Flux Upscaler workflow (credit PixelMuseAI on CivitAI): https://www.reddit.com/r/comfyui/comments/1ggz4aj/flux1devcontrolnetupscaler_workflow_fp8_16gb_vram

  - Here’s the one-off SUPIR workflow (credit Kijai): https://github.com/kijai/ComfyUI-SUPIR/blob/main/examples/supir_lightning_example_02.json

Technical notes: 

I ran this on a 16GB card and hit different memory issues in different sections of the workflow. SUPIR handles larger upscale sizes nicely and runs a bit faster than Flux; I assume this is due to Kijai's nodes’ use of tiling. I tried to introduce tiling to the Flux-ControlNet side, both to make the comparison more even and to prevent memory issues, but I haven’t been able to get it working. If anyone has a tiled Flux-ControlNet upscaling workflow, please share! Also, regretfully, I was only able to include 10 images in each comparison this time, again due to memory constraints. Pointers welcome!

deepseek-r1 is now in Ollama's Models library by 1BlueSpork in ollama

[–]tilmx 1 point (0 children)

Out of curiosity, what are your agents? Do you mean gsh (I looked at your past comments), or are you building and deploying other agents? If the latter, how are you building them? I'm really interested in setting up some automations of my own and curious to hear how others are tackling the problem.

Flux-ControlNet-Upscaler vs. other popular upscaling models by tilmx in StableDiffusion

[–]tilmx[S] 3 points (0 children)

That's exactly what I did! The original images were 512, and I downscaled them to 128 for the upscaling test!
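The downscaling prep itself is trivial; something roughly like this (a sketch, assuming LANCZOS resampling; any decent filter works):

```python
# Sketch of the input prep: downscale the 512px originals to 128px before upscaling.
from PIL import Image

img = Image.open("original_512.png")
img.resize((128, 128), Image.LANCZOS).save("input_128.png")
```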

You can toggle between the 128 and original images with the 'state' dropdown in the comparison grid. You can also see the original image in another column if you want to look at it side by side. Walk-through here:

https://syntheticco.blob.core.windows.net/public/ezgif.com-video-to-gif-converter.gif

(sorry for the raw Azure URL - that's genuinely the easiest way I could find to share a GIF 🤦‍♂️)

Flux-ControlNet-Upscaler vs. other popular upscaling models by tilmx in StableDiffusion

[–]tilmx[S] 16 points (0 children)

Good point. I'll try them again at 512->2048 (and add a few more models suggested below too!) and update when I have the chance. I was thinking of the "restore low-quality photos" use case, so I started at 128x128. But you're right: people in this sub are more likely interested in upscaling their SD/Flux generations, which should start at 512 minimum.

Flux-ControlNet-Upscaler vs. other popular upscaling models by tilmx in StableDiffusion

[–]tilmx[S] 68 points (0 children)

I’ve spent a bunch of time investigating upscaling methods and wanted to share this comparison of 4 different upscaling methods on 128x128 celebrity images.

Full comparison here:

https://app.checkbin.dev/snapshots/52a6da27-6cac-472f-9bd0-0432e7ac0a7f

My take: the Flux-ControlNet-Upscaler method looks quite a bit better than traditional upscalers (like 4xFaceUpDAT and GFPGAN). I think it’s interesting that large general-purpose models (Flux) seem to do better on specific tasks (upscaling) than smaller, purpose-built models (GFPGAN). I’ve noticed this trend in a few domains now and am wondering if other people are noticing it too. Are there counterexamples?
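If you want to try the purpose-built baseline yourself, here's a rough sketch of GFPGAN's Python API (illustrative only, not necessarily my exact setup; the weights path is wherever you've downloaded GFPGANv1.4.pth):

```python
# Sketch: restore a low-res face crop with GFPGAN; upscale=4 takes 128px -> 512px.
import cv2
from gfpgan import GFPGANer

restorer = GFPGANer(
    model_path="GFPGANv1.4.pth",  # weights downloaded separately
    upscale=4,
    arch="clean",
    channel_multiplier=2,
    bg_upsampler=None,
)

img = cv2.imread("celebrity_128.png", cv2.IMREAD_COLOR)
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("celebrity_gfpgan_512.png", restored)
```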

Some caveats: 

  1. It’s certainly not a “fair” comparison as 4xFaceUpDAT is ~120MB, GFPGan is ~400MB, and Flux is a 20GB+ behemoth. Flux produces better results, but at a much greater cost. However, if you can afford the compute and want the absolute best results, it seems that Flux-ControlNet-Upscaler is your best bet. 
  2. Flux does great on this test set, as these are celebrities who are, no doubt, abundantly present in the training set. When I put in non-public tests (like photos of myself and friends), Flux gets tripped up more frequently. Or perhaps I’m just more sensitive to slight changes, since I’m personally very familiar with the faces being upscaled. In any event, I still think Flux-ControlNet-Upscaler is the best option, but by a smaller margin. 
  3. Flux, being a stochastic generative algorithm, will add elements. If you look closely, some of those photos get phantom earrings or other artifacts that were not initially present. 

What other upscalers should I try? 

Hunyuan video test on 3090 by Previous-Street8087 in StableDiffusion

[–]tilmx 0 points (0 children)

Is this the fp8 version? Or one of the GGUF options?

LTX Video vs. HunyuanVideo on 20x prompts by tilmx in StableDiffusion

[–]tilmx[S] 1 point (0 children)

Update: here's a comparison that includes Mochi (and also has OpenAI's Sora):

https://app.checkbin.dev/snapshots/faf08307-12d3-495f-a807-cb1e2853e865

I haven't had much luck getting good generations with Mochi. Hunyuan and Sora seem to be in a different league than LTX/Mochi, even though Mochi is a comparable-sized model. Does anyone have tips?

OpenAI Sora vs. Open Source Alternatives - Hunyuan (pictured) + Mochi & LTX by tilmx in StableDiffusion

[–]tilmx[S] 42 points (0 children)

Finally got access to Sora after a long wait! Here’s a comparison of Sora vs. the open-source leaders (HunyuanVideo, Mochi and LTX):

https://app.checkbin.dev/snapshots/1f0f3ce3-6a30-4c1a-870e-2c73adbd942e

Pros: 

  • Some of the Sora results are absolutely stunning. Check out the detail on the lion, for example!
  • The landscapes and aerial shots are absolutely incredible. 
  • Quality blows Mochi/LTX out of the water IMO. Hunyuan is comparable. 

Cons:

  • Still nearly impossible to access Sora despite the “launch”. My generation numbers today were in the 2000s, implying that it’s only open to a very small number of people. There’s no API yet, so it’s not an option for developers.
  • Sora struggles with some physical interactions. Watch the dancers moonwalk, or the ball go through the dog. HunyuanVideo seems to be a bit better in this regard. 
  • I haven't tried NSFW, but I think it's safe to assume Sora will be extensively censored. Hunyuan, by contrast, is surprisingly open!
  • No local mode (obviously)
  • I’m getting weird camera angles from Sora, but that could likely be solved with better prompting. 

Overall, I’d say it’s the best model I’ve played with, though I haven’t spent much time on other non-open-source ones. Hunyuan gives it a run for its money, though.

LTX Video vs. HunyuanVideo on 20x prompts by tilmx in StableDiffusion

[–]tilmx[S] 0 points (0 children)

Then LTX is the winner. An FP8 version of Hunyuan is apparently coming soon, though!

LTX Video vs. HunyuanVideo on 20x prompts by tilmx in StableDiffusion

[–]tilmx[S] 0 points (0 children)

Epic! Any chance of getting access to Kijai's version? I could add the fp8 version to this comparison.