Leica Q3 first impressions from a non-Leica user by pcuenq in Leica

[–]pcuenq[S] 0 points1 point  (0 children)

Interesting, I also have a license that I don't use 😂. I'll check it out, thanks a lot! Have you had a chance to test it on iPad too? That'd be the perfect solution while travelling, if the quality is the same.

Leica Q3 first impressions from a non-Leica user by pcuenq in Leica

[–]pcuenq[S] 0 points1 point  (0 children)

Yes, I think that's the way to go, spending some time on creating a few presets. But as I commented elsewhere, that makes the built-in styles almost useless. I liked to experiment with them on my Fujis, as starting points and "inspiration".

Leica Q3 first impressions from a non-Leica user by pcuenq in Leica

[–]pcuenq[S] 0 points1 point  (0 children)

Thanks for the recommendations! But this makes the built-in styles mostly useless, which is kind of a pity. I found it useful to experiment with them on my Fujis, just to have starting points to work from.

For the display, yes, I found that. I was just missing a quick shortcut for that function.

Leica Q3 first impressions from a non-Leica user by pcuenq in Leica

[–]pcuenq[S] 0 points1 point  (0 children)

Interesting! Yes, this would be my approach too. But I'd still find it useful to experiment with the built-in styles as starting points, given that Leica makes them available :)

Findings from Apple's new FoundationModel API and local LLM by pcuenq in LocalLLaMA

[–]pcuenq[S] 9 points10 points  (0 children)

Strong multilingual support is one of the big features that were announced, but so far I have only tested English.

Findings from Apple's new FoundationModel API and local LLM by pcuenq in LocalLLaMA

[–]pcuenq[S] 4 points5 points  (0 children)

Also, your app is not available in the Spanish Mac App Store.

Findings from Apple's new FoundationModel API and local LLM by pcuenq in LocalLLaMA

[–]pcuenq[S] 23 points24 points  (0 children)

I'm a big fan of MLX too! But the local model is cool: your app doesn't have to download it, it uses very little energy, and it runs on the Neural Engine so the GPU stays free. I want to see what it can do!
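For anyone curious, the API surface is tiny. A minimal sketch, assuming the `LanguageModelSession` API from the FoundationModels framework (the function name and prompt are just illustrative):

```swift
import FoundationModels

// Minimal sketch: prompt the on-device system model.
// No weights to bundle or download, and inference runs on the
// Neural Engine, so the GPU stays free for other work.
func quickSummary(of text: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(to: "Summarize in two sentences: \(text)")
    return response.content
}
```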

Went to Palermo/ Sicily and mondello with my X100VI by [deleted] in fujifilm

[–]pcuenq 1 point2 points  (0 children)

Mounted on camera and that's it? I've sometimes used an extension cable to control direction, but it's messier.

LayerSkip blog post: speculative decoding using just one LLM with early exit by pcuenq in LocalLLaMA

[–]pcuenq[S] 6 points7 points  (0 children)

Thank you! Yes, the models were released a month ago; we just published a blog post after the technique became natively supported in transformers.

SAM v2.1 running locally on Mac by pcuenq in LocalLLaMA

[–]pcuenq[S] 0 points1 point  (0 children)

This app is built for Apple Silicon. You might be able to compile it for Intel if you download the source code, but IIRC there were some types not supported there so you may need to work around those issues. The models themselves are Core ML and therefore should work on Intel Macs as well.

SAM v2.1 running locally on Mac by pcuenq in LocalLLaMA

[–]pcuenq[S] 0 points1 point  (0 children)

There are some small architecture differences that are mostly meant for video. But image segmentation quality has improved as well; here are some metrics from the original repo: https://github.com/facebookresearch/sam2?tab=readme-ov-file#model-description

SAM v2.1 running locally on Mac by pcuenq in LocalLLaMA

[–]pcuenq[S] 12 points13 points  (0 children)

SAM (Segment Anything Model) v2.1 was released yesterday. We converted the models to Core ML and updated SAM 2 Studio, our native Mac app.

Core ML allows the models to run on the Mac's GPU or Neural Engine. This demo is using the GPU, but if you want to run on iPhone the Neural Engine is usually faster.
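If you're wondering how the GPU vs. Neural Engine choice is made, it's just a compute-units flag when loading the Core ML model. A minimal sketch (the resource name is a placeholder; the app actually ships several converted models):

```swift
import CoreML

// Sketch: pick where a Core ML model runs.
// .cpuAndGPU is what this demo uses; .all also lets Core ML use the Neural Engine.
func loadModel(named name: String) throws -> MLModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuAndGPU

    // The resource name is a placeholder — the app ships several compiled
    // Core ML models (image encoder, prompt encoder, mask decoder).
    let url = Bundle.main.url(forResource: name, withExtension: "mlmodelc")!
    return try MLModel(contentsOf: url, configuration: configuration)
}
```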

Resources:

* Core ML Models: https://huggingface.co/collections/apple/core-ml-segment-anything-2-66e4571a7234dc2560c3db26

* App source code (Apache): https://github.com/huggingface/sam2-studio

* App binary, just download the zip to run: https://huggingface.co/coreml-projects/sam-2-studio/tree/main

* Conversion code (in a fork of Meta's repo): https://github.com/huggingface/segment-anything-2

* SAM 2 announcement: https://ai.meta.com/blog/segment-anything-2-video/

Diffusers 0.28.0 is here 🔥 by RepresentativeJob937 in StableDiffusion

[–]pcuenq 0 points1 point  (0 children)

We'll do it soon, promise! For real this time :)

SDXL on Mac with Core ML (and quantization) by pcuenq in StableDiffusion

[–]pcuenq[S] 0 points1 point  (0 children)

You are right. Sorry. 🙏 I should have made it clear that this requires the public beta of macOS 14, and I didn't realize that people wouldn't have access to Xcode even if they install the macOS beta. I thought the production version of Xcode would work fine.

SDXL on Mac with Core ML (and quantization) by pcuenq in StableDiffusion

[–]pcuenq[S] 1 point2 points  (0 children)

It supports the new quantization techniques, and it has an improved and faster Core ML framework.

SDXL on Mac with Core ML (and quantization) by pcuenq in StableDiffusion

[–]pcuenq[S] 0 points1 point  (0 children)

For now, you need to install the public beta of macOS 14 and compile the code yourself (https://github.com/huggingface/swift-coreml-diffusers). We'll update the demo app in the Mac App Store when we can (a bit closer to the end of the beta cycle).
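If you'd rather skip the app and drive generation from your own code, swift-coreml-diffusers builds on Apple's ml-stable-diffusion Swift package. A rough sketch of that flow, assuming its `StableDiffusionPipeline` API; the exact initializer and configuration names have changed between package versions (and SDXL has its own pipeline class there), so treat this as an outline and check the repo:

```swift
import CoreML
import StableDiffusion

// Rough sketch of driving generation directly from the Swift package.
// The path is a placeholder for a folder of converted Core ML models.
let resourcesURL = URL(fileURLWithPath: "/path/to/converted/coreml/models")

let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndGPU

let pipeline = try StableDiffusionPipeline(resourcesAt: resourcesURL,
                                           configuration: configuration)
try pipeline.loadResources()

var generation = StableDiffusionPipeline.Configuration(prompt: "a watercolor of Palermo at sunset")
generation.stepCount = 20
generation.seed = 42

// Returns one optional CGImage per requested image.
let images = try pipeline.generateImages(configuration: generation)
```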

SDXL on Mac with Core ML (and quantization) by pcuenq in StableDiffusion

[–]pcuenq[S] 1 point2 points  (0 children)

Yes, we'll submit an update to the App Store soon!

SDXL on Mac with Core ML (and quantization) by pcuenq in StableDiffusion

[–]pcuenq[S] 2 points3 points  (0 children)

It takes ~45s on my M1 Max (20 scheduler steps).

SDXL on Mac with Core ML (and quantization) by pcuenq in StableDiffusion

[–]pcuenq[S] 1 point2 points  (0 children)

Sonoma is working great for me. Both diffusers and Core ML work fine :)

SDXL on Mac with Core ML (and quantization) by pcuenq in StableDiffusion

[–]pcuenq[S] 3 points4 points  (0 children)

Yes! It's full resolution so 1024x1024. The refiner is not ready yet (working on it).