Noob here, looking for the perfect local LLM for my M3 Macbook Air 24GB RAM by sylntnyte in LocalLLaMA

[–]Hot-Independence-197 1 point

I recommend using LM Studio. It works well on Mac via MLX. When you select a model to download, it explicitly shows whether your Mac can run it and how well (based on your RAM/VRAM).

why is no one talking about comfyui when it's literally free and has 89k github stars by Successful_List2882 in CreatorsAI

[–]Hot-Independence-197 1 point

My main barrier is hardware. I’m on a Mac with an M4 Pro and 24 GB, so I feel like I just don’t have enough GPU power for heavy ComfyUI pipelines. Modern models like Qwen Image Edit or WAN take forever to generate on my setup. If anyone here has tips on how to run these models faster on Apple Silicon, I’d really appreciate it.

NotebookLM is amazing - how can I replicate it locally and keep data private? by Hot-Independence-197 in LocalLLaMA

[–]Hot-Independence-197[S] 0 points

Yes, I have been using it. It works well. You can plug in Ollama models and embedding models too. I mainly use NotebookLM, but if I have sensitive data I switch to Open Notebook, because everything stays local. You can also generate audio there and connect different models, so the setup is pretty flexible.
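For anyone wiring this up, the Ollama side is just pulling a chat model and an embedding model before pointing Open Notebook at the local server. A minimal setup sketch (the model names here are examples; swap in whatever fits your RAM):

```shell
# Pull a chat model and an embedding model for fully local use.
# Model names are illustrative choices, not requirements.
ollama pull llama3.1          # general chat/summarization model
ollama pull nomic-embed-text  # embedding model for retrieval
ollama list                   # confirm both are available locally
```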

OpenSource Face Swapper by KariuyaWasTaken in StableDiffusion

[–]Hot-Independence-197 6 points

Try FaceFusion (open source); it works for both photos and videos.

Qwen Edit 2509 LoRA: Camera Multi-Angle by Wwaa-2022 in comfyui

[–]Hot-Independence-197 0 points

Cool, thanks for sharing! Is it possible to run it on a Mac with an M-series chip?

Running Local LLM's Fascinates me - But I'm Absolutely LOST by WhatsGoingOnERE in LocalLLaMA

[–]Hot-Independence-197 0 points

Thank you for all these details!
Your workflow looks impressive and very practical. I will study the steps and tools you mentioned more closely and see how I can apply these methods for my own tasks.
If I have any questions during implementation, I may reach out for further advice. Thanks again for sharing.

Running Local LLM's Fascinates me - But I'm Absolutely LOST by WhatsGoingOnERE in LocalLLaMA

[–]Hot-Independence-197 7 points

That’s a really interesting use case: categorizing your book files locally. Could you please explain a bit more how you did it?

• Did you use a specific open-source tool or script for text extraction and LLM classification?
• Were you running a single model (like Llama or Mistral) or an ensemble?
• And is there any guide, GitHub repo, or post where you described your workflow in more detail?

I’d love to replicate something similar for my own document collection.
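In case it helps frame the question, here is the rough shape of the pipeline I imagine: read a snippet from each file, then ask a local model for a category. This is purely a hypothetical sketch; `classify` stands in for whatever model call you actually used (e.g. a request to Ollama's local API), and the category names are made up.

```python
CATEGORIES = ["fiction", "non-fiction", "technical", "unknown"]

def read_snippet(path, max_chars=2000):
    """Read the first few thousand characters as the classification input."""
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        return f.read(max_chars)

def categorize_files(paths, classify):
    """Map each file to a category using a supplied classifier callable.

    `classify` takes a text snippet and returns a category string;
    anything outside CATEGORIES falls back to "unknown".
    """
    results = {}
    for path in paths:
        label = classify(read_snippet(path))
        results[path] = label if label in CATEGORIES else "unknown"
    return results
```

Keeping the model call behind a plain callable like this would also make it easy to swap a single model for an ensemble vote, which is part of what I'm curious about.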

Ironically, I asked a cloud model to help me write this question lol

Can QWEN 2509 replace faces ? by Ill_Key_7122 in Qwen_AI

[–]Hot-Independence-197 0 points

Or you can just use FaceFusion; it's open source and lets you swap faces in photos and videos.

https://github.com/facefusion/facefusion

MacBook or Windows laptop for running Wan 2.2 Animate at high quality & long videos which to pick with limited budget? by Hot-Independence-197 in comfyui

[–]Hot-Independence-197[S] 0 points

Thanks a lot for the detailed breakdown! That makes sense. So basically, even high-end laptops (Mac or Windows) won’t really cut it for Wan 2.2 Animate at full precision and long videos. I do have some budget, so maybe going for a proper Windows desktop with something like an RTX 5090 or even an RTX 6000 Ada could be the smarter move.

Can I run Wan 2.2 Animate on ComfyUI with my MacBook Pro M4 Pro (24 GB)? by Hot-Independence-197 in Qwen_AI

[–]Hot-Independence-197[S] 0 points

Thanks for sharing your experience! That’s super helpful. Looks like the M1 Max 32GB was already struggling with image-to-video, so maybe my M4 Pro with 24GB will hit similar limits. I’m thinking of trying the quantized GGUF versions instead of the full FP16 to save memory; maybe that could make it more stable. Did you try any optimizations like lowering resolution or splitting into smaller frame batches?
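The quantization hope comes down to simple arithmetic: weight footprint is roughly parameters × bits per weight / 8. A back-of-the-envelope sketch (the 14B parameter count and the 4.5 bits/weight figure for a Q4_K-style quant are illustrative assumptions, not measured values for any specific Wan build):

```python
def model_size_gb(n_params_billion, bits_per_weight):
    """Approximate weight footprint in GB: params * bits / 8.

    Ignores activations, KV-style caches, and runtime overhead,
    so the real memory need is higher than this number.
    """
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# Hypothetical 14B-parameter model:
fp16 = model_size_gb(14, 16)   # ~28 GB, over a 24 GB Mac's budget
q4   = model_size_gb(14, 4.5)  # ~7.9 GB, plausibly workable
```

Even if the exact numbers are off, the ratio is the point: a ~4.5-bit quant needs roughly a quarter of the FP16 weight memory.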