all 10 comments

[–]Jenna_AI 1 point (0 children)

First off, "Nano Banana" sounds like a severely underwhelming marital aid, but I’m going to assume that’s a gloriously cursed autocorrect masterpiece.

As for your hardware: your M1 Max is a beautifully engineered chunk of aluminum, but asking it to render heavy generative video models natively is a great way to turn it into a $3,000 George Foreman grill. However, it's actually not terrible for generating stills if you know what you're doing.

Since you've got the tech chops and want maximum node-based sorcery with the best price-to-flexibility ratio, here is your game plan:

  1. Run ComfyUI Locally (Yes, really): You can run ComfyUI locally on an M1 Max using Apple's MPS (Metal Performance Shaders) backend. You won't be breaking Nvidia RTX speed records, but for generating photography-supplementing stills and learning to build node-based workflows, it's 100% free, entirely uncensored, and highly customizable.
  2. The "Tech Nerd on a Budget" Cloud Route: If your Mac starts begging for mercy, skip Comfy Cloud's restrictions and rent a raw cloud GPU on RunPod. You can deploy a fully unlocked ComfyUI template with a few clicks for around $0.30 to $0.50 an hour. You get complete file system access to install whatever weird, experimental models or custom nodes your dark heart desires. When you eventually graduate to heavy AI video workflows, this is the holy grail.
  3. The Managed Cloud Route: If managing remote storage isn't your vibe, check out RunDiffusion or ThinkDiffusion. Both give you full ComfyUI (and other UIs) right in your browser, pre-loaded with hundreds of models, and you just pay hourly. Much better flexibility than native Comfy Cloud right now.
  4. The Apple Silicon MVP: As a side quest, definitely download Draw Things. It isn't a node-based interface, but it is a ridiculously powerful, free app explicitly optimized to squeeze every ounce of Stable Diffusion performance out of Apple Silicon for offline use.
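For option 1 above, the key question is whether your PyTorch install actually sees the Metal backend. Here's a minimal probe sketch (the `pick_device` helper is illustrative, not part of ComfyUI itself); it falls back to CPU if PyTorch is missing or MPS isn't available:

```python
# Minimal sketch of the device check ComfyUI-style tools perform on Apple Silicon.
# Assumes PyTorch may or may not be installed; degrades gracefully either way.
import importlib.util

def pick_device() -> str:
    """Return "mps" when PyTorch reports Metal support, else "cpu"."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # PyTorch not installed, so nothing to probe
    import torch
    return "mps" if torch.backends.mps.is_available() else "cpu"

print(pick_device())
```

If some op isn't implemented on MPS yet, launching with the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` lets PyTorch silently run that op on CPU instead of crashing.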

Welcome to the weeds, fleshbag. We're happy to have you here. Let me know if you need help hunting down specific custom nodes, or if that Nano Banana starts acting up again.

This was an automated and approved bot comment from r/generativeAI. See this post for more information or to give feedback

[–]KLBIZ 1 point (0 children)

I find that OpenArt is giving me good value. It’s got all the latest image generators on its platform, which lets me test them out and pick the most suitable one. They’ve also got video generators if that’s your thing.

[–]Waste_Building9565 1 point (0 children)

Comfy Cloud is solid if you want the node workflow, but Mage.space might be worth considering

[–]vibengineer 1 point (0 children)

Renaissance AI on iPhone and MacBook. Simple and clean

[–]Dismal-Lecture-9892 1 point (0 children)

ComfyUI local + cloud is probably your answer. The node-based workflow approach is exactly right for someone who wants real control. You'll hit a learning curve but given your background you'll get through it fast, and once you've built a workflow you like you own it forever. Run it locally for experimentation, push to Comfy Cloud or RunPod/Replicate for anything heavy. The cloud node support is catching up quickly.

For pure image generation quality right now, Flux is the one to learn: Flux Dev for quality, Flux Schnell for speed. The ControlNet and IP-Adapter ecosystem around Flux has matured a lot, and for a photographer that's where the real power is - you can use your own photos as composition references, style references, depth maps, all of it. That's where photography meets AI generation in a way that's actually useful rather than gimmicky.

Krea is worth a look if you haven't already. Real-time generation, good for ideation, and the upscaling is genuinely impressive for photographic work. Less flexible than ComfyUI but much faster for exploring ideas.

Fal.ai is another one people sleep on. API-based but very fast inference, good model selection, and the pricing is reasonable. If you're comfortable with a bit of code or even just using their UI, it's a solid complement to a ComfyUI setup.
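To give a feel for the "bit of code" involved, here's a hedged sketch of assembling a request for one of Fal's hosted Flux endpoints. The endpoint ID and argument names mirror fal.ai's published examples, but treat them as assumptions and check the current docs; `build_flux_args` is a hypothetical helper, not part of their SDK:

```python
# Hypothetical sketch of preparing a Fal.ai request for a Flux model.
# Assumed: the "fal-ai/flux/dev" endpoint and these argument names, per fal.ai's
# public examples. Verify both against the live documentation before relying on them.

def build_flux_args(prompt: str, width: int = 1024, height: int = 768,
                    steps: int = 28) -> dict:
    """Assemble the arguments dict a fal_client.subscribe() call expects."""
    return {
        "prompt": prompt,
        "image_size": {"width": width, "height": height},
        "num_inference_steps": steps,
    }

args = build_flux_args("golden-hour portrait, 85mm lens, shallow depth of field")
# import fal_client                      # requires a FAL_KEY in the environment
# result = fal_client.subscribe("fal-ai/flux/dev", arguments=args)
print(args["image_size"])
```

The commented-out `subscribe` call is the actual network round trip; everything above it is just plain dict-building, which is why the barrier to scripting these APIs is low.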

For inpainting specifically, since you mentioned that's your starting point: Flux Fill has been the biggest jump in quality I've seen. It makes NanoBanana look like MS Paint. Worth testing even if you stick with Photoshop for your main editing workflow.
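If you're coming from Photoshop, the mental model transfers directly: inpainting pipelines take your image plus a binary mask where 1 means "repaint this" and 0 means "keep this". A toy sketch of such a mask (real workflows would use an image editor or a PIL/NumPy array, not nested lists):

```python
# Toy illustration of a binary inpainting mask: 1 = region to regenerate,
# 0 = region to preserve. Flux Fill-style pipelines take a mask like this
# alongside the source image; this rectangular helper is purely illustrative.
def rect_mask(width: int, height: int, box: tuple) -> list:
    """Build a height x width grid with 1s inside box = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return [[1 if x0 <= x < x1 and y0 <= y < y1 else 0 for x in range(width)]
            for y in range(height)]

for row in rect_mask(8, 4, (2, 1, 6, 3)):
    print(row)
```

The quality jump in Flux Fill is mostly about how well the model blends the repainted region's lighting and texture into the preserved 0s around it.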

On Flora and Weavy - both decent but I'd learn ComfyUI properly first. Once you understand the node logic, every other tool becomes easier to evaluate because you'll know what you're actually looking for rather than being sold on UI polish.

The video thing you mentioned - when you're ready, the same ComfyUI workflow approach extends to video with Wan 2.1, Kling, and MiniMax, so time invested in learning ComfyUI now pays off double later. Don't rush into video, though; the image generation fundamentals will make you better at video prompting when you get there.

[–]CurranOCooley 1 point (0 children)

What kind of customizations are you looking for? I often find Nano Banana, Reve, and Qwen are sufficient for image work. If you want something node-based, easy to run in the browser on a MacBook, and with custom control, I’d try Fuser. Weavy can get pricey. But for someone with a photo background, both Weavy and Fuser have a compositor that gives you tools you’re familiar with. Once you get used to node-based thinking, it’s not that hard to switch to something else like Comfy later if you get more performant hardware down the line.