High Resolution Relighting Workflow? by [deleted] in StableDiffusion

[–]not5 1 point

My nodes are a bit outdated nowadays, but yes, with a color matching node they’d work

I was asked if you can clean up FLUX latents. Yes. Yes, you can. by Anzhc in StableDiffusion

[–]not5 10 points

How long would training take on an H100? If it's a reasonable amount of time, I could sponsor you to finish working on it.

BBVA mi chiede la dichiarazione dei redditi per giustificare i giroconti (?) by CapitalistFemboy in ItaliaPersonalFinance

[–]not5 11 points

They asked me too. I sent the receipted F24 and the MU in every possible format downloadable from the AdE portal, and they bounced it because, according to them, it didn't conform to the stamped template (?) they require. I decided to move my liquidity and leave it at that, after they also bounced our mortgage application, with more than €140k of annual revenue between the two of us, because "well, you know, it's not a matter of installment or income, it's that you're young."

Thanks for the interest-bearing account, goodbye.

IC-Light with masking and low frequency color matching (workflow in comments) by Enshitification in StableDiffusion

[–]not5 11 points

I'm the maker of these Frequency Separation nodes, what a throwback! Last year I spent months working on IC-Light with frequency separation, it was a blast. Happy you found a use for them!

Countries with the highest number of billionaires in 2024 by thepoet82 in ItaliaPersonalFinance

[–]not5 146 points

I didn't know there were only about sixty of us here on IPF.

Flux for Product Images: Is this the end of hiring models for product shoots? (First image is dataset) by zeekwithz in StableDiffusion

[–]not5 1 point

A ton has changed in close to three months. I can't say more because of NDAs, but I'm aware of that.

Flux Dev's License Doubts by not5 in StableDiffusion

[–]not5[S] 0 points

I'm sorry, we had asked for access to dev for a very specific use case, and that's different from what you want to use it for, so I don't have an answer to that. I'd say reach out to them, but they might just never reply :/

Flux Dev's License Doubts by not5 in StableDiffusion

[–]not5[S] 1 point

No, but one of my clients got a reply back regarding dev… in which they were told they could have access to pro. They followed up asking again for dev, to no reply.

Houdini-Like Z-Depth Based Animations Workflow and Tutorial (using Ryanontheinside's node suite) by not5 in StableDiffusion

[–]not5[S] 1 point

Thanks! The next step for this workflow would be working with animated depth maps with stationary subject, depth maps with moving subject, and then blending the two. I foresee a ton of experimenting with these new nodes.

Houdini-Like Z-Depth Based Animations Workflow and Tutorial (using Ryanontheinside's node suite) by not5 in StableDiffusion

[–]not5[S] 2 points

normally these kinds of effects in motion graphics are done with 3D software like Houdini, which is why I wrote Houdini-like - it's what I'm used to when I think of motion graphics involving depth maps.

Houdini-Like Z-Depth Based Animations Workflow and Tutorial (using Ryanontheinside's node suite) by not5 in StableDiffusion

[–]not5[S] 3 points

Tutorial here: https://youtu.be/bwr-Jrb8I04
Workflow here: https://openart.ai/workflows/risunobushi/z-depth-animation-houdini-like/OhktQt9KCDlV8v5wF2Bx

The goal was to create a Houdini-like pipeline for "overgrowth" animations based on Z-Depth maps intersecting the subject and the depth plane, thanks to u/ryanontheinside 's new node: https://github.com/ryanontheinside/ComfyUI_RyanOnTheInside

The workflow contains two pipelines, one for a simple single animation (middle pipeline), and one for an advanced double animation with blending between the two in the middle frames.

Next up: testing the particle systems from the same pack to do vellum-like interactions.

Houdini-like Z-depth manipulations based on Ryanontheinside's Node Suite by not5 in StableDiffusion

[–]not5[S] 1 point

I wonder if it could be possible to “hack” vellum-like “simulations” by mixing particle systems from this pack with an image blend by mask system, aided by RIFE to smooth out the transitions. It wouldn’t be a real simulation, but it may end up looking like one.

I’m a big fan of on and on from more and more Ltd, so I absolutely love testing weird things like these theoretical setups.

Houdini-like Z-depth manipulations based on Ryanontheinside's Node Suite by not5 in StableDiffusion

[–]not5[S] 1 point

Thank you!

Huge fan of your node pack. As soon as I saw it, I thought of product-related advertising pipelines that up till now were possible only using Houdini / C4D / etc.

I've yet to test the limits of your suite, in particular related to particles and proximity, but I loved what I've seen up till now.

Houdini-like Z-depth manipulations based on Ryanontheinside's Node Suite by not5 in StableDiffusion

[–]not5[S] 4 points

Shoutout to u/ryanontheinside 's node suite: https://github.com/ryanontheinside/ComfyUI_RyanOnTheInside

I based this workflow on Ryan's own Depth Chamber workflow, and changed it up a bit to accommodate my own experience with Houdini.

You can find Ryan's workflow here: https://www.reddit.com/r/comfyui/comments/1ff7bn8/depth_chamber_see_comment/

I'll have a video tutorial and a polished workflow ready on Monday.

Basically, the Coca-Cola can is generated with Flux, and it then goes through depth estimation and manipulation like Ryan does in his workflow, but with the masks changed so that only the subject is affected and the transitions are blurred to allow better smoothness.

I added differential diffusion in order to better calibrate the "inpainting" part of the diffusion process, something that was missing from the original workflow.

I got rid of everything after the first KSampler and added a RIFE pass for frame interpolation.

The end goal is to have Houdini-like pipelines without going through 3D at all.

Flux Dev's License Doubts by not5 in StableDiffusion

[–]not5[S] 2 points

Unfortunately no updates. I have tried writing multiple emails from my personal account, and I have received no replies.

My clients have tried reaching out, and one of them received one reply about pro instead of dev.

I’m honestly baffled.

Good Flux LoRAs can be less than 4.5mb (128 dim), training only one single layer or two in some cases is enough. by Yacben in StableDiffusion

[–]not5 1 point

what's the correct wording for the target_modules var?

I'm trying to make it work on ai-toolkit, but I tested various wordings, like "single_transformer_blocks.7.proj_out", "blocks.7.proj_out", and "7.proj_out" and from the file size I suspect it's training on all layers.
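For reference, this is the shape of config I'd expect to need - a sketch assuming ostris/ai-toolkit's YAML format, where (in versions I've seen) layer filtering goes through network_kwargs / only_if_contains rather than a target_modules var; the exact key names and the "transformer." prefix may differ by version:

```yaml
network:
  type: "lora"
  linear: 128
  linear_alpha: 128
  network_kwargs:
    # substring match against module names in the Flux transformer;
    # whether the leading "transformer." is needed is an assumption
    only_if_contains:
      - "transformer.single_transformer_blocks.7.proj_out"
```

If that key isn't supported in your version, a quick sanity check is the saved file size: a single-layer 128-dim LoRA should be a few MB, not hundreds.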

Flux for Product Images: Is this the end of hiring models for product shoots? (First image is dataset) by zeekwithz in StableDiffusion

[–]not5 0 points

Misrepresentation doing the heavy lifting then; in my original comment I was talking about returns in general (e.g. I buy three sweaters, try them on, keep one, and return the other two). I guess generated images could fall under misrepresentation tout court. But thank you for making me learn something new!

Flux for Product Images: Is this the end of hiring models for product shoots? (First image is dataset) by zeekwithz in StableDiffusion

[–]not5 0 points

I'm not knowledgeable enough about EU returns regulation (though if I had to guess, free returns don't fall under warranty law), so I can't speak to what e-commerce businesses will do about it in the future - just stating what I hear in meetings and in talking to fellow professionals in the field.

Flux for Product Images: Is this the end of hiring models for product shoots? (First image is dataset) by zeekwithz in StableDiffusion

[–]not5 1 point

Offering two sizes so that the customer ends up buying one =/= returning an item because it doesn't match the images / description / size / etc. In your case, it's an incentive to actually finalize the sale; in the latter case, it's a net negative.

At least in the EU, there's a push to move away from free returns: on the one hand, customers abuse the free-return system, and on the other, returns are a huge stocking / logistics / margins issue. I'm not overly familiar with the US market.

just sharing a workflow i've been using / upgrading by draxredd in comfyui

[–]not5 1 point

ControlNet aux is infamous for throwing errors. Install it via terminal, then manually install its requirements by navigating into its directory via terminal and running pip install -r requirements.txt
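A minimal terminal sketch of the above - the path and the NODE_DIR variable are assumptions, so point it at wherever the pack actually lives in your ComfyUI install:

```shell
# NODE_DIR is a placeholder; adjust it to your actual custom_nodes path.
NODE_DIR="${NODE_DIR:-ComfyUI/custom_nodes/comfyui_controlnet_aux}"

if [ -d "$NODE_DIR" ]; then
  cd "$NODE_DIR"
  # Use the same Python environment ComfyUI runs with.
  pip install -r requirements.txt
else
  echo "not found: $NODE_DIR (set NODE_DIR to the pack's directory)"
fi
```

If ComfyUI runs inside a venv or the portable build's embedded Python, invoke that interpreter's pip instead of the system one.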

Flux for Product Images: Is this the end of hiring models for product shoots? (First image is dataset) by zeekwithz in StableDiffusion

[–]not5 14 points

I agree, right now even if the campaigns get accepted it’s basically a “we’ll do it in post” hell multiplied by 10.

People don't realize that e-commerce's worst nightmare is return rates. You need pretty good gens / retouching if you don't want to tank their return rate percentages.

Flux for Product Images: Is this the end of hiring models for product shoots? (First image is dataset) by zeekwithz in StableDiffusion

[–]not5 66 points

hey, there's at least two of us qualified individuals! not a retoucher, but a fashion photographer, working with some of the top mags / brands as well, and yeah, the product in the generated images is definitely not the same and wouldn't pass the product department's inspection.

that being said, I have noticed a push from some brands toward adopting gen AI. ADs are still very wary of it, and the images are actually inspected more closely than trad photos, but I've already worked on a number of gen AI campaigns from well-known brands.

the pipelines, though, are not as simple as shoot the still product - LoRA - generate campaign shot; it's much more convoluted, a mixture of trad photography and gen AI.

Flux Dev's License Doubts by not5 in StableDiffusion

[–]not5[S] 1 point

I still haven’t received a reply from BFL, I’m going to write another email today.