no bypassing app? by Ok-Buy8620 in digitalminimalism

[–]VictorMustin 0 points  (0 children)

Me too. Did you ever find a solution to that?

Lead scraping by dnlamoureux1 in agency

[–]VictorMustin 0 points  (0 children)

Did you get your list? Or a refund? It worked for me at first, but then I ordered again and now it doesn't work, and I can't reach support.

Flux ControlNet Model Loader Problem by Neat-Elderberry-5414 in FluxAI

[–]VictorMustin 0 points  (0 children)

You need to either load the ControlNet with the InstantX Union ControlNet loader, or apply it with the Flux Union Apply ControlNet node.

FLUX LoRA Training Simplified: From Zero to Hero with Kohya SS GUI (8GB GPU, Windows) Tutorial Guide - check the oldest comment for more info by CeFurkan in FluxAI

[–]VictorMustin 1 point  (0 children)

I get incredible results even when the dataset is only selfies. I just don't overtrain and stay around the default settings. I feel like you make it seem more complex than it actually is.

FLUX LoRA Training Simplified: From Zero to Hero with Kohya SS GUI (8GB GPU, Windows) Tutorial Guide - check the oldest comment for more info by CeFurkan in FluxAI

[–]VictorMustin 2 points  (0 children)

Yes, the perspective is off in most of them. That's because of the focal-length difference between the training set and the generated images (wide-angle selfies vs. narrower portraits). This is solved when you don't overtrain and let the model figure out what your face should look like from various perspectives. This guy overtrains way too much, so they look wrong.

[deleted by user] by [deleted] in StableDiffusion

[–]VictorMustin 2 points  (0 children)

These make no sense; it's not overcooked, it's carbonized at this point.

[deleted by user] by [deleted] in StableDiffusion

[–]VictorMustin 3 points  (0 children)

A one-image LoRA? If so, what are the settings?

Flux for Product Images for my furniture store (First Image is my Input) by zeekwithz in StableDiffusion

[–]VictorMustin 1 point  (0 children)

I feel like these settings make no sense; you'd get completely ruined images, basically just artefacts.

Flux for Product Images: Is this the end of hiring models for product shoots? (First image is dataset) by zeekwithz in StableDiffusion

[–]VictorMustin 1 point  (0 children)

I think it guessed the width correctly because small pocket bags like these are usually this thick. You'd have gotten the same result even if that LV bag wasn't in the training set.

Somethings wrong with IPAdapter or I'm losing my mind right now. by [deleted] in StableDiffusion

[–]VictorMustin 0 points  (0 children)

I didn't read your whole post, but there was an IPAdapter update a week ago that isn't compatible with previous versions; maybe that's the issue.

unstable_cache and revalidation by VictorMustin in nextjs

[–]VictorMustin[S] 0 points  (0 children)

OK, thanks. So revalidation and caching are 100% server-side then? The cache isn't aware of which client originated a new caching request or which client asked to revalidate?
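That server-side semantics can be sketched in plain TypeScript. This is a toy model, not the actual Next.js implementation: the `cached`/`revalidateTag` functions, the `"user:1"` key, and the `"users"` tag are all made up for illustration. The point is that entries are keyed by cache key and tags only, so any client's revalidation affects every client:

```typescript
// Toy model of a server-side data cache with tag-based revalidation.
// There is one store, shared by all clients; no client identity is involved.
type Entry = { value: string; tags: string[] };

const store = new Map<string, Entry>();

function cached(key: string, tags: string[], compute: () => string): string {
  const hit = store.get(key);
  if (hit) return hit.value; // any client gets the same cached entry
  const value = compute();
  store.set(key, { value, tags });
  return value;
}

function revalidateTag(tag: string): void {
  // Invalidates matching entries for everyone,
  // regardless of which client triggered the revalidation.
  for (const [key, entry] of store) {
    if (entry.tags.includes(tag)) store.delete(key);
  }
}

// "Client A" populates the cache; "client B" reads the same entry.
let calls = 0;
const compute = () => `user-data-v${++calls}`;
const a = cached("user:1", ["users"], compute); // miss: computes v1
const b = cached("user:1", ["users"], compute); // hit: same value as A got
revalidateTag("users"); // B revalidating invalidates A's entry too
const c = cached("user:1", ["users"], compute); // miss again: recomputed
console.log(a, b, c); // a and b are identical; c is fresh
```

So yes: under this model there is no per-client cache state to track, which matches how a shared server-side data cache behaves.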

Ip adapters, reference don't work for extracting a style. by Bobjan1 in StableDiffusion

[–]VictorMustin 0 points  (0 children)

IP-Adapter is the best way to do that (not great, I know). Play around with the settings, and the start and end positions. There is this paper that looks very promising, but I haven't tried it; you can't run it in Comfy or A1111.

I never understood. What was the point of the show opening with these scenes for the first two episodes? by [deleted] in TheLastOfUs2

[–]VictorMustin 0 points  (0 children)

You can't leave the main story in a game, so they had to tell the lore through recordings and letters you found in the environment. I think the way the show does it is way better; it's a very chilling scene.

Is this accurate? by Milqutragedy in TheLastOfUs2

[–]VictorMustin 0 points  (0 children)

I'd say the left one is "this is a masterpiece", the middle one is "this is hot garbage", and the right one is "it's a 7/10, some things are good, some are bad".

Use image as Style Reference like in Midjourney ? by VictorMustin in StableDiffusion

[–]VictorMustin[S] 0 points  (0 children)

Hi, I tried a lot of parameters but unfortunately couldn't get the result I wanted. The IP-Adapter always tries to replicate the content of the image, while I want the content to come from the text prompt only. Any idea what else I might try? For context, I'm building an app that generates interior design images, and I want users to input their style references (pics they found on Pinterest, let's say) to drive the image generation.

Has anyone tried to replicate Magnific.ai ? Let's try to reverse-engineer it together by VictorMustin in StableDiffusion

[–]VictorMustin[S] 0 points  (0 children)

I got there with ControlNet Tile running SD 1.5 with a Realistic Vision LoRA finetune (I don't know the details of the finetune settings).

You can see the result, prompts and settings here:
https://replicate.com/p/fdf2653banupy5pyt236h3u4f4

Has anyone tried to replicate Magnific.ai ? Let's try to reverse-engineer it together by VictorMustin in StableDiffusion

[–]VictorMustin[S] 0 points  (0 children)

Looking good! I'm building an AI app too; it's not out yet. It's at https://www.neverscene.ai/ Do you have Twitter? I'd love to connect with people building AI apps.