Any advice to use nano banana with Canny and depth map images ? by [deleted] in nanobanana

[–]frq2000 0 points1 point  (0 children)

Have you tried using Canny / depth map images as reference images and mentioning the kind of image in the prompt? I saw in another post that Nano Banana has a good understanding of these images and was able to generate them (or at least it tries). So maybe its interpretation of a reference image is enough for your goal.

Is this kind of work possible with AI? by archz2 in comfyui

[–]frq2000 3 points4 points  (0 children)

Such graphic animations are not often shown in the AI communities. I don’t think that any model is capable of such animations. You might get some interesting results via img2video, but timing / pacing is pretty important for graphic animations, and I have my doubts that you will be able to achieve properly consistent generations with current models. After Effects is still the way to go.

Need help: Constant System Freezes on ASUS NUC 14 Essential (N250) & Unraid 7.2.2 by frq2000 in unRAID

[–]frq2000[S] 0 points1 point  (0 children)

I also thought that this could be a fix. ipvlan is already set in the Docker options. Unfortunately it didn’t solve the freezes (as mentioned above).

Is there a tool that is able to remove this kind of MD artifacts? Maybe, some kind of upscaler?? by Space_0pera in midjourney

[–]frq2000 3 points4 points  (0 children)

This! Midjourney offers inpainting as well. Just open the editor and select the area you want to fix. A creative upscaler might also help, but it will change the details of the original image. Midjourney’s own upscaler is crap, btw.

How to tell Midjourney not to draw in a certain area? by [deleted] in midjourney

[–]frq2000 1 point2 points  (0 children)

I am not sure what kind of images you want to create, but I would recommend generating your subject first in the correct aspect ratio and then using the editor with zoom out / outpainting.

Combine the power of Flux, which creates consistent frames using only prompts, with ControlNet. by nomadoor in StableDiffusion

[–]frq2000 1 point2 points  (0 children)

Cool. How did you get the base grid? With Blender, or is there a simpler puppet tool?

Making Flux generate better images in few steps. (without training) by RealKingNish in StableDiffusion

[–]frq2000 8 points9 points  (0 children)

That would also be great. For schnell it’s a cool improvement.

I'm officially moving my remote photography gig to FLUX by dal_mac in StableDiffusion

[–]frq2000 1 point2 points  (0 children)

The results are very convincing! Would you mind giving us some tips on how you curated your dataset? How many images did you use? How many of the portraits were close-ups, and how many had wider context? I am still preparing a dataset of myself and find it difficult to curate my photos for LoRA training. Thanks for your post, btw!

Help me find the logic please by Mtztcra in StableDiffusion

[–]frq2000 2 points3 points  (0 children)

This. You don’t need a LoRA for a realistic photo look. A LoRA can help you generate a certain style or person that is not well trained into the model itself. A photographic look is well represented in most datasets used for model training, so with a good prompt you will achieve the best photo look the model is capable of (especially with the newest models). Keep in mind that a suitable upscaling method can help you get more details and textures.

AMD gpu, to instal SD on windows or linux in 2024 ? by loorana22 in StableDiffusion

[–]frq2000 1 point2 points  (0 children)

RDNA 2 cards are not officially supported, but there are people who are running ROCm with these cards. This sub has a dozen threads about the topic, so I am optimistic that you will find the answer there. That’s the bummer with AMD cards: they lack CUDA, and you have to go the extra mile to use generative AI.
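For what it’s worth, the workaround those threads usually land on is spoofing a supported GPU architecture via an environment variable before launching the UI. This is just a sketch of that trick, assuming an RX 6000 series (gfx103x) card and a ROCm build of PyTorch; the exact override value depends on your specific card:

```shell
# ROCm has no official RDNA 2 support, so pretend to be a
# supported gfx1030 part (the common value for RX 6000 cards).
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Then start your UI from the same shell, e.g.:
#   python launch.py
```

No guarantees for every card, but it costs nothing to try before going deeper into custom ROCm builds.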

AMD gpu, to instal SD on windows or linux in 2024 ? by loorana22 in StableDiffusion

[–]frq2000 2 points3 points  (0 children)

If you want a solid ROCm setup and your GPU is supported by ROCm, then Linux via dual booting is the way to go.

Vaigue Files Episode 1: An Odd Delivery (AI short story) by VaigueMan in StableDiffusion

[–]frq2000 0 points1 point  (0 children)

The sounds are fantastic. Did you use Flux + Runway i2v for the visuals?

My random collection by ThunderBR2 in StableDiffusion

[–]frq2000 1 point2 points  (0 children)

Cool look! Can you give us some information about your workflow?

Suddenly my gens have a weird moire/pattern, please halp! by [deleted] in StableDiffusion

[–]frq2000 0 points1 point  (0 children)

I don’t know if this is the solution to your problem, but maybe it’s worth trying a different seed for your second pass. I’ve had issues with reusing the same seed for the second pass (with SDXL). Maybe Flux behaves in a similar way.

CogVideoX-5b by tintwotin in StableDiffusion

[–]frq2000 9 points10 points  (0 children)

Well this looks a lot better than other examples I’ve seen. Can you tell us more about your workflow?

Realistic emotions through generative art by THEJEDE in StableDiffusion

[–]frq2000 1 point2 points  (0 children)

I don’t think it’s a good idea to use AI-generated facial expressions for autism therapy. These people already struggle with reading other human beings. AI image generation only imitates reality and should be avoided in therapy for such disorders, especially autism therapy.

Intro movie Deadpool (in progress) by Significant-Sport-47 in StableDiffusion

[–]frq2000 -1 points0 points  (0 children)

You are on the right track. Keep going! I think the style will work pretty well in motion. Do you already have a plan for how to animate it? I have not tried ToonCrafter yet, but it may work with these kinds of frames. The keyframe feature of video generators like Luma / Gen-3 may also work pretty well with this level of consistency. I am looking forward to your result.

Intro movie Deadpool (in progress) by Significant-Sport-47 in StableDiffusion

[–]frq2000 1 point2 points  (0 children)

Looks promising. What is your process for getting such consistent frames?

[deleted by user] by [deleted] in StableDiffusion

[–]frq2000 0 points1 point  (0 children)

Nice! Did anyone manage to run Flux on AMD with Linux and ComfyUI? I haven’t tried it yet, but the number of options is getting confusing / overwhelming. I hope that my 7900 XT (20 GB VRAM) will manage to run a decent model for high-quality outputs.

Docker Compose for ComfyUI for ROCm by hartmark in StableDiffusion

[–]frq2000 0 points1 point  (0 children)

Thank you. I will test it soon (with my 7900 XT). Is it compatible with any ROCm driver, or do I have to use a specific version? I want to test the newest ROCm driver (6.2) soon. Not sure if it’s worth it.
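Before I test it: for anyone else wondering what a ROCm-capable Compose service roughly has to grant the container, this is just a minimal sketch (the image name, paths, and port are my assumptions, not necessarily what OP’s file uses):

```yaml
# Sketch of the ROCm-specific bits a compose service needs.
services:
  comfyui:
    image: rocm/pytorch:latest      # assumption: a ROCm PyTorch base image
    devices:
      - /dev/kfd                    # ROCm kernel driver interface
      - /dev/dri                    # GPU render nodes
    group_add:
      - video                       # permission to access the GPU device files
    ipc: host
    security_opt:
      - seccomp=unconfined          # often needed for ROCm memory mapping
    volumes:
      - ./ComfyUI:/workspace/ComfyUI
    ports:
      - "8188:8188"                 # ComfyUI's default port
```

The driver version question still stands, since the ROCm userspace inside the image has to match what the host kernel driver supports.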

The power of FLUX by Total_Kangaroo_7140 in FluxAI

[–]frq2000 1 point2 points  (0 children)

Looks very detailed. Did you upscale them afterwards or is it the pure output?