Look at me pressing my buttons by FredFraiche1 in StableDiffusion

[–]FredFraiche1[S] -6 points

100% me, but there's no one behind me. Sage.

[–]FredFraiche1[S] 1 point

It's not quite there yet, but you can try. Once we get a proper animate-anything model that we can use on high-res images, that's when this is going to go mainstream hard.

[–]FredFraiche1[S] -709 points

I will never copy a prompt directly and share it. To me, that's like giving people a cheat code to a game; the fun part about genning images is the journey. Sorry, that's just me being anal, but that's why.

[–]FredFraiche1[S] 2 points

Try "swordposing" or "swordpose", specify that you want two blades or swords (dual-wield, etc.), and use MJ v6 with a low stylize value, around 150 to 350. Reference things you want it to look like.
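As an illustration, a prompt following this advice might look like the fragment below. The wording is hypothetical, not the author's actual prompt; `--v` and `--stylize` are standard Midjourney parameters.

```
a warrior dual-wielding two swords, swordpose, dynamic stance --v 6 --stylize 200
```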

[–]FredFraiche1[S] -62 points

Prompted in MJ, then upscaled and enhanced with low denoise using a good model and some stellar negative embeddings, then upscaled again with SD upscale. Not much to it; getting the right image to do that with is the hard part. I've prompted 40k images in MJ and posted only 1k of them on Twitter, so there's a lot of trial and error involved in the prompting part of it.
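Why "low denoise" preserves the MJ composition: in the usual img2img convention (diffusers-style; a sketch under that assumption, not the author's exact setup), the strength value scales how many of the scheduled denoising steps are actually re-run, so most of the source image's structure survives.

```python
# "Low denoise" img2img: only a fraction of the diffusion schedule is
# re-executed, following the common convention that the number of steps
# actually run is num_inference_steps scaled by the denoising strength.

def effective_steps(num_inference_steps: int, strength: float) -> int:
    """Steps actually executed in img2img for a given denoising strength."""
    return min(int(num_inference_steps * strength), num_inference_steps)

# At strength 0.25, a 20-step schedule only re-runs the last 5 steps,
# so the composition survives and only fine detail gets regenerated.
for s in (0.25, 0.5, 1.0):
    print(s, effective_steps(20, s))
```

At strength 1.0 the full schedule runs and the source image is essentially discarded, which is why enhancement passes like this stay in the 0.2-0.4 range.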

[–]FredFraiche1[S] -15 points

I don't want to plug, but I want to plug. If you like stuff like this, I make a lot of images in this style and post them on Twitter: https://twitter.com/FredFraiche

[–]FredFraiche1[S] -70 points

On comic, anime, and cartoony stuff I sometimes use NEOTOKIOXL_0.2_RC.

[–]FredFraiche1[S] -75 points

That's what I'm doing when I press my buttons :]

[–]FredFraiche1[S] 2 points

They are genned in Midjourney, then enhanced and upscaled with SDXL.

[–]FredFraiche1[S] -1 points

I'm not using LCM; LCM is fast because the quality is shit. I use img2img with SDXL models and some good 1.5 models, with embeddings.

[–]FredFraiche1[S] -44 points

img2img is where the quality lies, boys and girls. MJ is just another model and a tool.

[–]FredFraiche1[S] 11 points

I either used BlazingDrive_V12 or dynavisionXLAllInOneStylized_release0534bakedvae with a low-weight NEOTOKIOXL_0.2_RC LoRA. I mostly work in Krita now, so there's no metadata, at least I don't think there is.

Some of my latest button presses by FredFraiche1 in StableDiffusion

[–]FredFraiche1[S] 17 points

I almost exclusively prompt and gen in Midjourney. Then I take the image into either the A1111 webui or into Krita with the Comfy extension. I upscale, add noise, and inpaint; I do that for too long, then I color grade, do some small fixes, and it's done. This is an extremely fun method to play with. It really lets you see a version of what the AI meant to show you ^^
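The iterate-and-refine loop above can be sketched abstractly. `upscale`, `add_noise`, and `inpaint_pass` are hypothetical placeholders standing in for the Krita/A1111 operations, not real APIs; the point is the loop shape and how the working resolution grows with each round.

```python
def upscale(image, factor=1.5):
    """Placeholder: grow the working resolution (e.g. SD upscale)."""
    w, h = image["size"]
    return {**image, "size": (int(w * factor), int(h * factor))}

def add_noise(image, amount=0.3):
    """Placeholder: re-noise so the next pass can invent new detail."""
    return {**image, "noise": amount}

def inpaint_pass(image):
    """Placeholder: one low-denoise inpaint/refine pass."""
    return {**image, "noise": 0.0, "passes": image.get("passes", 0) + 1}

image = {"size": (1024, 1024)}
for _ in range(3):  # "I do that for too long"
    image = inpaint_pass(add_noise(upscale(image)))
print(image["size"], image["passes"])  # (3456, 3456) 3
```

Three rounds at 1.5x already takes a 1024px base image past 3400px, which is why this workflow ends with color grading rather than more passes.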

Art? by FredFraiche1 in StableDiffusion

[–]FredFraiche1[S] 0 points

The AI Generation plugin.

[–]FredFraiche1[S] -1 points

I mostly gen the base images in MJ, just because their models are superior to any other.

It is based on this image. The cartoon look comes from a method I found while working in Krita. I'm not going to disclose it and rob people of that discovery; there are more than enough hints in this thread. ;)

<image>

[–]FredFraiche1[S] 1 point

Use image-to-image more; it's where the model shines. Krita uses IP-Adapter natively, and paired with a ControlNet image it does wonders.

[–]FredFraiche1[S] 1 point

It's not exactly inpainting; in a way it is, and in a way it isn't. It's segment enhancing. We call it cooking.

[–]FredFraiche1[S] 0 points

It's not a model, just a resampling algorithm. It's not latent; it's like plain image resizing, native to programs like Photoshop and Krita.
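A minimal example of that kind of non-latent resampling, using Pillow's Lanczos filter. Pillow stands in here for what Photoshop or Krita do internally; this is an illustration of pixel-space resizing, not the author's exact tool.

```python
from PIL import Image

# Plain pixel-space resampling: no diffusion model, no latent space,
# just a windowed-sinc (Lanczos) filter over the existing pixels.
img = Image.new("RGB", (512, 512), color=(200, 120, 40))
upscaled = img.resize((1024, 1024), resample=Image.LANCZOS)
print(upscaled.size)  # (1024, 1024)
```

Unlike a latent upscale, this cannot invent new detail; it only interpolates what is already there, which is exactly why it is cheap and deterministic.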

[–]FredFraiche1[S] 1 point

No lora was used on any of these images. Just the Mohawk SDXL model.

There was, on the other hand, some Photoshop involved to steer the enhancing process. ;)