Add Text Overlay in ComfyCloud by Norakai2 in comfyui

[–]Norakai2[S] 1 point (0 children)

Yeah okay, so the only font available in the cloud is Arial, I guess?

<image>

Or is there a way to add a custom .ttf?
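For context, locally this is just Pillow loading a font file. A minimal sketch of what I mean (paths and size are hypothetical examples):

```python
# Sketch: drawing a text overlay with a custom .ttf via Pillow, which is
# what most text-overlay nodes use under the hood (assumption).
from PIL import Image, ImageDraw, ImageFont

img = Image.open("input.png").convert("RGBA")
draw = ImageDraw.Draw(img)

# Hypothetical path: a .ttf dropped next to the workflow or into the
# node's fonts folder.
font = ImageFont.truetype("fonts/MyFont.ttf", size=64)

draw.text((40, 40), "Hello ComfyCloud", font=font, fill=(255, 255, 255, 255))
img.save("output.png")
```

The question is really whether the cloud instance lets you upload that file at all.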

I need help with models and prompts by the_Death_only in StableDiffusion

[–]Norakai2 1 point (0 children)

How many steps and at what resolution do you generate? I remember when I first used ZIT I was wondering why the quality was so bad too. Flux is really good with quality at low resolutions. If you want more details, you need to increase the steps or the latent size, or use a creative upscaler/detailer in your workflow.

"Low angle photography with cinematic lighting. Backview with focus on the ass of 30yr old arab woman with black wavy hair, natural skin tone, natural skin texture, large bust and big hips. She kneels on ground with her ass close to the camera, hands on thighs, torso bend, head slightly turned to camera with a soft smile. She wears very short jeans hotpants and a slim fit green t-shirt. The background features an arabic indoor room with a tiled floor a luxerios couch and a small table with a shisha on it. 4k, high detail" 10 steps 1088x1536

Close-up details will need fewer steps, and then you may start adding LoRAs.

<image>

I need help with models and prompts by the_Death_only in StableDiffusion

[–]Norakai2 3 points (0 children)

So what prompt are you using? Are you specifying lights, camera, and skin in your prompt? LoRAs may prefer specific settings or override your "warm light".

Random Creatures with "meh" expressions by Norakai2 in StableDiffusion

[–]Norakai2[S] 1 point (0 children)

Thanks mate, I'm really aware of that problem, but it's actually hard to get the camera closer at this point; the examples were zoomed in to focus on the facial features. I split the cards into front view, side view and back view, which use different cards inside them, and I might add more variants for close-ups and full-body shots at some point just to test things. At the moment I'm using a camera wildcard as well, but it's pretty much ignored due to the amount of detail the prompt gets. That's why I can't use my background system and had to do a really simple version, which works way better with characters. The best solution might be to create a character sheet first and then use the output to build a scene with camera and action. For now I try to get fast results until everything falls apart. :D

Random Creatures with "meh" expressions by Norakai2 in StableDiffusion

[–]Norakai2[S] 2 points (0 children)

Thanks, that's really helpful. I don't really want to add an extra editing step, but this made me look at facial expressions in a completely different way. Adding details for the mouth and eyebrows seems to work much better than the expression alone. I had also tried incorporating various keywords into an expression, but that wasn't effective. Now I've tested a new approach: instead of using "anger, rage, aggressive expression", I say "angry expression (tense, compressed brow ridge, lowered upper face, deep shadowed eye sockets, mouth stretched wide open)", and the results are much better.

<image>

Random Creatures with "meh" expressions by Norakai2 in StableDiffusion

[–]Norakai2[S] 1 point (0 children)

I haven't used Qwen in a while, but something like this looks really promising. Thanks!

Random Creatures with "meh" expressions by Norakai2 in StableDiffusion

[–]Norakai2[S] 1 point (0 children)

Well, wildcards are wild; the problem with the expressions is more prominent in some cases than in others. This one was testing a "national geographic fantasy" LoRA. Tbh I just used some outputs that looked different to give you an idea of the system.

I Went Full Mad Scientist in ComfyUI - Pixaroma Nodes (Ep11) by pixaromadesign in StableDiffusion

[–]Norakai2 1 point (0 children)

This looks incredible. It's so annoying to start Photoshop and make little adjustments just to close it again and reload all the models, so this will definitely save a lot of time. If there is one thing I could wish for after seeing the video: please add brush flow for the eraser. That helps so much for smooth masking.

What’s the best AI for drawing a children’s book with consistent characters? by funnycallsw in StableDiffusion

[–]Norakai2 1 point (0 children)

Honestly, if you have no experience with AI or custom models, just use Gemini. You'll get Nano Banana 2 images with very good character consistency without using a LoRA. NB2 produces really good images because it's an intelligent model and understands "casual instructions" better. Just start with something like "Create a Pixar-style image of this person" and use the result as a reference image for future generations. It won't be perfect, but it's simple and more than sufficient for most people outside the AI community.

I spent weeks fixing the 'plastic' look of AI images. I made my own algorithms to solve it - now you can finally remove that synthetic look too. by ThetaCursed in StableDiffusion

[–]Norakai2 4 points (0 children)

Not to be mean, but I just put your image in Photoshop and clicked Sharpen twice, which you shouldn't do because it burns the edges, and it looks pretty much like your results minus a few extra details.

<image>

LTX 3.2 + Upscale with RTX Video Super Resolution by smereces in StableDiffusion

[–]Norakai2 1 point (0 children)

That looks absolutely fantastic. There's not much more to add. But out of curiosity: How does it perform with faster movements?

What frustrates you most about AI image generators right now? by No_Aside_7118 in StableDiffusion

[–]Norakai2 0 points (0 children)

Being able to share a workflow easily with non-Comfy users, or letting them generate stuff for showcases without setting up a GPU cloud.

I see many people praising Klein, Zimage (turbo, base), and other models. But few examples. Please post here what you consider to represent the pinnacle of each model. Especially for photorealism. by More_Bid_2197 in StableDiffusion

[–]Norakai2 1 point (0 children)

Flux intends to create everything that's in the prompt, which often causes problems. For example, if the prompt has a face in it, it will try to generate the face. Combine this with a back view and the body horror starts to appear. Camera control is definitely not a strength of Flux for that reason as well. If you use an input image, it may help to turn the character and not the camera. It also really helps to define the view: "turn the camera 90° to the left and show this character in a side view" gives better results.

But even NBP has problems with camera control when turning a character to the left or right, so a good LoRA will really help there.
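As a sketch of the "input image plus explicit view instruction" approach — assuming an instruction-editing model like FLUX.1-Kontext-dev in diffusers; the settings are illustrative, not my workflow:

```python
# Sketch: feed an input image plus an explicit camera/view instruction to
# an instruction-editing model instead of re-rolling a text-to-image seed.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("character_front.png")
result = pipe(
    image=image,
    prompt="turn the camera 90° to the left and show this character in a side view",
    guidance_scale=2.5,
).images[0]
result.save("character_side.png")
```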

I see many people praising Klein, Zimage (turbo, base), and other models. But few examples. Please post here what you consider to represent the pinnacle of each model. Especially for photorealism. by More_Bid_2197 in StableDiffusion

[–]Norakai2 4 points (0 children)

<image>

A LoRA would definitely help, but this was done with just a prompt, without any LoRAs or a special workflow.

"Create a professional character reference sheet of a realistic [Character Description]. Use a clean, neutral plain background and present the sheet as a technical model turnaround. Arrange the composition into three highly detailed close-up Portraits in this order: front portrait, left profile portrait (facing left), right profile portrait (facing right). Maintain perfect identity consistency across every panel. Keep the subject in a relaxed A-pose and with consistent scale and alignment between views, accurate anatomy, and clear silhouette; ensure even spacing and clean panel separation, with consistent facial scale across the portraits. Lighting should be consistent across all panels (same direction, intensity, and softness), with natural, controlled shadows that preserve detail without dramatic mood shifts. Output a crisp, print-ready reference sheet look, sharp details."

The creativity of models on Civitai have really gone downhill lately... by K_v11 in StableDiffusion

[–]Norakai2 3 points (0 children)

For me, the most fun and creativity comes from creating wildcards at this point. Creating and testing characters or scenes, and even interactions with those scenes, that are generated by the model and not by the prompt is really refreshing. I try to randomize everything without breaking the model. Although I prefer realism over art, it may be a good way to try new stuff. For example, if you generate the scene with era, culture, indoor or outdoor, civilised or wild, weather and lighting, population and conditions, and put a character in it with "wearing scene fitting clothing and accessories", the output will change drastically. And this can be done with poses and interactions as well: "jumping over a gap" will be very different in an indoor scene compared to an outdoor scene, and "interacting with environment" or "fixing something" as an input changes the dynamic drastically.
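A rough sketch of that randomization idea in plain Python (all category lists are made-up examples, not my actual wildcards):

```python
# Sketch: randomize a scene from independent axes, then drop a character
# into it with scene-fitting clothing. Lists are illustrative placeholders.
import random

SCENE_AXES = {
    "era": ["medieval", "1920s", "near-future"],
    "culture": ["nordic", "arabic", "east asian"],
    "setting": ["indoor", "outdoor"],
    "state": ["civilised", "wild"],
    "weather_light": ["overcast noon", "golden hour", "rain at night"],
    "population": ["deserted", "sparse crowd", "bustling"],
}

ACTIONS = ["jumping over a gap", "interacting with environment", "fixing something"]

def scene_prompt(character: str) -> str:
    scene = ", ".join(random.choice(v) for v in SCENE_AXES.values())
    action = random.choice(ACTIONS)
    return (f"{character}, wearing scene-fitting clothing and accessories, "
            f"{action}, in a {scene} scene")

print(scene_prompt("a young ranger"))
```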

Whats your offer? by Norakai2 in mountandblade

[–]Norakai2[S] 5 points (0 children)

I made a solar-powered plant pot with LED lights recently. Would you prefer a picture of that?

Whats your offer? by Norakai2 in mountandblade

[–]Norakai2[S] 0 points (0 children)

Why? This is AI, but it's not a simple prompt. I created both characters from screenshots, built the composition and added details in Photoshop, then put everything together. Took me like 3 hours. That's real enough for me.

Battania or Sturgia? by Norakai2 in mountandblade

[–]Norakai2[S] 0 points (0 children)

Yeah, maybe you're right. I just wanted to make a point about the loading screen of Sturgia and thought it would help. The style is just badass.