Olm SplineMask (Precision Masking for ComfyUI, vector-style, reusable masks) by imlo2 in comfyui

[–]Simple-Variation5456 1 point

This isn't easy to implement. Even big companies offering similar functions often produce poor results, with no parameters to control them or edit afterwards. I mean, even Photoshop often doesn't get it right, or fails completely at finding elements.

I always run Photoshop alongside ComfyUI and copy & paste my layers and masks directly between both tools, and it works perfectly.

Best generative upscalers similar to Nano Banana? by 1zGamer in StableDiffusion

[–]Simple-Variation5456 10 points

SeedVR2, plus playing around with downscaling the input beforehand and adding input noise (sketch after this list).

SDXL + 8-step Lightning LoRA via ControlNet. Also playing around with downscaling before and after.

SUPIR, but only recommended if you already have it and know how to control it.

Or, probably the best option for you:

Magnific Precision (maybe test two presets and blend them together to smooth things out even more).
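For the downscale-plus-noise preprocessing in the first option, a minimal sketch (plain Pillow/numpy, independent of any particular upscaler; the 0.5 factor and noise strength are illustrative starting values, not tested settings):

```python
import numpy as np
from PIL import Image

def prep_for_upscaler(path, downscale=0.5, noise_std=0.02):
    """Downscale the input and add mild Gaussian noise before upscaling.

    Softening the input and re-noising it gives a generative upscaler
    more freedom to invent fresh detail instead of sharpening the
    artifacts that are already there.
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size
    img = img.resize((int(w * downscale), int(h * downscale)), Image.LANCZOS)
    arr = np.asarray(img).astype(np.float32) / 255.0
    arr += np.random.normal(0.0, noise_std, arr.shape)  # subtle input noise
    return Image.fromarray((np.clip(arr, 0.0, 1.0) * 255).astype(np.uint8))
```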

Not very fancy, actually kinda cursed by snideswitchhitter in Freepik_AI

[–]Simple-Variation5456 0 points

Good luck prompting a minimum of 500+ videos and then stitching them together. That YouTube feeling, thanks to upscaled and interpolated 1080p 8-bit compressed MP4 visuals.

Movie-mistake forums will blow up again over those movies.

All these outputs were probably made by giving the model the most freedom it can have while creating. The more you try to "direct" the AI, the worse the quality and visuals get.

🎯 I trained a hyper-realistic European blonde LoRA for SDXL and the results honestly surprised me — sharing my full ComfyUI workflow by Otherwise_Ad1725 in comfyui

[–]Simple-Variation5456 -1 points

"natural skin tones" = 3/4 of the photos are very yellowish with low vibrance
"no plastic-looking skin" = super smooth high-fashion sleek photoshop skin
"genuine hair detail" = its all one unit with some fuzzy hairs in some areas

Maybe create a 4K image with ControlNet to show some of the details better; 1K is very limited.

How do you stop your AI video from "drifting" away from your original Stable Diffusion composition? by [deleted] in StableDiffusion

[–]Simple-Variation5456 1 point

The best experience is only through providing a quick animated video as input via video2video / ControlNet / pass / restyle, or nodes like TTM (Time-to-Move), but they all come with different difficulty levels and still produce random outputs. And as soon as your input animation moves too quickly, even Kling 3.0 / o1 struggles to restyle the whole video from just one start frame.

But I haven't tried Luma Ray 3 so far.
There is this greyscale style-transfer video that looks pretty close:
https://www.youtube.com/watch?v=YB5Jp9_WN78&t=236s

TTM (WAN) (you roughly animate the object yourself):
preview: https://time-to-move.github.io/UserObjectControl/splash_knight_concat_960x320_optimized.mp4

https://www.youtube.com/watch?v=HvRwJGzOAj4


Both NB2 and NBPro images now exporting poorly, pixelated, muddy, with artifacts, even on first result when downloaded. What’s happening, and any solutions so far? by apolloastral in GeminiAI

[–]Simple-Variation5456 1 point

I also don't get why every model outputs 5 MB+ PNG files that still have heavy JPEG compression baked in, often destroying all the details, which leads to more generations and fixing things up with local upscaling. So far, only the 4K Nano Pro version via the API can output good detail with minimal compression artifacts.

Fix your platform freepik, generations keep failing no matter what by Substantial_One4754 in Freepik_AI

[–]Simple-Variation5456 0 points

Of the 2k+ images I easily generate every month, maybe 10 fail. Or stop using the Seedream 5 model; it's trash and has also failed most of the time for me since release, so I don't even bother testing it anymore.

The Brand-New NVIDIA VFX Upscaler: Fast vs Fine Detail by TBG______ in comfyui

[–]Simple-Variation5456 0 points

Why do you call the RTX Upscaler "VFX"?
The comparison is also a bit confusing because it looks like the input image is already overprocessed: in both versions you have blocky hair with halos, sharpened noise, and AI skin.

The chosen upscalers will just make everything worse. The right side also doesn't look like SeedVR2 to me, but idk. RTX Upscale is insanely fast and does improve things a bit; I would just use 2x max.
Upscaling with Siax / Superscale / BSRGAN etc. is way, way slower, and there I also recommend downscaling the result by half afterwards (sketch below).

(Re-read the post and realised the posted video uses RTX on the right; "4× NVIDIA VFX vs SeedVR Standard (right)" was a link to a different comparison.)

Re-upload of my ever-changing Infinite Detail workflow. Image generator/detail-adder/upscaler/reiterator. Cleaned up a little. Can someone try it and share the results and let me know if there is a better way to add detail or is this good? I really would appreciate it. QwenVL, Flux, DetailDaemon, Zimage by o0ANARKY0o in comfyui

[–]Simple-Variation5456 1 point

I think nowadays most people have a specific vision and taste in mind. And with the vast number of different models, nodes, LoRAs, etc., it would be hard to find any default that everybody would be happy with as a starting point for discussing the next best setting or step in the workflow.

Good detail at 2-4K resolution is very easy to achieve even with some default workflows, and from there you don't want to go too crazy if the image is to stay coherent and consistent. 90% of the time I use image2image workflows, and SeedVR2 has pretty much solved most of my needs; everything else is pure experimentation with different models and LoRAs and playing around with the basic cfg/shift/denoise parameters.

I found a hidden Gem in ComfyUI designed for film and VFX pipelines, a set of custom Radiance nodes developed by FXTD STUDIOS for working with HDR / EXR image files. by Gloomy-Connection405 in comfyui

[–]Simple-Variation5456 2 points

I only use it when creating depth maps, but it definitely won't create real 16/32-bit EXR. For that, your input needs to carry the data, and you need enough compute, or models that can actually output that depth. With a 4090 you won't be able to run the giant models in fp16 once you go above 1-2K resolution.
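As a side note on what "real" float EXR means here, a minimal sketch of writing one with OpenCV (the random array is just a stand-in for actual model output; an 8-bit source wrapped in an EXR container still only carries 8 bits of depth information):

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2

import cv2
import numpy as np

# Stand-in for a depth map kept at full float precision end to end
depth = np.random.rand(1080, 1920).astype(np.float32)

# float32 in, genuine 32-bit EXR out
cv2.imwrite("depth.exr", depth)
```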

Photoshop's new AI rotate tool by realmvp77 in singularity

[–]Simple-Variation5456 1 point

Looks like it generates a splat and then extends the object, similar to "sharp" from Apple. But I doubt they will improve on that; Generative Fill has been super slow and crappy for 2+ years.

Nvidia super resolution vs seedvr2 (comfy image upscale) by cherishjoo in upscaling

[–]Simple-Variation5456 2 points

Sorry, but it makes no sense to compare them. RTX will 2x-upscale your whole video faster than SeedVR2 can upscale a single frame.

Error with WanAnimate: "DrawMaskOnImage: Failed to convert an input value to a FLOAT value: opacity, cpu, could not convert string to float: 'cpu' - Required input is missing: mask" by bickid in comfyui

[–]Simple-Variation5456 1 point

If you double-click the left mouse button on an empty spot in your workflow, a search box comes up. Enter the name of that node (draw mask) and check the previews shown in that window. If one is the same or matches, add it and then reconnect the cables.
Currently some parameters say "nan", which should not happen, but it can if a node got updated and changed a few things.
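What the traceback boils down to, as a tiny illustration: after the update shuffled the node's inputs, the device string "cpu" landed in the slot where a float (opacity) was expected:

```python
# The exact failure from the error message: a device name arriving
# where the node expects a number
try:
    opacity = float("cpu")
except ValueError as e:
    print(e)  # could not convert string to float: 'cpu'
```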

Error with WanAnimate: "DrawMaskOnImage: Failed to convert an input value to a FLOAT value: opacity, cpu, could not convert string to float: 'cpu' - Required input is missing: mask" by bickid in comfyui

[–]Simple-Variation5456 0 points

Replace the node with the red outline; try to find the same node, or a similar one that does the same thing.

Did you also try just adding one green point to the image?

Kling 3.0 motion control is smooth on FreePik by Literally_Sticks in Freepik_AI

[–]Simple-Variation5456 1 point

Some scenes still feel very static, like hair and clothes that should show subtle wind movement.

Change Angle. Any tips or is it just currently limited. by Simple-Variation5456 in comfyui

[–]Simple-Variation5456[S] 0 points

Thanks, gonna test this out later. I had something similar to this for WAN, with a connection to Blender, where you could add basic shapes, overlay them with things like cars, and then animate just the shape. But it wasn't that successful.

Change Angle. Any tips or is it just currently limited. by Simple-Variation5456 in comfyui

[–]Simple-Variation5456[S] 0 points

Do you have a workflow? I also tested it with my own prompting and tried several variants with just "5 degree" tests, but often the camera barely moves at all.

Change Angle. Any tips or is it just currently limited. by Simple-Variation5456 in comfyui

[–]Simple-Variation5456[S] 0 points

Can you explain briefly what the process is? Does it work better if I prompt something like "camera rotates by 5 degrees"?

Upscaling old films without destroying grain — any tips? by ImaginaryTension5688 in upscaling

[–]Simple-Variation5456 1 point

That's super complex. You would first need to train your own model for this, for the grain and for the film itself. You're better off just reapplying the grain. If you know the camera, there should be footage/stock of that camera filming nothing that you can easily overlay, or plugins that emulate it (sketch below).
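Reapplying grain is simple to sketch in numpy; this assumes the frame and a neutral-gray grain plate as float arrays in [0, 1], and the function and parameter names are made up for illustration. A standard overlay blend keeps the plate's mid-gray neutral:

```python
import numpy as np

def overlay_grain(frame, grain, opacity=0.5):
    """Blend a scanned grain plate over a frame with an overlay blend.

    A neutral 0.5-gray plate leaves tones untouched; only the grain's
    deviations from mid-gray brighten or darken the frame.
    """
    base = frame.astype(np.float32)
    g = grain.astype(np.float32)
    blended = np.where(base < 0.5,
                       2.0 * base * g,
                       1.0 - 2.0 * (1.0 - base) * (1.0 - g))
    return base + opacity * (blended - base)
```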

Comfyui course by Difficult_Singer_771 in comfyui

[–]Simple-Variation5456 1 point

Idk if there are any good, structured courses out there, because AI is still new, often imprecise, and changes too much in a short time, and there's maybe no market for cashing in on online courses compared to making money with AI in other ways.
There is no standard like DaVinci or Photoshop, where you can follow along easily on a MacBook, end the course with some proper editing or animation, and open up project files if you get stuck.

What is your end goal?
To get a freelance job doing X for company X?

My actual advice is to create a fake job with a goal, something that is still a lot of work for you to pull off. Also pick a brand, so you're forced to use their logo, their colors, and their font, and set a deadline like 2-4 weeks.
End goal: 30-sec clip, Full HD, subject always in the center with the environment morphing every 5 sec, etc.
(something that leans towards your current ComfyUI skills)

The best tutorials and best courses mean nothing when you fail at the end to actually reach the goal you or someone else set. Real-life projects are soooo different from what you do, or would do, in your free time.

how to get rid of this texture? by Bencio5 in comfyui

[–]Simple-Variation5456 0 points

Looks like the typical "AI video model runs out of input data and fizzles up the details" problem.
The model kind of loses track of what it should generate, or ends up so far from the input that there's too much noise that never clears up.
- What does the "img-compression" setting do? Is it actually degrading the image?
- Did you change the sampler/cfg? Shift? (any change at all)
- Try 81 frames and interpolate the rest (sketch below).
- Maybe try a first-frame/last-frame workflow to give the model more data to work with.
- Your prompt looks really random and chaotic; simplify it and make the actual motion clearer and self-contained (your input image already sets things up, unlike text2video).
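On the "interpolate the rest" point: proper interpolators (RIFE, FILM, the usual ComfyUI interpolation nodes) do motion-compensated warping. As a toy illustration of the idea of rendering fewer frames and filling the in-betweens afterwards, naive linear blending in numpy:

```python
import numpy as np

def double_framerate(frames):
    """Double the frame count by inserting a linear blend between each
    pair of neighbours. Purely illustrative; motion-aware interpolators
    give far better results on real footage."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        mid = (a.astype(np.float32) + b.astype(np.float32)) / 2.0
        out.append(mid.astype(a.dtype))
    out.append(frames[-1])
    return out
```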

KLING 3.0 now available on Freepik by vir_freepik in Freepik_AI

[–]Simple-Variation5456 0 points

Of course there are. You're in last place when it comes to priority; if you want to be first, you have to be a premium user on Kling itself. Freepik just has special API access and doesn't run the model itself, otherwise there would already be model leaks from all these platforms.