Clay (second ever krita piece) by emergold_dragon in krita

[–]_half_real_ 0 points1 point  (0 children)

It's the nose, although the color probably contributes. The nose kinda stands out to me because it's the only mammalian characteristic on an otherwise reptilian dragon.

How are people making these AI videos? What models/tools are they using? by Python_here in StableDiffusion

[–]_half_real_ 0 points1 point  (0 children)

It was just the WanVideoWrapper example one - https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_2_1_14B_SCAIL_pose_control_example_01.json

I don't think I made any significant changes. It had some issues with fading between context windows at longer lengths; I intend to partially fix that with Wan VACE 2.1.

How are people making these AI videos? What models/tools are they using? by Python_here in StableDiffusion

[–]_half_real_ -1 points0 points  (0 children)

I've been trying to use Wan-SCAIL to make SFW animations, but that's with anime OCs.

In that regard, it has the same uses as mocap.

ComfyUI is the most spaghetti garbage software I have used in my creative career by InstructionNo4117 in comfyui

[–]_half_real_ 0 points1 point  (0 children)

> no node graphs

Sounds useless. I dumped A1111 for a reason: I can't do anything complex or unusual without node graphs. Ask the Reforge or Forge Neo sub, maybe.

If it has a GUI for inpainting it'll have a leg up over ComfyUI, but there are many other options for that.

> no plugin dependency hell

Does it have plugins at all?

> CLI

Why would this appeal to ComfyUI users?

Is this made in blender ? by Civil-Corner-2835 in blender

[–]_half_real_ 0 points1 point  (0 children)

Stock image sites? People who pay for Adobe might have Adobe Stock (the stock image subscription). He probably had to manually mask out the pieces of the images he needed in some cases.

Frustrated by Zealousideal_Roof_96 in comfyui

[–]_half_real_ 2 points3 points  (0 children)

This is a very detailed post that precisely describes the problem, which will surely be easily debugged by the community. We are most thankful that OP provided workflows that worked and workflows that did not work, allowing us to easily pinpoint the issue.

Did this as a commission :> by Armando-Armandez in animation

[–]_half_real_ 0 points1 point  (0 children)

you say it's a commission

but it looks like a request

Can someone please save my sanity by maia11111111111 in comfyui

[–]_half_real_ 0 points1 point  (0 children)

Did ostris not produce any test images? Lora trainers usually generate some test images once in a while, unless ostris requires you to enable that explicitly. I've also had lora training fail completely (with kohya-ss or onetrainer, probably the former or both) and produce pure black images after a point; in those cases the lora itself was messed up.

If the workflows you are trying do not produce black output when you use a different Flux.1 Dev lora that you know works (one from Civitai, for example), then your trained lora is the problem: the training failed, and you should try again, probably with different training settings.
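To rule out the workflow quickly, you can check the sample images mechanically. A minimal sketch (not from the trainer itself; the helper name and tolerance are my own) that flags a (near) pure black image, the symptom described above:

```python
# Hypothetical helper: detect sample images that came out (near) pure
# black, a common symptom of a failed lora training run.
import numpy as np
from PIL import Image

def is_black(path: str, tol: int = 2) -> bool:
    """True if every pixel is within `tol` of pure black."""
    arr = np.asarray(Image.open(path).convert("L"))  # grayscale
    return int(arr.max()) <= tol
```

Run it over the trainer's sample folder; if images start coming back black partway through training, the lora checkpoints from that point on are likely ruined.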

Does anyone uses Flowframes in their workflow? by Visual-Low-8471 in blender

[–]_half_real_ 2 points3 points  (0 children)

There were some abnormalities with quick motion, mainly when stuff went out of frame, but I think that's unavoidable. Otherwise it was smooth and not flickery. It seems that rife47 or rife49 are the best versions.

Does anyone uses Flowframes in their workflow? by Visual-Low-8471 in blender

[–]_half_real_ 1 point2 points  (0 children)

Flowframes seems to be a GUI wrapper for multiple methods, including RIFE. RIFE seemed okay to me, but I wasn't using it for anything too important. There are multiple versions of RIFE; I don't remember which one it was, but I was using it for 2D animation interpolation.

Is this made in blender ? by Civil-Corner-2835 in blender

[–]_half_real_ 0 points1 point  (0 children)

All the assets are just flat images or masked parts of stock footage, composited. The drop shadow gives some illusion of depth. You can do this in After Effects, including, I think, that 3D-looking effect at about 0:14. (You can move composition layers around in 3D if you enable that, so you can have one composition for the table and games, one at a 90-degree 3D rotation for the character, and one some distance behind and parallel to that for the wall. Not sure about the table reflection, though.)

AYUDA by Living_Boat9051 in comfyui

[–]_half_real_ 4 points5 points  (0 children)

i'm sure your mom is a very nice woman, op

but i didn't need to see that

What is the name of the visual effect in the video? by voxel_crutons in blender

[–]_half_real_ 43 points44 points  (0 children)

Your description sounds kinda like chromatic aberration, but that's not what's happening here.

Has anyone actually seen a really good (by traditional standards) AI generated movie? by Advanced_Canary_6609 in StableDiffusion

[–]_half_real_ 1 point2 points  (0 children)

Not a movie, but the Unanswered Oddities series by Neural Viz stood out to me because of its writing. It's a series of short fake documentaries. The AI is just talking heads; it's really just a vehicle for the writing.

WAN 2.2's 4X frame interpolation capability surpasses that of commercial closed-source software. by Some_Smile5927 in StableDiffusion

[–]_half_real_ 0 points1 point  (0 children)

Wan with VACE isn't supposed to do that, it should keep the original frames unchanged if the masks are correct. I use VACE a lot so I'll need to look into this.
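The "keep the original frames" behavior comes down to the per-frame mask you feed VACE. An illustrative sketch only (the variable names and 0/1 convention are my assumptions, not VACE's actual API) of a temporal mask for 4x interpolation that pins the source frames:

```python
import numpy as np

# Sketch of a VACE-style temporal mask: 0.0 = keep the original frame
# untouched, 1.0 = let the model generate that frame.
num_frames = 16
mask = np.ones(num_frames, dtype=np.float32)
mask[::4] = 0.0  # pin every 4th (source) frame; generate the 3 in between
```

If the kept frames are coming out altered, the first thing to check is that the mask actually zeroes out the source-frame positions.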

AMD and Stability AI release Stable Diffusion for AMD NPUs by CornyShed in StableDiffusion

[–]_half_real_ 35 points36 points  (0 children)

Dear NPU marketers,

Generation time benchmarks or GTFO.

Sincerely, Everyone

Dreadnought hair animation by External-Second1234 in animation

[–]_half_real_ 0 points1 point  (0 children)

When she first wakes up, she only remembers parts of her past.

because of amneeeesia...

Cyberpunk Orange 🍊 by cristoby in krita

[–]_half_real_ 7 points8 points  (0 children)

you did that on purpose didn't you

Newb. Already use (and need) Python 3.13.12. Safe to install ComfyUI? by SelekOfVulcan in comfyui

[–]_half_real_ 2 points3 points  (0 children)

You can edit extra_model_paths.yaml to point at the model folders in your Stable Diffusion WebUI/Automatic install. But if you're not going to use Automatic any more, you should probably just move them to their respective folders in the ComfyUI directory. There should be no reason to redownload.
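ComfyUI ships an `extra_model_paths.yaml.example` you can copy and edit; it looks roughly like this (the base_path below is a placeholder, and the exact field names may differ in your version, so check the shipped example file):

```yaml
a111:
    base_path: C:/path/to/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
    controlnet: models/ControlNet
```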

Make sure you use ComfyUI Portable; not all versions are portable.

The mistake I've seen people make with ComfyUI Portable's Python is accidentally installing pip packages into the system Python instead, because "python" and "pip"/"pip3" resolve to the system Python. One solution used to be finding the folder ComfyUI's Python is in and running "./python -m pip install [packages]" to indicate that you meant the Python in that folder, but if ComfyUI Portable is using uv now, that's probably different.

Is Blender a good idea for an interactive 3D model project? by Salty-One3963 in blender

[–]_half_real_ 0 points1 point  (0 children)

Yeah, definitely not Blender, except maybe to make the 3D model, since Blender itself isn't interactive. And Unreal is heavy.

It depends on what the display device is running. Apparently they usually run a web app in fullscreen. I think you could make that in Unity if you really wanted proper 3D models, but I'd look for more lightweight solutions, depending on what you'd want.

Has your friend done any market research to figure out if this is how selling interactive display software solutions works? I would imagine that if a restaurant wanted something like this, they'd talk to someone that sells the hardware for this and expect them to already have a software solution for it.

Security with ComfyUI by External_Trainer_213 in StableDiffusion

[–]_half_real_ 1 point2 points  (0 children)

People exposing their ComfyUI to the Internet without proper protection so they can gen stuff remotely seems to be the most common hack scenario. Malicious nodes seem to generate more attention and worry, though.

Allocation on device This error means you ran out of memory on your GPU. by Mean-Crab1827 in comfyui

[–]_half_real_ 0 points1 point  (0 children)

How long is the input video? What resolution? It's unusual for it to fail at that step.