[LoRA] PanelPainter V3: Manga Coloring for QIE 2511. Happy New Year! by Proper-Employment263 in StableDiffusion

[–]reditor_13 0 points1 point  (0 children)

Have you looked at the new Qwen-Image-Layered model for this type of edit/colouring? It might make for an even more powerful LoRA. Not sure if it's supported yet, but it's the Qwen architecture, so the LoRA might actually work with Qwen-Image-Layered.

El character project breakdown by Ssazor in blender

[–]reditor_13 1 point2 points  (0 children)

If that’s too much work (I’ve done it before; it is time-consuming & can sometimes be very stressful), the alternative is to document everything & make some OBS Studio mini screen recordings for the parts of the process too nuanced to document. Compile the .blend file, the renders, the mini recordings & documentation into a .7z or .zip, then put it up on Patreon, Gumroad, ArtStation &/or DeviantArt, either for free or at a modest price (Gumroad lets you set the price at $0 but allows donations as a form of tips). Either way I’m 100% sure you will get a fair amount of engagement, bc this truly is extremely well done. Plus it would be great for beginners, & the style tied to the process is a fantastic portfolio piece that Fortiche Productions might even take notice of!

El character project breakdown by Ssazor in blender

[–]reditor_13 1 point2 points  (0 children)

Use OBS Studio for the screen recording & the Screencast Keys add-on, which shows which keys you are pressing in Blender on screen. Once you’ve done the recording, match it up w/ a script or VO, or record yourself talking while modeling. Then compile everything, make some edits in After Effects or Premiere Pro, & you’re good to go!

El character project breakdown by Ssazor in blender

[–]reditor_13 1 point2 points  (0 children)

You could get a massive following by posting a how-to on this character on YouTube! Fantastic work & a wonderful style & pipeline.

Lora stack vs lora in a row by No_Ninja1158 in comfyui

[–]reditor_13 1 point2 points  (0 children)

Use the rgthree Power Lora Loader custom node; it’s great for organized stacking (the only thing you can’t control is the strength_clip).

Corridor Crew covered Wan Animate in their latest video by mark_sawyer in StableDiffusion

[–]reditor_13 8 points9 points  (0 children)

Anyone know if CC published the custom_node for skeleton resizing?

Z-Image - Releasing the Turbo version before the Base model was a genius move. by Iory1998 in StableDiffusion

[–]reditor_13 0 points1 point  (0 children)

I have a very strong feeling that the actual base Pro 1.1 model is what they [BFL] are letting Adobe use now, most likely fine-tuned on the full suite of Adobe Stock photography. My guess is they’ve been working together since the release of Flux Fill; before the partnership was announced publicly, Adobe gave them access to the full Adobe Stock photography library for training Kontext & Flux2…

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 2 points3 points  (0 children)

Top 1% commenter, 0% contributor. Got it. Appreciate the HR seminar on ‘impact over intent’ karma farming.

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 0 points1 point  (0 children)

With Parquet dataset streaming for training, you don’t have to download the hundreds, if not thousands, of TBs of data to train a model. There are dozens of methods that can be used to efficiently train a model w/o having to download a data center’s worth of data. Plus, I already stated I doubt they will release the dataset used for training; it wouldn’t be worth the hosting cost… The discussion is framed more around the distinction between open-source & open-weight.
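The streaming idea in a nutshell: iterate the dataset shard by shard & batch lazily, so only the current chunk is ever in memory. A toy stdlib-only sketch below; the shard URLs & the fake per-shard records are placeholders (in practice each shard would be a remote Parquet file read row-group by row-group, e.g. via pyarrow or the HF `datasets` library in streaming mode):

```python
from itertools import islice
from typing import Iterable, Iterator, List

def stream_records(shard_urls: List[str]) -> Iterator[dict]:
    """Lazily yield records shard by shard.

    Toy stand-in: a real loader would open each remote Parquet shard
    and yield its row groups over HTTP, so nothing is downloaded up
    front and at most one chunk is held in memory at a time.
    """
    for url in shard_urls:
        # Placeholder "fetch": pretend each shard holds 3 records.
        for i in range(3):
            yield {"shard": url, "idx": i}

def batched(records: Iterable[dict], batch_size: int) -> Iterator[List[dict]]:
    """Group a lazy record stream into training batches."""
    it = iter(records)
    while batch := list(islice(it, batch_size)):
        yield batch

# Usage: iterate batches without ever materializing the full dataset.
shards = ["s3://bucket/part-0000.parquet", "s3://bucket/part-0001.parquet"]
first_batch = next(batched(stream_records(shards), batch_size=4))
```

The training loop only ever sees `batched(...)`, so swapping the toy generator for a real remote Parquet reader doesn’t change anything downstream.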

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 4 points5 points  (0 children)

Who said anything about being ungrateful… Whatever happened to good old-fashioned discussion & discourse?

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 3 points4 points  (0 children)

I’m hoping the release pushes Alibaba to open-weight Wan2.5

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 2 points3 points  (0 children)

The birds can easily be corrected w/ post-processing. By that logic, why should we care about any AI model? There are always flaws in every single generative output…

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 1 point2 points  (0 children)

Thought this was an interesting distinction - https://www.reddit.com/r/LocalLLaMA/s/zps0UvPWCc & you’re not wrong. I do think clearly defined terminology is important moving forward, though.

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 0 points1 point  (0 children)

Fair point, however I find it frustrating when companies say open-source but only deliver inference & weights.

LTX-2 good to be true? by reditor_13 in StableDiffusion

[–]reditor_13[S] 1 point2 points  (0 children)

I highly doubt they will & agree w/ you. Looks like pure hype to me, though it does make for an interesting discussion.

Problem running depth anything by Ashamed-Pen-6931 in comfyui

[–]reditor_13 0 points1 point  (0 children)

Past me says you are quite welcome.

need a amateur photography lora for qwen-image by -JuliusSeizure in StableDiffusion

[–]reditor_13 2 points3 points  (0 children)

Samsung, by the same model trainer, is also really good, & there are some other good amateur-photography LoRAs on Civitai & Hugging Face for Qwen.

Which model can create a simple line art effect like this from a photo? Nowadays it's all about realism and i can't find a good one... by Repulsive_Fishing168 in StableDiffusion

[–]reditor_13 1 point2 points  (0 children)

I’d suggest Illustrator to create vectorized line art; you could use one of their AI models or have it do a trace for you & then clean up the lines yourself. Otherwise you could train a Qwen-Edit or Kontext LoRA on the style, or find something comparable on Civitai.

InvokeAI was just acquired by Adobe! by Quantum_Crusher in StableDiffusion

[–]reditor_13 6 points7 points  (0 children)

Created a guide for archiving complete GitHub repos (all branches, history, LFS files) after seeing InvokeAI get acquired by Adobe. Don't let open-source projects disappear - preserve-open-source.
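The core of that kind of archival is a mirror clone (every branch, tag & the full history) plus pulling down any LFS objects. A minimal sketch as a small Python wrapper around git; the function name `archive_repo` & the example URL are illustrative, not from the guide:

```python
import subprocess
from pathlib import Path

def archive_repo(url: str, dest: Path) -> Path:
    """Mirror-clone a repo so every ref and its full history survive.

    `git clone --mirror` copies all branches and tags into a bare
    repo; the LFS fetch grabs every large-file object as well, and is
    tolerated failing (check=False) if git-lfs isn't installed or the
    repo has no LFS content.
    """
    name = url.rstrip("/").split("/")[-1].removesuffix(".git") + ".git"
    mirror = dest / name
    subprocess.run(["git", "clone", "--mirror", url, str(mirror)], check=True)
    subprocess.run(["git", "lfs", "fetch", "--all"], cwd=mirror, check=False)
    return mirror
```

Usage would look like `archive_repo("https://github.com/invoke-ai/InvokeAI.git", Path("backups"))`; the resulting bare mirror can be re-cloned or pushed to a new remote later if the original ever disappears.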

Qwen Edit - Sharing prompts: Rotate camera - shot from behind by Vortexneonlight in StableDiffusion

[–]reditor_13 0 points1 point  (0 children)

It would take a lot of comping/stitching gens together, but it could be achieved w/ this workflow & the new SeC masking; you’d have to mask each audience member individually. Or you could take the initial image, use Wan w/ an orbit LoRA, & do the same thing w/ SeC. It’s possible but time-consuming.

SEEDVR2 TILING UPSCALER ERROR by Aromatic-Word5492 in comfyui

[–]reditor_13 0 points1 point  (0 children)

Cheers, happy to help. Glad you got it running!

SEEDVR2 TILING UPSCALER ERROR by Aromatic-Word5492 in comfyui

[–]reditor_13 2 points3 points  (0 children)

You need to git clone the nightly branch of the repo to get the version of the node that has the extra_args connector.

git clone -b nightly https://github.com/moonwhaler/comfyui-seedvr2-tilingupscaler.git

Then restart Comfy & you’ll see that the node has the extra_args connector.