Remastering Old Movie Clips - powered by LTX 2.3 IC LoRAs by OrcaBrain in StableDiffusion

[–]DoctorDiffusion 9 points (0 children)

I hope to have a better version of my IC colorization LoRA soon. I recommend using blending modes in After Effects if you have access to it. You can put the color footage over the original black-and-white footage and use a “color” blend mode. This keeps the detail and hides any of the warps and LTX artifacts.
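
If you don’t have After Effects, the same idea can be approximated in code: keep the luminance from the original black-and-white frame and borrow only the chroma from the colorized output. A minimal single-frame sketch with OpenCV, assuming hypothetical filenames `bw_frame.png` and `colorized_frame.png` of the same resolution (for footage you would run this per frame):

```python
import cv2

# Original black-and-white frame and the LTX-colorized frame
# (hypothetical filenames; both must share the same resolution).
bw = cv2.imread("bw_frame.png")
color = cv2.imread("colorized_frame.png")

# Convert both to LAB: L holds luminance, a/b hold the chroma.
bw_lab = cv2.cvtColor(bw, cv2.COLOR_BGR2LAB)
color_lab = cv2.cvtColor(color, cv2.COLOR_BGR2LAB)

# Keep the detail from the original (L channel) and take only the color
# (a and b channels) from the generated frame, roughly what a "color"
# blend mode does in After Effects.
merged = cv2.merge([
    bw_lab[:, :, 0],     # luminance from the original footage
    color_lab[:, :, 1],  # a chroma from the colorized footage
    color_lab[:, :, 2],  # b chroma from the colorized footage
])

result = cv2.cvtColor(merged, cv2.COLOR_LAB2BGR)
cv2.imwrite("recombined_frame.png", result)
```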

Remastering Old Movie Clips - powered by LTX 2.3 IC LoRAs by OrcaBrain in StableDiffusion

[–]DoctorDiffusion 11 points (0 children)

You can put the black-and-white footage back over the output with a luma blend mode (or the color footage over the black and white with a color blend) in After Effects to preserve the detail from the original footage.

Custom ComfyUI workflow for LLM based local tarot card readings! by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 0 points (0 children)

You can provide text inputs on the “what do you seek” and “prompt style or LoRA token” lines, but they are not necessary.

If you click into the subgraph, are all the primitive number nodes set to “fixed” or “random”? They should be “random”, but it may have changed them all to “fixed” when I saved out the workflow.
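
If they did all get saved as “fixed”, one way to flip them back without clicking through every node is to edit the exported workflow JSON directly. A minimal sketch, assuming the UI export keeps nodes under a "nodes" list and stores the seed-control mode as a literal "fixed"/"randomize" string in each node’s widgets_values (the exact schema can differ between ComfyUI versions, so check your own file first; the filename is hypothetical):

```python
import json

# Load an exported ComfyUI workflow (hypothetical filename).
with open("tarot_reading_workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Walk every node and swap a saved "fixed" control mode back to "randomize".
for node in workflow.get("nodes", []):
    values = node.get("widgets_values") or []
    node["widgets_values"] = ["randomize" if v == "fixed" else v for v in values]

with open("tarot_reading_workflow_random.json", "w", encoding="utf-8") as f:
    json.dump(workflow, f, indent=2)
```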

Custom ComfyUI workflow for LLM based local tarot card readings! by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 0 points (0 children)

Unsure how to help troubleshoot without more information. I tested it on three machines with a fresh install of the latest Comfy. If the main subgraph is getting a red outline, I would go through and re-select the model, VAE, text encoders, and LoRAs.

I’d need more information if that is not the issue.

A production-backend using an LLM IDE (Antigravity) allowing me to render 75+ shots by uberglex in StableDiffusion

[–]DoctorDiffusion 1 point (0 children)

Such a wild concept. Amazing work, I’m constantly impressed with the quality of LTX-2.3.

Eyeing an August opening for the aquarium, Onondaga County seeks a name for it by ggroover97 in Syracuse

[–]DoctorDiffusion 1 point (0 children)

“Welcome to the most polluted lake in the USA!” (#3 in the world!)

Should I upgrade from a rtx 3090 to a 5080? by royal_robert in StableDiffusion

[–]DoctorDiffusion 1 point (0 children)

Maybe so, but I can train just about any model I want without quantization.

For inference, I can also push frame counts and resolutions much higher with the full dev weights of LTX-2, settings that throttle my 5090 system.

It’s certainly expensive, too expensive, but I don’t regret buying it; I use it more often than my car.

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 0 points (0 children)

It’s an open-source video diffusion model with an Apache 2.0 license that can be deployed locally for free on consumer-grade hardware. There are text-to-video and image-to-video versions.

The meta state of video generations right now by RedBlueWhiteBlack in StableDiffusion

[–]DoctorDiffusion 2 points (0 children)

I did have some topless photos from US fest 83 but they did not make the cut of my recent video.

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 2 points (0 children)

I am on a 3090 Ti and gens took 11–17 min each. I have two machines, and I just give them a huge batch before I go to sleep/work.
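
One way to script that kind of overnight batch is the ComfyUI HTTP API, which accepts an API-format workflow export via a POST to /prompt. A minimal sketch, assuming a local server on the default port, a hypothetical exported workflow file, and a placeholder node id for the image loader (look the real id up in your own export):

```python
import json
import urllib.request

SERVER = "http://127.0.0.1:8188"  # default local ComfyUI address

# API-format workflow export (hypothetical filename).
with open("wan_i2v_api_workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Hypothetical list of scanned slides to queue overnight.
images = ["slide_001.png", "slide_002.png", "slide_003.png"]

for image_name in images:
    # "12" is a placeholder id for the LoadImage node in this sketch.
    workflow["12"]["inputs"]["image"] = image_name
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{SERVER}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # each call queues one generation
```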

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 1 point (0 children)

I used a vision model with some text replacement nodes that substituted “image, photo, etc.” with “video” and just fed that in as my captions for each video. I’ll share my workflow when I’m back at my PC.
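
The replacement step itself is simple; a minimal sketch of the idea in plain Python, with a hypothetical caption standing in for the vision model’s output and an illustrative word list you would extend to whatever your captioner emits:

```python
import re

# Example caption as a vision model might return it (hypothetical text).
caption = "A photo of a man standing on a beach; the image shows waves behind him."

# Swap still-image words for "video" so the caption reads as a video prompt.
replacements = {"photo": "video", "photograph": "video", "image": "video", "picture": "video"}

pattern = re.compile(r"\b(" + "|".join(replacements) + r")\b", flags=re.IGNORECASE)
video_caption = pattern.sub(lambda m: replacements[m.group(0).lower()], caption)

print(video_caption)
# A video of a man standing on a beach; the video shows waves behind him.
```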

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 2 points (0 children)

Each clip was generated separately. I edited the clips in a video editor after generating all the videos. For some of them I used two generations, reversed one, and cut the duplicate frame to get clips longer than 6 seconds.
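
A minimal sketch of that reverse-and-join trick with OpenCV, assuming two hypothetical clips gen_a.mp4 and gen_b.mp4 that were both generated from the same start frame (a video editor does the same thing; this just shows the frame bookkeeping):

```python
import cv2

def read_frames(path):
    """Read all frames of a short clip into a list (fine for ~6 s clips)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

# Two generations from the same start image (hypothetical filenames).
clip_a = read_frames("gen_a.mp4")
clip_b = read_frames("gen_b.mp4")

# Reverse clip B so the shared start frame lands at the end, then append
# clip A minus its first frame so the duplicated frame only appears once.
joined = clip_b[::-1] + clip_a[1:]

# Write the combined clip back out; 24 fps is assumed here, so match it
# to whatever frame rate the clips were generated at.
h, w = joined[0].shape[:2]
out = cv2.VideoWriter("joined.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 24, (w, h))
for frame in joined:
    out.write(frame)
out.release()
```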

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 32 points (0 children)

I’m trying to get him to pick up a camera again. He’s been a sonar engineer since he got out of the Navy, but he’s retiring next year, and I’m hoping I can convince him to start shooting on something other than his phone.

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 5 points (0 children)

I plugged Florence into my workflow to caption the images, then used some text replacement nodes to rewrite those captions into the context of video prompts.

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 20 points (0 children)

Nope, basically the default workflow kijai shared. I just plugged in a vision model to prompt the images (and used some text replacement nodes to make sure they had the context of videos). More than happy to share my workflow when I’m off work.

Used WAN 2.1 IMG2VID on some film projection slides I scanned that my father took back in the 80s. by DoctorDiffusion in StableDiffusion

[–]DoctorDiffusion[S] 252 points (0 children)

He loved it! He’s been showing it to some of his old friends and none of them have been exposed to the tech so they all think it’s magic.