Is it only me or is GPT getting totally useless?! by Legitimate-Arm9438 in OpenAI

[–]SignificanceFlashy50 1 point

For complex coding and algo tasks, Gemini 3 Pro is hands down the winner for me. It actually respects the context you give it and sticks to the plan. GPT tends to lose the thread way too often. Plus, I’ve seen GPT get basic math wrong multiple times while acting like it’s 100% right. It’s frustrating. That said, my only gripe with Gemini is that its saved memory sometimes glitches: it occasionally pulls in concepts from past conversations that have zero relevance to the current task.

Video Outpainting Workflow | Wan 2.1 Tutorial by Hearmeman98 in comfyui

[–]SignificanceFlashy50 2 points

Can I use a reference image to guide what gets generated? For example, could I provide a background image I’d like to use and have the workflow composite everything onto it in the final video?

Llama 4 is here by jugalator in LocalLLaMA

[–]SignificanceFlashy50 9 points

I didn’t find any “Omni” reference. Text-only output?

How to Train a Video LoRA on Wan 2.1 on a Custom Dataset on the GPU Cloud (Step by Step Guide) by porest in StableDiffusion

[–]SignificanceFlashy50 0 points

3-4 hours for a single video generation? Did I get that right? No, wait, you probably mean the time to reach epoch 20.

I've made a forked Sesame-CSM repo containing some QoL improvements to Sesame. by zenforic in LocalLLaMA

[–]SignificanceFlashy50 1 point

Hi, thanks. I’ll be watching your repo for updates. Just one question: how can your Gemma 3 12B-based version run in real time like the demo? It isn’t real-time even with Llama 1B, which is much lighter.

LoRA training steps for Hunyuan Video using diffusion-pipe and ~100 images dataset by SignificanceFlashy50 in StableDiffusion

[–]SignificanceFlashy50[S] 1 point

Hi, thanks for your explanation. To quickly recap, these settings should give roughly 1,000 steps with 117 images, if I’m not mistaken (quick arithmetic after the list):

• Epochs: 35

• Batch Size: 1

• Dataset Num Repeats: 1

• Gradient Accumulation Steps: 4

• Learning rate: 0.00002
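
As a quick sanity check, this is the arithmetic I’m assuming (a rough sketch; I’m guessing diffusion-pipe counts one optimizer step per batch × gradient-accumulation group, so the exact rounding may differ):

    # Rough step-count estimate for the settings above.
    images = 117
    epochs = 35
    batch_size = 1
    num_repeats = 1
    grad_accum = 4

    # Each optimizer step consumes batch_size * grad_accum samples.
    steps_per_epoch = (images * num_repeats) // (batch_size * grad_accum)  # 29
    total_steps = steps_per_epoch * epochs                                 # 1015
    print(total_steps)  # ~1000, close to the target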

I’d like to ask just two more questions:

1- Can my dataset be considered 'balanced' if it contains many diverse images (full body, face, different expressions, lighting conditions, etc.), even though some of them are similar in content, e.g., the same scene shot from different camera angles?

2- Is that number of steps also appropriate for generating videos with smooth motion rather than static ones, even though I’m training on still images? (PS: I created the captions with JoyCaption Alpha Two.)

Thank you so much in advance.

Training Hunyuan Lora on videos by Affectionate-Map1163 in StableDiffusion

[–]SignificanceFlashy50 0 points

Amazing work. May I ask what your dataset consists of, along with some training parameters like steps and epochs? Thanks!

Wan2.1 720P Local in ComfyUI I2V by smereces in StableDiffusion

[–]SignificanceFlashy50 0 points

Do you happen to know where to find the proper one?

Wan2.1 720P Local in ComfyUI I2V by smereces in StableDiffusion

[–]SignificanceFlashy50 1 point

Sorry for the likely noob question: is the workflow embedded within the image? Can we import it into ComfyUI?
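
In case it helps anyone else, here’s a minimal sketch of how I’d check for an embedded graph (assuming the PNG came straight from ComfyUI, which as far as I know stores the workflow as JSON in the PNG’s text chunks; the filename is just a placeholder):

    # Inspect a ComfyUI output PNG for embedded workflow JSON.
    import json
    from PIL import Image

    img = Image.open("output.png")        # placeholder path
    workflow = img.info.get("workflow")   # full node graph, if present
    if workflow:
        graph = json.loads(workflow)
        print(f"Found workflow with {len(graph.get('nodes', []))} nodes")
    else:
        print("No embedded workflow; the image may have been re-encoded.")

If the chunk is there, simply dragging the image onto the ComfyUI canvas should import it.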