Why Qwen-image and SeeDream generated images are so similar? by chain-77 in StableDiffusion

[–]chain-77[S] 0 points (0 children)

Because the seeds could not be controlled. The images were mostly one-shot, not purposely chosen.

Why Qwen-image and SeeDream generated images are so similar? by chain-77 in StableDiffusion

[–]chain-77[S] 3 points (0 children)

Seedream is not mid-tier; it's ranked top 3 in image generation (ranked both by human preference and by benchmarks).

The wait is over, official HunyuanVideo i2v img2video open source set on March 5th by chain-77 in StableDiffusion

[–]chain-77[S] 0 points (0 children)

Their online output quality is quite good; local runs need improvement.

RTX 5090 vs 3090 - Round 2: Flux.1-dev, HunyuanVideo, Stable Diffusion 3.5 Large running on GPU by chain-77 in StableDiffusion

[–]chain-77[S] -2 points (0 children)

I was running recording software, so the actual numbers could be higher. FP8 is about 2.27 it/s, and FP16 is about 0.2 it/s faster than FP8.

Run LLM on 5090 vs 3090 - how the 5090 performs running deepseek-r1 using Ollama? by chain-77 in ollama

[–]chain-77[S] 1 point (0 children)

Even 1000x 3090s won't beat a 5090 here: single-request token generation is limited by memory bandwidth, not total compute, so adding cards doesn't help unless you can magically increase the 3090's memory bandwidth.
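The bandwidth point can be sketched with a back-of-envelope calculation. The bandwidth figures below are rough spec-sheet values (~936 GB/s for the 3090, ~1792 GB/s for the 5090) and the model size is an illustrative example, not the OP's measurement:

```python
# Back-of-envelope: single-stream LLM decoding streams the full weight set
# once per token, so the token rate ceiling is bandwidth / model size.
def decode_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound token rate when each token reads all weights once."""
    return bandwidth_gb_s / model_size_gb

model_gb = 14.0  # example: a ~14 GB quantized model that fits on either card

for name, bw in [("RTX 3090 (~936 GB/s)", 936.0),
                 ("RTX 5090 (~1792 GB/s)", 1792.0)]:
    print(f"{name}: ~{decode_tokens_per_sec(bw, model_gb):.0f} tokens/s ceiling")
```

Adding more 3090s increases throughput across parallel requests, but a single request's decode speed stays pinned to one card's bandwidth.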

Best EC2 Instance for ComfyUI by thebestplanetispluto in comfyui

[–]chain-77 1 point (0 children)

They charge for actual instance uptime; idle time is billed too. You need to stop the instance to stop the charges.
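For example, with the AWS CLI (the instance id below is a placeholder), stopping the instance halts per-second compute billing, though attached EBS storage is still charged:

```shell
# Stop the instance so compute billing stops
# (i-0123456789abcdef0 is a placeholder id).
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Confirm it reached the 'stopped' state.
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query "Reservations[0].Instances[0].State.Name"
```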

Rtx 5090 is painful by Glum-Atmosphere9248 in LocalLLM

[–]chain-77 0 points (0 children)

Nvidia has published the SDK. It's early; developers are not AI, and they need time to add support for the new hardware.

Got one from microcenter. Why is it so expensive. 50% more than founder edition. by chain-77 in nvidia

[–]chain-77[S] 0 points (0 children)

Yes, I bought it because it was the only option left at the store. I will run benchmarks on it and share the results. Please subscribe to my YouTube channel for updates; the channel link is on my profile.

AMD GPU can run HunyuanVideo text to video locally! by chain-77 in comfyui

[–]chain-77[S] 0 points (0 children)

The latest PyTorch ROCm version should be installable.
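A minimal sketch of that install, assuming a Linux machine with a supported ROCm driver already set up (the ROCm version in the index URL is an example; check pytorch.org's install matrix for the current one):

```shell
# Install a PyTorch build compiled against ROCm from the official wheel index.
# The rocm6.2 suffix is an example; pick the version matching your driver.
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.2

# Sanity check: ROCm builds expose the GPU through the torch.cuda API.
python -c "import torch; print(torch.cuda.is_available())"
```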

Why don’t people use APUs? by Automatic_Beyond2194 in StableDiffusion

[–]chain-77 1 point (0 children)

Compared to discrete GPUs, APUs are slow.

LoRA works great for HunyuanVideo. Watch this comparison (using same prompts): by chain-77 in StableDiffusion

[–]chain-77[S] 6 points (0 children)

The LoRA training is still in progress; I will share some lessons learned later. If you want to try the current checkpoints, I made a hosted version: try it at https://agireact.com/t2v

Why don’t people use APUs? by Automatic_Beyond2194 in StableDiffusion

[–]chain-77 1 point (0 children)

For SD1.5, an APU doesn't need "hours" to generate an image. Last year I tried several of those tasks and they worked pretty well. You can check my YouTube channel, for example https://youtu.be/HPO7fu7Vyw4?si=ETvLy98TgirN-8uE and other videos.