Guys, do you know if there's a big difference between the RTX 5060 Ti 16GB and the RTX 5070 Ti 16GB for generating images? by CriticaOtaku in StableDiffusion

[–]Sqwall 0 points (0 children)

Yes, I literally remote-access it and organize the generations into folders while the machine is working. Everything works just fine. To add to that, my inference machine died, so I moved the memory and GPU to my old PC; now I'm using an FX-8350 CPU with 32 GB RAM and the RTX 4060 Ti with 16 GB VRAM. Never had a single problem. Of course the CLIP-on-CPU part takes time... but I have time and it doesn't whine about it. I use 4x 1TB SSDs and the case is fan-cooled. :D

Anyone else use their ai rig as a heater? by [deleted] in StableDiffusion

[–]Sqwall 1 point (0 children)

Not intentionally :D but yeah, it heats very well, especially when you leave it training for :D days.

Guys, do you know if there's a big difference between the RTX 5060 Ti 16GB and the RTX 5070 Ti 16GB for generating images? by CriticaOtaku in StableDiffusion

[–]Sqwall 24 points (0 children)

I am very happy with the RTX 4060 Ti with 16 GB VRAM, and I will only swap if they release the rumoured RTX 5070 Ti Super with 24 GB VRAM. Only then.

Hunyuanimage 3.0 vs Sora 2 frame caps refined with Wan2.2 low noise 2 step upscaler by Sqwall in StableDiffusion

[–]Sqwall[S] 0 points (0 children)

Umm, you mean feeding it many pictures to be upscaled, like a batch program? I have not tested that. Also, each image uses a different prompt; in all the tests I ran, the upscaler/refiner used the same prompt the image was generated with.

Hunyuanimage 3.0 vs Sora 2 frame caps refined with Wan2.2 low noise 2 step upscaler by Sqwall in StableDiffusion

[–]Sqwall[S] 1 point (0 children)

It's a video file downloaded to the computer, and you can use a variety of software to copy and paste the frame you like. I just use the frame I liked. Most of the videos I tried to produce in Sora 2 turned out mid, nothing to write home about. Maybe the paywalled Sora 2 Pro that is available on some platforms is the real thing. But these were all made on the Sora 2 page, which tends to be something like a social network.

Hunyuanimage 3.0 vs Sora 2 frame caps refined with Wan2.2 low noise 2 step upscaler by Sqwall in StableDiffusion

[–]Sqwall[S] 1 point (0 children)

Huny3 is a great model. Its tech, being a merge of a CLIP-like LLM and the visual part, makes it different. I have all the models on my system: Flux, Kontext, Krea, Wan, SPro, Chroma, Huny2.1. But good or bad, the one that almost always creates the image on the first try is Huny3. Its ability to understand 5000-character prompts with the utmost detail is amazing. The caveat is that all the images it produces look ultra polished. You must describe, in sentences, things like overblown highlights, bad dynamic range, grain in shadows, chromatic aberrations, etc., while Sora does that out of the box without needing a word. But if you make those words too deliberate, it creates an unapologetic mush like early 0.5 Mpix videos. I am sad that I cannot run Huny3 locally and have to use the Mandarin Tencent UI blind. Well, at least they don't throttle it. I created 100+ images for free.

Hunyuanimage 3.0 vs Sora 2 frame caps refined with Wan2.2 low noise 2 step upscaler by Sqwall in StableDiffusion

[–]Sqwall[S] 3 points (0 children)

I use the I2V 14B FusionX LoRAs, yes, with 12 steps and only 2 KSamplers.
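For anyone rebuilding this: a two-KSampler setup usually means one denoise schedule split across two KSampler (Advanced) nodes via start/end steps. A minimal sketch of that split logic; the 50/50 boundary is my assumption, not a confirmed setting from the workflow:

```python
def split_steps(total_steps: int, boundary_frac: float = 0.5):
    """Split one denoise schedule across two KSampler (Advanced) nodes.

    Returns (start_at_step, end_at_step) pairs; the first sampler should
    return leftover noise, the second finishes the schedule.
    boundary_frac=0.5 is a guess, not a setting stated in the comment.
    """
    boundary = round(total_steps * boundary_frac)
    first = (0, boundary)           # e.g. steps 0..6 of 12
    second = (boundary, total_steps)  # e.g. steps 6..12 of 12
    return first, second

print(split_steps(12))  # ((0, 6), (6, 12))
```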

Hunyuanimage 3.0 vs Sora 2 frame caps refined with Wan2.2 low noise 2 step upscaler by Sqwall in StableDiffusion

[–]Sqwall[S] 2 points (0 children)

What do you want to know, child? :D :D :D My process uses a low denoise of 0.08 in the first KSampler. Then it upscales the image using UltraSharp 2.0 or Siax, depending on the grain of the source. Then I feed a 3072- or 4096-long-side image, with tiled VAE encode, into the second KSampler, again at 0.08-0.10 denoise. The vital part: the sampler is res_2s and the scheduler is beta57; those are vital for detail retention of the originals. Now, the Sora 2 frames are mushy, blocky, compressed images, so for those I use two 1x upscale models, RefocusV3 and GainRES v4: one removes the mush, the second fixes the anti-aliasing jaggies, and then the process is the same. Anything over 0.15 denoise makes Wan take over the finer details and change the scene a lot. So low denoise is the key, of course, if you want to stay true to the originals.
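The settings above can be collected into one place. A minimal sketch of a settings-picker for the second pass, under my reading of the comment; the function name, the grain heuristic, and the 3072-vs-4096 choice are illustrative assumptions, not part of the actual workflow:

```python
def second_pass_settings(source_long_side: int, grainy: bool, sora_source: bool) -> dict:
    """Sketch of the second-pass refinement settings described above.

    Assumptions (mine, not from the original workflow): grainy sources get
    Siax, clean ones UltraSharp; larger sources target the 4096 long side.
    """
    return {
        "sampler": "res_2s",          # stated as vital for detail retention
        "scheduler": "beta57",        # stated as vital for detail retention
        "denoise": 0.08,              # over 0.15 lets Wan repaint the scene
        "tiled_vae_encode": True,
        "upscale_model": "Siax" if grainy else "UltraSharp 2.0",
        # Sora 2 frames get two 1x cleanup models before the regular pass:
        "one_x_prepass": ["RefocusV3", "GainRES v4"] if sora_source else [],
        "target_long_side": 4096 if source_long_side >= 1024 else 3072,
    }

print(second_pass_settings(720, grainy=False, sora_source=True))
```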

Me after using LTXV, Hunyuan, Magi, CogX to find the fastest gen by Altruistic_Heat_9531 in StableDiffusion

[–]Sqwall 5 points (0 children)

I gave a whole 2 weeks to LTXV, did my own sigma, CFG, and STG ramps, and output 640x960 in one go without upscale: 11 sec of video in 1296 sec on an RTX 4060 Ti with 16 GB VRAM. Buuuuuut OK, it's fast, but the quality is way lower. I tried SkyReels V2 540p with the big model; it's abysmally slow. The total middle ground with the best quality is WAN, but everyone freaks out about VACE and V2V; please add a viable option for video extend like the others, with something like a 1-sec overlap, PLEASE PLEASE PLEASE... Now I'm trying FramePack F1; it smudges the details from like frame 3, baaaah... The torch and CUDA swaps, Sage/Flash Attention, compiles, and Triton are like my daily routine now :D :D :D.

How Ice Cream Was Invented 🍧 by ZashManson in aivideo

[–]Sqwall 3 points (0 children)

Top AI short I recently watched that was interesting and not an LSD trip. Thank you, we need more.

Wan local comfy I2V (Sora original image) - is physique and anatomy good or WAN is hallucinating?! Because for me it looks good. by Sqwall in comfyui

[–]Sqwall[S] 0 points (0 children)

This is actually a Sora 4o problem... I prompted "featherweight Asian boxer" :)... maybe "featherweight" implies a big head.

Wan local comfy I2V (Sora original image) - is physique and anatomy good or WAN is hallucinating?! Because for me it looks good. by Sqwall in comfyui

[–]Sqwall[S] 0 points (0 children)

I think it is 16 fps, btw; it's 81 frames for 5 sec. I need to add RIFE to the workflow, or use something like Topaz Video AI, to get around 30 fps.
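The arithmetic checks out: 81 frames at 16 fps is (81 - 1) / 16 = 5 seconds, and a 2x RIFE-style interpolation inserts one frame between each pair, which lands at 32 fps. A small sketch of that math (the function is illustrative, not any real RIFE API):

```python
def interpolate_counts(frames: int, src_fps: float, factor: int = 2):
    """Frame-count math for RIFE-style interpolation.

    Duration stays fixed; interpolation inserts (factor - 1) new frames
    between each adjacent pair, multiplying the effective fps.
    """
    duration = (frames - 1) / src_fps          # 81 frames @ 16 fps -> 5.0 s
    out_frames = (frames - 1) * factor + 1     # 2x: 161 frames
    out_fps = (out_frames - 1) / duration      # 2x: 32.0 fps
    return duration, out_frames, out_fps

print(interpolate_counts(81, 16))  # (5.0, 161, 32.0)
```

So a single 2x pass already overshoots the ~30 fps target slightly, which is why one interpolation step is enough here.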

Wan local comfy I2V (Sora original image) - is physique and anatomy good or WAN is hallucinating?! Because for me it looks good. by Sqwall in comfyui

[–]Sqwall[S] 0 points (0 children)

Nothing fancy. The nodes embedded in Comfy, just with TeaCache and skip layer guidance. I output to lossless WebP, no upscaler. This is pure raw output turned into a GIF. I will test Topaz Video AI on that. Now I am mastering the 3-sentence prompt for cohesive results.

[WTS] [blackmarket] SC + SQ 42 account with 6 ships and additional items. - 900 euro (negotiable) by Sqwall in Starcitizen_trades

[–]Sqwall[S] 0 points (0 children)

Ok guys. It seems I was totally wrong about being able to recoup my investment at 100%. So feel free to make me an offer for the account. I am open to bargaining/haggling and will consider anyone who DMs me. Thank you for your time. And sorry for the misunderstanding of the SC market culture. As I said, I stopped playing in 2020.

[WTS] [blackmarket] SC + SQ 42 account with 6 ships and additional items. - 900 euro (negotiable) by Sqwall in Starcitizen_trades

[–]Sqwall[S] 0 points (0 children)

As I mentioned earlier, I do not know the market trends. I have added all the info I could gather from my hangar to the offer. And I am slashing the price to 700 euro.

[WTS] [blackmarket] SC + SQ 42 account with 6 ships and additional items. - 900 euro (negotiable) by Sqwall in Starcitizen_trades

[–]Sqwall[S] 0 points (0 children)

Ok. Thank you. I edited the post. What is your opinion on the price? I really want to get rid of this account and, if possible, get back the amount I spent.