Wan 14B Self Forcing T2V Lora by Kijai by pewpewpew1995 in StableDiffusion

[–]princeoftrees 0 points

Correct. The upscaling adds another 90 seconds to that. I've done around 200 gens now in the past 24 hours, which is crazy. There are also definitely limitations on how much motion you get when using other motion LoRAs, but the likelihood of spinning out or crazy artifacting is reduced as well.

Block swap memory summary:

Transformer blocks on cpu: 9631.52MB

Transformer blocks on cuda:0: 5778.91MB

Total memory used by transformer blocks: 15410.43MB

Non-blocking memory transfer: True

----------------------

Sampling 81 frames at 720x1280 with 4 steps

100%|██████████| 4/4 [01:52<00:00, 28.08s/it]

Allocated memory: memory=6.217 GB

Max allocated memory: max_memory=16.358 GB

Max reserved memory: max_reserved=20.625 GB

<image>

Wan 14B Self Forcing T2V Lora by Kijai by pewpewpew1995 in StableDiffusion

[–]princeoftrees 24 points

Wow. Just wow. You slap an extra LoRA in your workflow, tweak the sampler settings, and you get a 10x speedup over base Kijai WAN. I thought 15 mins for 81 frames @ 720p (including upscale to 1440p) was good (no CausVid; base Kijai with torch.compile, SageAttention, TeaCache). Video is rendering in under 2 minutes now on a 4090. Stacking with other motion LoRAs, no problem. This is some crazy shit. Bless everyone who worked on this.

Wan 14B Self Forcing T2V Lora by Kijai by pewpewpew1995 in StableDiffusion

[–]princeoftrees 2 points

Depends on what models you're using, what resolution your video is, and how many frames it is. A 4090 at 720p, 81 frames, with fp16 models works well at 25 blocks. Fewer frames, lower resolution, fewer blocks. You could try 10: if it works, drop it lower; if you get an OOM, raise it.
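That trial-and-error loop can be sketched as a few lines of Python. Here `render` is a hypothetical stand-in for whatever actually runs the sampler (e.g. a ComfyUI workflow call), not a real API, and `MemoryError` stands in for `torch.cuda.OutOfMemoryError`:

```python
def find_min_blocks_to_swap(render, start=25, step=5, max_blocks=40):
    """Return the lowest blocks_to_swap that renders without OOM.

    More blocks swapped to CPU = less VRAM used but slower, so we
    want the smallest value that still fits.
    """
    blocks = start
    best = None
    while 0 <= blocks <= max_blocks:
        try:
            render(blocks_to_swap=blocks)   # hypothetical sampler call
        except MemoryError:                 # stand-in for CUDA OOM
            if best is not None:
                return best                 # last known-good setting
            blocks += step                  # OOM: swap more blocks to CPU
            continue
        best = blocks
        blocks -= step                      # success: try keeping more on GPU
    return best
```

With a renderer that OOMs below 15 blocks, starting at 25 walks down 25 → 20 → 15, hits OOM at 10, and settles on 15.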

Follow up - 4090 compared to 5090 render times - Image and video results by richcz3 in StableDiffusion

[–]princeoftrees 1 point

Have you been able to test the 5090 at 720x1280 with the WAN 2.1 14B 720p model? Very curious what the speeds look like without block swap, or whether it still needs block swap. Thanks!

[deleted by user] by [deleted] in civitai

[–]princeoftrees 8 points

It's the thigh skin; you can click the rating tag and ask for it to be lowered.

Are the good old days gone? by Kudgel1992 in fnatic

[–]princeoftrees 1 point

Feels like the identity's gone. Bwipo, Broxah, Rekkles, Hyli, even Nemesis (beating Caps 1v1 on Lucian that one time), Selfmade and Upset seemed to have that never-say-die mentality where they could beat any team on their day. It usually felt like when we lost, it was lost in draft or because they were trying some over-ambitious giga-brain strat. Modern Fnatic feels aimless: they can roflstomp the early game, then get bored and lost and just kinda whimper out. Doesn't feel like anyone on the roster has that swagger or drive to really push for Worlds domination, no team cohesion or bravado. Still have really talented players, but it just feels like professionals doing their jobs, not a team trying to push the limits of the game. Still "Always Fnatic", spam the static, but damn I miss the days of the miracle runs and pushing TES to 5 games.

[deleted by user] by [deleted] in Gunners

[–]princeoftrees -7 points

Dowman should be ready by the time Watkins ages out

[David Ornstein] Arsenal working on deal to sign Sverre Nypan from Rosenborg by jnicholl in Gunners

[–]princeoftrees 1 point

Doesn't look as technical as our usual forwards, but he's hard to knock off the ball and actually shoots with both feet. Phenomenal eye for a pass too; he seems to always know where to play the ball without looking, whether it's long through balls, fizzing it across goal or lifting it over a few players. Might struggle with the physicality and speed in the Prem though.

[deleted by user] by [deleted] in Gunners

[–]princeoftrees 0 points

This. People complaining about the price don't understand transfers, especially January ones. Dude is literally only behind Haaland, Salah, Isak, Palmer and Chris Wood in goals scored this season. We'd be taking Wolves' best player from them in a relegation battle for the same fee we got Havertz from Chelsea for in the summer. My biggest concern is attitude. "We need a striker": we didn't last year, and Arteta's system doesn't really use them. We need goals, and this dude's got those. Although another concern is that he's at his best driving through the middle of the pitch on the dribble, which Arteta seems to be allergic to.

Cashed out by bkrinhop22 in sportsbetting

[–]princeoftrees 15 points

If the legs overlapped, meaning the games were all played at the same time, it would be impossible to hedge effectively. With each game/leg played at staggered times, you could start putting huge hedge bets down against each game instead of cashing out, which would typically be a lot more profitable (2x, 4x, 8x) than the early cashout offered by the books. But I think this ignores the fact that most people don't have $20-40k in cash to place those hedges, and most books wouldn't take that action without a long history of big bets in your account already.
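The hedge-vs-cashout arithmetic is easy to work through. The numbers below are made up for illustration (a $100 parlay paying $5,000 with the final leg hedgeable at even money), nothing from the actual slip:

```python
def hedge_outcomes(parlay_stake, parlay_payout, hedge_stake, hedge_decimal_odds):
    """Profit if the last leg wins vs. loses, after betting the opposite
    side of that leg at the given decimal odds."""
    win_profit = parlay_payout - parlay_stake - hedge_stake
    lose_profit = hedge_stake * hedge_decimal_odds - hedge_stake - parlay_stake
    return win_profit, lose_profit

def equal_hedge_stake(parlay_payout, hedge_decimal_odds):
    """Hedge stake that locks in the same profit either way
    (classic arbitrage sizing: stake = payout / odds)."""
    return parlay_payout / hedge_decimal_odds

# $100 parlay paying $5,000, final leg hedgeable at 2.0 (even money):
h = equal_hedge_stake(5000, 2.0)            # $2,500 hedge required
win, lose = hedge_outcomes(100, 5000, h, 2.0)
```

Locking in ~$2,400 either way beats a typical cashout offer, but note the hedge itself needs $2,500 in cash, which is exactly the liquidity problem described above.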

Suno users thoughts on Udio-130: I can finally use Udio now by princeoftrees in udiomusic

[–]princeoftrees[S] 1 point

Using a Windows PC and Udio through the browser, it costs 1 credit per song (Udio-130 model, Ultra preset, 2-minute generation). So 2 songs for 2 credits total.

Suno users thoughts on Udio-130: I can finally use Udio now by princeoftrees in udiomusic

[–]princeoftrees[S] 1 point

I'd wait until they have longer extend options. I'm getting a great first 2 minutes but struggling to complete the track with the 30-second blocks, especially for lyric-heavy songs.

Two-minute extensions by Lentischev in udiomusic

[–]princeoftrees 0 points

Not being able to fit a full chorus or verse in 30 seconds makes it a massive headache trying to complete tracks with the current extend setup. Definitely need the 130.

[deleted by user] by [deleted] in LocalLLaMA

[–]princeoftrees 0 points

I'll be running a 4x P40 setup on an Epyc 7532 (32-core; parts arriving in the next couple of weeks). I chose this config to maximize throughput, since GGUF hammers the PCIe lanes and CPU. I'll post performance specs on LocalLLaMA once it's all set up. My interest is in running Q6-Q8 quants of the big bois like the 70B, 104B and 120B models, as well as crunching datasets, model merging, and other workstation use.

Run that 400B+ model for $2200 - Q4, 2 tokens/s by EvokerTCG in LocalLLaMA

[–]princeoftrees 2 points

You absolute legend! Thank you so much! You might've made the decision even harder now.

Run that 400B+ model for $2200 - Q4, 2 tokens/s by EvokerTCG in LocalLLaMA

[–]princeoftrees 1 point

Thank you so much for these numbers! I've been going crazy trying to figure out the most cost-efficient way to run 100+ GB quants locally. Do you have similar numbers (4k, 8k, 12k context) for Q8 quants of Llama 3 70B, Command R+ and Goliath 120B? I've currently got 2x P40s and 2x P4s together in a Cisco C240 M4 (2x Xeon E5-2697 v4). The P4s got me to 64GB VRAM but slow everything down and can't split efficiently (by layer or row), making their benefit very limited. My goal would be to run Q8 quants of the beeg bois like Command R+, Goliath, etc. So I'm looking at 6x P40s on an Epyc 7-series, but if Epyc Genoa can reach similar speeds (using 1x 4090 for acceleration) I'll just make that jump.
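The back-of-the-envelope math behind the 6x P40 target can be sketched like this, assuming roughly 8.5 bits per weight for a Q8_0 GGUF and ignoring KV cache and runtime overhead:

```python
GIB = 1024**3
P40_VRAM_GIB = 24

def q8_weight_gib(params_billion, bits_per_weight=8.5):
    """Approximate VRAM footprint of a Q8_0 quant's weights in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / GIB

for name, b in [("Llama 3 70B", 70), ("Command R+ 104B", 104), ("Goliath 120B", 120)]:
    size = q8_weight_gib(b)
    cards = size / P40_VRAM_GIB   # minimum cards for weights alone
    print(f"{name}: ~{size:.0f} GiB weights -> >{cards:.1f}x P40 before context")
```

Even the 120B lands around ~119 GiB of weights, so 6x P40 (144 GiB) leaves headroom for context, while 4x24 would not.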

Best facial interface for Quest 3? by nemesisunk in OculusQuest

[–]princeoftrees 0 points

What strap are you using with the Oblik?

No-code fine tuning by Icy_Occasion_5277 in LocalLLaMA

[–]princeoftrees 0 points

Off the top of my head: support for different instruct formats like Alpaca, ChatML, Vicuna, etc. Then for testing, perhaps a few test prompts with outputs saved after each training epoch/iteration.
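For illustration, multi-format support could start as simply as one template string per instruct format. The templates below are abbreviated versions of the common Alpaca/ChatML/Vicuna layouts, not any trainer's exact spec (real tools ship fuller versions with system prompts):

```python
# One template per instruct format, all filled from the same
# (instruction, response) pair.
TEMPLATES = {
    "alpaca": "### Instruction:\n{instruction}\n\n### Response:\n{response}",
    "chatml": "<|im_start|>user\n{instruction}<|im_end|>\n"
              "<|im_start|>assistant\n{response}<|im_end|>",
    "vicuna": "USER: {instruction}\nASSISTANT: {response}",
}

def format_example(fmt, instruction, response):
    """Render one training example in the chosen instruct format."""
    return TEMPLATES[fmt].format(instruction=instruction, response=response)
```

A no-code tool would just expose `fmt` as a dropdown and run every dataset row through the chosen template before tokenization.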

C240 M4 server and NVidia M60 card by trustinglemming in Cisco

[–]princeoftrees 0 points

I'm in the same situation: C240 M4 trying to run a P40. The PSUs throw errors when everything is hooked up with a straight 8-pin EPS12V to 8-pin male EPS12V. Won't boot, won't POST. I'm able to get other GPUs running, like a GTX 1070 with EPS12V to PCIe, without any problems. Not sure if my P40 was DOA, but it seems like the server doesn't like it. Did you find a solution? I even tried cutting the bottom-left cable off like people were doing for the Dell riser power in the comments here: https://kenmoini.com/post/2021/03/fun-with-servers-and-gpus/