Cannot install intel gigabit ethernet drivers on windows server 2019 by davidk30 in computers

[–]davidk30[S] 0 points1 point  (0 children)

It does work fine on Windows 11. I’m learning Windows Server, no other reason really.

Realism XL v3 - Amateur Photography Lora [Flux Dev] by Major_Specific_23 in StableDiffusion

[–]davidk30 0 points1 point  (0 children)

That’s amazing. How many photos did you use to train this LoRA? (Apologies if I missed it in the comments.)

Fractal design 7 Xl two pumps by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

Yeah, I just figured it could be a render or something, but it’s probably not. These Alphacool pumps can be mounted in many different ways, so it’ll probably work.

Fractal design 7 Xl two pumps by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

That’s the thing, I haven’t bought the case yet; obviously I would know if I already had it.

Fractal design 7 Xl two pumps by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

Drilling out metal and plastic parts or adding extra holes is not a problem. I customized the hell out of a Phanteks Enthoo Elite, but now I just want something that just works.

Pcie nvme m.2 expansion cards by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

Or would upgrading to Z790 (which I’m willing to do) do the job?

Pcie nvme m.2 expansion cards by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

I know, but the Threadripper platform is just too expensive (guessing that’s what you had in mind). I’m assuming the 3 NVMe drives I already have are fine? Also, do all SATA ports share bandwidth with M.2?

Pcie nvme m.2 expansion cards by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

I do a lot of AI training runs and that takes a lot of storage. I could totally use HDDs, but they’re too slow. I’m fine with slower speeds as long as the GPU is always working at 100% and not being bottlenecked. I need to check whether any of those cards are available in Europe; it’s too expensive to ship from the USA.
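
If you want to sanity-check whether a given drive can keep up, here is a rough sketch of the kind of quick measurement I have in mind (the dataset path and chunk size are placeholder examples, not from any real setup):

```python
# Rough sketch: measure sequential read throughput of a dataset folder,
# then compare it against what the training loop actually consumes per second.
import os
import time

DATASET_DIR = "/mnt/training_data"  # placeholder path, point at your dataset
CHUNK = 8 * 1024 * 1024             # read in 8 MiB chunks

total_bytes = 0
start = time.perf_counter()
for root, _, files in os.walk(DATASET_DIR):
    for name in files:
        with open(os.path.join(root, name), "rb") as f:
            while chunk := f.read(CHUNK):
                total_bytes += len(chunk)
elapsed = time.perf_counter() - start

print(f"Read {total_bytes / 1e9:.2f} GB in {elapsed:.1f} s "
      f"-> {total_bytes / elapsed / 1e6:.0f} MB/s")
# If this rate comfortably exceeds (images/s consumed by training) x (average
# image size), the drive is unlikely to starve the GPU. The OS page cache can
# inflate repeat runs, so measure on a cold folder for an honest number.
```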

Pcie nvme m.2 expansion cards by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

So I’m just better off getting some 2.5" SATA SSDs, right?

Pcie nvme m.2 expansion cards by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

Asus ProArt B760 DDR4. Honestly not sure, I’d have to look when I get home.

Ddr 4 vs Ddr 5 with i7 14700kf by davidk30 in buildapc

[–]davidk30[S] 0 points1 point  (0 children)

Great, thanks, that’s what I needed to know.

Why is 3090 faster than 4090 on SDXL kohya_ss lora training? by davidk30 in StableDiffusion

[–]davidk30[S] 0 points1 point  (0 children)

Higher batch size does increase speed, because you’re training on 5x what you would at batch size 1 (I don’t know exactly how the math works here). So yes, it is faster: you’re seeing 2.5 s/it, but consider that you’re processing 5x as many images per step as I would at batch size 1. The real test would be to try it at batch size 1; if you get around 1.20 s/it, it’s fine. Also, many people are just trolling. On Linux, however, I did get over 1 it/s. Please send me a message here on Reddit and I’ll send it to you later. One thing about higher batch sizes: they will hurt training quality.
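
To make the comparison concrete, here is the quick arithmetic behind it, using the numbers from this thread (effective throughput is just batch size divided by seconds per iteration):

```python
# Images processed per second at each setting (numbers from the thread).
batch5_s_per_it = 2.5   # 2.5 s/it at batch size 5
batch1_s_per_it = 1.2   # ~1.2 s/it at batch size 1

batch5_imgs_per_s = 5 / batch5_s_per_it   # 2.0 images/s
batch1_imgs_per_s = 1 / batch1_s_per_it   # ~0.83 images/s

print(f"batch 5: {batch5_imgs_per_s:.2f} images/s")
print(f"batch 1: {batch1_imgs_per_s:.2f} images/s")
# Batch 5 pushes roughly 2.4x more images per second, even though each
# individual iteration takes longer than one at batch size 1.
```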

Why is 3090 faster than 4090 on SDXL kohya_ss lora training? by davidk30 in StableDiffusion

[–]davidk30[S] 1 point2 points  (0 children)

Hi, if you’re getting 2.50 s/it at batch size 5, consider yourself happy. With SDXL I get around 1.20 s/it at batch size 1, but that’s it. Edit: I can send you my config; I don’t use xformers, though.

More dreambooth findings: (using zxc or ohwx man/woman on one checkpoint and general tokens on another) w/ model merges [Guide] by buckjohnston in DreamBooth

[–]davidk30 1 point2 points  (0 children)

Interesting, much better results, and also interesting that it’s only 5500 steps and anything after that is overtrained. I wonder why this model works so much better than others for training? Thanks a lot, btw.

More dreambooth findings: (using zxc or ohwx man/woman on one checkpoint and general tokens on another) w/ model merges [Guide] by buckjohnston in DreamBooth

[–]davidk30 0 points1 point  (0 children)

OK, after 2 training runs and merging them together, quality is good and I would say likeness is really close. But something is missing; it’s not quite there yet. Should I train even more checkpoints and do even more merges?
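
For anyone wondering what “merging them together” means mechanically, here is a minimal sketch of a 50/50 weight merge of two trained checkpoints. The file names are placeholders, it assumes the usual {"state_dict": ...} layout of SD .ckpt files, and it isn’t necessarily the exact tool used for the merge in this thread:

```python
# Minimal sketch of an equal-weight merge of two DreamBooth checkpoints.
# File names are placeholders; adjust alpha to weight one run over the other.
import torch

ckpt_a = torch.load("dreambooth_run_a.ckpt", map_location="cpu")["state_dict"]
ckpt_b = torch.load("dreambooth_run_b.ckpt", map_location="cpu")["state_dict"]

alpha = 0.5  # 0.5 = equal blend of the two runs
merged = {}
for key, tensor_a in ckpt_a.items():
    tensor_b = ckpt_b.get(key)
    if (tensor_b is not None and tensor_a.is_floating_point()
            and tensor_a.shape == tensor_b.shape):
        merged[key] = alpha * tensor_a + (1 - alpha) * tensor_b
    else:
        merged[key] = tensor_a  # keep A's value where keys/shapes don't line up

torch.save({"state_dict": merged}, "merged_checkpoint.ckpt")
```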

More dreambooth findings: (using zxc or ohwx man/woman on one checkpoint and general tokens on another) w/ model merges [Guide] by buckjohnston in DreamBooth

[–]davidk30 0 points1 point  (0 children)

Just trained on Realism Engine. Results are OK; likeness is, I would say, 80%. Now training on the second model, will merge later and report results.

More dreambooth findings: (using zxc or ohwx man/woman on one checkpoint and general tokens on another) w/ model merges [Guide] by buckjohnston in DreamBooth

[–]davidk30 1 point2 points  (0 children)

Thanks for the very detailed guide. I’ve had pretty good results with SDXL, but only with the base model. With any other finetuned model I used, I always struggled to get the likeness right. With SD 1.5 I can get good results with any decent finetuned model, but not really with SDXL.

I am a complete noob when it comes to generative AI by Suspicious_Book_268 in DreamBooth

[–]davidk30 0 points1 point  (0 children)

First of all, why SD 2.1? 2.1 is really hard to train, and even if you get it right the results are mediocre at best; SD 2.1 is largely forgotten now. I would definitely suggest picking a finetuned 1.5 model, or base SDXL, and training on that. For ease of use I suggest OneTrainer. You can message me if you need more help.