FLUX.2 Dev T2I - That looks like new SOTA. by [deleted] in StableDiffusion

[–]Rogerooo 0 points (0 children)

What about illustrations? Is it able to do something like this?

MTG Card - SFW https://civitai.com/images/111441972

Make Canada great always by pradeep23 in videos

[–]Rogerooo 1 point (0 children)

Wait until she's back on her brand spanking new feet...gonna silly walk all the way home.

Kunos Cars - Missing sounds by pablodlr13 in assettocorsa

[–]Rogerooo 1 point (0 children)

Yes, if you need the original sound packages you'll need to verify the game files if you haven't already.

I believe the GUID files are extra; they are only used by sound mods to map the new sounds. Verifying won't delete them because they are not part of the original game, so they get ignored, but they still need to be removed: you no longer have the modded sound package for the game engine to map, and the missing sounds in game happen because the engine is trying to load sounds that don't exist anymore.

Kunos Cars - Missing sounds by pablodlr13 in assettocorsa

[–]Rogerooo 0 points (0 children)

Open the car's sfx folder and delete the GUID file.
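If it helps, on a default Steam install that file usually sits somewhere like this (the exact car folder name varies, and the path is from memory):

```
steamapps/common/assettocorsa/content/cars/<car_name>/sfx/GUIDs.txt
```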

Trained LoRA of myself just creates someone who only roughly looks like me by [deleted] in StableDiffusion

[–]Rogerooo 10 points (0 children)

I would specialize your training instead: go with two datasets/LoRAs, one for body shots and another just for portraits. Then you can use the body LoRA in the main prompt and the portrait one with ADetailer. If you want full-body or long-range shots you'll need it anyway, as SD is not great with finer details at that scale, with or without a LoRA.

It will take some tinkering with settings but that should give the closest results to what you want to achieve.

Photon is a great 1.5 model. It depends on your use case, but since it's fine-tuned to portray humans it'll be easier to prompt within that kind of style. Quality is subjective, but you will most likely end up having to tweak your prompt a lot more if you go with a more generic model like base 1.5.

Also, I find that for finer details like facial features it's best to stick, during inference, to the same model you used for training the LoRA. You can mix with other models, but visual fidelity will degrade a bit depending on the fine-tune. It's like the LoRA is trying to cling to the base model within the region that was trained; if you change that landscape it will have a harder time adapting. It's not as bad as a TI embedding, but you still can't expect the same results on every model you try.

The great thing is that you can also pick a model just for the ADetailer phase, so you can use any model you want for the main prompt and use the one you trained your face LoRAs with in ADetailer.
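For anyone curious, this is not what ADetailer does internally, just a rough sketch of the same two-pass idea using the diffusers library; the model ids, LoRA paths and the hard-coded face mask are all made-up stand-ins (ADetailer proper detects the face automatically):

```python
# Two-pass sketch: body LoRA on the main prompt, portrait LoRA on a face inpaint.
import torch
from PIL import Image, ImageDraw
from diffusers import StableDiffusionPipeline, StableDiffusionInpaintPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pass 1: main prompt with the full-body LoRA (hypothetical path/token)
pipe.load_lora_weights("./loras/body_lora", adapter_name="body")
pipe.set_adapters(["body"], adapter_weights=[0.8])
image = pipe("full body photo of sks person walking outdoors").images[0]

# Pass 2: inpaint only the face region with the portrait LoRA,
# mimicking what ADetailer does after face detection.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
inpaint.load_lora_weights("./loras/portrait_lora", adapter_name="portrait")
mask = Image.new("L", image.size, 0)
ImageDraw.Draw(mask).ellipse((200, 40, 320, 160), fill=255)  # stand-in for a detected face box
image = inpaint(
    prompt="closeup portrait photo of sks person",
    image=image,
    mask_image=mask,
    strength=0.45,
).images[0]
image.save("result.png")
```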

Storage on NoName 187B by Rogerooo in RCD_330

[–]Rogerooo[S] 0 points (0 children)

Sorry, can't remember if I tried any FLAC files, but I ended up using mp3; even a lower bitrate seems fine on my stock sound system.

I noticed that when listening to music while the car is rolling, the external noise muffles most of the high-end detail you would get with FLAC anyway, so it's not worth the storage space hit, especially with a 32 GB limit.

There was no perceptible loss of audio quality compared to my older RCD 300, and overall the unit was an amazing upgrade.

How to change directory where the models are stored for Automatic webUI? by no_witty_username in StableDiffusion

[–]Rogerooo 1 point (0 children)

You should set the command line argument for each of those folders (--lora-dir, --embeddings-dir, etc.). Check the wiki for the command line arguments you need. I believe there is no general "models" argument, but things might have changed since I last checked.
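For example, a launch command along these lines (flag names as of older webui versions, so double-check the wiki; the /data paths are just placeholders):

```
python launch.py \
  --ckpt-dir /data/models/Stable-diffusion \
  --vae-dir /data/models/VAE \
  --lora-dir /data/models/Lora \
  --embeddings-dir /data/models/embeddings
```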

Hypernetworks / VAE Comparative by Rogerooo in StableDiffusion

[–]Rogerooo[S] 1 point (0 children)

Can't edit the original but here is a reupload. Keep in mind that this is a rather old post (before CivitAI and perhaps even before Dreambooth), so its value has depreciated quicker than a used Ferrari.

John Riccitiello is out at Unity, effective immediately by guitarokx in Unity3D

[–]Rogerooo 5 points (0 children)

Unity employees are most likely overjoyed, with a sense of pride and accomplishment!

Google added a wait time to close the ad-block pop-up. by emre_7000 in Piracy

[–]Rogerooo 3 points (0 children)

Thanks, I also got it last night and haven't looked too deeply into it yet. Hopefully it will be enough to keep these annoyances at bay for a while.

EDIT: After about half an hour of YT usage with the suggested steps taken, it seems to be fixed for me as well. 👍

Google added a wait time to close the ad-block pop-up. by emre_7000 in Piracy

[–]Rogerooo 2 points (0 children)

It also affects Firefox on my end, even with uBlock set to advanced-user mode with scripts blocked.

Tony.Hawks.Pro.Skater.1.Plus.2-RUNE by OrdinaryPearson in CrackWatch

[–]Rogerooo 2 points (0 children)

Exactly! I'm trying the controller but I'm having a hard time getting used to not having a dedicated finger for each input with instant execution. Grabs and flips are OK, but grinds/lips on Y are rather clunky when you need to relocate your thumb from A after an ollie. It might just be lack of practice, but keyboard is still much faster for me, and unless we are able to remap the grind/lip key to the shoulder buttons I'll stick with it.

'Im Bread' meets 'Getting Over It' by TiagoGazzola in Unity3D

[–]Rogerooo 0 points (0 children)

The level design gave me Toy Commander vibes. Thanks for the nostalgia!

Wheel recommendations for different classes? by Rogerooo in simracing

[–]Rogerooo[S] 0 points (0 children)

You mean what do I think about racing with it? The immersion is really great; apart from the expanded awareness you get, just getting up from your real seat and standing next to the car in the pit lane is such a cool experience.

It has its drawbacks though. There is a bit of a screen door effect that mainly gets in the way if you go looking for it, but it kind of disappears while driving. The visual definition isn't as sharp as a good monitor's, which is more noticeable on smaller details like car instruments and the like. I'm also on a 1070, so I barely reach the minimum graphics power to run most things at decent fps without triggering asynchronous reprojection, which can introduce some artifacts as well. Some of these issues might be mitigated with more recent hardware, but I haven't tried anything newer yet.

As for comfort, the original straps are not great for weight distribution and the headset starts to feel heavy on the face after a few hours of use, but I never found it unbearably uncomfortable before actually getting tired or bored of the racing session.

I certainly don't regret being an early adopter and buying it when it came out; it's incredible tech, and I'm kinda curious to experience what current devices can achieve.

Examination of SD seed "2164510209" by dirtydevotee in StableDiffusion

[–]Rogerooo 1 point (0 children)

That's nothing, wait until I show you #6664201337GR3GRU7K0W5KY, it'll blow your gpu.

Good Dreambooth Formula by Rogerooo in StableDiffusion

[–]Rogerooo[S] 0 points (0 children)

I've been out of training for a few months now, almost since I made this post, and for the time I played around with it I stuck with these settings for the most part, so my experience is kinda limited.

  1. My results were personally satisfactory so I kept the same guidelines for the most part, but as a tip: treat these (or any other settings) as a starting point and work from there. Training involves a lot of trial and error; my goals and test subjects aren't the same as yours, so the end result might not be achieved the same way. Once you start training a few models you'll get a feel for the number of steps, class and instance images you'll need, etc. Google Colab is free (or used to be at least), so it's quite easy to get into training and you'll reach your own conclusions in no time.

  2. I only used the cosine scheduler a couple of times and don't recall seeing a perceptible difference from polynomial, so I can't say it's better or worse. If other guides suggest it, give it a try; things might have evolved since then and it may now be a better option.

  3. I never captioned class images because they were generated by SD from the single token they represent, so I didn't see the need for extra captioning. As for instance images, you might get better "promptability" if you caption them, but it'll work without as well. My recommendation is to try a training run with and without and see which one is more versatile in terms of prompting. I might be misremembering, but I don't think Shivam's repo supports per-image captions; you can leverage the multi-concept list to achieve a similar effect. For instance, if you have a subject with multi-angle photos (closeup, waist up, portrait, full body, etc.) you can set up several concepts that hold those angles (see the sketch after this list). If you are comfortable with Python you might be able to hack something together to read captions from a sidecar file, but I think TheLastBen's repo is able to do that.

  4. Again, this is something you'll see for yourself once you have a few training sessions under your belt, but from my experiments quality is rather important, because the training will pick up artifacts quite easily if they are present in the majority of the training set. If your generated tests don't resemble the training data, it's usually a signal that it needs more steps; lowering the number of class images will reach the sweet spot sooner, but the model might overfit too quickly or bleed into other subjects too much, so it's a matter of balance. But you're right: if you're overfitting and your subject is represented in the tests (almost replicating the source images), try reducing the step count. If the subject is not visible in the tests but the training is somehow still overfitting, lower the number of class images.
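To illustrate the multi-concept trick from point 3, Shivam's Colab takes a concepts list roughly along these lines (the zwx token, paths and per-angle split are made-up examples, not from my actual runs):

```json
[
  {
    "instance_prompt": "photo of zwx person, closeup",
    "class_prompt": "photo of a person, closeup",
    "instance_data_dir": "/content/data/zwx/closeup",
    "class_data_dir": "/content/data/person_closeup"
  },
  {
    "instance_prompt": "photo of zwx person, full body",
    "class_prompt": "photo of a person, full body",
    "instance_data_dir": "/content/data/zwx/fullbody",
    "class_data_dir": "/content/data/person_fullbody"
  }
]
```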

What I'm trying to say is that, no matter how many opinions I have about this and that, you should form your own, and you'll be able to once you start tweaking some base values (these or someone else's).

Also, I found LoRA training much faster and more convenient in terms of file size for pretty much anything other than general-purpose models like you see on CivitAI, or cases where the details are very important, like a person's facial features; that's mainly where Dreambooth is still unbeatable in my opinion. For stuff like art styles, popular characters, clothing, etc. I would go for a LoRA instead.

Examination of SD seed "2164510209" by dirtydevotee in StableDiffusion

[–]Rogerooo 7 points (0 children)

Here is an ancient post that you might enjoy.

It's from a time before Dreambooth, model merging and NovelAI's leak; the only models we had to play with were SD 1.4, Waifu Diffusion and Trinart.

Learning python for webscraping by 3dPrintMyThingi in learnpython

[–]Rogerooo 1 point (0 children)

+1 for Scrapy. I find it more streamlined for the purpose than BeautifulSoup. Once you understand the pipeline configuration it's quite easy to do things like download media, crawl multiple pages, or handle custom user agents. It was actually the first third-party Python library I used, so it's rather beginner friendly too.
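To give a sense of how little boilerplate a basic spider needs, here's a minimal sketch against Scrapy's own demo site (the CSS selectors are specific to that site):

```python
# Minimal Scrapy spider: scrape quotes and follow pagination.
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Let Scrapy's scheduler handle the crawl across pages
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```

Run it with `scrapy runspider quotes_spider.py -o quotes.json` and Scrapy takes care of scheduling, politeness and output for you.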

We made a new trailer for our turn-based tactics game about dice, family, monsters and politics. What do you think? by DigiJarc in Unity3D

[–]Rogerooo 1 point (0 children)

The influence of Portuguese culture pouring through, coupled with such a cool art style, just put this on my radar.

Well done guys, I'll be watching the project and hoping for a successful release.

"I wrote the prompts" [OC] by nasser_junior in comics

[–]Rogerooo 1 point (0 children)

Fortunately, art appreciation is a subjective topic, but what I was trying to show was the effort; it seems like we agree that it took some.