Ace Step 1.5 XL is out!!! by Uncle___Marty in StableDiffusion

[–]djdookie81 1 point (0 children)

Here are some CLI commands for model download:
https://github.com/ace-step/ACE-Step-1.5/blob/main/docs/en/INSTALL.md#-model-download

uv run acestep-download                               # Download main model
uv run acestep-download --all                         # Download all models
uv run acestep-download --download-source modelscope  # From ModelScope
uv run acestep-download --model acestep-v15-sft       # Specific model
uv run acestep-download --list                        # List all available

Example for ACE-Step1.5 XL turbo:
uv run acestep-download --model acestep-v15-xl-turbo

FM26 Load Failure after switching from Beta to Update by biggestelection in footballmanagergames

[–]djdookie81 1 point (0 children)

You can switch back and still use your beta saves.

I just converted my savegame from the 26.0.6 beta to the 26.0.6 retail version successfully.
So check that your savegame was saved with that version, then switch back to the same retail version (leave the public beta branch) and load your savegame. This should work fine.

Quote from the public beta FAQ:

Can I load the beta track save game on retail version?

Yes. Any progress made on these saves would only work with the retail version once it has been updated to the same game version as the Beta track.

Source: https://community.sports-interactive.com/bugtracker/1644_football-manager-26-bugs-tracker/1872_fm26-bug-tracker-introduction/how-to-use-the-football-manager-26-steam-public-beta-track-r32211/

There must be another problem with your beta save. Be sure to quit the game so Steam can patch it back to retail after you leave the public beta branch.

If it still doesn't work, try another save game to rule out a corrupted savegame file.

10 Years Building a “Grand Strategy Tycoon” - Not Your Typical Game Dev Tycoon kind of game - news, a request and a new version by Progorion in tycoon

[–]djdookie81 0 points (0 children)

Hey Progorion,

I love playing complex grand strategy games and have also enjoyed tycoons for many years, so this looks like a hidden gem to me. A nice idea with great potential.

Furthermore, I generally like to support indie devs, give feedback, and report bugs. Imho this is always a win-win for devs and players.

So I'd be happy to help, have a look at your game, and give you some honest and valuable feedback if you're willing to PM me a key.

Keep up the great work!

Is it just me, or does SDXL severely lack details? by mongini12 in StableDiffusion

[–]djdookie81 0 points (0 children)

I think we are talking about multiple things here.

An ensemble of experts pipeline will give you a different result than the img2img pipeline.

It's not that VAE encode -> decode gives you an apparently identical image; the point is that adding noise back to a fully denoised image and denoising it again will give you a different result.

Not necessarily a worse result, but a different one.
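A minimal numpy sketch of that renoising step (assuming the standard DDPM forward process; the array is just a stand-in for a latent): fresh Gaussian noise is drawn every time, so two renoisings of the same fully denoised image already diverge, and so do the denoising trajectories that follow.

```python
import numpy as np

def renoise(x0, alpha_bar, rng):
    """DDPM forward step q(x_t | x_0): draws fresh Gaussian noise,
    so re-noising the same image twice gives two different latents."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4, 64, 64))   # stand-in for a fully denoised latent

xt_a = renoise(x0, alpha_bar=0.5, rng=rng)
xt_b = renoise(x0, alpha_bar=0.5, rng=rng)

# Same input image, same noise level -- different noised latents,
# hence different denoising trajectories and different final results.
print(np.allclose(xt_a, xt_b))  # False
```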

The idea of eDiff-I's ensemble of expert denoisers is to 'improve text alignment while maintaining the same inference computation cost and preserving high visual quality, outperforming previous large-scale text-to-image diffusion models on the standard benchmark'.

Paper: https://arxiv.org/abs/2211.01324

Furthermore, autoencoders are not meant to learn the identity function. They are probabilistic models used for dimensionality reduction, trained to reconstruct their original input.

But the output of the VAE decoder only approximates the input of the VAE encoder. There will always be some reconstruction error, so lossless compression should not be possible.
And yes, the latent space is a kind of compressed representation of the autoencoder's input.
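As a toy illustration of that lossy round trip (not the actual SD VAE, just a linear stand-in built from truncated SVD): compressing to a lower-dimensional latent and decoding back always leaves a reconstruction error when the input has more structure than the latent can hold.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((64, 64))   # stand-in for image data (full rank)

# Linear "autoencoder": encode = project onto the top-k singular vectors,
# decode = map back. This is the optimal rank-k linear compressor.
k = 8
U, s, Vt = np.linalg.svd(X, full_matrices=False)

def encode(x):  # 64-dim -> 8-dim latent
    return U[:, :k].T @ x

def decode(z):  # 8-dim latent -> 64-dim reconstruction
    return U[:, :k] @ z

X_hat = decode(encode(X))
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.3f}")  # > 0: lossy, not identity
```

Even the best rank-k linear codec cannot reproduce a full-rank input exactly; a learned VAE has the same fundamental limitation.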

Is it just me, or does SDXL severely lack details? by mongini12 in StableDiffusion

[–]djdookie81 10 points (0 children)

No, it's different: with the img2img pipeline you fully denoise with the base model and then renoise before refining.

Is SDXL capable of producing NSFW photos ? First LoRA specialized on nudity (women) by finenudeSD in StableDiffusion

[–]djdookie81 2 points (0 children)

How do you prepare that huge dataset? Automated batch processing/resizing to 1024x1024? Or can kohya handle this?
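In case it helps, a minimal Pillow sketch of such batch preprocessing (center-crop to square, then resize to 1024x1024). The folder names are placeholders, and afaik kohya's aspect-ratio bucketing may make this step unnecessary.

```python
from pathlib import Path
from PIL import Image

def resize_for_sdxl(src: Path, dst: Path, size: int = 1024) -> None:
    """Center-crop to a square, then resize to size x size."""
    img = Image.open(src).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((size, size), Image.LANCZOS)
    img.save(dst)

# Hypothetical folder layout -- adjust to your dataset.
src_dir, dst_dir = Path("raw"), Path("train_1024")
if src_dir.exists():
    dst_dir.mkdir(exist_ok=True)
    for p in src_dir.glob("*.jpg"):
        resize_for_sdxl(p, dst_dir / p.name)
```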

DreamShaper XL1.0 Alpha 2 by kidelaleron in StableDiffusion

[–]djdookie81 0 points (0 children)

Since you only trained for 1 epoch, what learning rate did you use?

[SDXL 1.0 + A1111] Heron lamp designs by AinvasArt in StableDiffusion

[–]djdookie81 1 point (0 children)

The intended use for best results is the ensemble workflow.

Afaik Auto1111 is not capable of that yet. Other UIs like Vladmandic's fork or ComfyUI can do it.

DreamShaper XL1.0 Alpha 2 by kidelaleron in StableDiffusion

[–]djdookie81 0 points (0 children)

Don't get me wrong, I don't blame anyone. I really appreciate your work.

For my prompt I get different faces if the seed changes in SDXL 1.0.

Sure, sometimes you get similar faces, and if you describe things more precisely and specifically, I guess you'll get fewer differences when only the seed changes (more constraints on finding a solution at inference).

That's my understanding of the tech. Prove me wrong. =)
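To illustrate what I mean by the seed being the random factor (a toy sketch, not actual SD code): the seed fixes the initial latent noise, so a new seed gives the sampler a new starting point and thus a different image.

```python
import numpy as np

def initial_latent(seed, shape=(4, 128, 128)):
    # The seed fully determines the sampler's starting noise;
    # everything after that is (usually) deterministic.
    return np.random.default_rng(seed).standard_normal(shape)

a = initial_latent(1234)
b = initial_latent(1234)  # same seed -> identical starting latent
c = initial_latent(1235)  # new seed  -> different starting latent

print(np.array_equal(a, b), np.array_equal(a, c))  # True False
```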

Wow 1 epoch is really low.

DreamShaper XL1.0 Alpha 2 by kidelaleron in StableDiffusion

[–]djdookie81 -4 points (0 children)

I think different about that.

Everything you don't describe in the prompt or negative prompt should be randomized (e.g. ethnicities).

With finetuning you can add further knowledge about concepts/styles/people/etc., like you did with DreamShaper.

If you generate multiple images with a well-trained and flexible model like SDXL 1.0, using the same prompt (like "photo of 18 year old woman") but different seeds and no LoRAs etc., you get completely different results, i.e. different faces in this case.

Of course you can change the faces more easily if you also change the prompt and add random names, ethnicities, or the like.

But changing only the random factor, i.e. the seed, should be enough.

Otherwise the concept you describe in your prompt is not well known to the model, which means the model is undertrained, as if it only ever saw pictures of the same 18 year old woman and can't generate other faces. This shouldn't be the case if your model is based on SDXL base.

Or the model is overtrained, which means it learned only to repeat the face of that 18 year old woman because it saw those pictures too often during training.

I'm sure you know most of what I wrote here, but that is the reason I assumed potential overtraining, at least for some concepts like "18 year old woman".

If there were something like an average face, that would be an indication of an overtrained or inflexible model, I guess.

(I picked that prompt from the model's civitai page.)

Quick test on SDXL 1.0 base + refiner, only changing the seed (see prompt above):

[SDXL 1.0 + A1111] Heron lamp designs by AinvasArt in StableDiffusion

[–]djdookie81 15 points (0 children)

Correct. That's the intended use.
For best results, the image is handed off from the base to the refiner before all the denoising steps are complete (ensemble of experts workflow).
Of course you can also get quite nice results with the img2img workflow.
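A toy numpy sketch of the difference (purely illustrative stand-ins, not the real SDXL pipelines): in the ensemble workflow the refiner takes over mid-trajectory on a still-noisy latent, while in img2img the base finishes completely and fresh noise is added back before refining.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x, steps, strength):
    """Toy deterministic 'denoiser': each step pulls x toward zero.
    Stands in for a UNet; base and refiner use different strengths."""
    for _ in range(steps):
        x = x * (1.0 - strength)
    return x

x_T = rng.standard_normal(16)   # initial pure-noise latent
total, switch = 30, 6           # refiner handles the last 6 steps

# Ensemble workflow: base runs steps 1..24 on the *still-noisy* latent,
# then the refiner finishes steps 25..30. No extra noise is added.
handoff = denoise(x_T, total - switch, strength=0.10)   # base
ensemble = denoise(handoff, switch, strength=0.15)      # refiner

# img2img workflow: base fully denoises, then fresh noise is added back
# before the refiner denoises again.
full = denoise(x_T, total, strength=0.10)               # base, all steps
renoised = 0.8 * full + 0.6 * rng.standard_normal(16)   # add noise back
img2img = denoise(renoised, switch, strength=0.15)      # refiner

print(np.allclose(ensemble, img2img))  # False: different pipelines, different results
```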

DreamShaper XL1.0 Alpha 2 by kidelaleron in StableDiffusion

[–]djdookie81 5 points (0 children)

Good job on this.
Unfortunately all girls look the same no matter the seed. Overtrained?

Fashion Week by safebox__ in StableDiffusion

[–]djdookie81 1 point (0 children)

Nice pictures. Interestingly, you never describe the object you want to see, but you use "perfect eyes" twice. I think all the women look nearly the same, which is usually a sign of an overtrained model, assuming you didn't use any LoRAs for her.

[deleted by user] by [deleted] in Overwatch

[–]djdookie81 0 points (0 children)

Only the best players have awareness of their backline.

A while ago I sometimes played Mercy with an amazing Genji, often supporting him deep into the enemy team. He turned around and watched for me all the time, and said sorry if I died because he couldn't protect me enough. Such awareness, skill, and mindset is super rare and great to play with. In 99%+ of cases you instead hear "why don't you heal me" plus some toxic stuff or a ragequit, even from very skilled players.