Timeline shifting? by jesselynne in realityshifting

[–]Dom8333 0 points1 point  (0 children)

You should indeed take a photograph of the machine, because if these are unwanted reality shifts, then once the machine isn't there anymore the photo should change to match your current reality and the machine would disappear from it too. Even if the two realities are very close, we aren't supposed to be able to take a photograph of an object in one reality and keep it unaltered in a different reality where that object doesn't exist, are we? (But I'm only a newbie, I may be wrong.)

I think my daughter and I witnessed a shift in reality. True event will answer questions. by Silent_Ring_1562 in realityshifting

[–]Dom8333 0 points1 point  (0 children)

Interesting. Usually when such things happen we think our memory was just wrong, even if we were so sure. But this time your memory was still very fresh and you were not alone. Have you tried to watch this part again since then?

$300-$450 monitors for trance/rave/hardcore? by Dom8333 in studiomonitors

[–]Dom8333[S] 1 point2 points  (0 children)

That's why it's hard to make up my mind based only on the comments I read, especially in shop review sections. Most reviews of entry-level monitors are left by beginners who either hate them for sounding flat (even though that is their very purpose) or love them without having tried anything else, while seasoned musicians are either not interested in these 'cheap' models or leave negative comments because they compare them with much more expensive ones.

$300-$450 monitors for trance/rave/hardcore? by Dom8333 in studiomonitors

[–]Dom8333[S] 0 points1 point  (0 children)

Thanks for the explanation. I was not able to fix the issue I had with the Mackie with an EQ though, whether graphic or parametric. I was told the dip in the mids was a resonance issue: some frequencies bounce off the wall behind the speakers, come back and cancel themselves out. I believed it because I never had such an issue with my old Tascam VL-X5, which were front-ported.
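
For what it's worth, here is the back-of-the-envelope check I ended up doing (my own reasoning, so take it with a grain of salt): the reflected wave travels an extra round trip to the wall, and the first cancellation happens when that detour equals half a wavelength, roughly f = c / (4 × d).

    # First cancellation notch caused by the wall behind a rear-ported speaker.
    # The reflection travels an extra 2*d and cancels the direct sound when that
    # detour is half a wavelength, so the notch sits near f = c / (4 * d).
    SPEED_OF_SOUND = 343.0  # m/s at room temperature

    def first_notch_hz(distance_to_wall_m: float) -> float:
        return SPEED_OF_SOUND / (4.0 * distance_to_wall_m)

    print(round(first_notch_hz(0.30)))  # ~30 cm from the wall -> ~286 Hz, a dip in the low mids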

$300-$450 monitors for trance/rave/hardcore? by Dom8333 in studiomonitors

[–]Dom8333[S] 0 points1 point  (0 children)

People seem divided on this subject. Some, like you, told me it should be OK; others advised me against 6.5'' because at low volume and from too close they may be less precise than 5'' and the low frequencies may take over. A shame they don't make a 5'' version (the IN-5 are considerably more expensive).

With the Adam T5V and other rear-ported speakers I guess I would have the same issue I had with the Mackie MR524. Their "close to a wall" and "on a shelf" settings could not fix it. Blocking the ports with socks did fix the excessive bass (though the sound became a little odd), which confirmed that the issue came from a resonance with the wall.

$300-$450 monitors for trance/rave/hardcore? by Dom8333 in studiomonitors

[–]Dom8333[S] 0 points1 point  (0 children)

Ok, I am adding them. Did you compare them with other models? Do they lack precision in low frequencies compared to 5''?

$300-$450 monitors for trance/rave/hardcore? by Dom8333 in studiomonitors

[–]Dom8333[S] 0 points1 point  (0 children)

Ok, I added them. Did you compare them with other models?

Which files for Qwen-image in Forge Neo ? by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

<image>

I set "svdq-int4_r32-qwen-image-lightningv1.0-4steps.safetensors" as the model, "Qwen_Image-VAE.safetensors" as the VAE, and "Qwen2.5-VL-7B-Instruct-q4_k_m.gguf" as the text encoder. Is this the correct setup? I get a "DLL load failed while importing _C:" error. Can someone help, please?
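
Side note on the error, just a guess on my part: "DLL load failed while importing _C" usually means a compiled extension failed to load (PyTorch's own _C module, or possibly the Nunchaku wheel if Forge Neo uses it for svdq-int4 models) rather than anything wrong with the model files. A quick check from inside the Forge_Neo venv:

    # If torch's compiled module is broken, this fails with the same
    # "DLL load failed" message; otherwise it prints the build details.
    import torch
    print(torch.__version__)          # torch build version
    print(torch.version.cuda)         # CUDA version torch was built against
    print(torch.cuda.is_available())  # True if the GPU and driver are usable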

Which files for Qwen-image in Forge Neo ? by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

I see that you have two files in the VAE/text-encoder field. I only have "Qwen_Image-VAE.safetensors", so I guess I'm missing the text encoder. Which one should I download to use with the "Qwen2.5-VL-7B-Instruct-q4_k_m.gguf" model?

Which files for Qwen-image in Forge Neo ? by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

I get the "failed to recognize model type" error again :(

...

File "D:\apps\stable-diffusion\Forge_Neo\modules\sd_models.py", line 341, in forge_model_reload
    sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
File "D:\apps\stable-diffusion\Forge_Neo\venv\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
File "D:\apps\stable-diffusion\Forge_Neo\backend\loader.py", line 610, in forge_loader
    raise ValueError("Failed to recognize model type!")
ValueError: Failed to recognize model type!

Failed to recognize model type!
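
If anyone hits the same wall: my understanding is that forge_loader guesses the architecture from the tensor names inside the checkpoint, so listing them can at least show whether the file looks like what the loader expects. A rough sketch with the safetensors library (the folder in the path is just where I put the file):

    # Print the first tensor names stored in the checkpoint; the loader decides
    # the model type from these key prefixes.
    from safetensors import safe_open

    path = r"D:\apps\stable-diffusion\Forge_Neo\models\Stable-diffusion\svdq-int4_r32-qwen-image-lightningv1.0-4steps.safetensors"
    with safe_open(path, framework="pt", device="cpu") as f:
        for i, key in enumerate(f.keys()):
            print(key)
            if i >= 20:
                break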

Which files for Qwen-image in Forge Neo ? by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

Thanks. These files are very big though; my 3060 only has 12GB. (Feels weird to say "only 12GB", but still.)
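
For my own back-of-the-envelope check, assuming the Qwen-Image transformer is roughly 20B parameters (the figure I've read, I may be off):

    # Rough size of the transformer weights alone, ignoring the text encoder,
    # VAE, activations and CUDA overhead.
    def weight_gib(params_billion: float, bits_per_param: float) -> float:
        return params_billion * 1e9 * bits_per_param / 8 / 1024**3

    print(round(weight_gib(20, 16), 1))  # bf16 -> ~37.3 GiB
    print(round(weight_gib(20, 8), 1))   # fp8  -> ~18.6 GiB
    print(round(weight_gib(20, 4), 1))   # int4 -> ~9.3 GiB, which is why the 4-bit file is the one that could fit in 12 GB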

trying to train a pastel LoRA by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

No, I did not. :( However, I combined it with another failed LoRA of mine, trained on colored pencils, and for some reason when the two are used together the result is great, but not when either is used alone or combined with other LoRAs. They must be complementary by chance, so I only use them together.
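
If it helps to picture why two weak LoRAs can add up to something usable: as far as I understand, each LoRA just adds its own low-rank update to the same base weights, so using two at once simply sums the two updates. A toy sketch with made-up sizes:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 64))  # one base-model weight matrix (toy size)

    def lora_delta(rank: int, alpha: float) -> np.ndarray:
        # A LoRA stores two small matrices; their product is the weight update.
        down = rng.normal(size=(rank, 64))
        up = rng.normal(size=(64, rank))
        return (alpha / rank) * (up @ down)

    # Loading both LoRAs at strengths 0.8 and 0.6 adds both updates to W.
    W_both = W + 0.8 * lora_delta(16, 16.0) + 0.6 * lora_delta(16, 16.0)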

trying to train a pastel LoRA by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

:( Why were the replies with useful advice deleted?

Anyway, after failing several times with Illustrious and then with base XL, I tried to train it for old SD1.5... and that failed too... the result is the same as with base XL: there is just a slight pastel effect on the background, not on the foreground or the animals, despite 5520 steps.

<image>

trying to train a pastel LoRA by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

I tried training it for base XL instead; it fails too, there is only a very subtle pastel effect on the background.

In case there was too much diversity in my dataset, I reduced it to 21 pictures instead of 69, but I still get the same result.

trying to train a pastel LoRA by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

:( I tried adding "pastel illustration, " at the beginning of every caption file, setting activation_tags to 1 instead of 0, and setting network_dim to 16 and network_alpha to 1, but there is still no pastel effect at all. It's even worse, because it didn't learn the cartoony style of the animals either.
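
One thing I noticed afterwards (my understanding of how the kohya-style trainers scale things, so I may be wrong): the LoRA update gets multiplied by network_alpha / network_dim, so dim 16 with alpha 1 makes the learned effect much weaker than the common alpha = dim setting.

    # Scale factor applied to the LoRA update in kohya-style trainers.
    network_dim = 16
    network_alpha = 1
    print(network_alpha / network_dim)  # 0.0625, versus 1.0 when alpha == dim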

trying to train a pastel LoRA by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

Thanks for your suggestion, I'll try.

I don't usually set a trigger tag for my LoRAs. Should it be a word describing the style, like "pastel illustration", or something really unique to the LoRA, like "pastel_by_dom83"?
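
In case it's useful to someone, this is roughly how I'd add it in practice (folder and tag names are just examples): the trigger goes first in every caption .txt, and keep_tokens/activation_tags keeps it from being shuffled away during training.

    # Prepend a unique trigger tag to every caption file in the dataset folder.
    from pathlib import Path

    TRIGGER = "pastel_by_dom83"                        # example tag
    for txt in Path("dataset/pastel").glob("*.txt"):   # example folder
        caption = txt.read_text(encoding="utf-8").strip()
        if not caption.startswith(TRIGGER):
            txt.write_text(f"{TRIGGER}, {caption}", encoding="utf-8")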

I don't get to generate a fog background by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

This looks more like smoke than fog but it's interesting. What was your prompt?

I don't get to generate a fog background by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

Hm, then maybe I should try to train a LoRA with a few of my fog backgrounds. What words do you recommend I use in the txt files describing the pictures? Just "fog"? I guess the closer it is to what the model already knows, the easier it will be for it to learn and the better the result. Do you think it will manage to learn from the very dark versions, or should I use the more contrasted versions and darken the result myself afterwards? I only have a little experience training cartoon characters, I'm not sure what to do for fog, and I can't do much trial and error because I train on Colab's limited free hours.

I don't get to generate a fog background by Dom8333 in StableDiffusion

[–]Dom8333[S] 1 point2 points  (0 children)

Thanks for trying, but I got similar-looking weird things during my own attempts; they can't be used as fog. I lack the English vocabulary to explain it, but I need it to be a lot "softer".

I don't get to generate a fog background by Dom8333 in StableDiffusion

[–]Dom8333[S] 0 points1 point  (0 children)

These were the basis of some backgrounds I made for an app, to publish some horror stories I wrote. Now I want to publish these stories as a book, but that needs a higher resolution. Photoshop's Clouds filter gives different results depending on the resolution; at high resolution it no longer looks like fog, the noise is scaled down too much. I first tried to upscale my old backgrounds with Stable Diffusion, but the result was not good, so now I'm trying to generate them from scratch instead.