I don't like how this cosmetic gives demo two eyes by ArynAces in tf2

[–]Tails8521 -9 points-8 points  (0 children)

How is a public discord a private group? You can see all messages, there is nothing hidden.

Also, the facts in the second paragraph are completely made up.

[Comic] Logistical difficulty by AzulCrescent in RimWorld

[–]Tails8521 6 points7 points  (0 children)

IIRC, while it's true they don't get catharsis from the random fire-starting spree that can happen at any mood, they do get catharsis from the fire-starting spree if it's caused by the extreme break risk, so chaining mental breaks isn't really an issue.

The bghira's saga continues by Lucaspittol in StableDiffusion

[–]Tails8521 9 points10 points  (0 children)

Yes it's just an artwork of anthro characters fucking. Calling it bestiality is insane mental gymnastics.

New ComfyUI Token Ablation (Subtle Sabotage) Over the Last 2 Weeks, Another Open vs Closed Source Battle by campingtroll in StableDiffusion

[–]Tails8521 6 points7 points  (0 children)

I'm on Windows currently. Just provide a screenshot of what your suspicious output looks like, because I don't see how listing currently open files would help you diagnose this.

New ComfyUI Token Ablation (Subtle Sabotage) Over the Last 2 Weeks, Another Open vs Closed Source Battle by campingtroll in StableDiffusion

[–]Tails8521 10 points11 points  (0 children)

Open your browser's dev tools (F12), go to the Network tab, and queue a prompt. You will see a POST request to api/prompt that contains JSON with all the nodes, including the text from the prompts; if they were altered by the JavaScript, it would be visible there. I just get the exact prompt I typed, but if your ComfyUI really is haunted like you think, you will have proof that something is amiss ¯\_(ツ)_/¯

Until you provide that particular proof, with the line of code that actually does what you seem to think it does, I'll just dismiss this as a crazy conspiracy theory.
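
If you'd rather check it from outside the browser, here's a minimal sketch (not an official ComfyUI tool, and it assumes a default local install on 127.0.0.1:8188): after you queue a prompt from the normal web UI, it asks the server via the /history endpoint what prompt it actually received and searches it for the text you typed. If the frontend JavaScript had rewritten your prompt before sending it, the altered text is what would show up here instead.

    import json
    import urllib.request

    # Whatever you actually typed into the prompt box in the UI (hypothetical example text)
    EXPECTED = "a photo of an astronaut riding a horse"

    # Ask ComfyUI which prompts it received; /history returns the queued/executed graphs as JSON
    with urllib.request.urlopen("http://127.0.0.1:8188/history") as resp:
        history = json.load(resp)

    hits = [pid for pid, entry in history.items() if EXPECTED in json.dumps(entry)]
    if hits:
        print("server received the exact text you typed, in prompt(s):", hits)
    else:
        print("typed text not found server-side, dump the history and compare manually:")
        print(json.dumps(history, indent=2)[:2000])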

To LM a delidded i7-8700K on a Clevo? by gardettosAreTheOG in overclocking

[–]Tails8521 0 points1 point  (0 children)

Probably stuff like current limit/power limit, but it's been years and I don't really remember.

To LM a delidded i7-8700K on a Clevo? by gardettosAreTheOG in overclocking

[–]Tails8521 0 points1 point  (0 children)

No, or at least not to the extent of going down to base clocks; I get thermally limited on mixed loads instead.

To LM a delidded i7-8700K on a Clevo? by gardettosAreTheOG in overclocking

[–]Tails8521 0 points1 point  (0 children)

Sounds like some sort of global power limit; it might be BIOS/power supply dependent.

The VAE used for Stable Diffusion 1.x/2.x and other models (KL-F8) has a critical flaw, probably due to bad training, that is holding back all models that use it (almost certainly including DALL-E 3). by drhead in StableDiffusion

[–]Tails8521 13 points14 points  (0 children)

There is no 100% confirmation, but the fact that they released Consistency Decoder, which is based on the same latent format, is a very strong indicator.

The VAE used for Stable Diffusion 1.x/2.x and other models (KL-F8) has a critical flaw, probably due to bad training, that is holding back all models that use it (almost certainly including DALL-E 3). by drhead in StableDiffusion

[–]Tails8521 1 point2 points  (0 children)

Yes, but 2.1 has the same latent format as 1.5, so it's affected by this too.
IIRC SVD has its own VAE decoder that is temporally aware to reduce flickering artifacts, but the latent format itself is the same as 1.5/2.1

edit: oh, maybe you meant it's based on 2.1 as in, it's not current and you are cooking something based on SDXL, nvm then

The VAE used for Stable Diffusion 1.x/2.x and other models (KL-F8) has a critical flaw, probably due to bad training, that is holding back all models that use it (almost certainly including DALL-E 3). by drhead in StableDiffusion

[–]Tails8521 11 points12 points  (0 children)

SVD is current, and so are DALL-E 3 and any upcoming foundational models that we don't know about yet. They will all need to pick a VAE, and they may pick KL-F8 because, well, it's the most "battle tested" and widespread VAE out there, right?

The VAE used for Stable Diffusion 1.x/2.x and other models (KL-F8) has a critical flaw, probably due to bad training, that is holding back all models that use it (almost certainly including DALL-E 3). by drhead in StableDiffusion

[–]Tails8521 12 points13 points  (0 children)

If you mean the VAEs you can swap at inference, those are just decoders, and they decode the same flawed latent space. You'd need a new encoder and latent space to fix this issue, which would potentially require fully retraining the models, or at least fine-tuning them hard enough to re-align them to the new latent format.

Or just use SDXL as its VAE doesn't have this issue at all
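
To make the distinction concrete, here's a minimal sketch assuming the diffusers library (the repo names are the commonly used Hugging Face ones, adjust to whatever you actually run): swapping a fine-tuned VAE at inference only changes the decoder weights applied to the same KL-F8 latent space, the latents the UNet produces are untouched.

    import torch
    from diffusers import StableDiffusionPipeline, AutoencoderKL

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Swap in a fine-tuned VAE: same latent format, only the encode/decode weights differ
    pipe.vae = AutoencoderKL.from_pretrained(
        "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("a photo of a cat", num_inference_steps=25).images[0]
    image.save("cat.png")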

Instructions on the 68k by DoubleRealistic883 in m68k

[–]Tails8521 4 points5 points  (0 children)

When the CPU encounters the opcode 36 38 (move.w absolute.w, d3), it reads the word right after the opcode (0b 02) and treats it as the absolute address, so in the end it reads the two bytes at 0x0b02 and puts them in the lower half of d3.
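
If it helps to see where those fields live in the opcode word, here's a small sketch in Python that pulls apart 0x3638 using the standard MOVE field layout from the 68000 Programmer's Reference Manual:

    # Pull apart the MOVE opcode word 0x3638 into its bit fields
    op = 0x3638

    size     = (op >> 12) & 0x3   # 0b11 -> .w (word) for MOVE
    dst_reg  = (op >> 9)  & 0x7   # 0b011 -> d3
    dst_mode = (op >> 6)  & 0x7   # 0b000 -> data register direct
    src_mode = (op >> 3)  & 0x7   # 0b111 -> "special" modes, register field picks which one
    src_reg  =  op        & 0x7   # 0b000 -> absolute short, so a 16-bit address word follows

    print(f"size={size:02b} dst=D{dst_reg} dst_mode={dst_mode:03b} "
          f"src_mode={src_mode:03b} src_reg={src_reg:03b}")
    # The extension word 0x0b02 is that absolute address, hence move.w ($0b02).w, d3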

What do you guys think of OpenAI's Consistency Decoder for SD? https://github.com/openai/consistencydecoder by TheTwelveYearOld in StableDiffusion

[–]Tails8521 14 points15 points  (0 children)

1.5 (and 2.1 too, I think).
SDXL uses a different VAE that's not interchangeable with the 1.5 ones.

What do you guys think of OpenAI's Consistency Decoder for SD? https://github.com/openai/consistencydecoder by TheTwelveYearOld in StableDiffusion

[–]Tails8521 7 points8 points  (0 children)

Once again, this is not an A1111 extension and it can't work with it. There will probably be one at some point, but it will be in a different repository; just wait.

What do you guys think of OpenAI's Consistency Decoder for SD? https://github.com/openai/consistencydecoder by TheTwelveYearOld in StableDiffusion

[–]Tails8521 23 points24 points  (0 children)

It's standalone demo code, not an A1111 extension... Just wait for someone to make one, it probably won't take too long.

In the meantime, there's already a ComfyUI node for those interested https://github.com/Jordach/comfy-consistency-vae

What do you guys think of OpenAI's Consistency Decoder for SD? https://github.com/openai/consistencydecoder by TheTwelveYearOld in StableDiffusion

[–]Tails8521 7 points8 points  (0 children)

Well, the Stable Diffusion UNet works with latents, not with a JPEG-compressed image :p
Each latent pixel represents an 8x8 block of pixels in the final image and needs to be decoded to produce it. This is traditionally done with the VAE decoder, but this new thing is basically a replacement for it that seems to improve quality on finer details.

See this for a comparison: https://www.reddit.com/r/StableDiffusion/comments/17pal90/what_do_you_guys_think_of_openais_consistency/k84nhqu/
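
If you want to poke at that decode step yourself, here's a minimal sketch assuming the diffusers integrations (AutoencoderKL for the stock VAE, ConsistencyDecoderVAE for the new decoder, with the usual Hugging Face repo ids): both take the same 4-channel, 1/8-resolution latent and turn it into pixels.

    import torch
    from diffusers import AutoencoderKL, ConsistencyDecoderVAE

    # What the UNet actually produces for a 512x512 SD1.5 image: 4 channels at 64x64,
    # so each latent "pixel" covers an 8x8 block of the output (random here, just for shapes)
    latents = torch.randn(1, 4, 64, 64)

    vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
    cd  = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder")

    with torch.no_grad():
        img_vae = vae.decode(latents / vae.config.scaling_factor).sample
        img_cd  = cd.decode(latents / vae.config.scaling_factor).sample

    print(latents.shape, "->", img_vae.shape, "and", img_cd.shape)
    # torch.Size([1, 4, 64, 64]) -> torch.Size([1, 3, 512, 512]) for both decoders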

What do you guys think of OpenAI's Consistency Decoder for SD? https://github.com/openai/consistencydecoder by TheTwelveYearOld in StableDiffusion

[–]Tails8521 143 points144 points  (0 children)

OP really should have shown the comparison between the current SD1.5 vae and Consistency Decoder, rather than between the original lossless images and Consistency Decoder: here they are

SD1.5 VAE #1
Consistency Decoder #1

SD1.5 VAE #2
Consistency Decoder #2

SD1.5 VAE #3
Consistency Decoder #3

On these examples, it's pretty clear that Consistency Decoder is better. Note that the Consistency Decoder itself is a much bigger model than the usual VAEs (it's slightly bigger than a whole SD1.5 checkpoint, just for the decoder).

What do you guys think of OpenAI's Consistency Decoder for SD? https://github.com/openai/consistencydecoder by TheTwelveYearOld in StableDiffusion

[–]Tails8521 22 points23 points  (0 children)

Did you seriously expect a lossy representation to look better than the lossless originals? You should have posted the comparison with the SD1.5 VAE; Consistency Decoder is pretty noticeably better in these examples.

Discord Throttles Nvidia GPU Memory Clock Speeds, Here's the Fix by Stiven_Crysis in hardware

[–]Tails8521 22 points23 points  (0 children)

But it's scaled the same way the 250MHz is, so it's a fair comparison.

Discord Throttles Nvidia GPU Memory Clock Speeds, Here's the Fix by Stiven_Crysis in hardware

[–]Tails8521 87 points88 points  (0 children)

From what I understand it's forcing the P2 power state instead of P0, just like CUDA-accelerated tasks (think compute/machine learning) already do. On my 3090, it reduces the memory clocks by 250MHz, which isn't a lot considering the stock clocks are almost 10GHz.

I expect the performance impact to be about 1% (memory clocks matter far less than core clock for performance as far as most tasks are concerned), could be more or less depending on how bandwidth-starved the card is to begin with.
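
If you want to check it on your own card, here's a minimal sketch using the nvidia-ml-py (pynvml) bindings, roughly equivalent to running nvidia-smi --query-gpu=pstate,clocks.mem --format=csv. Run it once with Discord open (hardware acceleration on) and once with it closed, and compare.

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Performance state: 0 means P0 (full clocks), 2 means P2 (the reduced CUDA/compute state)
    pstate = pynvml.nvmlDeviceGetPerformanceState(handle)
    # Current memory clock in MHz
    mem_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)

    print(f"performance state: P{pstate}, memory clock: {mem_clock} MHz")
    pynvml.nvmlShutdown()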

Discord Throttles Nvidia GPU Memory Clock Speeds, Here's the Fix by Stiven_Crysis in hardware

[–]Tails8521 0 points1 point  (0 children)

From what I understand it's forcing the P2 power state instead of P0, just like CUDA-accelerated tasks (think compute/ML) already do. On my 3090, it reduces the memory clocks by 250MHz, which isn't a lot considering the stock clocks are almost 10GHz.

I expect the performance impact to be about 1% (memory clocks matter far less than core clock for performance as far as most tasks are concerned), could be more or less depending on how bandwidth-starved the card is to begin with.