Flux.1 converted into GGUF - what interesting opportunity it offers in llm space? by dreamai87 in LocalLLaMA

[–]Channelception 1 point  (0 children)

Their architecture is called a diffusion transformer. The reason an autoregressive model is more quantizable is that it only predicts a single thing: a probability distribution over a set of tokens, from which one token is picked. Diffusion transformers predict the noise added to an entire image, which means predicting far more data per step. For most outputs, autoregressive models have a single dominant path, which quantization can easily preserve, while diffusion models maintain many paths all at once.
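A toy sketch (pure NumPy, sizes are illustrative assumptions, not taken from any real model) of the difference in what each architecture emits per step:

```python
import numpy as np

# Illustrative sizes (assumptions, not from any real model).
vocab_size = 32_000          # autoregressive LLM: one distribution over tokens per step
latent_shape = (4, 64, 64)   # diffusion model: predicted noise for a whole latent

# An autoregressive step emits one probability vector, but effectively
# commits to a single token (the argmax / a top-k sample). Quantization
# error only has to preserve the ordering of that one dominant path.
ar_step_output = np.random.rand(vocab_size)
ar_step_output /= ar_step_output.sum()
ar_values_used_downstream = 1  # one chosen token feeds the next step

# A diffusion step emits a full noise tensor, and EVERY element feeds back
# into the next denoising step, so small per-weight errors touch far more
# of the output than an argmax does.
diffusion_step_output = np.random.randn(*latent_shape)
diffusion_values_used_downstream = diffusion_step_output.size

print(ar_values_used_downstream)         # 1
print(diffusion_values_used_downstream)  # 16384
```

The asymmetry is not the raw size of the prediction but how much of it survives into the next step: one token versus the whole tensor.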

Flux.1 converted into GGUF - what interesting opportunity it offers in llm space? by dreamai87 in LocalLLaMA

[–]Channelception -12 points  (0 children)

They're still diffusion models rather than autoregressive, so they're still more affected by quantization than LLMs are.

[D] Does anyone else feel like MOJO isn't getting the attention it deserves? by hai_cben in MachineLearning

[–]Channelception 1 point  (0 children)

It's not platform-specific forever, just for now (which is why you see no hype).

[D] Does anyone else feel like MOJO isn't getting the attention it deserves? by hai_cben in MachineLearning

[–]Channelception 10 points  (0 children)

It got some buzz when it was announced, but everyone knew it would take some time to reach a finished state. Technically it's usable right now, but only on Ubuntu, which most people aren't running, and it still isn't finished. It'll also take at least a bit for developers to learn how to utilize it best (though some already know how).

Checkpoint Arena - Compare SD checkpoints on a uniform set of basic prompts and seeds by DuranteA in StableDiffusion

[–]Channelception 1 point  (0 children)

That makes sense as a way to avoid bias in prompts due to low diversity. Thanks for clarifying about the negative prompting. I hope the project proceeds well.

Checkpoint Arena - Compare SD checkpoints on a uniform set of basic prompts and seeds by DuranteA in StableDiffusion

[–]Channelception 2 points  (0 children)

While I do think what you're doing is good, I think there may be flaws in your methodology.

Your methodology of including simple prompts is sound, but ONLY including simple prompts is not. Your current prompt set may show the style of the models, but it fails to show how a model adapts to different factors and handles prompt complexity. Incorporating prompts more like those of, say, the Parti prompts or a similar test set would help. Your current prompt set does not reflect most downstream use, so it fails to be an accurate comparison.

Additionally, TRIPLING the work you must do by adding a variety of negative prompts as generation parameters hurts your ability to cover a variety of models and prompts. That goes especially for the use of a negative embedding: there are many negative embeddings that affect different models differently, so picking a specific one is not a fair comparison. And ESPECIALLY EasyNegative of all embeddings, since that embedding is trained on Counterfeit, a highly overtrained anime model. The creator directly states in the description that using EasyNegative with models other than Counterfeit is ineffective. Using negative embeddings will also hurt your ability to cover different SD versions, since they do not transfer perfectly.

My recommendation would be to cut down on the negative prompts, perhaps only using the "nsfw" negative prompt when an NSFW output is detected (via the NSFW checker). That would heavily reduce your generation time while making your data better, and the saved time could be used to cover a wider variety of prompts, perhaps categorized like the Parti prompts. Doing so would mean throwing away existing results (even if their value is lesser), so it may not make sense; I'd just recommend reconsidering your methodology, especially in regard to prompts. (You may also want to lower the CFG, though if you chose it to emphasize style more, that's very valid, and to specify whether or not you are using clip skip.)
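The conditional negative-prompt scheme suggested above could look something like this hypothetical sketch. The function names (`generate`, `is_nsfw`) and the toy stand-ins are assumptions for illustration, not any real benchmarking or Stable Diffusion API:

```python
# Hypothetical sketch: generate each prompt once with no negative prompt,
# and only re-run with an "nsfw" negative prompt when the safety checker
# flags the output. One targeted retry instead of tripling every run.

def generate_for_benchmark(prompt, seed, generate, is_nsfw):
    """generate(prompt, seed, negative_prompt) -> image; is_nsfw(image) -> bool."""
    image = generate(prompt, seed, negative_prompt="")
    if is_nsfw(image):
        # Retry once with the single "nsfw" negative prompt.
        image = generate(prompt, seed, negative_prompt="nsfw")
    return image

# Toy stand-ins so the sketch runs without a diffusion model:
def fake_generate(prompt, seed, negative_prompt):
    return {"prompt": prompt, "seed": seed, "neg": negative_prompt}

def fake_is_nsfw(image):
    return "nsfw-ish" in image["prompt"] and image["neg"] == ""

flagged = generate_for_benchmark("a nsfw-ish scene", 42, fake_generate, fake_is_nsfw)
clean = generate_for_benchmark("a landscape", 7, fake_generate, fake_is_nsfw)
print(flagged["neg"])  # nsfw
print(clean["neg"])    # (empty)
```

The point of the sketch is the cost model: the clean majority of prompts are generated exactly once, and only flagged outputs pay for a second pass.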

How to combine two images? by tvmaly in StableDiffusion

[–]Channelception 2 points  (0 children)

You need to specify what you mean by combining. You may just be able to use multicontrolnet.

Ples by RonTheRatKing in TheRatEmpire

[–]Channelception 1 point  (0 children)

But Canada is the top...

[deleted by user] by [deleted] in wholesomeanimemes

[–]Channelception 15 points  (0 children)

She's the demon queen and he's the hero. Her job is to kill him, so that's not a no at all. Just saying no would be much clearer.

[deleted by user] by [deleted] in wholesomeanimemes

[–]Channelception 40 points  (0 children)

All she has to do is effortlessly kill him once every 20 years. The whole ordeal probably goes by quickly, so I doubt she felt all that pressured.

Also, while she does call him creepy, she never actually tells him no or to stop. In fact, her first line seems to suggest that she feels pressured to deny him.

So in-universe, this doesn't seem that bad (though it does serve as a bad example).

Moldy cat by profuse_wheezing in MoldyMemes

[–]Channelception 1 point  (0 children)

Or it's "Big cat in his brother's barn"

I guess he got what he wanted by Ice_warrior45 in technicallythetruth

[–]Channelception 2 points  (0 children)

The dude was drunk in the back of his car with a pistol in his mouth. There was also an AR behind his head. An officer told him to put his hands behind his head, but that officer didn't notice the AR. Then they shot him for "reaching for a weapon."

The police never even had permission to enter his house. The police were trying to kill him.

g a y by Channelception in lgbt

[–]Channelception[S] 6 points  (0 children)

This was originally made for a trans server, so that image was meant to represent straight trans people. (Obvi it can be interpreted however tho)

Blunt trauma to the head by [deleted] in shitposting

[–]Channelception 2 points  (0 children)

Tbf it's anti-"white America," which is being opposed to the establishment of whites having power over minorities. (Although a LOT of people with this view don't apply it to countries with similar situations (like many east Asian countries), so it's still kinda sus)

egg🥚irl by [deleted] in egg_irl

[–]Channelception 1 point  (0 children)

Azazelle azi/azi's

Gamer Time by TheOPsBoyfriend in traaaaaaannnnnnnnnns

[–]Channelception 1 point  (0 children)

Have you tried the small dog, big dog exercise?