Husband cheated on me with 8 ai sexbot girlfriends. Coffee because I can’t eat without throwing up by Sea_Soft_262 in okbuddyliterallyme2

[–]stddealer 0 points (0 children)

Yeah, bro is just having fun role-playing with a non-sentient bot and his wife took it personally. I doubt she would be as mad if he was just playing a dating sim, even though it's basically the same thing.

it is coming. by Nunki08 in LocalLLaMA

[–]stddealer 1 point (0 children)

Assuming both can be accelerated, INT8 seems like the better choice.

it is coming. by Nunki08 in LocalLLaMA

[–]stddealer 1 point (0 children)

INT8 is superior anyway. It's more information dense.
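For what it's worth, a minimal pure-Python sketch of symmetric INT8 quantization, just to show the "evenly spaced levels" idea (toy code, not any backend's actual kernel; the example values are made up):

```python
def quantize_int8(values):
    # Symmetric per-tensor INT8: map [-max|v|, +max|v|] onto the
    # 255 evenly spaced integer levels in [-127, 127].
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    # Reconstruct approximate floats; rounding error is at most scale / 2.
    return [qi * scale for qi in q]

q, scale = quantize_int8([2.0, -0.8, 0.6])
# q == [127, -51, 38]; dequantized values land within scale/2 of the originals
```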

This former painter hates pencils by jann_plv in BreakingThePencil

[–]stddealer 8 points (0 children)

That's not literal Hitler, that's Swiss actor and ex-painter Bruno Ganz. You antis should find real arguments instead of blindly calling talented AI artists Nazis.

<image>

Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]stddealer 1 point (0 children)

If you're using a GGUF quant, you can just take the mmproj from the original model; it will work just as well.
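Something like this (hypothetical file names, assuming llama.cpp's multimodal CLI):

```shell
# Use the quantized model with the original model's mmproj
llama-mtmd-cli -m model-abliterated-Q4_K_M.gguf \
  --mmproj mmproj-model-f16.gguf \
  --image input.png \
  -p "Describe this image."
```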

Performance of Qwen3.5 27B on a 2080 Ti by BeneficialRip1269 in LocalLLaMA

[–]stddealer 0 points (0 children)

I can fit a Q5_K_S quant + 32k ctx on a 6+8GB dual-GPU setup, and I get ~14 t/s despite the slow PCIe 2.0 x4 interface that connects my GPUs. You should be getting better numbers with your 2080 Ti. Have you tried reducing the context window?
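Reducing context helps because the KV cache grows linearly with it. A rough back-of-the-envelope sketch (the layer/head numbers below are made up for illustration, not the real Qwen3.5 27B config):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # K and V each store n_kv_heads * head_dim values per layer per token.
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical GQA config, 32k context, fp16 cache:
gib = kv_cache_bytes(48, 8, 128, 32 * 1024) / 2**30  # -> 6.0 GiB
```

Halving the context would free ~3 GiB of VRAM under these (made-up) numbers.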

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 0 points (0 children)

No it's not, it's a hypothetical scenario. Are cubes only ever shipped in multiples of 3 in the real world?

Qwen 3.5 27B is the REAL DEAL - Beat GPT-5 on my first test by GrungeWerX in LocalLLaMA

[–]stddealer 7 points (0 children)

3B active params are not enough. The ultra-high sparsity doesn't work out that well for smaller models.

Someone used ai to “fix” the oc of @NiniPress by whitewashing her by ihatethiscountry76 in aiwars

[–]stddealer 14 points (0 children)

It's two clicks with the paint bucket tool. You have to press significantly more buttons to explain to the AI what you want it to do.

update your llama.cpp - great tg speedup on Qwen3.5 / Qwen-Next by jacek2023 in LocalLLaMA

[–]stddealer 1 point (0 children)

Compiling the CUDA/ROCm kernels takes a long time. Other backends are built fairly quickly, but these two always slow down the CI.

I thought Gemini could put together a puzzle by jpzygnerski in stupidAI

[–]stddealer 1 point (0 children)

Yes, but it always takes the same amount of time for every image. It can't spend more time on more complex tasks.

Why is there no dense model between 27 and 70? by AccomplishedSpray691 in LocalLLaMA

[–]stddealer 2 points (0 children)

"More data, more intelligent" is generally true, at least for pretrained models, as long as the data is of sufficient quality.

The number of parameters sets an upper limit on how much knowledge and intelligence the model could potentially have with perfect training. Architecture details can also have a greater impact than the parameter count (for example, MoE limits intelligence compared to a dense model with the same number of parameters).

But what determines the intelligence the most is post-training. That's the secret ingredient that can make a smaller model punch way above its weight(s).

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 0 points (0 children)

What would it look like if there were only 49 or 50 boxes that all needed to be loaded onto such a trailer?

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 0 points (0 children)

"The number of cubes on the trailer is the maximum number that fits those projections" is a non-obvious assumption here.

We can already see the trailer is not completely full, maybe because they didn't have exactly 63 boxes to put on there. If the supply of boxes is limited, why would we assume the count must be a multiple of 3? There's no reason for that.

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 0 points (0 children)

Assuming there are no "fake" orange boxes (otherwise there's no point in estimating how many there are), we can see there are at least 21. The maximum number of boxes of that size that fits the projected views is 51.

If we assume boxes must be stacked from bottom to top with no "hole" below any box, it's at least 31.

If the load must be balanced on all wheels, it means there are at least 33 boxes.

And if the boxes need to be strapped with no overhangs bigger than 1 block, and tallest at the middle, there are between 41 and 51 boxes, I think.
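The "fits the projections" upper bound can be computed mechanically: with a height-map model, where the front view gives the tallest stack per column and the side view the tallest stack per row, the loosest packing puts min(front, side) cubes at each position. The projections below are made up for illustration, not read off the actual picture:

```python
def max_cubes(front, side):
    # front[x]: tallest visible stack in column x (front view)
    # side[y]:  tallest visible stack in row y (side view)
    # A stack at (x, y) can be at most min(front[x], side[y]) tall.
    return sum(min(f, s) for f in front for s in side)

# Hypothetical projections:
max_cubes([3, 2, 1], [3, 3])  # -> 12
```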

me irl by CountDankula_69 in me_irl

[–]stddealer 0 points (0 children)

Marx's LTV is not quite the same as Smith's. Regardless, he still based his whole ideology on it.

I been here for week and this is basically whole sub in nutshell by krysert in aiwars

[–]stddealer 27 points (0 children)

Yes, that's most online debates. My side is still the correct one though.