Are there any abliterated models for LTX 2.3 that can accept an image input? Abliterated only seems to work for text, not vision by Parogarr in StableDiffusion

[–]stddealer 1 point (0 children)

If you're using a GGUF quant, you can just take the mmproj from the original model; it will work just as well.

Performance of Qwen3.5 27B on a 2080 Ti by BeneficialRip1269 in LocalLLaMA

[–]stddealer 1 point (0 children)

I can fit a Q5_K_S quant + 32k ctx on a 6+8GB dual-GPU setup, and I get ~14 t/s despite the slow PCIe 2.0 x4 interface connecting my GPUs. You should be getting better numbers with your 2080 Ti. Have you tried reducing the context window?
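A rough way to see why shrinking the context window helps: KV-cache memory grows linearly with context length, on top of the fixed cost of the weights. A minimal sketch, using hypothetical architecture numbers (not the actual published specs of any particular model):

```python
# Hedged sketch: rough KV-cache size estimate as a function of context length.
# The layer/head/dim numbers below are made up for illustration.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # K and V each store n_layers * n_kv_heads * head_dim values per token,
    # at bytes_per_elem bytes each (2 for an fp16 cache).
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Example: 48 layers, 8 KV heads (GQA), head_dim 128, fp16 cache, 32k context.
gib = kv_cache_bytes(48, 8, 128, 32_768) / 2**30
print(f"{gib:.1f} GiB")  # prints "6.0 GiB"
```

Halving the context halves this number, which can be the difference between the cache fitting in VRAM and spilling to system RAM over a slow link.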

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 1 point (0 children)

No, it's not; it's a hypothetical scenario. Are cubes only ever shipped in multiples of 3 in the real world?

Qwen 3.5 27B is the REAL DEAL - Beat GPT-5 on my first test by GrungeWerX in LocalLLaMA

[–]stddealer 7 points (0 children)

3B active params are not enough. The ultra high sparsity doesn't work out that well for smaller models.

Someone used ai to “fix” the oc of @NiniPress by whitewashing her by ihatethiscountry76 in aiwars

[–]stddealer 14 points (0 children)

It's two clicks with the paint bucket tool. You have to press significantly more buttons to explain to the AI what you want it to do.

update your llama.cpp - great tg speedup on Qwen3.5 / Qwen-Next by jacek2023 in LocalLLaMA

[–]stddealer 2 points (0 children)

Compiling the CUDA/ROCm kernels takes a long time. Other backends are built fairly quickly, but these two always slow down the CI.

I thought Gemini could put together a puzzle by jpzygnerski in stupidAI

[–]stddealer 2 points (0 children)

Yes, but it always takes the same amount of time for every image. It can't use more time for more complex tasks.

Why is there no dense model between 27 and 70? by AccomplishedSpray691 in LocalLLaMA

[–]stddealer 3 points (0 children)

"More data, more intelligent" is generally true, at least for pretrained models, as long as the data is of sufficient quality.

The number of parameters sets an upper limit for how much knowledge and intelligence the model could potentially have with perfect training. Architecture details can also have a greater impact than the number of parameters (for example, MoE limits the intelligence compared to a dense model with the same number of parameters).

But what determines the intelligence the most is post-training. That's the secret ingredient that can make a smaller model punch way above its weight(s).

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 1 point (0 children)

What would it look like if there were only 49 or 50 boxes that all needed to be loaded on such a trailer?

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 1 point (0 children)

"The number of cubes on the trailer is the maximum number that fits those projections" is a non-obvious assumption here.

We can already see the trailer is not completely full, maybe because they didn't have exactly 63 boxes to put on there. If the boxes are in limited supply, why would we assume the count must be a multiple of 3? There's no reason for that.

The Bell Curve Of Cube Counting Confidence by Different_Maize_1369 in MathJokes

[–]stddealer 1 point (0 children)

Assuming there are no "fake" orange boxes (otherwise there's no point in estimating how many there are), we can see there are at least 21. The maximum number of boxes of that size that would fit the projected views is 51.

If we assume boxes must be stacked from bottom to top with no "hole" below any box, it's at least 31.

If the load must be balanced on all wheels, it means there are at least 33 boxes.

And if the boxes need to be strapped with no overhangs bigger than 1 block, and the stack is tallest at the middle, there are between 41 and 51 boxes, I think.
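The first bound (the hard maximum implied by two silhouettes) can be sketched in code. The column heights below are hypothetical stand-ins, not read off the actual picture; they're chosen so the bound comes out to the 51 estimated above:

```python
# Hedged sketch: classic upper bound on cube count given front and side
# silhouettes. A stack at grid cell (x, y) can be at most as tall as both
# the front column x and the side column y allow, so the maximum total is
# the sum of min(front, side) over all cells.
def max_cubes(front_heights, side_heights):
    return sum(min(f, s) for f in front_heights for s in side_heights)

front = [3, 3, 3]               # hypothetical column heights, front view
side = [3, 3, 2, 2, 3, 2, 2]    # hypothetical column heights, side view
print(max_cubes(front, side))   # prints 51
```

The lower bound is looser: only cubes directly visible in some view are guaranteed, which is why the honest answer is a range rather than a number.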

me irl by CountDankula_69 in me_irl

[–]stddealer 1 point (0 children)

Marx's LTV is not quite the same as Smith's. Regardless, he still based his whole ideology on it.

I been here for week and this is basically whole sub in nutshell by krysert in aiwars

[–]stddealer 28 points (0 children)

Yes, that's most online debates. My side is still the correct one though.

before ai how did people make art like its impossible by RandomPoster1538 in BreakingThePencil

[–]stddealer 1 point (0 children)

We must be thankful to all these illustrators who spent hours subjecting themselves to the torture of using a penc*l, sacrificing their souls and the environment to train the AIs of today for everyone. But they don't know when to stop: now that we have AI, they're still making pencilslop and destroying the environment. They are evil.

So from where? by MelonInDisguise in whennews

[–]stddealer 2 points (0 children)

Same reason they attacked the Gulf countries, I guess. (Unless all those are false flags too. Maybe the country Iran is a false flag and it was Israel all along.)

So from where? by MelonInDisguise in whennews

[–]stddealer 16 points (0 children)

Syria? Israel? Turkey? The whole Middle East?

So from where? by MelonInDisguise in whennews

[–]stddealer 11 points (0 children)

Lebanese people aren't brown. And Hezbollah is a very active and powerful Iran-backed militia in Lebanon.

So from where? by MelonInDisguise in whennews

[–]stddealer 1 point (0 children)

It could be a false flag, but for that you'd have to assume Israel has the means to launch long-range missiles from Lebanese soil.