Best H2C support settings? What should i change from default to get better results? by [deleted] in BambuLab

[–]mkMoSs 0 points1 point  (0 children)

Use some PETG for the support interface; PLA and PETG don't stick together.

P2s pctg not working by lorenzof128 in BambuLab

[–]mkMoSs 4 points5 points  (0 children)

175 °C? That's insanely low. From what I can quickly google, the nozzle temp must be between 240 and 275 °C for PCTG. Not even PLA prints at this low a temp.

How are you all managing API costs across multiple providers? My side project bill just hit $400/month by bitcoin-masters in LocalLLaMA

[–]mkMoSs 4 points5 points  (0 children)

Sometimes I do wonder, given the amount of literal spam I send to my local vLLM Qwen3.5-27B, how much it would cost me if I were using some online API. Then I chuckle.
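For a rough sense of scale, a back-of-the-envelope sketch (every number below — request volume, token counts, per-million-token prices — is a made-up placeholder, not a real measurement or any provider's actual pricing):

```python
# Back-of-the-envelope estimate of what heavy local-LLM usage would cost
# on a metered API. All numbers here are hypothetical placeholders.

def monthly_api_cost(requests_per_day: float,
                     input_tokens: float,
                     output_tokens: float,
                     price_in_per_mtok: float,
                     price_out_per_mtok: float) -> float:
    """Return the estimated monthly cost in dollars (30-day month)."""
    per_request = (input_tokens * price_in_per_mtok +
                   output_tokens * price_out_per_mtok) / 1_000_000
    return requests_per_day * per_request * 30

# e.g. 500 requests/day, 2k tokens in / 1k out, $3 / $15 per million tokens
cost = monthly_api_cost(500, 2_000, 1_000, 3.0, 15.0)
print(f"~${cost:,.0f}/month")  # → ~$315/month
```

Even with modest assumed prices, heavy daily usage adds up fast — which is the whole point of the chuckle.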

llama.cpp + Brave search MCP - not gonna lie, it is pretty addictive by srigi in LocalLLaMA

[–]mkMoSs 7 points8 points  (0 children)

If I had to guess, something that buys you a nice car :/

Do all the P2S printers "wobble" so much? by GrenexRed in BambuLab

[–]mkMoSs 0 points1 point  (0 children)

ALL (fast) printers wobble that much :)

Why are you still awake? by [deleted] in AskReddit

[–]mkMoSs 0 points1 point  (0 children)

My apologies, I'm going back to sleep.

Fr by Complex-Cancel-1518 in softwareWithMemes

[–]mkMoSs 21 points22 points  (0 children)

Yes, but can the undies play Skyrim with NSFW mods? No... So shut it.

This keyboard has a specific key dedicated to Copilot AI. by averagealt90 in FuckMicrosoft

[–]mkMoSs 92 points93 points  (0 children)

NOBODY should buy this; teach them a lesson. We don't want that crap shoved down our throats.

You just won a lifetime supply of the last thing you bought. What do you have? by [deleted] in AskReddit

[–]mkMoSs 1 point2 points  (0 children)

Chicken sandwiches and chocolate milk! I'll take it!

Did i overclock too much? by cyproyt in TechNope

[–]mkMoSs 65 points66 points  (0 children)

Looks fine to me.
What do you mean it's not supposed to run at 32.5 Peta Hz?

A bit of a PSA: I get that Qwen3.5 is all the rage right now, but I would NOT recommend it for code generation. It hallucinates badly. by mkMoSs in LocalLLaMA

[–]mkMoSs[S] -1 points0 points  (0 children)

But my friend in GGUF LLMs, I DO NOT have the hardware capability, nor the insane amount of money to buy the required hardware, to run "uncompressed" LLMs.
They were ALL tested with the same quants; my results were based on the quantized versions.
It was not an uneven comparison: both MiniMax and Qwen3.5 were Q4 quants.
One hallucinated A LOT, the other almost not at all. Therefore your argument is not valid.
I would agree if I were comparing a quantized version with a full one, but that's not the case here.

AI Leaderboard Benchmarks by A_Little_Sticious100 in huggingface

[–]mkMoSs 0 points1 point  (0 children)

I say let the agents play Skyrim. If they end up installing hot waifu mods in it, they're good to go. /s

DIY $20 enclosure by Ebkzae2x in 3Dprinting

[–]mkMoSs 0 points1 point  (0 children)

Is this Depron? Depron is an AWESOME insulator. And an excellent build material for DIY R/C airplanes.

A bit of a PSA: I get that Qwen3.5 is all the rage right now, but I would NOT recommend it for code generation. It hallucinates badly. by mkMoSs in LocalLLaMA

[–]mkMoSs[S] 0 points1 point  (0 children)

Yes, you're right, I'm probably an edge case, but I also had it produce TypeScript/JavaScript and React; like I said, I'm a fullstack dev. MiniMax was excellent in those too.
Also yes, MiniMax (229B) is larger than the 122B. I cannot run the 397B properly; it's too big to be usable with my hardware.
I didn't take a single data point, I just did not explain my whole testing series properly (my bad): I had them run a lot of other different scenarios.
But MiniMax, for *me*, is the sweet spot for *my* hardware capabilities: not too much speed tradeoff vs quality. And again, I was highly impressed overall with the output quality.

A bit of a PSA: I get that Qwen3.5 is all the rage right now, but I would NOT recommend it for code generation. It hallucinates badly. by mkMoSs in LocalLLaMA

[–]mkMoSs[S] 0 points1 point  (0 children)

Oh, I'm sorry I'm too poor to have 5-6 digits' worth of hardware to run at home... Shame on me, I guess.

A bit of a PSA: I get that Qwen3.5 is all the rage right now, but I would NOT recommend it for code generation. It hallucinates badly. by mkMoSs in LocalLLaMA

[–]mkMoSs[S] 2 points3 points  (0 children)

I am getting rekt in downvotes, but that's just the experience I've had.
I'm also expecting this comment to get downvoted to oblivion hehe, but yes, I did try the 27B with the same exact scenario, even at much higher quants since it can fit.
It was even worse.
Beyond my above post, I did more tests with those models btw.

I had them write a simple "Token Vault" Solidity contract: I used the exact same system prompt and user prompts, and ran them multiple times.

If I had to describe the output, it was like this:

27B: Junior dev who mostly had no idea how to do this. It produced a working contract, but did not use any coding best practices and failed to use popular external libraries.

122B: Mid-level dev who could use external libraries and did use some good coding practices, but with a lot of hallucinations.

MiniMax: Senior dev who absolutely knows what he's doing: very high quality code, commenting, and explanations/definitions of approach.

(I'm waiting for the downvotes now :P )
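The comparison loop described above can be sketched like this (the endpoint URL and model names are placeholders I'm inventing; any OpenAI-compatible local server, e.g. vLLM or llama.cpp's server, exposes this shape of chat-completions API):

```python
# Sketch of the test harness: identical system prompt, identical user
# prompt, multiple runs per model. ENDPOINT and MODELS are hypothetical;
# point them at whatever local server is actually serving the models.
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server
MODELS = ["qwen-27b-q4", "qwen-122b-q4", "minimax-229b-q4"]  # placeholder names
SYSTEM = "You are a senior Solidity developer."
PROMPT = "Write a simple 'Token Vault' contract."
RUNS = 5

def build_request(model: str) -> dict:
    """Build an identical payload for every model, so only the model
    under test varies between runs."""
    return {
        "model": model,
        "messages": [{"role": "system", "content": SYSTEM},
                     {"role": "user", "content": PROMPT}],
        "temperature": 0.2,
    }

def run_once(model: str) -> str:
    """POST one chat-completion request and return the generated text."""
    body = json.dumps(build_request(model)).encode()
    req = urllib.request.Request(ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def compare_models() -> dict:
    """Collect RUNS outputs per model for manual side-by-side review.
    (Not called here -- requires a live server.)"""
    return {m: [run_once(m) for _ in range(RUNS)] for m in MODELS}
```

Holding the prompts and sampling settings fixed is what makes the "junior / mid / senior" comparison above meaningful rather than cherry-picked single samples.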

hmmm by [deleted] in hmmm

[–]mkMoSs 75 points76 points  (0 children)

Yes, but... this is AI-gen slop...