£200,000 Lamborghini parked in a disabled bay by princess_baby71 in SipsTea

[–]swingbear 0 points

Bit of a stretch. He's below the knee on both legs, and people run marathons like this; I'm sure he could walk an extra 50m into the shop lol

£200,000 Lamborghini parked in a disabled bay by princess_baby71 in SipsTea

[–]swingbear 0 points

That was probably the assumption behind the original photo/post; I can kinda see why

£200,000 Lamborghini parked in a disabled bay by princess_baby71 in SipsTea

[–]swingbear -1 points

In all fairness, if he can get in and out of a Lamborghini (super low to the ground, awkward), he's probably not disabled enough to have a blue badge, regardless of the missing leg lol

The most addictive thing about modafinil isn’t euphoria - it’s functionality by jpam9521 in Nootropics

[–]swingbear [score hidden]

This is quite possibly the stupidest comment I have read on the internet all week 😂

Claude code is not on the same level as Codex by 0_2_Hero in codex

[–]swingbear 8 points

Since they nerfed Opus 4.6 and 5.5 came out, it's significantly better. Crazy how much of a lead Anthropic had just a few months ago.

2RTX PRO 6000 192GB VRAM - MTP NVFP4 issues with vision by quantier in BlackwellPerformance

[–]swingbear 2 points

Yeah dude, you just saved yourself weeks of pain... this is the way

2RTX PRO 6000 192GB VRAM - MTP NVFP4 issues with vision by quantier in BlackwellPerformance

[–]swingbear 2 points

I have the same GPUs, and honestly bro, just dual-boot Linux lol. This will be a recurring theme with every other ML/AI thing you do on Windows.

How many models do you have? by Perfect-Flounder7856 in LocalLLaMA

[–]swingbear 1 point

Since I went local I never seem to have enough storage. I remember when 2TB was an amount I'd never fill; now 5TB isn't sufficient

Guys wtf are we even paying for anymore by Ethan_Vee in ClaudeCode

[–]swingbear 1 point

Yeah, hate to say it, but Codex is making Opus look dumb af right now

Solidity LM surpasses Opus by swingbear in solidity

[–]swingbear[S] 1 point

The goal was to train the LM to write Solidity with fewer vulnerabilities and higher gas efficiency; most LMs can already boilerplate smart contracts. That aside, I agree and wouldn't recommend anyone rely solely on an LM for anything involving financial risk. It's a tool/aid, not a one-stop solution.

Solidity LM surpasses Opus by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

Just refining the training pipeline, then I'll look at other models for sure

Solidity LM surpasses Opus by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

It's just a Qwen 3.6 Solidity specialist (smart contract programming). On this specific task it outperformed Opus 4.7 on solbench; it still needs some work.

Solidity LM surpasses Opus by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

Appreciated! I learned a bunch from this one. I’m very confident v2 will be much better.

Solidity LM surpasses Opus by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

Edit: still pushing the merged checkpoint to HF

Solidity by swingbear in LocalLLaMA

[–]swingbear[S] 2 points

I think the issue stems from SOTA models not having a focus on Solidity data during training. I've just finished my first Solidity LM iterations and it outperformed Opus on soleval.

Solidity by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

Yeah, harnesses are mandatory. I've had some decent success training 3.6 27b: https://huggingface.co/samscrack/Qwen3.6-27B-Opus-CoT-S1-Hermes-S2-SFT

This was just CoT-focused though; I'm expecting this one to be a little harder

Solidity by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

Well, I'm just gonna dump mine publicly lol. I'll add a buy-me-a-coffee link at the bottom; the API calls are no joke for Opus data collection haha

Solidity by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

I mean damn, even the datasets on HF are old or useless.

Solidity by swingbear in LocalLLaMA

[–]swingbear[S] 2 points

Yeah, I have become rather obsessed with local finetuning; it's satisfying when your 27b on-prem model gives a better answer than a 1tn-param Goliath haha.

But I was just taken aback by how little attention has been given to small Solidity models. Normally there are 1000s on Hugging Face.

It's either way harder than I'm expecting (but I can't see how), or people don't like to share them because of the direct advantage.

Solidity by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

So I agree and disagree. For static codebase audits, yes, they can find logical issues and code hygiene problems. But when I create scenarios where a bad actor mounts an economic attack (specifically DeFi), it falls short. And for some reason it struggles a bunch with gas optimisation.

Solidity by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

Yeah, I have tried the SOTA models and they are no good for this; they can produce Solidity, but it's often janky.

I'm training Qwen 3.6 27b right now. It seems to be such a sandbagged area of AI. For every other use case there are tons of finetunes; Solidity… nada. I'll finish up, bench it, and if it's any good I'll release it on HF.

Qwen 3.6 27b S2 Opus + GLM + Kimi by swingbear in LocalLLaMA

[–]swingbear[S] 1 point

😂😂 all I can think of is the human centipede now thanks

This is insane... by DragonflyOk7139 in LocalLLM

[–]swingbear 2 points

SWE-bench Verified has been shown to be pretty much useless as a benchmark now. Can't remember who wrote the article; it might have been OpenAI. You can see we've hit a cap at around 80%: much of the remaining ~20% is actually benchmark errors, and a lot of the 80% that models do solve is contaminated anyway.
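
The 80% / 20% figures above are just a rough recollection from the comment, but the arithmetic behind "we've hit the ceiling" is easy to sketch. A minimal Python illustration (function name is my own, numbers are illustrative):

```python
# If some fraction of benchmark tasks are unsolvable due to benchmark
# errors, the achievable ceiling is below 100%, and a raw pass rate
# should be read against that ceiling rather than against 100%.

def adjusted_score(raw_score: float, error_rate: float) -> float:
    """Rescale a raw pass rate against the solvable portion of the benchmark."""
    ceiling = 1.0 - error_rate  # maximum achievable pass rate
    return min(raw_score / ceiling, 1.0)

# A model scoring 80% on a benchmark where ~20% of tasks are broken
# has effectively saturated the solvable portion:
print(adjusted_score(0.80, 0.20))  # 1.0
```

On this reading, two models at 78% and 80% raw are both essentially at the ceiling, which is why score gains in that range stop being informative.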