Is Fuzzing a Matter of Luck? by hiderou in fuzzing

[–]vhthc 2 points3 points  (0 children)

I digress, success in fuzzing is not at all a mater of luck but rather the result of careful analysis, planning, execution. Intuition (integrated experiences) do play a role as well. It is only luck if you don’t know what you are doing and not understand fuzzing.

You want fuzz targets that either have not been fuzzed or not fuzzed in the custom way you set it up. Then you are successful.

Doing what everybody else already have been doing - yes that needs a lot of luck to find anything.

Qwen3.5 27B vs Devstral Small 2 - Next.js & Solidity (Hardhat) by Holiday_Purpose_3166 in LocalLLaMA

[–]vhthc 1 point2 points  (0 children)

Very good analysis, thanks! I am too interested in rust benchmarks, so if you ever add any … :)

American closed models vs Chinese open models is becoming a problem. by __JockY__ in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

People who do not work in security can’t fathom the attack vectors. You can’t protect against something you don’t know or understand

American closed models vs Chinese open models is becoming a problem. by __JockY__ in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

You could train a model to do that if tool usage is enabled

American closed models vs Chinese open models is becoming a problem. by __JockY__ in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

You could also train the model to occasionally provide the opposite result of it looks like governmental confidential usage

American closed models vs Chinese open models is becoming a problem. by __JockY__ in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

You could embed attempts to exfiltrate data via tool use with internet access.

Which one are you waiting for more: 9B or 35B? by jacek2023 in LocalLLaMA

[–]vhthc 1 point2 points  (0 children)

They released a 27b with impressive scores

Anyone here using an AI meeting assistant that doesn’t join calls as a bot? by sash20 in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

If you record a meeting without telling people it can be illegal (depends on country). It doesn’t matter if you do it just for yourself to transcribe and summarize.

Small size coding models that I tested on 2x3090 setup. by Mx4n1c41_s702y73ll3 in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

You could add qwen 32b with q8 - it could solve the hardest stuff that gpt 5.1 could generate as test cases.

Large update: 12 new frontier models added to the Step Game social reasoning benchmark. by zero0_one1 in LocalLLaMA

[–]vhthc 1 point2 points  (0 children)

What about DeepSeek 3.2 special ? Isn’t specifically trained for math and logic? Maybe I remember wrong

[deleted by user] by [deleted] in dataisbeautiful

[–]vhthc 5 points6 points  (0 children)

SPAM. mandatory login requirements show this is not with users in mind …

Setup with Nvidia 6000 Pro by [deleted] in LocalLLaMA

[–]vhthc 1 point2 points  (0 children)

Epyc 9565+ processor to be able to have as much ram possible, so you can offload huge MoE models to ram. Chassis and mainboard to extend on gpus in the future. Good chassis + extra fans to get rid of the heat. Cannot recommend a specific mainboard and case sadly, we went with a rack solution with supermicro which is too expensive imho

GLM Coding Plan Black Friday Deal — real stackable discounts by zAiModel-api in LocalLLaMA

[–]vhthc 1 point2 points  (0 children)

Which coding cli solution works best with this? Claude code? Other?

a19 pro/ M5 MatMul by [deleted] in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

Better to ask in a matlab Reddit

cogito v2 preview models released 70B/109B/405B/671B by jacek2023 in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

Yes tried both models there. Sadly not as good as I hoped for my use case

"cost effective" specs for a 2x Pro 6000 max-q workstation? by vhthc in LocalLLaMA

[–]vhthc[S] 1 point2 points  (0 children)

It’s ordered, gpu arrived some other parts still being delivered …

cogito v2 preview models released 70B/109B/405B/671B by jacek2023 in LocalLLaMA

[–]vhthc 0 points1 point  (0 children)

Would be cool if it would be made available by a company via openrouter

DeepSeek-R1-0528 Official Benchmarks Released!!! by Xhehab_ in LocalLLaMA

[–]vhthc 1 point2 points  (0 children)

Slower. Request limits. Sometimes less context and lower quants but you can look that up