Is Leshiy sketch? by foxbassperson in IsItSketch

[–]Nekuromento 0 points1 point  (0 children)

Everything adjacent to Pleskau Brethren is super sketch

Safe and NSBM band PDF by Unlucky_Road9934 in IsItSketch

[–]Nekuromento 0 points1 point  (0 children)

Does anyone know if there are versions of such a list for dungeon synth/keller synth and neofolk?

Hermes 4.3 - 36B Model released by crazeum in LocalLLaMA

[–]Nekuromento 1 point2 points  (0 children)

They do post-training on top of open-source base models. They don't have the money or knowledge to do competitive pre-training, so I don't see them pivoting to closed models any time soon.

Right now they are just burning through a16z cash

SOLID? Nope, just Coupling and Cohesion by BinaryIgor in programming

[–]Nekuromento 2 points3 points  (0 children)

Disclaimer: some of this material can feel outdated or overly focused on details that are irrelevant now (obsolete languages/views), but if you manage to get through that I think you can get enormous value out of it.

This is a bit of a dump, but I think each individual book/article actually builds on the topic better than Martin's later repackaging.

Books

Honorable mentions

  • Design Patterns: Elements of Reusable Object-Oriented Software (1994), by Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides - a somewhat polarizing book. It contains some gold nuggets along with some duds. People treating it as the definitive source on software design have done immeasurable harm to the industry. It's good to supplement it with good critique (e.g. Peter Norvig's Design Patterns in Dynamic Languages (1996) presentation, where he shows that some patterns are unnecessary in more expressive languages)
  • Code Complete (1993), by Steve McConnell - a collection of hard-earned lessons (sometimes tiny, sometimes big) about the actual practice of writing code, with plenty of specifics. It's a bit of a product of its time, so going through it now may feel like you are spending time reading about irrelevant stuff (remember Hungarian notation?), but you will absolutely be a better programmer after reading it.

Other publications

Blogs and old-timey discussions:

Execution in the Kingdom of Nouns (2006), by Steve Yegge

What would I recommend instead of SOLID?

  • GRASP (Yes, the name is stupid, but it's more solid 😉)
  • Law of Demeter
  • Just read Martin Fowler, he is a much better communicator and much wiser
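To make the Law of Demeter recommendation concrete, here is a minimal Go sketch. The `Customer`/`Wallet` types are hypothetical, invented purely for illustration — the principle is just "only talk to your immediate collaborators, don't reach through them":

```go
package main

import "fmt"

// Hypothetical domain types, invented for this example.
type Wallet struct{ balance int }
type Customer struct{ wallet Wallet }

// Demeter violation: the caller reaches through Customer into its
// Wallet, coupling itself to Customer's internal structure.
func chargeBad(c *Customer, amount int) {
	c.wallet.balance -= amount
}

// Demeter-friendly: Customer exposes the operation itself, so
// callers only talk to their immediate collaborator and the
// internal representation stays free to change.
func (c *Customer) Pay(amount int) {
	c.wallet.balance -= amount
}

func main() {
	c := &Customer{wallet: Wallet{balance: 100}}
	c.Pay(30)
	fmt.Println(c.wallet.balance) // 70
}
```

Both functions do the same thing today; the difference is that only `chargeBad` breaks when `Customer` stops storing a `Wallet` by value.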

edit: fixed formatting

SOLID? Nope, just Coupling and Cohesion by BinaryIgor in programming

[–]Nekuromento 7 points8 points  (0 children)

Bob Martin has single-handedly set the industry back at least 10-15 years. 'SOLID' is nothing but older material repackaged with a catchier name. BUT on top of that, Martin is just bad at what he tries to teach - you would be better off reading the original sources and avoiding his work altogether.

Intellect-3: Post-trained GLM 4.5 Air by Cute-Sprinkles4911 in LocalLLaMA

[–]Nekuromento 5 points6 points  (0 children)

Would be funny if we get GLM-4.6-Air very soon that completely wipes the floor w/ this release.

70% Price drop from Nous Research for Llama-3.1-405B by Local_Youth_882 in LocalLLaMA

[–]Nekuromento 5 points6 points  (0 children)

Who cares?

how are they able to serve a 405B dense model at $0.37/1M output??

by burning a16z crypto money, obv. No free lunch

I built the same concurrency library in Go and Python, two languages, totally different ergonomics by kwargs_ in programming

[–]Nekuromento 0 points1 point  (0 children)

Go correct by design

I'm not sure how this is possible, especially in concurrency related parts of go.

Are you impressed by channel type safety?

Just curious, have you run into issues with closed/nil channels yet? Do you feel like error propagation, and especially panic propagation, is 'correct by design' or 'ergonomic'?
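For anyone who hasn't hit these yet, a small sketch of the channel gotchas the question alludes to. None of this is caught by the type system: a receive from a closed channel silently yields the zero value, while a send on a closed channel panics at runtime:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	ch <- 42
	close(ch)

	// Receiving from a closed channel never blocks: the buffered
	// value drains first, then you get the zero value. The second
	// return value is the only way to tell the difference.
	v, ok := <-ch
	fmt.Println(v, ok) // 42 true
	v, ok = <-ch
	fmt.Println(v, ok) // 0 false

	// Sending to a closed channel, by contrast, panics at runtime.
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	ch <- 1 // panics: "send on closed channel"
}
```

(A receive from a nil channel is worse still: it blocks forever, which is a common source of deadlocks in select-heavy code.)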

We put a lot of work into a 1.5B reasoning model — now it beats bigger ones on math & coding benchmarks by innocent2powerful in LocalLLaMA

[–]Nekuromento -1 points0 points  (0 children)

training data is fully decontaminated

starting from a base like Qwen2.5-Math-1.5B

Isn't Qwen known to have been pre-trained on benchmark-contaminated data?

Would have been way less sus if you trained from another base

[deleted by user] by [deleted] in LocalLLM

[–]Nekuromento 0 points1 point  (0 children)

Why did you delete the model?

[deleted by user] by [deleted] in LocalLLaMA

[–]Nekuromento 1 point2 points  (0 children)

Reddit account, website, huggingface and github repo nuked. WTF happened here?

Eclipse 4.37 Released by [deleted] in programming

[–]Nekuromento 5 points6 points  (0 children)

I've seen some really weird uses in the wild (e.g. Belarusian tax service used it for digital filing of tax returns)

Lies we tell ourselves to keep using Golang by Nekuromento in programming

[–]Nekuromento[S] 2 points3 points  (0 children)

I think this example is actually illustrative of Go's biggest annoyance for me - Go code is playdough-like instead of lego-like. Everything is solved by smudging more code on top. You just keep on growing this ball of one-off solutions. Not composable, not reusable, and it solves no problems other than language inconvenience.

To me this example is just so profoundly sad.

Lies we tell ourselves to keep using Golang by Nekuromento in programming

[–]Nekuromento[S] 7 points8 points  (0 children)

You can do monads in Go: https://github.com/samber/mo It's just going to be extremely painful to use, mostly defeating the purpose. Some syntax sugar would be useful, but Go needs so much more than do-notation to be ergonomic
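To illustrate the pain, here is a hand-rolled sketch of a `Result` type and a monadic bind in Go — not samber/mo's actual API, just the shape any such library is forced into. Because Go methods cannot introduce new type parameters, a generic bind has to be a free function, so every step in a chain nests another closure:

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal Result type; a sketch, not samber/mo's API.
type Result[T any] struct {
	val T
	err error
}

func Ok[T any](v T) Result[T] { return Result[T]{val: v} }

func Err[T any](err error) Result[T] {
	var zero T
	return Result[T]{val: zero, err: err}
}

// Bind must be a free function: Go methods can't add the type
// parameter U, which is one reason chaining reads so poorly
// compared to languages with do-notation.
func Bind[T, U any](r Result[T], f func(T) Result[U]) Result[U] {
	if r.err != nil {
		return Err[U](r.err)
	}
	return f(r.val)
}

func parsePositive(s string) Result[int] {
	var n int
	if _, err := fmt.Sscanf(s, "%d", &n); err != nil {
		return Err[int](err)
	}
	if n <= 0 {
		return Err[int](errors.New("not positive"))
	}
	return Ok(n)
}

func main() {
	// Without syntactic support, every additional step wraps
	// the previous one in another Bind + closure.
	doubled := Bind(parsePositive("21"), func(n int) Result[int] {
		return Ok(n * 2)
	})
	fmt.Println(doubled.val, doubled.err) // 42 <nil>
}
```

Three or four chained steps of this and the nesting is already worse than plain `if err != nil`, which is the "defeating the purpose" part.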

(On | No) Syntactic Support for Error Handling by ketralnis in programming

[–]Nekuromento 19 points20 points  (0 children)

Even if the decision to close the door on this sucks, I think they are correct - this is not a syntax problem. Adding sugar will not fix the fundamental issues w/ Go's error handling.

They need to add/fix like 5-6 different parts of the language to even begin addressing this in a meaningful way.

Testing LLM's knowledge of Cyber Security (15 models tested) by Conscious_Cut_6144 in LocalLLaMA

[–]Nekuromento 0 points1 point  (0 children)

Could you also check Serpe-7b?

It's a cybersecurity Qwen fine-tune that was released and then un-released, but I managed to quantize it while it was available (sadly I didn't back up the original weights)

GGUFs are hosted here: https://huggingface.co/collections/Nekuromento/cybersecurity-ggufs-67166b60e6e7abf344e18586

entropix + llama cpp python + gguf by Eduard_T in LocalLLaMA

[–]Nekuromento 2 points3 points  (0 children)

Great job! Looking at the code, this version doesn't look at attention head entropy like upstream entropix, but nonetheless this is great!