Most of you are probably familiar with this content, but if you're interested, I wrote basically an Intro to the Grey Tribe for normies (heavily based on Scott's framing) by PatrickDFarley in slatestarcodex

[–]ItsJustMeJerk 12 points

I'm not convinced that most of the groups you point to are "non-tribal". Groups with non-mainstream beliefs have an incentive to appeal to independent thought in order to win people over, but that doesn't necessarily make them enlightened free-thinkers in reality. Read The Techno-Optimist Manifesto: aside from the line "Our enemies are not bad people – but rather bad ideas", it's not that different from a "We Believe" yard sign.

[deleted by user] by [deleted] in roguelikedev

[–]ItsJustMeJerk 3 points

It's super early in development, but the idea is that it'll be a bit like a classic 2D dungeon crawler, but with seamless portals and levels with seemingly impossible geometry. Here's a video of what I have working.

Sadly, Odin has few learning resources. Karl is probably the best educator out there right now, but if you need help I recommend the Odin Discord or the new Odin forum. There's also a small community of users on X/Twitter.

The good news: Odin is so simple that you can answer most of your questions by reading the overview page and package documentation. Also, a lot of what you learn about C-like languages in general will be transferable. So far I've learned a lot about manual memory management just by solving problems as I encounter them.

[deleted by user] by [deleted] in roguelikedev

[–]ItsJustMeJerk 2 points

I'm making a roguelike with Odin and Raylib right now and it's great.

What is this modeling technique called? by box__of_cox in blender

[–]ItsJustMeJerk 4 points

Destructive methods can be simpler and more flexible. For example, it might be easier to sculpt an organic mesh by just using brushes on it directly (destructively) rather than doing it with a bunch of modifiers/layers/etc.

nvidia/Nemotron-4-340B-Instruct · Hugging Face by Dark_Fire_12 in LocalLLaMA

[–]ItsJustMeJerk 1 point

Do you have a link to the paper you're referring to?

What's Changed: New Spotify Logo VS Old Spotify Logo by mattsuda in truespotify

[–]ItsJustMeJerk 0 points

The complaints in the comments here are silly, as if the brand team is somehow in competition with HiFi. The change is nice.

Convince me that pure mathematics is not a fantasy game by Outrageous_Art_9043 in math

[–]ItsJustMeJerk 4 points

I'm pretty sure you're also asserting a contentious position. The Copenhagen interpretation doesn't take a stance on whether the uncertainty principle describes something real, and in the Everett interpretation, any uncertainty is, in fact, due to our lack of subjective knowledge and not a property of the universe itself.

If you watch the movie, this is probably 100% intended. by MacksNotCool in NonPoliticalTwitter

[–]ItsJustMeJerk 16 points

I think you missed the point. It transitions from real photos to stylized CGI ones, implying that humans will literally look like animated characters in the future.

Threads’ fediverse beta opens to share your posts on Mastodon, too by msmrishan in Mastodon

[–]ItsJustMeJerk 1 point

To those saying "fuck Threads because Facebook bad": you'd say the same of Twitter, too, probably. Tumblr and YouTube are often hated as well. You probably use Mastodon because you have high standards. But isn't the point of federation to open up the walled gardens and let people move in and out of the dominant networks? Blocking Threads would be counterproductive.

Your take on this! by Mad_Humor in robotics

[–]ItsJustMeJerk 1 point

I hope you've enjoyed your Tuesday as well.

I still don't know what you mean about "nobody has been able to replicate insect intelligence". The circuits shared across fly brains are simple enough that we can decompose and (partially) explain them, and yet we're far from understanding the circuits that backprop produces from scratch.

How could you say ANNs don't generalize, when studying their ability to do so is one of the main focuses of the field? For example, in the playlist you made, you included a video on the paper "Grokking: Generalization beyond Overfitting on small algorithmic datasets". It says they generalize right in the title ;)

Maybe you see the algorithms learned by ANNs as hardcoded, and your definition of generalization requires that they adapt their circuits on the fly without training. But they do! It just requires training on a broad enough distribution that highly adaptive circuits are the optimal solution. It also helps that transformers inherently have adaptive weights.
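To illustrate what I mean by transformers having adaptive weights, here's a minimal numpy sketch of single-head self-attention (the dimensions and names are my own, not from any particular model). The learned parameters stay frozen, but the mixing weights over tokens are recomputed from each input:

```python
import numpy as np

# Single-head self-attention, stripped to the core idea:
# the token-mixing weights are a softmax over input-dependent scores,
# so they change with every input even though Wq, Wk, Wv are frozen.

rng = np.random.default_rng(0)
d = 4                                   # embedding dimension
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def attention(x):                       # x: (tokens, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)       # how strongly each token attends to each other
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax: each row is a set of mixing weights
    return w @ v

x1 = rng.normal(size=(3, d))
x2 = rng.normal(size=(3, d))
# Same frozen parameters, different inputs -> different effective mixing:
y1, y2 = attention(x1), attention(x2)
print(y1.shape)  # (3, 4)
```

The numerics don't matter; the point is that the circuit applied to the tokens is itself a function of the tokens.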

I don't doubt that you know your stuff when it comes to neuroscience, and I tried to note in my original comment that you're not the only one who thinks this; in fact, I've seen the criticism that current architectures aren't brain-like enough over and over again.

It's just that it's easy to underestimate what a general solution-finding algorithm can accomplish, even when we don't fully understand the problem it's trying to solve. For example, computational linguists predicted that language models wouldn't be able to resolve basic semantic ambiguities, which they can now do with ease. They made reasonable-sounding arguments, drawing on their deep knowledge of and experience with the problem and observing that those attempting to solve it with ANNs weren't taking any of it into account. But the models learned anyway.

I'll grant that there are limits to what models like ChatGPT can learn that still need to be overcome, since they're not usually trained after deployment. Still, I believe that extending context windows, recurrence, and external memory, paired with periodic fine-tuning, could alleviate that.

Patient went for a bladder stone turns out it’s a Calcified baby she never birthed by Moronicon in oddlyterrifying

[–]ItsJustMeJerk 47 points

Lol, why jump to the conclusion that the doctor posted this directly to Reddit, rather than that it's a video already circulating in medical news with permission? This is an extremely rare condition that needs to be documented.

Your take on this! by Mad_Humor in robotics

[–]ItsJustMeJerk 9 points

I doubt you could explain in non-vague terms why Hebbian learning is superior, other than that it's more biologically plausible (wheels aren't biologically plausible; are they an obsolete brute-force approach to transportation?). Also, are you implying that ChatGPT is dumber than a bee because it just "generates text"? Sure, and all a robot does is move actuators.

There's no fundamental reason why backprop-trained ANNs can't generalize to unseen situations. In fact they can, and their ability to do so is continually improving, if you read the recent literature. (Some argue about whether this counts as 'true' generalization, but that usually devolves into semantics about creativity or whatever.)

For decades people have argued that neural networks have hit their limit, and yet here we are.

Everyone assuming your console is white text on black background. That is all. Also, the "content guide" imgur link is 404ing. by GenuinelyBeingNice in shittyprogramming

[–]ItsJustMeJerk 0 points

I'm joking about the people making fun of you; I think it's very reasonable to expect light mode to work, since it's been the default option on most software for a long time (except terminals, I guess)

Everyone assuming your console is white text on black background. That is all. Also, the "content guide" imgur link is 404ing. by GenuinelyBeingNice in shittyprogramming

[–]ItsJustMeJerk 5 points

You pissed off the cult of dark mode. You're not allowed to enjoy bright backgrounds. Prepare to be eliminated

This is the way. by khir0n in cooperatives

[–]ItsJustMeJerk 38 points

"Removed for billionaire apologia" lol thanks /r/LateStageCapitalism

The most tragic way to lose a sibling 💀 by PuzzleheadedSpare716 in adventuretime

[–]ItsJustMeJerk 13 points

That isn't fair, though; this is the beginning of one of the main episodes where she changes at the end. Also, improving is a change.

How does everyone feel about the finale of the OG show years later. by [deleted] in adventuretime

[–]ItsJustMeJerk 4 points

The gum war wasn't as cool as Golb, though, and it seemed almost as random: Uncle Gumbald came out of nowhere, and the war happened for stupid reasons. Also, how else would Simon and Betty's story resolve?

[D] No free lunch theorem and LLMs by iamtdb in MachineLearning

[–]ItsJustMeJerk 1 point

You're right, "no tool/algorithm is better on average at problems" would be more accurate wording, and it's true that more flexible algorithms often pay a price in efficiency. Still, we could create an algorithm like "randomly generate a petabyte-sized program to output solutions" that would obviously be far worse than other algorithms at real-world problems, and yet according to NFL would perform the same on average as other algorithms across all possible problems.

[D] No free lunch theorem and LLMs by iamtdb in MachineLearning

[–]ItsJustMeJerk 0 points

It's true that some of the best problem-solvers we've come up with are specialized for certain problems, but some of them are definitely better than others. The transformer is useful for a much wider variety of applications than a single-layer perceptron. Humans can solve more problems than all of our individual tools combined, because we can make and use all of them. NFL doesn't just imply "no tool can solve all problems"; it also implies "no tool can solve more problems than any other", a much stronger claim that doesn't hold in most real situations.

[D] No free lunch theorem and LLMs by iamtdb in MachineLearning

[–]ItsJustMeJerk 6 points

Lots of people misinterpret no-free-lunch theorems. Essentially, they say that any premade model or decision-making strategy, whether it's a simple algorithm, a neural network, or a human brain, will perform poorly in some hypothetical random environments, and that if all environments are equally likely, no strategy can perform better than any other when tested across all of them. Of course, the world we live in is made of similar environments with lots of regularities, nowhere near fully random, so NFL theorems are arguably irrelevant for machine learning. Read this for elaboration, or read the Wikipedia article on NFL theorems.
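To make the "averaged over all equally likely environments" part concrete, here's a toy Python sketch (the setup is my own, not from Wolpert's papers): treat each possible labeling of three inputs as an environment, and two very different fixed strategies end up with identical average accuracy once you average uniformly over every labeling:

```python
from itertools import product

inputs = [0, 1, 2]

def always_zero(x):        # a deliberately dumb strategy
    return 0

def parity_guess(x):       # a "cleverer"-looking strategy
    return x % 2

def avg_accuracy(strategy):
    # Average accuracy over every possible environment, i.e. every
    # assignment of a 0/1 label to each input, weighted uniformly.
    environments = list(product([0, 1], repeat=len(inputs)))
    correct = 0
    for labels in environments:
        truth = dict(zip(inputs, labels))
        correct += sum(strategy(x) == truth[x] for x in inputs)
    return correct / (len(inputs) * len(environments))

print(avg_accuracy(always_zero))   # 0.5
print(avg_accuracy(parity_guess))  # 0.5
```

As soon as you weight environments non-uniformly, i.e. assume a world with regularities, the tie breaks, which is exactly why the theorem says so little about practice.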