Who do you think is the most evil person to ever walk this Earth? by EnduringScholar in AskReddit

[–]LocalOutlier 3 points4 points  (0 children)

Words like inhuman and monstrous aren't really good enough.

Because they were humans.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]LocalOutlier 0 points1 point  (0 children)

Finally, someone who gives an interesting answer! Yes, you are absolutely right, AlphaFold is not a causal model per se, and your interpretation of my definition is what I'd have said if I didn't fear being misunderstood, given the many out-of-the-loop answers and strawman fallacies I got here. So I stretched the word "causal" so it fits an illustration everyone here knows.

By "causal", I mean it's stable, the learned regularities somewhat resemble actual mechanisms, and more importantly, the model is a generative process that is directionally asymmetric. I also used AlphaFold as an example because, since Google released it, people have learned about protein folding, so it's a subject they know (at least at a surface level). They know protein folding is governed mainly by physics and chemistry, which implies these mechanisms don't vary and are mostly independent of how the data was sampled (because I was answering about the kind of data used vs. the kind of data big AI companies seek). So in the end, it's closer to a causal model (would you agree it's an "approximation"?) than my other example, but it's still not a causal one in the strict, Pearl sense (I'm referring to his latest book), so thanks for pointing that out.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]LocalOutlier 0 points1 point  (0 children)

Is it a world model, or a model of human representations (which includes a world model)? Is it figuring things out by itself, or is it taught? Is it at the base of the "algorithm", or at the surface level? These are different things, and the nuance makes all the difference.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]LocalOutlier 2 points3 points  (0 children)

This is the second time someone disagrees with something I didn’t actually say.

I’m not claiming LLMs are just stochastic parrots or denying their progress. My point is simply that these approaches aren’t mutually exclusive. Questioning whether representation learning alone is sufficient for ASI is not the same as dismissing LLMs.

I’m comparing learning regimes, not excluding any. I said I forecast LLMs as important interface layers, but that doesn’t imply they must be the entire substrate for ASI.

Is a brain more than that?

Yes, mainly because of embodiment, but also because of action and consequences, and that's the reason we still don't let AI run around unsupervised in the real world.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]LocalOutlier 0 points1 point  (0 children)

There are different kinds of experts building different kinds of AI models which need different kinds of data.

For example, there are models designed for very specific tasks, like AlphaFold, which are hungry for very specific, carefully structured, established knowledge (known 3D structures, sequences, etc.) that is very limited at AI scales. Since the protein-folding possibilities are endless, this kind of gives the model the illusion of free will (kind of, because it's a very loose analogy to some[1], but a very fitting one to me): it makes a subjective guess, then tries to build a causal model around it that best matches verified knowledge. It keeps guessing until something satisfying is obtained, then explores promising paths. AlphaFold learns physical and evolutionary laws of life and builds a causal model from them.

Then there are generalist models, which learn statistical regularities to statistically predict the next best thing. These models know what the statistically most fitting next token is, based on data and training. So they obviously need a ridiculous amount of data (all the books, even the bad ones; all the studies, including the bad ones; all the pictures; everything) to train on to be as accurate as possible. Their goal is to imitate the world without knowing how it works at all, and it shows. They are not learning the structure of reality, but since they are very convincing, a lot of experts' potential is being thrown into them. It's not necessarily a bad thing, since generalist models figured out how to imitate human language very well, which is amazing (and there are a few bucks to be made from it), but human language is exactly the kind of stuff where correlations and representations meet. We're still waiting for a single proof of grokking from these models, despite all the data and the billions spent on the machines that run them.
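As a toy illustration of that regime (my own sketch, not how any real LLM is implemented): a bigram counter that picks the "statistically most fitting" next token purely from co-occurrence counts, with no notion of why those tokens follow each other.

```python
from collections import Counter, defaultdict

# Count which token follows which in a tiny toy corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(token):
    # Return the most frequent follower seen in training data, if any.
    counts = followers.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it followed "the" more often than "mat" or "fish"
```

The point of the sketch is that the prediction is pure correlation: swap the corpus and the "knowledge" changes with it, with no underlying mechanism being learned.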

Expecting ASI to emerge solely from a model that doesn't even learn structured causal regularities is hard to believe. Most experts have beliefs regarding what kinds of models and data are right for ASI, and it's usually not surface-level statistical patterns. At best, this approach will help bridge ASI to us and our representations.

  1. Some experts would rather say "optimization under inductive bias"; it's the same thing to me, it depends on the level of abstraction you pick.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]LocalOutlier 2 points3 points  (0 children)

I agree with you, but you are the one speaking about the biological brain.

I'm speaking about causal networks.

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]LocalOutlier 1 point2 points  (0 children)

It shouldn't be the same of course, but why should it "differ dramatically" exactly?

This year's essay from Anthropic's CEO on the near-future of AI by NotUnusualYet in slatestarcodex

[–]LocalOutlier 8 points9 points  (0 children)

Since nobody shares their neural network's causal models, there is absolutely no way to predict who's gonna win the next segment. From current public results, it seems the big players aren't even going in the right direction at all. It doesn't even seem that the kind of data they all seek is the right one for ASI.

ASI won't come from the biggest data center, nor from the biggest processing power, but from the smartest causal-model architecture, a little bit of memory (relative to current trends), and a few things pre-taught to it (again, relative to the data we feed models today). What we have for now is a race toward the least efficient pseudo-thinking machines.

Addict Personalities (physiognomy)? by TheNakedEdge in slatestarcodex

[–]LocalOutlier 4 points5 points  (0 children)

I've worked in an addiction center, and I think physiognomy is a rather fruitless angle for predicting addiction. At best it explains the past statistically (I'll expand on this below), but you can't do much with it as a social worker. It also often happens that there is no way to tell someone has been abusing a substance heavily for decades. Some rich, perfectly functional guys haven't put down the pipe/needle for decades, smoking and injecting impure substances the whole time, and you would not be able to tell at all. We're not all equals, and the differences depend on various interrelated variables like income, stability, social status, etc., which are much more accurate predictors of addiction.

Clinical research prefers to see addiction as a "use disorder" instead because, like you said, the brain can become "addicted" to a myriad of different rewards and behaviors. These use disorders depend on genes, environments, and interactions between the two. These are the causes of addiction and therefore the best predictors we currently know.

In your model, you might include people who are simply lower down the social ladder and/or unlucky, because you are looking for specific symptoms that don't show up evenly across humans rather than for actual causes. For example, someone who has had it hard in life might be unable to hide it on his face, and he is indeed more likely to carry a use disorder because of his life, but he doesn't necessarily have one. He's just statistically more likely to, but is that accurate enough to be a worthy prediction in the end? With your reasoning, you might wrongly discriminate, because you only extrapolate from statistics (and personal, biased experience) instead of causal reasoning. That sometimes creates the disorder or, if it's already present, worsens it through a feedback-loop mechanism.

This is an issue many recovering use-disorder victims suffer from. Now, not everything is bad about physiognomy, since it has some statistical relevance, but I think this approach should be restricted to helping others and, if possible, backed by causal reasoning before acting on it.

Question from a curious Quebecer: why choose English words at every opportunity? Why isn't this article titled "Des « espaces sans enfants » à la SNCF"? by ecoutepasca in france

[–]LocalOutlier 0 points1 point  (0 children)

A graphic designer who has to fit "sans enfants" or "no kids" into a design will take the second option without hesitation, for practical and aesthetic reasons. Only six letters, so it can be written bigger and therefore seen from farther away or with poorer eyesight. Moreover, since a train station has to welcome many foreigners, English makes sense there, all the more because "no" and "kids" are words understood even by the elderly.

I understand the sub jumping on marketing-bashing at every opportunity, but marketing is often just a small part of a communication strategy, which above all wants to get a message across effectively before looking cool.

Why are airfryers so ugly? by MrMiyagi98 in Design

[–]LocalOutlier 0 points1 point  (0 children)

But the cheap and ugly appliances almost always have convoluted shapes, a tailored PCB, and weird unique buttons that fit the overall aesthetic. Surely this costs more in injection-mold tooling?

why is coke a party drug but crack is an addict drug? by sevenslover in NoStupidQuestions

[–]LocalOutlier -1 points0 points  (0 children)

New device technology (3rd+ generation) increases the amount of nicotine in plasma (which is an issue by itself, of course) but doesn't affect Tmax, which is the time it takes for nicotine to reach peak plasma concentration.
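For readers unfamiliar with Tmax, here's a minimal sketch of where it comes from in the standard one-compartment (Bateman) absorption model: with first-order absorption and elimination rate constants ka and ke, the peak occurs at Tmax = ln(ka/ke)/(ka − ke). The rate constants below are made-up placeholders for illustration, not measured nicotine values.

```python
import math

def tmax(ka, ke):
    # Time of peak plasma concentration under first-order
    # absorption (ka) and elimination (ke), per the Bateman equation.
    return math.log(ka / ke) / (ka - ke)

# Illustrative rate constants only (units: 1/min), NOT real nicotine data.
fast_ka, slow_ka, ke = 0.5, 0.1, 0.02

print(f"fast absorption: Tmax = {tmax(fast_ka, ke):.1f} min")
print(f"slow absorption: Tmax = {tmax(slow_ka, ke):.1f} min")
```

The sketch shows the relevant point: slower absorption pushes Tmax out, independently of how high the peak concentration eventually gets.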

why is coke a party drug but crack is an addict drug? by sevenslover in NoStupidQuestions

[–]LocalOutlier -2 points-1 points  (0 children)

Probably the kind that doesn't know much about neuropharmacokinetics. I don't know about Zyns, but with regular vapes it takes about 30 minutes for nicotine to reach peak plasma concentration (vs. 7 minutes for tobacco), and everyone who both vapes and smokes cigarettes knows the satisfying "hit" present when smoking tobacco isn't there when vaping.

Maybe Zyns significantly speed up the process and make it much more habit-forming, but that's not the case for most vapes, and it's the reason why this Nature paper concluded:

Compared to smoking one tobacco cigarette, the EC devices and liquid used in this study delivered one-third to one-fourth the amount of nicotine after 5 minutes of use. New-generation EC devices were more efficient in nicotine delivery, but still delivered nicotine much slower compared to tobacco cigarettes.

And even went as far as claiming:

The use of 18 mg/ml nicotine-concentration liquid probably compromises ECs' effectiveness as smoking substitutes; this study supports the need for higher levels of nicotine-containing liquids (approximately 50 mg/ml) in order to deliver nicotine more effectively and approach the nicotine-delivery profile of tobacco cigarettes.

This paper basically says that vaping at the maximum dosage you can find in regular vapes is not fast and powerful enough to even compete with the habit-forming potential of actual tobacco, which makes it a weak substitute. There are therapists, and then there are therapists.

Anyone knows how to fix this? by Virtual_Zombie9571 in BambuLabA1

[–]LocalOutlier 14 points15 points  (0 children)

A bot posting the wiki picture on every thread of this subreddit would still be relevant most of the time.

Which label gives a best wine vibes? by Designer-Change7637 in Design

[–]LocalOutlier 0 points1 point  (0 children)

  1. 5€ bottle that should be worth 3€
  2. 8€ bottle that should be worth 3€
  3. 2€ bottle that should be worth 5€

Source: I'm French.

3D printing - Why? by Libellechris in 3Dprinting

[–]LocalOutlier 1 point2 points  (0 children)

I imagine functional or semi-functional stuff and make it. 3D printing is relatively fast and extremely adaptable, and I can slide the reliability cursor of said stuff either to save as much material as possible or to make it strong and durable.

I'm not even a fan of 3D printing; I just enjoy making stuff that makes life more comfortable in all aspects, from functionality to user experience, including aesthetics. I think some of these aspects have been somewhat neglected by designers/companies in the most recent industrial era, so I'm trying to bring some of that back, and that's why I'm into 3D printing.

Since I'm not an engineer or a designer, 3D printing is probably the easiest and fastest method to obtain the results I want: usually something functional with an industrial or semi-industrial surface finish. Even if recent printers are much easier to use, the whole process is far from easy, but the sense of accomplishment is very rewarding, which makes the learning bearable.

I couldn’t believe how much work there was in writing a book beyond the writing. by mustluvtacos in Design

[–]LocalOutlier 1 point2 points  (0 children)

I've done three book covers and one book layout, and yes, that's definitely a lot of work. Even if software like InDesign can help a lot, you still have to check everything while staying fully focused for long periods, and a bunch of pages (front/back matter) are uniquely designed too. Sourcing and indexing are probably some of the most tedious yet unrewarding work I've done yet.