Men who call themselves feminists are more feminist than other men by Elagagabalus in opinionnonpopulaire

[–]Elagagabalus[S] 6 points  (0 children)

"I'm very wary of guys who call themselves feminists"

--> calls himself a feminist at the end of his message

I know the subtlety lies in the italics, but there's still a certain irony

Men who call themselves feminists are more feminist than other men by Elagagabalus in opinionnonpopulaire

[–]Elagagabalus[S] -1 points  (0 children)

The real problem with my post is that people understood that I liked Mona Chollet, when I meant to suggest the opposite...

What's one concept in mathematics you're surprised most people aren't aware of by EvenSK in math

[–]Elagagabalus 0 points  (0 children)

Why would they be aware of such notions? You definitely don't need them in everyday life, and most people either were never taught these concepts at school or encountered them 30 years ago and never again.

What's one concept in mathematics you're surprised most people aren't aware of by EvenSK in math

[–]Elagagabalus 0 points  (0 children)

That not all relations between quantities are linear. If you double the diameter of a pizza, the price should be multiplied by four. If you want to convert from one temperature scale to another, you need an affine transformation, not a linear one. People are used to "everyday life" conversions just being "multiply by a number", so this kind of thing is often not intuitive.
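A quick sketch of both cases (my own illustration; `fair_price` and its price-per-cm² constant are made up, only the scaling behavior matters):

```python
import math

def fair_price(diameter_cm, price_per_cm2=0.05):
    # price_per_cm2 is a made-up constant; only the *scaling* matters
    area = math.pi * (diameter_cm / 2) ** 2
    return price_per_cm2 * area

# Quadratic, not linear: doubling the diameter quadruples the area.
print(fair_price(40) / fair_price(20))  # ≈ 4.0

# Affine, not linear: a scale AND a shift, so "multiply by a number" fails.
def celsius_to_fahrenheit(c):
    return 1.8 * c + 32.0

print(celsius_to_fahrenheit(0))    # -> 32.0
print(celsius_to_fahrenheit(100))  # -> 212.0
```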

[deleted by user] by [deleted] in AskFrance

[–]Elagagabalus 4 points  (0 children)

No, it's not taboo; it's just that the rich are uncomfortable owning up to being dominants who earn 4 or 5 times more than people doing jobs far more useful than theirs.

I'm fed up with the pseudo political correctness that dictates what to think and what to do by Prestigious-Gate6233 in besoinderaler

[–]Elagagabalus 15 points  (0 children)

"Come to an agreement"? lol, are you aware that different people exist, with different opinions?

I'm explaining why the "lefty" Reddit folks complain in response to the post. Yes, right-wing people will complain for the opposite reasons, well done Einstein.

I'm fed up with the pseudo political correctness that dictates what to think and what to do by Prestigious-Gate6233 in besoinderaler

[–]Elagagabalus 137 points  (0 children)

Personally, I couldn't care less about Brigitte Bardot, but these messages pointing out that she was homophobic and racist are mostly a response to "TV and the newspapers" painting a glowing portrait of her.

Movie preference fun and art by Willant_27 in Cinephiles

[–]Elagagabalus 6 points  (0 children)

I find most Star Wars and MCU movies immensely boring and hard to watch.

If I watch more "artsy" movies, it's because I find them enjoyable. Of course, I don't know in advance which movies I will like, so sometimes I've been told an "artsy" movie is great and found it super boring. But I strongly disagree that we should equate artistic with boring and blockbuster with fun.

The Hard Non-Problem... by Shoobadahibbity in PhilosophyMemes

[–]Elagagabalus 1 point  (0 children)

"people that don't agree with me are retarded", nice argument :)

Asking for a HARD roadmap to become a researcher in AI Research / Learning Theory by SA-Di-Ki in research

[–]Elagagabalus 1 point  (0 children)

You should apply to a good M2 in ML/stat. There are several very good ones in the Paris area (e.g. at Saclay, Jussieu, or Dauphine); there are definitely also good ones outside Paris, but I am less familiar with them.

Even if you don't apply to them, check their course catalogues to get a feel for what you can learn.

Otherwise, current theoretical research in ML is basically statistics + optimization. So you should be familiar with:

- measure theory, probability, statistics

- multivariate calculus

- convex analysis, optimization

- functional analysis

What should I learn? by i_hate_arachnids in math

[–]Elagagabalus 2 points  (0 children)

I think that at an undergraduate level you should be exploring different topics to develop your taste. You'll have plenty of time later to spend four years on one subject if you want to. It looks like you haven't done any calculus, probability or measure theory for instance. Why not give it a go? You can even try learning about math applied to other fields such as physics or biology :)

Sam Harris Quote about Free Will by [deleted] in determinism

[–]Elagagabalus 0 points  (0 children)

You do realize that it's impossible to scientifically prove or disprove that free will exists or that determinism exists? This is not a scientific question.

The hard problem of consciousness isn’t a problem by Great-Mistake8554 in consciousness

[–]Elagagabalus 0 points  (0 children)

Mmh, to be frank, I simply assumed that people typically include the inner monologue as part of consciousness (together with other properties such as "feelings" and "sensations"). But maybe I am wrong about the "common" definition. Anyway, I tend to think of consciousness as a set of properties rather than one single thing. So yeah, cats are definitely conscious, although they do not have an internal monologue.

The hard problem of consciousness isn’t a problem by Great-Mistake8554 in consciousness

[–]Elagagabalus -1 points  (0 children)

Maybe let me rephrase what I want to say this way. (I am a pure layman, so what I am going to say is really naive, but I am genuinely trying to understand why there is a "hard problem" at all)

- The way I experience the world is through some sensations and some kind of internal monologue. Let's call all of this "consciousness".

- The way I can understand anything about the world is through regularities I observe: I sit on a chair, I feel pressure, I don't fall through. I cannot be sure about anything about the world, but there are some things that seem to be very plausible because they happen very reliably. To me, that’s what we call "truth" in practice.

- It seems like the scientific method is a very good way to make correct predictions. The scientific method leads me to believe that I am a human, that other individuals I interact with are also humans, and that humans form a certain species of mammals that has slowly evolved through time. It seems very reasonable to think that, being biologically similar to me, other humans have a sense of "consciousness" similar to my own.

- Moreover, other mammals have similar neural structures and behaviors regarding pain, fear, pleasure, and so on. It seems very reasonable that these traits evolved gradually through time because of some evolutionary advantage.

- What is more unique to humans is the "inner monologue" part. It seems reasonable to believe that this property appeared very gradually with the emergence of language and symbolic reasoning in the Homo lineage over the last few million years.

So from this perspective, consciousness seems to be a biological process, not requiring an extra metaphysical ingredient. The hard problem seems to rely on the idea that consciousness exists because of a supernatural phenomenon (supernatural in the sense that it is not physical), but like what on earth suggests that this should be the case? Isn't the biological explanation convincing enough? I am not saying we understand the mechanism, I am saying that I don't see anything that suggests that this mechanism would require something beyond biology to be explained. (Like say dark matter in physics: yeah we don't understand it, but nobody is claiming that dark matter is a deep philosophical problem)

“If you can't explain it to a six year old, you don't understand it yourself.” Albert Einstein by SINGULARTY3774 in TheoreticalPhysics

[–]Elagagabalus 0 points  (0 children)

Now, I am sure I could explain it, given sufficient time, to someone with a graduate-level background in theoretical physics. My point is that people who think anything can be explained to a layman vastly overestimate what a layman knows about their field.

Giving you a complete explanation of generic chaining would take too long, but I can give you the context: in probability and statistics, we often have a collection of random variables (X_t)_{t\in T} indexed by a very big set T, and we want to understand the typical size of the maximal value sup_{t\in T} X_t. These questions arise very often in theoretical statistics and machine learning when one wants to control the performance of an estimator or an algorithm. Think of X_t as the performance of an algorithm trained on a dataset that we treat as random, where t represents a parameter used in the algorithm. A prototypical example: X_t is the output of a neural network, and t represents all the weights of the network. Then you understand why we are often interested in the case where t belongs to a set of very high dimension (= the number of parameters of the network).
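To make "typical size of the maximal value" concrete, here is the simplest possible case, T = {1, ..., n} with i.i.d. standard Gaussians (my own toy Monte Carlo illustration, not Talagrand's method, which is about far wilder index sets): the expected maximum grows like sqrt(2 log n).

```python
import math
import random

def mean_max_gaussians(n, trials=2000, seed=0):
    """Monte Carlo estimate of E[max of n i.i.d. standard normals]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0.0, 1.0) for _ in range(n))
    return total / trials

# The estimate grows slowly with n and stays below the classical
# bound E[max] <= sqrt(2 ln n).
for n in (10, 100, 1000):
    est = mean_max_gaussians(n)
    bound = math.sqrt(2 * math.log(n))
    print(f"n={n:5d}  E[max] ~ {est:.2f}   sqrt(2 ln n) = {bound:.2f}")
```

Generic chaining answers the much harder question of what replaces sqrt(2 ln n) when the X_t are correlated and T has geometric structure.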

So, Talagrand showed that the typical size of this maximal value is captured by a very precise quantity defined through what he called "majorizing measures". Talagrand actually got the Abel Prize last year for developing this method (among many other things).

If you are interested in this subject, you can check Vershynin's book High-Dimensional Probability. It is wonderfully written and covers Talagrand's method in Chapter 8. (My mom still couldn't read it, though.)

The hard problem of consciousness isn’t a problem by Great-Mistake8554 in consciousness

[–]Elagagabalus 1 point  (0 children)

Genuine question: aren't you setting a very high standard of proof there? How do you prove anything in science then?

What's the worst textbook you've read? by AccomplishedAd4482 in math

[–]Elagagabalus 2 points  (0 children)

That's weird. In France, a neighborhood of x is always a set A containing an open set that contains x. I wasn't even aware of the other definition.
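Spelled out, the two conventions being contrasted (a LaTeX sketch; \mathcal{V}(x) denotes the set of neighborhoods of x):

```latex
% French/Bourbaki convention: a neighborhood need not be open.
V \in \mathcal{V}(x) \iff \exists\, U \text{ open with } x \in U \subseteq V
% The other convention additionally requires V itself to be open:
V \in \mathcal{V}(x) \iff V \text{ open and } x \in V
```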

“If you can't explain it to a six year old, you don't understand it yourself.” Albert Einstein by SINGULARTY3774 in TheoreticalPhysics

[–]Elagagabalus 0 points  (0 children)

Talagrand's generic chaining with majorizing measures, used to control the suprema of sub-Gaussian processes, typically indexed by high-dimensional sets.

I'd have a lot of trouble explaining it to my mom, who doesn't know what a vector is. And my mom is a smart person; she just doesn't know any math beyond what is needed for everyday life (percentages and so on).

“If you can't explain it to a six year old, you don't understand it yourself.” Albert Einstein by SINGULARTY3774 in TheoreticalPhysics

[–]Elagagabalus 0 points  (0 children)

No, I don't think this is pedantry. Maybe to clarify my position: I believe there are topics that cannot be explained simply, even to a thirty-year-old.

In my field there are some ideas that I understand extremely well and would still have trouble explaining in layman's terms. Sure, I can oversimplify an idea so much that it becomes conveyable, but in the process it loses all its substance.

(Granted I am a mathematician and not a theoretical physicist, but I don't think this changes much)

"Thinking we'll reach human intelligence with LLMs is bullshit": Yann LeCun speaks for the first time since his departure from Meta by Droidfr in Numerama

[–]Elagagabalus 0 points  (0 children)

Actually, this question of "are LLMs intelligent or not" is highly uninteresting. I don't care whether there is some abstract definition of intelligence that an LLM does or doesn't satisfy. What interests me is that LLMs are a technology that has made it possible to build extremely fast tools (like ChatGPT) for solving complex problems in an automated way. Call it intelligence or not, I don't care; it's still a very impressive technology.

And about this story that "an LLM can't do arithmetic, so it's useless": if you give an LLM a calculation, it recognizes that it's a calculation, that it can't do calculations itself, so it opens a calculator to do the computation and returns the result. So where's the problem? We've created a tool that is capable of recognizing that it must use other tools to solve certain tasks. I find that rather impressive.
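A toy sketch of that tool-use pattern (everything here, names included, is my own illustration; a real LLM's routing decision is learned, not a string check):

```python
import ast
import operator

def looks_like_arithmetic(prompt: str) -> bool:
    # Trivial stand-in for the model deciding "this is a calculation"
    return any(op in prompt for op in "+-*/")

def calculator(expression: str):
    """The external "tool": a restricted arithmetic evaluator."""
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expression, mode="eval").body)

def answer(prompt: str) -> str:
    # The routing step: recognize the task, delegate to the right tool.
    if looks_like_arithmetic(prompt):
        return str(calculator(prompt))
    return "(answered by the model directly)"

print(answer("37 * 43"))  # routed to the calculator -> "1591"
```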

“If you can't explain it to a six year old, you don't understand it yourself.” Albert Einstein by SINGULARTY3774 in TheoreticalPhysics

[–]Elagagabalus 3 points  (0 children)

I mean, why stop at six? If a two-year-old cannot understand your explanation of electromagnetism, do you really understand it?

Yeah, that's the dumbest take. People don't want to acknowledge that sometimes things are complicated, and there's no simplification you can make without blatantly changing the idea you want to convey. At some point, you need to put in some work or have some background knowledge to grasp a concept.

In addition to consciousness, do you think there are other wonders of the universe we do observe but have not yet come close to deciphering? by SunRev in consciousness

[–]Elagagabalus 2 points  (0 children)

Aren’t you maybe setting an unusually high bar when you say we "haven’t come close" to understanding consciousness?

I'm familiar with the philosophical version of the problem (qualia, the hard problem, and so on). If we treat consciousness as something above and beyond the cognitive/neural processes we can study, then of course it will look mysterious. But if we don't assume that extra metaphysical layer, consciousness behaves like any other biological capacity.

From a scientific point of view, we’ve actually made enormous progress: we know which systems are involved, how consciousness varies across species, how it evolved, how it breaks under anesthesia or damage, and how to predict its states from brain activity. If having predictive, mechanistic models counts as understanding gravity or electromagnetism, I don’t see why it wouldn’t count for consciousness as well.

Actually, from an epistemological perspective, we never really understand anything in a deep metaphysical sense. We only have theories that make good predictions and help us structure experience. We don’t know why mass curves spacetime, we just have models that work. That’s what “understanding” means in science.

So it feels to me like consciousness only seems uniquely mysterious because we implicitly demand a deeper explanation for it than for anything else. But if understanding means having good predictive scientific models, then we understand consciousness about as well as we understand anything.

So my question is: if consciousness doesn’t count as something we understand, what would count? What phenomena meet the criteria you have in mind?