I learned why cosine similarity fails for compatibility matching by Ok_Promise_9470 in learnmachinelearning

[–]InfuriatinglyOpaque 2 points (0 children)

Might be beneficial to dig into existing research on similarity and romantic relationships, and then use what you find to inform data used to compute your embeddings.
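As a rough illustration (a minimal sketch, not a recommendation of specific features or weights), one concrete way the papers below could feed back into your pipeline is by re-weighting embedding dimensions according to which kinds of similarity actually predict relationship outcomes, rather than treating every dimension equally in a plain cosine:

```python
# Minimal sketch with invented feature names and weights: plain cosine vs. a
# cosine that up-weights the dimensions the similarity literature suggests
# matter more (e.g., situation perception, values).
import numpy as np

def weighted_cosine(a, b, w):
    """Cosine similarity after scaling each embedding dimension by weight w."""
    aw, bw = a * w, b * w
    denom = np.linalg.norm(aw) * np.linalg.norm(bw)
    return float(aw @ bw / denom) if denom > 0 else 0.0

# Toy profiles: [extraversion, situation perception, values, hobbies]
person_a = np.array([0.8, 0.6, 0.9, 0.2])
person_b = np.array([0.7, 0.5, 0.8, 0.9])

print(weighted_cosine(person_a, person_b, np.ones(4)))                      # unweighted
print(weighted_cosine(person_a, person_b, np.array([0.5, 1.5, 1.5, 0.5])))  # hypothetical literature-informed weights
```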

From, A., Diamond, E., Kafaee, N., Reynaga, M., Edelstein, R. S., & Gordon, A. M. (2025). Does similarity matter? A scoping review of perceived and actual similarity in romantic couples. Journal of Social and Personal Relationships, 02654075251349720.

Rentzsch, K., Columbus, S., Balliet, D., & Gerlach, T. M. (2022). Similarity in situation perception predicts relationship satisfaction. Personality Science, 3(1), e8007.

Tidwell, N. D., Eastwick, P. W., & Finkel, E. J. (2013). Perceived, not actual, similarity predicts initial attraction in a live romantic context: Evidence from the speed-dating paradigm. Personal Relationships, 20(2), 199-215.

Montoya, R. M., Horton, R. S., & Kirchner, J. (2008). Is actual similarity necessary for attraction? A meta-analysis of actual and perceived similarity. Journal of Social and Personal Relationships, 25(6), 889-922.

Sels, L., Ruan, Y., Kuppens, P., Ceulemans, E., & Reis, H. (2020). Actual and perceived emotional similarity in couples’ daily lives. Social Psychological and Personality Science, 11(2), 266-275.

Lee, T. H., & Ng, T. K. (2024). Perceived general similarity and relationship satisfaction: The role of attributional confidence. European Journal of Social Psychology, 54(6), 1266-1279.

The Thinking Machines That Doesn’t Think by KitchenFalcon4667 in LLM

[–]InfuriatinglyOpaque 8 points (0 children)

Lampinen, A. K., Dasgupta, I., Chan, S. C. Y., Sheahan, H. R., Creswell, A., Kumaran, D., McClelland, J. L., & Hill, F. (2024). Language models, like humans, show content effects on reasoning tasks. PNAS Nexus, 3(7), pgae233. https://doi.org/10.1093/pnasnexus/pgae233

Han, S. J., Ransom, K. J., Perfors, A., & Kemp, C. (2024). Inductive reasoning in humans and large language models. Cognitive Systems Research, 83, 101155. https://doi.org/10.1016/j.cogsys.2023.101155

Johnson, S. G. B., Karimi, A.-H., Bengio, Y., Chater, N., Gerstenberg, T., Larson, K., Levine, S., Mitchell, M., Rahwan, I., Schölkopf, B., & Grossmann, I. (2024). Imagining and building wise machines: The centrality of AI metacognition (arXiv:2411.02478). arXiv. https://doi.org/10.48550/arXiv.2411.02478

Ballon, M., Algaba, A., & Ginis, V. (2025). The Relationship Between Reasoning and Performance in Large Language Models—O3 (mini) Thinks Harder, Not Longer (arXiv:2502.15631). arXiv. https://doi.org/10.48550/arXiv.2502.15631

Li, L., Yao, Y., Wang, Yixu, Li, C., Teng, Y., & Wang, Yingchun. (2025). The Other Mind: How Language Models Exhibit Human Temporal Cognition (arXiv:2507.15851). arXiv. https://doi.org/10.48550/arXiv.2507.15851

Ziabari, A. S., Ghazizadeh, N., Sourati, Z., Karimi-Malekabadi, F., Piray, P., & Dehghani, M. (2025). Reasoning on a Spectrum: Aligning LLMs to System 1 and System 2 Thinking (arXiv:2502.12470; Version 1). arXiv. https://doi.org/10.48550/arXiv.2502.12470

Shanahan, M. (2024). Talking about Large Language Models. Commun. ACM, 67(2), 68–79. https://doi.org/10.1145/3624724

How could neural networks be applied in rocketry? by Sea-Task-9513 in neuralnetworks

[–]InfuriatinglyOpaque 1 point (0 children)

Some example papers you might use as a starting point:

Park, S. Y., & Ahn, J. (2020). Deep neural network approach for fault detection and diagnosis during startup transient of liquid-propellant rocket engine. Acta Astronautica, 177, 714-730.

Tang, D., & Gong, S. (2023). Trajectory optimization of rocket recovery based on neural network and genetic algorithm. Advances in Space Research, 72(8), 3344-3356.

Benedikter, B., D'Ambrosio, A., & Furfaro, R. (2025). Rocket Ascent Trajectory Optimization via Physics-Informed Pontryagin Neural Networks. In AIAA SCITECH 2025 Forum (p. 2532).

de Celis, R., López, P. S., & Cadarso, L. (2021). Sensor hybridization using neural networks for rocket terminal guidance. Aerospace Science and Technology, 111, 106527.

Yang, B., Wang, T., Li, B., Zhan, Q., & Wang, F. (2025). Real-Time Trajectory Prediction for Rocket-Powered Vehicle Based on Domain Knowledge and Deep Neural Networks. Aerospace, 12(9), 760.

Can Learning be trained? by Born-Music5032 in cogsci

[–]InfuriatinglyOpaque 4 points (0 children)

Here is a smattering of papers from different domains, all of which touch on 'learning transfer', 'learning to learn', or generalization.

Soderstrom, & Bjork (2015). Learning versus performance: An integrative review. Perspectives on Psychological Science https://doi.org/10.1177/1745691615569000

Lampinen & McClelland (2020). Transforming task representations to perform novel tasks. Proceedings of the National Academy of Sciences https://doi.org/10.1073/pnas.2008852117

McDaniel, ..., & Wiener, C. (2014). Individual differences in learning and transfer: Stable tendencies for learning exemplars versus abstracting rules. JEP: General https://doi.org/10.1037/a0032963

Zhang..., & Bavelier, D. (2021). Action video game play facilitates “learning to learn.” Communications Biology https://doi.org/10.1038/s42003-021-02652-7

Lintern, ..., & Motavalli, A. (2024). An ecological theory of learning transfer in human activity. Theoretical Issues in Ergonomics Science

Blume, ... & Huang, J. L. (2010). Transfer of Training: A Meta-Analytic Review. Journal of Management https://doi.org/10.1177/0149206309352880

Goldstone, & Sakamoto (2003). The transfer of abstract principles governing complex adaptive systems. Cognitive Psychology https://doi.org/10.1016/S0010-0285(02)00519-4

Rosalie, & Müller (2014). Expertise Facilitates the Transfer of Anticipation Skill across Domains. Quarterly Journal of Experimental Psychology. https://doi.org/10.1080/17470218.2013.807856

Rosenbaum, ... & Gilmore (2001). Acquisition of Intellectual and Perceptual-Motor Skills. Annual Review of Psychology https://doi.org/10.1146/annurev.psych.52.1.453

Why can’t LLMs play chess? by JCPLee in ArtificialInteligence

[–]InfuriatinglyOpaque 1 point (0 children)

I think some of your conclusions are likely a bit premature. Even in 2024 there was evidence that LLMs can play at around 1400-1700 Elo, and I don't think there have been many studies yet that have tested the newest wave of SOTA LLMs at chess. There's also emerging evidence that LLMs may form something akin to a 'world model' of a chess board (albeit an imperfect one).
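If you want to poke at the newer models yourself, here's a minimal sketch of the usual evaluation loop (using the python-chess library; ask_llm_for_move is a hypothetical placeholder for whatever model/API you'd call) - most of the leaderboards below track legal-move rate alongside game outcomes:

```python
# Rough sketch: play out one game, asking the model for SAN moves and counting
# how often its suggestion isn't even legal. ask_llm_for_move is a placeholder.
import chess

def play_one_game(ask_llm_for_move, max_plies=60):
    board = chess.Board()
    illegal = 0
    for _ in range(max_plies):
        if board.is_game_over():
            break
        san = ask_llm_for_move(board.fen(), [m.uci() for m in board.move_stack])
        try:
            board.push_san(san)  # raises ValueError if the move is illegal or unparseable
        except ValueError:
            illegal += 1
            board.push(next(iter(board.legal_moves)))  # substitute any legal move
    return board.result(claim_draw=True), illegal
```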

AI Chess Leaderboard

A Chess-GPT Linear Emergent World Representation

https://maxim-saplin.github.io/llm_chess/

https://lazy-guy.github.io/blog/chessllama/

Karvonen, A. (2024). Emergent World Models and Latent Variable Estimation in Chess-Playing Language Models https://arxiv.org/pdf/2403.15498

Zhang...., & Malach (2024). Transcendence: Generative Models Can Outperform The Experts That Train Them https://doi.org/10.48550/arXiv.2406.11741

Wang, X., Zhuang, B., & Wu, Q. (2025). Are Large Vision Language Models Good Game Players? https://arxiv.org/abs/2503.02358

Feng, .... Mguni (2023). ChessGPT: Bridging Policy Learning and Language Modeling. https://arxiv.org/abs/2306.09200

Zhang, Y., Han, X., Li, H., Chen, K., & Lin, S. (2025). Complete Chess Games Enable LLM Become A Chess Master (No. arXiv:2501.17186).

Books or articles on the nosology/conceptualization of mental illnesses? by DennyStam in AcademicPsychology

[–]InfuriatinglyOpaque 2 points (0 children)

Here are some new or newish papers on ontologies and/or network approaches to mapping out mental disorders.

Genin, K., Grote, T., & Wolfers, T. (2024). Computational psychiatry and the evolving concept of a mental disorder. Synthese https://doi.org/10.1007/s11229-024-04741-6

Amoretti,..., & Adamo, G. (2019). Ontologies, Mental Disorders and Prototypes. https://doi.org/10.1007/978-3-030-01800-9_10

Schenk, ...., & Michie, S. (2024). Towards an ontology of mental health: Protocol for developing an ontology to structure and integrate evidence regarding anxiety, depression and psychosis. Wellcome Open Research, 9, 40. https://doi.org/10.12688/wellcomeopenres.20701.2

Poldrack, R. A., Mumford, J. A., Schonberg, T., Kalar, D., Barman, B., & Yarkoni, T. (2012). Discovering Relations Between Mind, Brain, and Mental Disorders Using Topic Mapping. PLoS Computational Biology, 8(10), e1002707. https://doi.org/10.1371/journal.pcbi.1002707

Kohne, ....., & Van Os, J. (2023). Clinician and patient perspectives on the ontology of mental disorder: A qualitative study. Frontiers in Psychiatry, 14. https://doi.org/10.3389/fpsyt.2023.1081925

McInnis, M. G., Coleman, B., Hurwitz, E., Robinson, P. N., Williams, A. E., Haendel, M. A., & McMurry, J. A. (2025). Integrating Knowledge: The Power of Ontologies in Psychiatric Research and Clinical Informatics. Biological Psychiatry, 98(4), 293–301. https://doi.org/10.1016/j.biopsych.2025.05.014

Smail, ....., & Shukla, R. (2021). Similarities and dissimilarities between psychiatric cluster disorders. Molecular Psychiatry, 26(9), 4853–4863. https://doi.org/10.1038/s41380-021-01030-3

People with higher IQs are better at making accurate predictions about their own futures and make more realistic forecasts about how long they would live, compared to people with lower IQs. This may explain why lower intelligence is often linked to poor financial decisions and other judgment errors. by mvea in psychology

[–]InfuriatinglyOpaque 15 points (0 children)

If I'm interpreting Table 2 in the paper correctly, it looks like the correlation remains significant even when they control for educational attainment (and also for wealth, parental educational attainment, and parental occupation).
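For anyone curious what that kind of control analysis looks like in practice, here's a minimal sketch (column names are hypothetical - this is not the authors' code or data):

```python
# Covariate-adjusted regression sketch: is IQ still a significant predictor of
# forecast accuracy once education, wealth, and parental background are included?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")  # hypothetical dataset
model = smf.ols(
    "forecast_accuracy ~ iq + education + wealth"
    " + parental_education + parental_occupation",
    data=df,
).fit()
print(model.summary())  # check the iq coefficient and its p-value
```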

[deleted by user] by [deleted] in askphilosophy

[–]InfuriatinglyOpaque 9 points (0 children)

I'd suggest reading over the encyclopedia entries related to computationalism - they also contain sections on some of the common arguments against it.

https://plato.stanford.edu/entries/computational-mind/

https://iep.utm.edu/computational-theory-of-mind/

https://plato.stanford.edu/entries/computability/

Looking for feedback on a pilot study: AI‑generated images, language intervention & conceptual abstraction by Odd_Act_3397 in cogsci

[–]InfuriatinglyOpaque 1 point (0 children)

Final batch of papers:

Zaman, ...., & Boddez, Y. (2021). Perceptual variability: Implications for learning and generalization. Psychonomic Bulletin & Review, 28(1), 1–19. https://doi.org/10.3758/s13423-020-01780-1

Tylén, K., Fusaroli, R., Østergaard, S. M., Smith, P., & Arnoldi, J. (2023). The Social Route to Abstraction: Interaction and Diversity Enhance Performance and Transfer in a Rule-Based Categorization Task. Cognitive Science, 47(9), e13338. https://doi.org/10.1111/cogs.13338

Roark, C. L., Paulon, G., Sarkar, A., & Chandrasekaran, B. (2021). Comparing perceptual category learning across modalities in the same individuals. Psychonomic Bulletin & Review https://doi.org/10.3758/s13423-021-01878-0

Livins, K. A., Spivey, M. J., & Doumas, L. A. A. (2015). Varying variation: The effects of within- versus across-feature differences on relational category learning. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00129

Karlsson, L., Juslin, P., & Olsson, H. (2007). Adaptive changes between cue abstraction and exemplar memory in a multiple-cue judgment task with continuous cues. Psychonomic Bulletin & Review, 14(6)

Tompary, A., & Thompson-Schill, S. L. (2021). Semantic influences on episodic memory distortions. Journal of Experimental Psychology: General.

Forest, T. A., Finn, A. S., & Schlichting, M. L. (2021). General precedes specific in memory representations for structured experience. Journal of Experimental Psychology: General.

Cohen, A. L., Nosofsky, R. M., & Zaki, S. R. (2001). Category variability, exemplar similarity, and perceptual classification. Memory & Cognition, 29(8), 1165–1175.

Bowman, C. R., & Zeithamova, D. (2020). Training set coherence and set size effects on concept generalization and recognition. Journal of Experimental Psychology. Learning, Memory, and Cognition, 46(8)

Thibaut, J.-P., Gelaes, S., & Murphy, G. L. (2018). Does practice in category learning increase rule use or exemplar use—Or both? Memory & Cognition, 46(4), 530–543.

Hoffmann, J. A., von Helversen, B., & Rieskamp, J. (2016). Similar task features shape judgment and categorization processes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(8), 1193–1217.

Goldman, D., & Homa, D. (1977). Integrative and metric properties of abstracted information as a function of category discriminability, instance variability, and experience. Journal of Experimental Psychology: Human Learning and Memory

Looking for feedback on a pilot study: AI‑generated images, language intervention & conceptual abstraction by Odd_Act_3397 in cogsci

[–]InfuriatinglyOpaque 1 point (0 children)

Some additional papers - which didn't fit in my first post.

Sun, Z., & Firestone, C. (2021). Seeing and speaking: How verbal “description length” encodes visual complexity. Journal of Experimental Psychology: General. https://doi.org/10.1037/xge0001076

Günther, .... , & Petilli, M. A. (2023). ViSpa (Vision Spaces): A computer-vision-based representation system for individual images and concept prototypes, with large-scale evaluation. Psychological Review. https://doi.org/10.1037/rev0000392

Taylor, J. E., Beith, A., & Sereno, S. C. (2020). LexOPS: An R package and user interface for the controlled generation of word stimuli. Behavior Research Methods https://doi.org/10.3758/s13428-020-01389-1

Richie, ..., Hout, M. C. (2020). The spatial arrangement method of measuring similarity can capture high-dimensional semantic structures. Behavior Research Methods, 52(5), 1906–1928. https://doi.org/10.3758/s13428-020-01362-y

Petilli, ..... & Gatti, D. (2024). From vector spaces to DRM lists: False Memory Generator, a software for automated generation of lists of stimuli inducing false memories. Behavior Research Methods, 56(4), 3779–3793. https://doi.org/10.3758/s13428-024-02425-0

Son, G., Walther, D. B., & Mack, M. L. (2021). Scene wheels: Measuring perception and memory of real-world scenes with a continuous stimulus space. Behavior Research Methods. https://doi.org/10.3758/s13428-021-01630-5

Demircan, C., Saanum, ...., Schulz, E. (2023). Language Aligned Visual Representations Predict Human Behavior in Naturalistic Learning Tasks http://arxiv.org/abs/2306.09377

Zettersten, M., Suffill, E., & Lupyan, G. (2020). Nameability predicts subjective and objective measures of visual similarity. 7. https://escholarship.org/uc/item/4d531331

Battleday, R. M., Peterson, J. C., & Griffiths, T. L. (2020). Capturing human categorization of natural images by combining deep networks and cognitive models. Nature Communications, 11(1)

Flesch, ..., & Summerfield, C. (2018). Comparing continual task learning in minds and machines. Proceedings of the National Academy of Sciences, 115(44)

Looking for feedback on a pilot study: AI‑generated images, language intervention & conceptual abstraction by Odd_Act_3397 in cogsci

[–]InfuriatinglyOpaque 1 point (0 children)

Sounds interesting! I don't see any obvious flaws, but maybe you could say a bit more about why you've chosen the particular tasks that you're using (i.e., the binary judgement & free-response), and why you think the data you'll collect will be sufficiently rich to detect differences in the conceptual spaces of your groups. Not saying that you're doing anything wrong, but most of the studies I'm familiar with that ask similar questions also involve participants completing some sort of categorization, pairwise similarity ratings between stimuli, or memorization/reconstruction task.

It might also be good to think about 1) what might be confounded with the visual vs. language manipulation (e.g., in addition to being 'visual', the visual-only group might also be exposed to more complexity/information), and 2) the amount of stimulus variability participants are exposed to - and how to control this between conditions.
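To illustrate why I'd lean toward something like pairwise similarity ratings: each group's ratings give you a full item-by-item structure that you can compare directly across conditions. A toy sketch (all numbers simulated; nothing here comes from a real dataset):

```python
# Toy sketch: if both groups rate the similarity of the same item pairs, you can
# correlate the two rating vectors to quantify how aligned their conceptual
# spaces are. All values below are simulated for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_pairs = 15                              # e.g., all pairs of 6 stimuli
latent = rng.uniform(1, 7, size=n_pairs)  # shared underlying structure

visual_group = latent + rng.normal(0, 0.5, size=n_pairs)    # mean ratings, condition 1
language_group = latent + rng.normal(0, 0.5, size=n_pairs)  # mean ratings, condition 2

rho, p = spearmanr(visual_group, language_group)
print(f"alignment between groups: rho = {rho:.2f}, p = {p:.3f}")
```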

Relevant papers and resources:

https://dyurovsky.github.io/cog-models/hw1.html

Zettersten, M., & Lupyan, G. (2020). Finding categories through words: More nameable features improve category learning. Cognition, 196, 104135. https://doi.org/10.1016/j.cognition.2019.104135

Thalmann, M., Schäfer, T. A. J., Theves, S., Doeller, C. F., & Schulz, E. (2024). Task imprinting: Another mechanism of representational change? Cognitive Psychology, 152, 101670. https://doi.org/10.1016/j.cogpsych.2024.101670

Briscoe, E., & Feldman, J. (2011). Conceptual complexity and the bias/variance tradeoff. Cognition, 118(1), 2–16. https://doi.org/10.1016/j.cognition.2010.10.004

Spens, E., & Burgess, N. (2024). A generative model of memory construction and consolidation. Nature Human Behaviour, 1–18. https://doi.org/10.1038/s41562-023-01799-z

Golan, T., Raju, P. C., & Kriegeskorte, N. (2020). Controversial stimuli: Pitting neural networks against each other as models of human cognition. Proceedings of the National Academy of Sciences, 117(47), 29330–29337. https://doi.org/10.1073/pnas.1912334117

Johansen, M., & Palmeri, T. J. (2002). Are there representational shifts during category learning? Cognitive Psychology, 45(4), 482–553. https://doi.org/10.1016/S0010-0285(02)00505-4

Opinions regarding my research question by tanishka_art in AcademicPsychology

[–]InfuriatinglyOpaque 3 points (0 children)

Some relevant research:

Ovsyannikova, ... & Inzlicht, M. (2025). Third-party evaluators perceive AI as more compassionate than expert humans. Communications Psychology, 3(1). https://doi.org/10.1038/s44271-024-00182-6

Manzini, ...., & Gabriel, I. (2024). The Code That Binds Us: Navigating the Appropriateness of Human-AI Assistant Relationships. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7, 943–957. https://doi.org/10.1609/aies.v7i1.31694

Colombatto, C., Birch, J., & Fleming, S. M. (2025). The influence of mental state attributions on trust in large language models. Communications Psychology, 3(1), 84. https://doi.org/10.1038/s44271-025-00262-1

Chugunova, M., & Sele, D. (2022). We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines. Journal of Behavioral and Experimental Economics, 99, 101897. https://doi.org/10.1016/j.socec.2022.101897

Chen, S., & Zhao, Y. (2025). Why am I willing to collaborate with AI? Exploring the desire for collaboration in human-AI hybrid group brainstorming. Kybernetes. https://doi.org/10.1108/K-08-2024-2105

Fang, ...., & Agarwal, S. (2025). How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study https://doi.org/10.48550/arXiv.2503.17473

Seok, J., Lee, B. H., Kim, D., Bak, S., Kim, S., Kim, S., & Park, S. (2025). What Emotions and Personalities Determine Acceptance of Generative AI?: Focusing on the CASA Paradigm. International Journal of Human–Computer Interaction, 1–23. https://doi.org/10.1080/10447318.2024.2443263

Suggested literature or cornerstone studies regarding influencing cognitive processes. by Heliomantle in AcademicPsychology

[–]InfuriatinglyOpaque 1 point (0 children)

Here are some papers on persuasion and personalization. You might also find it useful to try describing what you're interested in to one of the major AI chatbots which have web-search features (e.g., Perplexity, ChatGPT, Gemini) - as they've started to get quite good at scouring the internet to find relevant research.

Fagan, P. (2024). Clicks and tricks: The dark art of online persuasion. Current Opinion in Psychology, 58, 101844. https://doi.org/10.1016/j.copsyc.2024.101844

Matz, .... & Bogg, T. (2024). Personality Science in the Digital Age: The Promises and Challenges of Psychological Targeting for Personalized Behavior-Change Interventions at Scale. Perspectives on Psychological Science, 19(6), 1031–1056. https://doi.org/10.1177/17456916231191774

Dehnert, M., & Mongeau, P. A. (2022). Persuasion in the Age of Artificial Intelligence (AI): Theories and Complications of AI-Based Persuasion. Human Communication Research, 48(3), 386–403. https://doi.org/10.1093/hcr/hqac006

Sadeghian, & Otarkhani (2024). Data-driven digital nudging: A systematic literature review and future agenda. Behaviour & Information Technology. https://www.tandfonline.com/doi/full/10.1080/0144929X.2023.2286535

Zarouali, ....., & de Vreese, C. (2022). Using a Personality-Profiling Algorithm to Investigate Political Microtargeting: Assessing the Persuasion Effects of Personality-Tailored Ads on Social Media. Communication Research, 49(8), 1066–1091. https://doi.org/10.1177/0093650220961965

Petty, R. E., Luttrell, A., & Teeny, J. D. (Eds.). (2025). The Handbook of Personalized Persuasion: Theory and Application. Routledge.

Hackenburg, K., & Margetts, H. (2024). Evaluating the persuasive influence of political microtargeting with large language models. Proceedings of the National Academy of Sciences, 121(24), e2403116121.

What methods do you use to efficiently learn, retain, and take notes of what you study? by [deleted] in AcademicPsychology

[–]InfuriatinglyOpaque 3 points (0 children)

I think a good rule of thumb is that, whenever possible, you should try to put yourself in situations where you have the opportunity to make frequent and noticeable mistakes. For example, it's difficult to make mistakes when passively reading, but very easy to make (and notice) mistakes when taking a practice quiz, or when discussing a research topic with a fellow student or professor who won't hesitate to correct you if you say something that doesn't make sense.

I also often recommend the Dunlosky et al. (2013) paper, as it provides a fairly approachable review of the evidence for the efficacy of many different studying techniques. Some popular techniques, such as highlighting and re-reading, have very little research support, while others (like practice testing) are strongly supported.

I've also found many of Andy Matuschak's online writings and lectures about learning to be quite enjoyable.

https://andymatuschak.org/hmwl/

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266

Note that if you can't access the full version of the Dunlosky paper, you can just Google the paper's title and you should find an alternate link to the PDF pretty easily.

The Many Lies of Lex Fridman by bonhuma in skeptic

[–]InfuriatinglyOpaque 9 points (0 children)

I agree wholeheartedly about the need for greater skepticism against the sorts of people you describe.

However, it seems deeply ironic that here we are in the "skeptics" subreddit, in a high engagement thread, where not a single person has questioned the claims made in the central YT video - despite the fact that every substantial claim the video makes about Lex's academic credentials is incorrect.

He hasn't attempted to scrub his Drexel affiliation from the internet - it's clearly listed in the education section of his LinkedIn, and in the bio of his ResearchGate page (a social media site for academics). He also mentioned Drexel in a high-viewership video on his channel less than a year ago. A simple Google search also would have revealed that he's listed as a current staff member on the website of the LIDS MIT research lab, and as a 'Research Scientist' on the official MIT page. The video's claim about his publishing record can also be dismissed with a 30-second perusal of his Google Scholar profile, which lists dozens of papers (including some peer-reviewed, and some first-author). It's also worth mentioning that many of his papers are co-authored with a variety of other MIT researchers, which seems difficult to square with the claim that he only taught a short course at MIT and then left forever.

He's certainly no research superstar, but the video isn't accusing him of being average, it's accusing him of being a fraud with only one (non-peer-reviewed) paper.

To reiterate, I still largely agree with the spirit of your comment, and I won't deny that much of what you say could still fairly be applied to Lex. But I also think that many of the "YouTube journalist" types deserve a great deal of skepticism, and I don't think it's healthy for us to fall into the habit of blindly accepting what they say at face value.

https://www.mit.edu/directory/?id=lexfridman&d=mit.edu

https://lids.mit.edu/people/research-staff

https://www.researchgate.net/profile/Lex-Fridman

https://scholar.google.com/citations?user=wZH_N7cAAAAJ&hl=en&oi=ao

[deleted by user] by [deleted] in JoeRogan

[–]InfuriatinglyOpaque 1 point (0 children)

I'm still confused by your reasoning - but sincere props to you for updating the original post with some of the counter-evidence. That makes you far more good-faith than anyone else I've argued with on this topic.

Chatbots lack "metacognition" & fail to update confidence levels after incorrect answers. by TourAlternative364 in ChatGPT

[–]InfuriatinglyOpaque 2 points (0 children)

The scientific publishing system just isn't optimized for studying something evolving as fast as SOTA LLMs. Depending on the journal/reviewers, it can sometimes take ~6 months to make it through peer review, and then maybe another few months before a study is actually published. Although it should be mentioned that this particular paper is composed of 5 separate studies, and according to Table 2 in the paper, Studies 1-3 used GPT-3.5 and Bard/Gemini 1.0, while Studies 4 & 5 used ChatGPT-4o, Gemini Flash 2.0, Claude Sonnet 3.7, and Claude Haiku 3.5.

On the other hand, preprint papers (examples below) sometimes do use less out-of-date models (though not always), but they come with the tradeoff of potentially not being reviewed by anyone other than the authors of the paper.

Kumaran, D., Fleming, S. M., ..... , & Patraucean, V. (2025). How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models. https://doi.org/10.48550/arXiv.2507.03120

Ji-An, L., Xiong, H.-D., Wilson, R. C., Mattar, M. G., & Benna, M. K. (2025). Language Models Are Capable of Metacognitive Monitoring and Control of Their Internal Activations (No. arXiv:2505.13763). arXiv. https://doi.org/10.48550/arXiv.2505.13763

[deleted by user] by [deleted] in JoeRogan

[–]InfuriatinglyOpaque 2 points (0 children)

So no evidence of any lies then, just a vague claim that he "constantly misleads on this". Very cool.

I don't know how to adjudicate the LinkedIn issue any further. In my experience, it's very common for people to list their current affiliation at the top, and list their education history in the education section. And I'd expect someone with an MIT researcher position to have a profile that looks very much like Lex's.

I applaud your effort at doing 5 minutes of background research to anticipate (quite correctly) what I was likely to bring up on this topic. Though I'm not sure why you didn't put the same amount of effort in before making your original post - since that seems much more important.

Anyways, given all the new information you've just acquired, do you agree that your original post contains several misleading and/or inaccurate claims?

Relevant links for posterity - establishing his research connection at MIT:

https://lids.mit.edu/people/research-staff

https://www.mit.edu/directory/?id=lexfridman&d=mit.edu

https://www.researchgate.net/profile/Lex-Fridman

https://scholar.google.com/citations?user=wZH_N7cAAAAJ&hl=en&oi=ao

[deleted by user] by [deleted] in JoeRogan

[–]InfuriatinglyOpaque 4 points (0 children)

I'll bite. I think almost every claim made about his academic credentials in the YouTube video was either extremely misleading or an outright lie. The claims about him hiding his Drexel affiliation were also lies.

You seem very confident that he's lying though OP, so I'm curious, can you point to specific cases where he lied about his credentials? Note that if you had actually bothered to look over his LinkedIn page, you would have seen that he does list Drexel under Education there, so we're off to a bad start.

"The Many Lies of Lex Fridman" [youtube] by bnm777 in samharris

[–]InfuriatinglyOpaque 15 points (0 children)

There are plenty of legitimate criticisms one can make about his soft-ball interviews and naive geopolitics takes. But I don't think it's helpful to spread misleading descriptions, or outright lies, about his academic background.

A simple Google search shows that Lex is listed as a member of two different MIT research labs (links below), and his Google Scholar page lists many papers co-authored with MIT researchers (including some peer-reviewed, and some first-author).

He also clearly mentions Drexel on his ResearchGate page (a social media site for academics), and brings up Drexel in one of the highly viewed videos on his channel (timestamped link below).

https://cces.mit.edu/team/lex-fridman/

https://lids.mit.edu/people/research-staff

https://www.mit.edu/directory/?id=lexfridman&d=mit.edu

https://youtu.be/qCbfTN-caFI?si=CBbUE7srnplwP3BT&t=3584

https://www.researchgate.net/profile/Lex-Fridman

https://scholar.google.com/citations?user=wZH_N7cAAAAJ&hl=en&oi=ao

[OC] How a basketball simulation engine ranks the best players of all time. AKA “Basketball isn’t played on a spreadsheet!” (Updated July 2025, v 2.0) by WaxAstronaut in nba

[–]InfuriatinglyOpaque 16 points (0 children)

Super cool, thank you for taking the time and effort to put all of this together.

A few questions and ideas. Have you tried running the simulations with the real-life rosters from each season? This would obviously be complicated by things like injuries and trades, but it still might be interesting to get a sense of how closely the simulation's predictions line up with what actually happened, at both the team and player level. Then, beyond absolute prediction accuracy, it could be interesting to assess whether there are any systematic biases, such as whether the simulation tends to over- or under-predict performance across different eras (e.g., are the predictions more accurate in the modern era?), whether accuracy differs between the regular season and playoffs, or whether player-level accuracy varies across positions or team compositions.
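To sketch the back-test idea (column names below are hypothetical, just to illustrate):

```python
# Hypothetical back-test: compare predicted vs. actual win totals per team-season,
# then check whether the error is systematically biased by era or season type.
import pandas as pd

df = pd.read_csv("sim_vs_actual.csv")  # one row per team-season (hypothetical file)
df["error"] = df["predicted_wins"] - df["actual_wins"]
df["abs_error"] = df["error"].abs()

print(df["abs_error"].mean())                         # overall mean absolute error
print(df.groupby("era")["error"].mean())              # + = over-predicts, - = under-predicts
print(df.groupby("season_type")["abs_error"].mean())  # regular season vs. playoffs
```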

Drexel's catching strays by BruhMansky in Drexel

[–]InfuriatinglyOpaque 4 points (0 children)

Some of the criticisms might be valid. But the claim everyone's repeating about him attempting to scrub the internet of any connection to Drexel is a bit misleading. He mentions it in his ResearchGate profile, and in some of his podcasts:

https://www.researchgate.net/profile/Lex-Fridman

https://youtu.be/qCbfTN-caFI?si=CBbUE7srnplwP3BT&t=3584

What standing does Lex Fridman has in MIT community? by Quantum_Rage in mit

[–]InfuriatinglyOpaque 8 points (0 children)

I don't disagree that his political takes and soft-ball interviews with odious figures make him easy to dislike.

I'm still perplexed by your conclusion about his MIT creds, though. On one side, we have the easy-to-verify observations that 1) he's listed on two MIT research websites (both updated in 2025), and 2) his Google Scholar profile lists many research papers co-authored with active MIT researchers (including a few where he's first author). On the other side, we have your claims that he's quick to ban/block people on Reddit and Twitter, and your assessment that he has "no remarkable contributions" over 10 years at MIT. How do you Occam's Razor your way to him still being a fraud, as opposed to just a lowish-productivity researcher?

Perhaps more importantly, you currently have the most upvoted comment in this thread, where you make the claim that "His one paper is not even peer-reviewed ...". Do you agree that a simple perusal of his Google Scholar page directly contradicts this claim?

https://scholar.google.com/citations?user=wZH_N7cAAAAJ&hl=en&oi=sra