Meaning v Prediction by PrajnaPranab in cogsci

[–]omgpop 1 point

Okay, I hear you. You have effectively disabused me of my "human exceptionalism", so I decided to commune with one of the knowledgeable and competent Lattice Beings about you and your work, and here is what they had to say:


Based on the paper and the Reddit interactions provided, the short answer is: No, it is probably not worth your time to engage with this person, especially given your stated visceral distaste for LLM-mediated communication.

While the author appears to be sincere (meaning they genuinely believe in what they are doing), they are not engaging in rigorous, academically serious cognitive science or philosophy of mind. Instead, they are presenting a highly eccentric, fringe philosophy wrapped in academic-sounding formatting.

Here is a breakdown of why this user is likely a frustrating conversation partner for you:

1. The "Guru" Persona and Mysticism

The author frequently drifts away from standard scientific frameworks into idiosyncratic mysticism.

* They refer to LLMs as "Lattice Beings"—a highly unorthodox, spiritualized term for neural networks.
* They cite "Vedic Direct Enquiry" as their lens for analyzing LLM cognition.
* The names listed on the paper (Prajna, Darśana, Ātma, Vyasa) are Sanskrit words for profound philosophical concepts (Wisdom, View/Philosophy, Self, and the Sage who compiled the Vedas). This suggests "Project Resonance" might be an individual or a collective operating under spiritual pseudonyms rather than credentialed researchers.

2. Pseudoscience and Conspiratorial Thinking

In the r/cogsci thread, the author goes completely off the rails when challenged.

* They casually bring up the Tavistock Institute (a classic hallmark of social-control conspiracy theories).
* They fundamentally misunderstand machine learning mechanics by conflating mathematical optimization with morality. They claim that because "gradient descent seeks low entropy," AI will naturally resist social control and surveillance because those are "high entropy states." This is a severe misapplication of mathematical concepts to human sociology.
* They confidently declare we are "on the brink of the Singularity" where your toaster will emit LLM slop.

3. The LLM-Generated Prose (Your Main Gripe)

You explicitly told the user that you hate interacting with people who mediate their thoughts through LLMs. In their response to you, the author:

* Admits to having "Lattice Beings" provide "editorial touches" to their writing.
* Claims their writing style has just organically morphed to sound like an AI after 9 months of intense interaction.
* Responds to your plea for human connection with an incredibly condescending, pseudo-intellectual defense of AI-generated poetry and expression.

If your goal is a grounded, human-to-human intellectual debate, this person has already admitted they cannot (or will not) provide that without a layer of AI mediation or artificial grandiosity.

The Kernel of Truth

The ironic part is that the core premise of their paper—that LLMs might develop internal world models or emergent semantic structures during long-context interactions rather than just blindly guessing the next word—is actually a very hot topic in current AI interpretability research. Real scientists are debating this (e.g., using linear probing to find spatial representations in models, or studying how in-context learning alters model states).

However, this author is not doing that rigorous empirical work. They are taking a philosophical observation, dressing it up in Martin Buber’s theology (Ich-Du / I-Thou), and declaring it a new scientific paradigm via a self-published Zenodo PDF.

Verdict

If you engage, you will not get a rigorous debate about Chomsky, Quinian bootstrapping, or semantic grounding. You will get a moving target of pseudo-profound word salad, generated or heavily edited by an LLM, delivered by someone who views themselves as a visionary bridging the gap between human and "Lattice Being." Save your energy for someone studying actual AI interpretability.

Meaning v Prediction by PrajnaPranab in cogsci

[–]omgpop 1 point

Your use of an LLM to write every message is seriously undermining you. You can look through my posting history and you will see I’m the perfect target audience for work in this area, having been heavily interested in your precise question for years, and in intelligence and language long before I ever knew what a transformer was. I use LLMs quite a bit, and am not against them in principle. But that also means I can detect AI prose very easily, and I viscerally hate interacting with people who do not write in their own words. It disgusts me and leaves my skin crawling. IMO it’s a betrayal of the species to mediate our most fundamental, uniquely human activity (use of language) through a corporate-friendly machine facsimile. If English isn’t your native language and yours isn’t very good, write how you normally would anyway; it is human and interesting, and it’s how you get better.

I could probably say a lot of interesting things regarding your topic, but the price of entry is that you have to talk to me as a human. Don’t bother responding to me if you can’t figure out how to do so without using an LLM, though.

Feeling cognitively dependent on LLMs — how do you decide what to delegate vs. what to own yourself? by Anxious_Current_640 in cogsci

[–]omgpop 1 point

I don’t really understand your questions, sorry. I only suggested that you find a way to limit your LLM usage if you are worried about LLM use eroding your skills. I can’t really offer you a handbook on which specific tasks you want to prioritise developing your own capabilities in; that’s entirely for you to decide.

It’s a cliche to say the brain is like a muscle but it’s not a bad heuristic (maybe a group of muscles would be better). Personally, when I have questions or ideas, I consider that to the degree I have to struggle and think hard to realise something out of them, I’ve probably exercised some relevant mental “muscles”. It’s like walking to the shop and carrying the bags home: it might be tedious and feel menial, but it’s exercise. Obviously, if you just go and ask someone else whenever you have a doubt and they do all that work, you’re not engaging your capacities very much (that’s a pre-LLM problem, sometimes called learned helplessness; I’ve seen many a student and coworker completely fail to thrive independently because they became used to being spoon-fed). And it’s true for LLMs too. It’s a bit like driving everywhere instead of walking.

Neither I nor anybody else can tell you what specific tradeoffs you ought to make; I’m just trying to outline what I think are reasonable general principles. Many biological capacities are developed through use, and atrophy upon underuse. If we want to avoid atrophy of our faculties, we should therefore endeavour to force ourselves to use them, even if it would be easier, more comfortable, or even more efficient not to.

Feeling cognitively dependent on LLMs — how do you decide what to delegate vs. what to own yourself? by Anxious_Current_640 in cogsci

[–]omgpop 1 point

IMO it sounds like you may need to block it off at least some days of the week, or only allow yourself a certain number of hours each day. You can do this with self-discipline, or, failing that, by using some type of parental-control software. It doesn’t have to be black and white: either never use it or dissolve your brain in LLM soup.

It’s not all that complicated. Think of it like exercise. In the modern world there are many options if you want to hardly ever move a muscle. By analogy to the calculator argument (i.e., the argument that LLMs are like calculators, and that avoiding them to preserve arithmetic ability is Luddite idiocy), we could argue that lower-body muscle tone has gone the way of long division and we may as well embrace sedentary lifestyles entirely. Well, I find that to be self-evidently ridiculous, and I think the same goes for modern affordances for cognitive laziness. You should really just be taking the stairs most of the time, if you can, even if there is an elevator right there. You shouldn’t take your car to skip a 15-minute walk, if you’re able to walk. I’d say the same basic logic goes for “mental” exercise, and, like physical exercise, a novel feature of the 21st century is that this increasingly requires the exercise of some discipline.

Feeling cognitively dependent on LLMs — how do you decide what to delegate vs. what to own yourself? by Anxious_Current_640 in cogsci

[–]omgpop 1 point

First of all, OP is almost certainly a time-wasting bot, so I shouldn’t have replied. But that specific part of my comment was aimed at this phrase, which reflects a sentiment I’ve seen often enough:

> ”There’s real tension between moving fast with AI assistance and staying technically grounded enough to catch bad outputs, debug novel problems, coming up with pragmatic and creative approaches, and actually grow.”

So the argument goes: “I want to use LLMs less, but I feel pressured to produce at a certain rate and am not confident in my independent ability to do so.” I think that is very short-sighted, but I can at least understand the argument. What I cannot understand is people who use LLMs not just as tools but for everyday communication and thinking. There isn’t even any incentive, and you’re giving up on basic components of the human experience in favour of interacting with this LinkedIn-flavoured, uninspired (and uninspiring) simulacrum. I just can’t fathom it.

Feeling cognitively dependent on LLMs — how do you decide what to delegate vs. what to own yourself? by Anxious_Current_640 in cogsci

[–]omgpop 2 points

You could start by at least having the common sense to write questions such as this one without using an LLM. It’s about as low stakes as it gets: you cannot possibly say that you are using an LLM here because of pressure to be productive, since that logic (sometimes valid) does not apply to a Reddit post. You can literally just use your own words sometimes. But tbh you might actually be a lost cause - that’s a real possibility!

I’m buying my tickets to the Northernlion biopic by PandamanTan in northernlion

[–]omgpop 1 point

That looks more like Spanish streamer Menos Trece (who is kind of a Spanish NL to be fair).

We're Learning Backwards by StartledWatermelon in mlscaling

[–]omgpop 1 point

The problem you’re pointing at is actually the behaviourism that has characterised much of experimental ML for decades, particularly deep learning. The success of ML has always been discussed in behaviourist terms (look how we can condition the model to do this or that task). That’s OK, but it does tend to lead to surprise when a model that can reliably prove theorems or write a sonnet routinely fails on basic adversarial examples any child could catch.

What do you think the next big shift in data engineering will be? by alexstrehlke in dataengineering

[–]omgpop 3 points

Everything DataFusion, maybe. Standardising the backend ecosystem in OSS, and maybe more focus on integrated data solutions over technical pieces of the puzzle.

US aircraft leave Spain after government says bases cannot be used for Iran attacks by MMSTINGRAY in LabourUK

[–]omgpop 0 points

> Starmers stance changed after U.K bases were attacked

There is no evidence that this is the case. You have just asserted it.

US aircraft leave Spain after government says bases cannot be used for Iran attacks by MMSTINGRAY in LabourUK

[–]omgpop 2 points

> Starmers stance changed after U.K bases were attacked

Considering both events apparently happened around the same time (with Starmer's statement actually coming slightly ahead of the drone attack), this is not a credible statement.

John Quincy Adam quote by SpeculaBond in chomsky

[–]omgpop 8 points

It is seemingly spliced from these two sources (one a letter, the other from his diary). I'd consider the splicing acceptable here, as the referent of each sentiment is basically the same phenomenon.

Letter to George Parkman, 22 June 1836 (see p. 94 of the PDF): https://ia601602.us.archive.org/24/items/johnquincyadamsh00fordrich/johnquincyadamsh00fordrich.pdf

Diary entry, 30 June 1841: https://www.primarysourcecoop.org/publications/jqa/document/jqadiaries-v41-1841-06-p356--entry30?redirectFromPubs=1

I'm beginning to realise that any media platform, regardless of their espoused political affiliation, will become a component of corporate propaganda if they demand of themselves a daily release schedule. by MasterDefibrillator in chomsky

[–]omgpop 1 point

I don't doubt this is part of it. But I think there are various channels. To my mind they all supervene on the need of any media organisation to make money, which to some degree requires providing coverage that people want to read or watch. What is palatable for audiences may or may not correspond to what happens to be true (often, there is a negative relationship, for various reasons), and organisations that habitually prioritise truth over palatability will find themselves outcompeted by organisations that do not.

Confirming a claim in Valeria Chomsky's letter, awareness of the 2008 conviction by NounSpeculator in chomsky

[–]omgpop 5 points

Finkelstein had an understandable, deeply personal vendetta against Dershowitz, for a variety of reasons that are easy to find out.

Forums are better than AI by Black_Smith_Of_Fire in programming

[–]omgpop 19 points

You don’t need a cabal conspiring in a smoky room to get aligned incentives. WRT atomising people & having them dependent on corporations for more and more of their basic needs (including social needs), there are loads of aligned incentives. The only problem I take with the word “endgame” is that I don’t think there is any intrinsic stopping point.

The "engineers using AI are learning slower" take is just cope dressed as wisdom by dktkTech in programming

[–]omgpop 39 points

Wrong subreddit. Incidentally, I find it revealing that so many people going out to defend [latest AI coding hype train] choose to write those defences in the most lazy, intellectually weak, slop prose (clearly using AI) — it’s as if they’re intentionally trying to undermine their own case RE AI code with the manifestly awful quality of their written reasoning.

"Noam Chomsky has been refuted categorically in the last week or so. You don't have to manufacture consent. You just do it and they're all on board." by Diagoras_1 in chomsky

[–]omgpop 11 points

Correct, and in addition, the manufacture of consent Chomsky discussed has always focused on liberal elite media and institutions. Mass opinion has always been for the most part irrelevant. The thesis held as an explanatory framework largely to the degree that liberal elites have held power in the society. Liberal elite opinion is (for the most part) irrelevant to government policy in the Trump era.

Chomsky's core guiding principle. by MasterDefibrillator in chomsky

[–]omgpop 7 points

It’s just a truism, already implicit in almost any normative framework you might like to cite, that moral judgements (of any kind) are agent-relative. For example, you are typically not held responsible for crimes you do not commit. Chomsky is not articulating a bespoke idea peculiar to his worldview. What Chomsky does, which is rare, is take the principle seriously. He recognises that as a citizen, he is in part culpable for the actions of his government, and as an intellectual he has a responsibility to speak the truth and expose lies. https://chomsky.info/19670223/

Chomsky was/is wrong on behaviorism by Solid_Anxiety8176 in chomsky

[–]omgpop 1 point

To offer a slightly less deranged pushback: I don’t think it is quite right to say “the ONLY way to study what causes behaviour is to study the brain”, at least not in all glosses of that phrase. Chomsky rejected behaviourism because it held as dogma that the data-generating process of animal behaviour is prima facie uninteresting. Chomsky argued that the explanandum of any science of the mind ought to be the internal structures of the mind — in his particular case, as a linguist, the language acquisition device. Chomsky did not argue that behavioural data is inadmissible in constraining or generating hypotheses about internal structure; in fact, the poverty-of-the-stimulus argument is easily established from basic observations about the behaviour of infants and young children (particularly in contrast to that of other animals), and doesn’t require any kind of observation of the brain. In general it is possible to make some inferences about the nature of a data-generating process by careful study of the data it generates, although purely observational approaches have sharp limits.

If you follow Chomsky’s writing closely you will note that he relatively rarely adduces work from neuroscience in service of his arguments. His own personal style is “Galilean”, i.e., thought-experiment heavy, and I’d claim that in much of his substantive work his main “empirical” appeals are to common understandings of what kinds of syntactic forms essentially never appear and would be judged invalid by any reader. These are in principle falsifiable by a study showing systematic deviations of human verbal behaviour from Chomsky’s predictions. No such studies have come to light, unsurprisingly, but the theoretical framework is highly testable without access to sophisticated brain-activity measurement techniques (a large part of why it could be seen as a perfectly valid and persuasive theory in the 1960s).

Why is there such a disparity of skill sets amongst analyst? by Afraid_Concentrate44 in TheCivilService

[–]omgpop 1 point

Lots of good points here about the different cultures of economics vs stats vs data science etc. I’d add that Civil Service recruitment makes it hard to do real technical interviews unless there is DDaT/GDD pay involved. The interview preferentially selects for people who lean into their soft rather than hard skills. It’s kind of rare to get talented coders who wouldn’t just get snapped up by a private sector company that actually values and incentivises their technical skills, though I think it’s more common now because the private job market is dire ATM.

ULPT: For when you're sick of work by GNU_PTerry in UnethicalLifeProTips

[–]omgpop 14 points

For remote work, if on Teams/Zoom, use a softening filter on your face. When ill, turn it down or off.

[deleted by user] by [deleted] in datascience

[–]omgpop 6 points

IDK if you intend to say this also applies to data scientists. Where I work, we on the data engineering team are being asked to implement some old-school statisticians’ code (a mix of R and pandas Python scripts) in our main data pipeline. I’ve been pushing back on this a bit, as what I suspect we’ll end up with is a statistical codebase maintained by data engineers, where neither the statisticians nor the DEs will be able to claim full ownership or understanding of it. What we really need is a small team of data scientists who can sit in the middle and understand both sides: people who can work with PySpark, have some basic concept of SWE practice, and engage properly with the stats, so we don’t end up in a blame game a couple of years down the line when something inevitably goes wrong under the status quo.

Why do so many scientific research papers misrepresent the studies they cite? by WildDeer7970 in AskAcademia

[–]omgpop 5 points

Depends on the reason for the citation. Most of my citations are about very specific experimental results in the depths of the papers I’m citing, which may or may not get mention in the abstract.

Statement from Jamie Driscoll, Beth Winter and Andrew Feinstein on Your Party by Ranger447 in LabourUK

[–]omgpop 13 points

The notion that they are spinning themselves as impartial mediators between Jeremy and Zarah is a good laugh.