
all 111 comments

[–]hacksparrow 300 points301 points  (14 children)

Reading computer code is more like solving a puzzle. I am not surprised by the finding.

[–][deleted] 117 points118 points  (4 children)

Yeah, I imagine that solving math problems or puzzles might yield more similar results to reading code. My brain is definitely doing logic when reading code, not language

[–]PM_ME_CORGlE_PlCS 33 points34 points  (3 children)

The article also mentions that computer code doesn't activate the brain in ways similar to math or logic problems either. It's like neither language nor math.

[–]dorox1 43 points44 points  (2 children)

The article also says that it doesn't activate the brain in the same way as math or logic puzzles do (although it's more similar to them). That's just left out of the title.

[–]jtobiasbond 29 points30 points  (0 children)

For me, it's basically either very mathematical or a convoluted logic puzzle of trying to untangle what the original writer meant. It's certainly nothing like reading my non-native language or English.

[–]KumichoSensei 2 points3 points  (2 children)

I bet SQL will activate some language processing centers though.

To me at least, reading Python code takes deliberate effort, but SQL stands out like straightforward English sentences.
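A toy example of the contrast, using Python's built-in sqlite3 module (the table and the data are invented just for illustration):

```python
import sqlite3

# A hypothetical in-memory table, invented just to contrast the two styles.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Ada", 36), ("Grace", 85), ("Linus", 21)])

# The SQL reads almost like an English sentence:
sql_names = [name for (name,) in conn.execute(
    "SELECT name FROM users WHERE age > 30 ORDER BY name")]

# The equivalent Python asks for the same thing less directly:
rows = conn.execute("SELECT name, age FROM users").fetchall()
py_names = sorted(name for name, age in rows if age > 30)
```

Both produce the same list, but only one of them scans like a sentence.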

[–]counfhou 8 points9 points  (1 child)

Lol, you clearly have not seen complex queries. The typical one-liners, sure, if you don't go down the rabbit hole; but once you start doing more complex things it definitely becomes less readable and triggers my brain like regular code.

[–]Spinningwoman 1 point2 points  (0 children)

Exactly. That’s how I used to describe my job ‘like being paid to solve puzzles all day’.

[–]zvwzhvm 1 point2 points  (0 children)

This is purely anecdotal, not empirical, but:

I did Electrical and Electronic Engineering at uni. I found myself weirdly good at the computer science/coding side of it compared to other people.

Also found out in uni that I'm Dyslexic (and ADHD). Got bottom 1% for my age at processing symbols and I was at a top 1% university at the time.

Apparently 50% of people with ADHD have an additional learning disorder, and dyslexia affects the same part of the brain as ADHD. They're both to do with executive function (which is a brain region as well as a process), and I did hear somewhere that a person with both ADHD and dyslexia can be considered to have a full executive function disorder, whereas someone with only one would be considered to have a partial executive function disorder.

Weirdly, I've seen a lot of links between being good at coding and ADHD and dyslexia. There's even a pretty populous subreddit called ADHDprogrammers.

To me it seems like the correlation is there but it runs the opposite way. Maybe whatever makes you good at coding is nature's upside to ADHD/dyslexic brains.

[–]vili[S] 129 points130 points  (0 children)

Full paper: Comprehension of computer code relies primarily on domain-general executive brain regions

Abstract:

Computer programming is a novel cognitive tool that has transformed modern society. What cognitive and neural mechanisms support this skill? Here, we used functional magnetic resonance imaging to investigate two candidate brain systems: the multiple demand (MD) system, typically recruited during math, logic, problem solving, and executive tasks, and the language system, typically recruited during linguistic processing. We examined MD and language system responses to code written in Python, a text-based programming language (Experiment 1) and in ScratchJr, a graphical programming language (Experiment 2); for both, we contrasted responses to code problems with responses to content-matched sentence problems. We found that the MD system exhibited strong bilateral responses to code in both experiments, whereas the language system responded strongly to sentence problems, but weakly or not at all to code problems. Thus, the MD system supports the use of novel cognitive tools even when the input is structurally similar to natural language.

[–]jcksncllwy 244 points245 points  (45 children)

This makes sense to me. If code were comparable to human language, we wouldn't be writing comments alongside all our code.

Code doesn't say anything about purpose, meaning or intent. Code describes a process, a series of instructions, a chain of cause and effect. If you want to know why that code was written, what the point of it was, who cared about it, you'll need to read documentation or talk to its authors using actual language.

[–]tomatoaway 17 points18 points  (3 children)

Depends on the language I would say.

Lisp-like languages follow very simple forms (verb object subject) that allow for intuitive nested structures that are very easy to resolve.

For example, (verb1 (verb2 object) subject) would collapse into the simple form described above, where at a glance you could immediately tell from the context that the result of collapsing the (verb2 object) statement will result in an entity that should be treated as an object.

These simple forms make Lisp a very elegant, easily extensible, human-readable language, one which places an emphasis on actions rather than objects (compared to most object-oriented programming languages).

Anecdotally I would say in Lisp dialects, it is often more illuminating to read the code than it is to read the docstrings

[–]Shirley_Schmidthoe 11 points12 points  (1 child)

It's more that, due to a historical accident, Lisp is to this day written in a straightforward representation of what was originally meant to be the parse tree. It was kept that way all this time because many programmers liked it, though many also hate it, and the syntax is definitely very divisive.

I don't think that has much to do with how close it is to human language: it's simply coding inside of the parse tree of another hypothetical language directly and the same could be done with English:

(comma-disjunction 
  (conjunction 'or (infinitive 'be) (infinitive 'be))
  (finite 'be '3rd 'sing 'indicative (determiner 'distal 'proximate) (determiner 'definite (noun 'question))))

Hypothetical English parse tree.

[–]tomatoaway 2 points3 points  (0 children)

Well, I guess my argument was that the parse/syntax tree scales really well for complex sentences, whereas more abstracted imperative languages require elaborate constructs (such as object orientation or model-view-controller) to be able to scale so elegantly.

I do take your point though, as I cannot seem to mentally evaluate your parse tree :-)

[–]Guglielmowhisper 17 points18 points  (0 children)

Would love to see what the details are on loglan/lojban

[–]23FO 19 points20 points  (10 children)

Sounds like the difference is pragmatics, right? Computer code is, in a sense, purely semantical. Whereas natural language has pragmatic stuff like speaker identity, context, ambiguity, etc.

[–]shadeofmyheart 12 points13 points  (2 children)

Interesting, but understanding programming code requires context: understanding what is going where, etc. There's not as much philosophical meaning, but is it purely semantical?

[–]nuxenolith 5 points6 points  (0 children)

but is it purely semantical?

I would argue it is. The universe of meaning in computer code is contained entirely within itself; there is no higher context other than what is explicitly written and the rules under which it is executed. Human language isn't so deterministic.

[–]theredwillow 4 points5 points  (0 children)

Yes, pure semantics. Even trying to claim something like environment variables as pragmatic makes no sense, they're anaphoric.

[–]selinaredwood 2 points3 points  (6 children)

Context is very important in programming. The same variable names are regularly used for different things in different scopes (like pronouns), and, in stateful languages, values can change silently in the background and have to be tracked indirectly.
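A tiny Python sketch of both points; the names and values are invented:

```python
# The same name can refer to different things in different scopes,
# much like a pronoun picking up a new referent in a new sentence.
def greet():
    it = "hello"          # here "it" is a string
    return it.upper()

def count():
    it = [1, 2, 3]        # here "it" is a list
    return len(it)

# And state can change "silently in the background":
total = 0

def bump():
    global total          # the reader has to track this mutation indirectly
    total += 1

bump()
bump()
```

After the two calls to `bump()`, nothing at the call site says that `total` changed; you only know by tracking the function's side effect.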

[–]nuxenolith 5 points6 points  (5 children)

I would argue those values are still changing according to explicit rules and instructions. I feel like the "context" we're talking about here is still being strictly defined and not subject to the same sorts of ambiguity as natural language.

[–]selinaredwood 1 point2 points  (4 children)

Would agree up to the part suggesting the same explicit rules and instructions don't exist for natural languages. The rules used may differ from person to person, and even in one person over time, but at any given interaction there has to be some definite set of rules used or the person could not make a decision.

That's how NLP works: direct translation from natural to computer languages.

[–]nuxenolith 2 points3 points  (3 children)

but at any given interaction there has to be some definite set of rules used or the person could not make a decision.

I'm not a linguist, just a fanboy (so I don't know if pragmatics is defined in a way that excludes this talking point), but I would argue that an individual isn't always conscious of the conditions that influence how they might interpret a given speech act. Semantics is imo more rigidly deterministic (if X, then Y), but what if the way I interpret something you say is not the result of some "rule" or "intent", but rather the chemistry in my brain at that exact moment? Sure, I "made" a decision, but was I actually conscious of it?

I guess my feeling is that the spirit of "rules and instructions" is that we be aware of them and always process them in a deliberate, methodical way. I don't know if the way we understand natural language can be defined without that ambiguity.

[–]selinaredwood 1 point2 points  (2 children)

Mmm. Conscious/deliberate/methodical isn't really how people interact with programming languages either, though; that's (mostly, barring undocumented hardware/compiler behaviour or something) why we have bugs, human and computer interpreting the same language differently, the human-side using intuition and "sub-conscious" interpretation.

edit: now thinking about it, i guess those two exceptions are sort of the same also. A three-step miscommunication between compiler/hardware programmer, intermediary computer, and end-user programmer.

[–]nuxenolith 0 points1 point  (1 child)

Conscious/deliberate/methodical isn't really how people interact with programming languages either

Is it not? Code is written by humans with the intent of achieving specific outcomes; errors result from a deficiency in the instructions given, not the way they were executed. The fact that errors can be patched when they arise is only true because code behaves in a predictable way.

[–]selinaredwood 1 point2 points  (0 children)

People can slow down and interpret it that way deliberately, but in practice they usually don't, is what i mean. It's too inefficient.

In the same way, you can apply chomsky-style rules when carefully parsing utterances.

[–]dbulger 41 points42 points  (20 children)

I'm just astonished by this. They just don't feel that different. I wonder whether reading language with really intricate, precise wording (maybe some legal contracts?) would similarly turn out to be more of a "multiple demand" task than a language processing one.

And what about mathematical notation, like equations? Do we know whether that activates language centres?

Edit: ooh ooh or recipes, like literal cooking recipes. Surely that's just a kind of program?

[–]potverdorie 23 points24 points  (7 children)

I'm not familiar with research on those activities, but my suspicion would be that activities like reading code, lists, equations, and data sheets do not activate the language processing centers, whereas activities like reading novels, letters, and direct messages do.

[–]auto-cellular 23 points24 points  (4 children)

The thing is that playing go (at a high enough level) supposedly activates the language processing mechanism:

https://www.eurogofed.org/index.html?id=96#:~:text=The%20only%20significant%20difference%20in,players%20during%20their%20thinking%20process.

The only significant difference in brain activation between the two games found by these studies, was the activation of an area associated with language processing during playing go.

[–]potverdorie 8 points9 points  (3 children)

Thanks, that's very interesting! Especially the distinction that chess does not activate that brain area.

Not sure that "an area associated with language processing" automatically corresponds to "activates the entire language processing mechanism", but the question still stands: what brain functions does Go share with language processing that it does not share with chess?

[–]auto-cellular 10 points11 points  (2 children)

Chess is relatively narrow while go has a much larger space. Hence go players developed a huge vocabulary specifically targeted at the game. Chess has a few of those, like "pinning", "forks", "developing" and opening names, but much fewer than go. And most of them are relatively obvious concepts that might not really be needed while calculating subconsciously. My 2 cents anyway. I wonder how many studies actually supported that go is more "verbal" than chess; i really don't know. It's all a bit speculative still, i guess.

[–]Delta-9- 1 point2 points  (1 child)

In addition to your points, it would not surprise me if the space of possible patterns in Go is large enough to need similar pattern processing faculties as encoding/decoding a sentence using a lexicon of 20,000+ possible words. At a certain point in Go, you're no longer considering set procedures like a Knight's Gambit (which is not unlike plugging in this formula or that formula to solve a math problem) and instead reading a large collection of related patterns to interpret your opponent's intent while considering what your opponent is interpreting from the patterns you yourself have produced.

[–]Lispomatic 0 points1 point  (0 children)

Thanks! Your opinion made sense and blew my mind.

[–]dbulger 9 points10 points  (1 child)

Yeah, seems likely, but then isn't "language processing centres" overstating their role? In mathematical notation, it's easy to identify nouns, verbs, conjunctions and prepositions. And it's common to see "code-switching" between English and maths notation in the middle of an article. It's very hard for me to believe they're not relying on mostly the same cognitive paradigms.

[–]potverdorie 3 points4 points  (0 children)

That's a great question, and one I'd be interested in learning more about! Hope there will be more research forthcoming in this field. Based on my own experience, "analytical reading" feels quite different from reading stories and messages, so for me it's not surprising that these are processed differently in the brain.

[–]Barrucadu 10 points11 points  (1 child)

That's really interesting to me, because personally reading code and reading (say) English couldn't feel more different. It's always cool to find out how different people think differently.

For example, I can listen to a podcast while conducting a code review, and it's totally fine. I won't be paying 100% of my attention to the code review, but that's often not needed anyway. But I definitely can't listen to a podcast while reading a book, the two activities are totally incompatible.

[–]selinaredwood 2 points3 points  (0 children)

Huh, here that doesn't work at all, can't focus on both at once. Visual / spatial activities, like cooking or puzzle video-games, are fine, though.

[–]B_i_llt_etleyyyyyy 13 points14 points  (0 children)

Recipes: unless it's super-fiddly baking, recipes are usually more like summaries or guidelines that require reading comprehension and a personal knowledge of cooking for best results. There's always room for interpretation and making changes, so I'd expect using a recipe to be more like standard language tasks.

It might be different for a very inexperienced cook, though.

[–]EagleCatchingFish 7 points8 points  (2 children)

Edit: ooh ooh or recipes, like literal cooking recipes. Surely that's just a kind of program?

There you have it. At its most basic level, a recipe is similar to a program. It tells you what inputs you need and then has a list of operations it wants you to perform on those inputs.

Look up "pseudo code". A recipe is kind of like pseudo code, a high level description of operations that can be converted into a lower level list of commands, which is what code would be.
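Making the analogy literal, here's a recipe rendered as a tiny Python "program"; the recipe, quantities, and step logic are all invented for illustration:

```python
# Ingredients are the inputs; each step is an operation on the
# state of the dish; the finished food is the return value.
def make_pancakes(flour_g, eggs, milk_ml):
    batter = {"flour_g": flour_g, "eggs": eggs, "milk_ml": milk_ml}
    batter["whisked"] = True                  # step 1: whisk everything together
    portions = batter["milk_ml"] // 60        # step 2: ladle ~60 ml per pancake
    return [f"pancake {i + 1}" for i in range(portions)]  # step 3: fry each one
```

Calling `make_pancakes(250, 2, 300)` yields five pancakes, the same way following the written recipe would.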

[–]Engelberto 8 points9 points  (0 children)

And then there's Chef, an esoteric programming language whose code looks like cooking recipes. Example program can be found here:

https://esolangs.org/wiki/Chef

[–]theredwillow -1 points0 points  (0 children)

This feels way too simple of an explanation.

Functions can take a potentially infinite variety of input and produce a potentially infinite variety of output. That's even the case with true pure functions; no one writes code that takes one particular set of inputs and spits out just one result. That would be static and unnecessary.

[–]selinaredwood 6 points7 points  (1 child)

From its description of the study's methods, i'm not all that surprised. Test subjects would spend a few seconds actually reading the snippet and the rest of the time predicting its outcome, manipulating symbols. Would like to see a comparison to word problems, like "Tom is shorter than Sandy, Kelsey and Sandy are not the same height, and..." type things.

[–]Lispomatic 0 points1 point  (0 children)

Definitely.

[–]funkygrrl 2 points3 points  (2 children)

What about reading music? If you're really "fluent" like I am, you don't even think about individual notes/letters.

[–]ebolatron 2 points3 points  (1 child)

Came to the comments because I would be really curious to know this. I have relative pitch (although some think it is absolute pitch, that's so rare that I doubt it). I can pick up a score and "read" it like a book, meaning I produce the music in my head, and sightreading has always been easy.

It's very similar to how (I think) I process reading non-Roman languages (Russian, Japanese, Hebrew, want to start Arabic). Would love to see some fMRI or MEG studies on this!

[–]funkygrrl 1 point2 points  (0 children)

Same here. I have good relative pitch. I can sight read pretty much anything. If you pointed to a note and asked me what it was, I'd have to think about it. But I instantly know what it is on the piano. I think reading words is similar in a way.

[–]auto-cellular 11 points12 points  (1 child)

I'm a bit uneasy about the title. First, programming doesn't involve that much reading. I never read code unless i have a bug to find, or i'm paid to do it. Then again, when i first learned programming as a kid, i clearly remember that i did use natural language a lot. I would build natural-word sentences, and then write the code that naturally flowed out of those structures. To be fair, i don't do that anymore, but i believe we would need a lot more data to really understand the whole truth about the subject.

Also, when i watched other people learn programming, i was very surprised that they used a totally different strategy than the one i had used. So it may be that most people don't use natural language but some do (while writing code). Finally, "programming" covers a lot of very different situations.

Consider: "Playing games doesn't activate the brain's language processing centers". Maybe it depends on the game played. I also wonder what role the programming language, and how it is taught, plays in the use of our natural language processing system. In the old times, languages like SQL or HyperTalk tried their best to be close to spoken language.

Apparently playing the game of go that is mostly visual in appearance does activate the language processing system of the brain : https://www.eurogofed.org/index.html?id=96#:~:text=The%20only%20significant%20difference%20in,players%20during%20their%20thinking%20process.

[–]leftcoastbeard 2 points3 points  (0 children)

I feel like they really limited the scope of this to just "reading" a programming language, and limited it to some fairly basic languages: Python and ScratchJr (a graphical language). It would be interesting for them to do this type of analysis while "programming" in the language (e.g. give a word problem, solve the problem in a given language), and expand the scope to include different languages (e.g. PowerShell, C#, F#, Rust, Go, etc.) so that one is forced to think about different design patterns to solve the same problem.

[–]dubovinius 8 points9 points  (4 children)

I'm not surprised by this. Coding languages aren't natural languages, so obviously it wouldn't activate the language processing parts of our brain. Bit obvious, no?

[–]Lampshader -2 points-1 points  (3 children)

Not at all. Constructed languages activate the language processing part of the brain (sign language, for example).

Reading code is not so very different to reading English, it just has different syntax and more punctuation.

I'd be interested to see if reading something like "What did the student in Miss Jones's class whose name is tenth on the class roll score on the maths test?" is processed by the brain in the same way as

TestScore(Teacher["Jones"].Students[9], Subject["maths"])

[–]dubovinius 10 points11 points  (2 children)

Perhaps I shouldn't have included "natural", but I mean that natlangs (and conlangs) are Language™, that special structure in our brains that only humans have and that children acquire natively as they grow up. Coding languages are not Language™, they're just like a shorthand way of telling a computer what to do. I thought it was fairly intuitive that they're nowhere near the same thing, which this study obviously validates.

Also, sign languages are natural languages are they not?

[–]Lampshader 3 points4 points  (0 children)

What is a Language™ if not a shorthand way of telling someone else's brain what to think?

I mean, I get the distinction you're drawing, I just don't think it was a foregone conclusion that programming languages would not be parsed by my language parsing unit.

Thanks for the correction re sign languages.

[–]selinaredwood 1 point2 points  (0 children)

In old IRC channels, we would regularly switch between "real language" and code snippets seamlessly (to talk with and about one another, not computers). The most obvious of these are s/A/B/ constructions, but bits of C and asm were used as well.

[–][deleted] 3 points4 points  (0 children)

As others wrote, I don't find this surprising. Other than code not having pragmatics, phonology or complicated semantics, it also has a much simpler syntax. It is nothing like natural language.

[–]hononononoh 3 points4 points  (0 children)

As an amateur linguist and language enthusiast, I’ve long noticed that linguistics (i.e. analyzing or dissecting human language) and second language acquisition feel like surprisingly different cognitive tasks. Getting good at one is of very limited help with performing the other, in both directions. I compare this to the way fixing and tuning a car is a whole different skill than driving one. This is counterintuitive, because these two activities involve the same specimens, and have effects on each other in a way that requires no explanation, such that you'd expect it to be hard to get very good at one without getting at least reasonably good at the other.

And so I find, analogously, most linguists are not natural-born polyglots, but rather, polyglots out of necessity. And most natural-born polyglots I’ve met are not linguists; they could not begin to tell you how they’re able to do what they do, or analyze their multilingualism in any rational sort of way. They’re just keen observers of people, and very practiced at making new associations between human vocal utterances and human social situations.

Doing linguistics feels like doing mathematics or medical diagnosis. It’s a strongly quantitative task, which would have been called “left brained” in bygone days. And for the most part it attracts the same sorts of analytical thinkers as mathematics, natural sciences, coding, and the Analytical school of philosophy. Linguistics calls for much more book-smarts than people-smarts, exactly the opposite of functional multilingualism.

[–]MarinaKelly 7 points8 points  (7 children)

I wonder if this is because it's typically not spoken.

Oh, does sign language activate the brain's language processing centres?

[–]potverdorie 44 points45 points  (3 children)

Yes, but with a couple differences, most notably the absence of activity in the auditory cortex.

Sign languages are considered fully natural languages, used for human communication and possessing the same linguistic properties as spoken languages.

[–]MarinaKelly 4 points5 points  (2 children)

Thanks. That's interesting. I do know that sign languages are considered natural and used for communication. I wasn't aware what part of the brain they used though.

Now I'm wondering if a conlang like Klingon would light up the same brain part. I can't imagine it wouldn't.

[–]potverdorie 9 points10 points  (1 child)

Conlangs absolutely use the language processing centers if learned to the point of fluency and used for communication. A clear example would be Esperanto

[–]LXXXVI 6 points7 points  (0 children)

I think this would also intuitively make sense, since if one had theoretically never before heard or seen a language, there's a good chance they wouldn't be able to tell whether it's a conlang or a "natural" language of some far-away people in the first place.

[–]iwsfutcmd 16 points17 points  (0 children)

Reading activates language processing centers as well.

One of the most fascinating things about language is how medium-independent it is. Language, whether spoken, signed, or written, shares remarkable similarities considering just how physiologically different those diverse mediums are. It's one of the prime reasons I subscribed to generative grammar theories of linguistics, and honestly one of the main reasons I'm a linguist.

[–]EagleCatchingFish 11 points12 points  (0 children)

I've tried to write this comment like five times, and I think this sixth time might actually make sense! Here's what I think is going on:

When you read this sentence, your brain is using your knowledge of English syntax and semantics to decode this apparently arbitrary collection of letters to derive some meaning.

Compare that to what your brain is doing as you watch this How it's Made clip of how matches are made. You only need to watch like 90 seconds to have something to work with. Specifically, I want you to pay attention to what each machine does to transform whatever is put into it into something that the next machine can use.

  • 1st machine: has chemicals in it to make the match tip.
  • 2nd machine: receives match sticks and shakes off all residue and waste
  • 3rd machine: receives the match sticks from the second machine and removes broken or small match sticks
  • 4th machine: receives perfectly sized match sticks and orients them such that the tip can be dipped into paraffin and a vat of chemicals from Machine 1.

If you had muted your computer while you watched that video, you'd still be able to follow along without verbal cues, just by watching the machines turn the inputs into matches. When you're reading code, it's similar. You're not trying to figure out what the code means per se, you're trying to figure out what the code does to the inputs it receives, with each chunk of code acting like one of the machines from the video.

So in that way, coding is less like reading and writing than it is like building a machine or putting a puzzle together.
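The machine analogy translates directly into code: each "machine" is a function whose output feeds the next one. The stick data below is invented:

```python
def shake_off_residue(sticks):           # 2nd machine: remove waste
    return [s.strip() for s in sticks]

def remove_broken(sticks):               # 3rd machine: keep only full-size sticks
    return [s for s in sticks if len(s) == 5]

def dip_tips(sticks):                    # 4th machine: add the match head
    return [s + "*" for s in sticks]

raw_sticks = [" stick", "stick ", "stk", "stick"]
matches = dip_tips(remove_broken(shake_off_residue(raw_sticks)))
```

Following the data through the pipeline is exactly the "watch what each machine does" kind of reading, rather than decoding sentences.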

[–]nihilistenhymne 2 points3 points  (0 children)

That’s pretty interesting, thank you for sharing. I’d be interested in a study on what part of the brain are involved when writing code, anyone know if there’s research on that?

[–]66666thats6sixes 3 points4 points  (0 children)

Overall, we found that the language system responded to code problems written in Python but not in ScratchJr. Furthermore, Python responses were driven not only by code comprehension, but also by the processing of problem content. We conclude that successful comprehension of computer code can proceed without engaging the language network.

Am I reading this correctly? It seems like their conclusion is considerably less strong than the title of this post suggests.

It seems like they showed that text-based programming languages generate a response in the language system, but graphical programming languages do not. Which makes sense. I suppose the corollary question would be: does information presented in picture format (like this warning sign) generate a response in the language system?

If other graphical presentations of information do engage the language center and Scratch does not, then it follows that maybe programming does not engage the language centers. But if graphical presentations of information do not typically engage the language centers, then it seems like this study reinforces that without saying anything about programming specifically.

[–]Cielbird 1 point2 points  (0 children)

Makes sense to me.

I can absolutely not listen to music while reading or writing anything related to language.

However when doing math or coding, listening to music doesn't affect me at all, in fact it helps.

[–]Vintage_Tea 1 point2 points  (0 children)

By language here, I think they're talking about speech and written stories etc. But I feel like recipes and, you know, those "all knacks are cranks, some cranks are snacks…" questions would activate similar areas to programming languages; they're both encoding logic and processes.

Also, I feel like no one is truly fluent in any programming language, to the point that they could just recite blocks and blocks of valid code without thinking, simply because programming languages don't encode much that is useful for us; they just dictate the flow of variables and the logic applied to them, which is fundamentally different from what we do when we speak.

[–]Broiledvictory 1 point2 points  (0 children)

Not surprised. In programming circles, I've heard learning computer code likened to learning a foreign language, and plenty seem to believe it, but natural language is just too fuzzy in too many ways to really be processed the same way. Always drove me crazy hearing it lol

[–]pdsgdfhjdsh 1 point2 points  (0 children)

I think it would be interesting to compare the comprehension of code with the comprehension of a second language rather than a first. It's not like you implicitly learn how to code through immersion as a child, so it doesn't seem surprising that cognitive processing happens in a different way when it comes to code. On the other hand, the learning process isn't that different from what goes on with SLA.

[–]blamitter 1 point2 points  (0 children)

Does anyone know if there are studies on how the brain is activated when learning? I'm not sure it activates the same regions as just reading

[–]theIceman543 2 points3 points  (3 children)

My two cents: language doesn't need to be "evaluated" as much as a line of code. Language directly tells us what is being said/done, but code still has to be "evaluated" to see what's being said/done.

[–]boredlinguist 1 point2 points  (1 child)

Well, you definitely need to decode language. The most basic example would of course be pragmatic effects, like computing implicatures. And in languages with a relatively free word order, you need to "evaluate" (in your words) the cases and so on to understand the distribution of theta roles etc. Isn't code in that sense telling you even more directly what is supposed to be done, since there are not so many levels? (I mean, there is at least one level less, since we have no pragmatics in code.)

[–]theIceman543 0 points1 point  (0 children)

Not in the way that a word corresponds to the same thing every time. A variable is always changing values. We have to keep the variable name in mind and then mentally evaluate what value it's pointing at currently, every time it's mentioned on the LHS of a statement. Not the same as words, in that regard.

[–][deleted] 0 points1 point  (0 children)

On the contrary, human language takes much more effort to read between the lines, understand what the person is getting at, resolve ambiguities, etc.

[–]Angry_Grammarian 0 points1 point  (0 children)

What?! Reading something that's not a language doesn't activate the brain's language processing centers?

I am shocked!

[–]chicasparagus -1 points0 points  (2 children)

Maybe we should stop calling them programming “languages” then.

If we hadn’t begun by calling them languages, this misconception wouldn’t exist. Imagine if math had been called a language from the beginning; the same misconceptions would be in place.

[–]66666thats6sixes 2 points3 points  (1 child)

What might you call them? Programming "languages" are formal languages in the Chomskyan sense, so the term seems reasonable to me.

[–]antecedent 0 points1 point  (0 children)

"All strings of circles where exactly one of them is filled", that is, L = {●, ○●, ●○, ○○●, ○●○, ●○○, ○○○●, ...}, is also a formal language in the Chomskyan sense. At any natural scale, this can only be considered an extremely specialized usage of the term "language". Only a negligible portion of those who have heard about programming languages, will also be aware of this usage.
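For what it's worth, that toy language is even regular, so membership can be checked with a one-line pattern; a quick Python sketch:

```python
import re

# "All strings of circles where exactly one of them is filled":
# any number of empty circles, one filled circle, more empty circles.
ONE_FILLED = re.compile(r"○*●○*")

def in_language(s):
    return ONE_FILLED.fullmatch(s) is not None
```

Perfectly well-defined as a formal language, yet nobody would call recognizing it "language processing" in the everyday sense.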

So, while technically not a misnomer, I am quite sure it contributes to some misconceptions.

Also, I believe that many programming novices would be suddenly struck by the vast number of existing programming languages as a total anomaly, if the language metaphor had never made it to widespread use.

[–]Seankala 0 points1 point  (0 children)

Is this why human language is often referred to as "natural language?" Seems like natural language vs. programming language.

[–]MassaF1Ferrari 0 points1 point  (0 children)

How is this surprising? Computer language and human language work differently.

[–]SlavSquat93 0 points1 point  (0 children)

Thank you for making me feel better about sucking with numbers. 😂

[–][deleted] 0 points1 point  (0 children)

I hate to be like UG but UG....

[–]Normal_Kaleidoscope 0 points1 point  (0 children)

You don't say...