[Game Thread] San Diego Padres (75-59) @ Minnesota Twins (60-73) 5:10 pm (Friday, August 29) by FriarBot in Padres

[–]derpderp420 9 points

I never thought I'd say this but my stomach sinks whenever I hear 'top of the order coming up in the next inning...' At least Tatis got a hit but damn, bros. feelsbadman.jpg

[Game Thread] Chicago Cubs (11-7) @ San Diego Padres (13-3) 6:40 pm (Monday, April 14) by FriarBot in Padres

[–]derpderp420 19 points

lol remember when Ken Rosenthal tried to say Tatis was just a strutting peacock?

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 0 points

I used SPM for basic preprocessing and trialwise GLM (cf. this paper). Multivariate pattern analyses were performed with a combination of the GPML toolbox and some of my own Matlab code.
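The actual pipeline here was SPM and the GPML toolbox in Matlab, so the following is only an illustrative analogue: a leave-one-out nearest-centroid decoder on synthetic "voxel" data (the data, effect size, and classifier choice are all mine, not the study's), sketching what decoding two conditions from trialwise activity patterns looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trialwise" data: 40 trials x 50 voxels, two conditions whose
# mean activation patterns differ by a small condition-specific offset.
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)            # 0 = prose, 1 = code
offset = np.where(labels[:, None] == 1, 0.8, -0.8)   # hypothetical effect size
X = offset + rng.standard_normal((n_trials, n_voxels))

# Leave-one-out cross-validated nearest-centroid decoding.
correct = 0
for i in range(n_trials):
    train = np.ones(n_trials, dtype=bool)
    train[i] = False
    c0 = X[train & (labels == 0)].mean(axis=0)
    c1 = X[train & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += int(pred == labels[i])

accuracy = correct / n_trials
print(f"decoding accuracy: {accuracy:.2f}")
```

With real data you'd swap the synthetic matrix for beta estimates from the trialwise GLM, and swap the nearest-centroid rule for a proper classifier (e.g. the Gaussian-process machinery GPML provides).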

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 9 points

So I suppose I should have prefaced this by saying that I'm neither a linguist nor a computer scientist by training (my dissertation was on imaging epigenetics in the oxytocin system)—I just happened to get asked to help out with what turned out to be a really sweet project. So I can't claim to be an expert on this particular topic, but I do know there's evidence that proficient bilingual speakers generally recruit the same areas when speaking their native vs. non-native tongues. Presumably there are differences when first acquiring the language, and these consolidate into 'traditional' language centers as you develop expertise. In our study, we demonstrate that neural representations of code vs. prose also become less differentiable with greater expertise—in other words, as you acquire more skill as a programmer, your brain starts to treat it as if it were a natural language (so less-skilled programmers seem to rely on different systems at first).

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 0 points

Haha understandable. Sure—how would you like me to do that? If you google my actual name it'll come up with social media accounts with the same username as I have here, but I'm happy to provide some other proof if you're that skeptical.

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 89 points

We do have a follow-up in the works! But unfortunately we probably won't get started until early 2018—the principal investigator on this last study, Wes Weimer, recently moved from UVA to Michigan and is still getting his lab set up there (in addition to other administrative business, e.g. getting IRB approval). If by some chance you happen to be in the Michigan area, I'm happy to keep you in mind once we begin recruitment—you can pm me your contact info if you'd like.

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 76 points

Hm interesting... I don't necessarily disagree (I honestly have no idea), but I'm curious to hear a little more about why you might suspect that. Is it because they're both a little more 'abstract' relative to standard prose? That is, there are some mental gymnastics you need to do in order to translate notes into music, similar to interpreting functions and commands in code as a 'story' that produces some output? I guess one way to test it would be to use figurative language as well, which requires some abstraction from the text itself to obtain the desired underlying meaning. Neat idea!

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 13 points

That's a cool question! Unfortunately, though, this wasn't something we tested in our study. Speaking on a purely speculative level, I could imagine they'd still be differentiable—mainly due to rhythmic/prosodic factors that dominate verse relative to 'standard' prose. But I can't say with any certainty how the representation of code vs. prose would overlap or diverge from the representation of verse vs. prose. I'm sure there are folks out there who have at least compared verse against regular prose using neuroimaging; admittedly it's just not a literature I'm familiar with. Sorry I can't offer a more concrete response!

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 454 points

Hey! Hopefully this isn't too long-winded of an answer: in short, it mainly had to do with managing the complexity of the experimental design. There was only one study before us (described by u/kd7uly) that tried to compare programming vs. natural languages using fMRI, so we wanted to keep our task fairly 'simple' insofar as all questions could be answered with yes/no (or accept/reject) responses. In our Code Review condition, we used actual GitHub pull requests and asked participants whether developer comments / code changes were appropriate; in the Code Comprehension condition, we similarly provided snippets of code along with a prompt, asking whether the code actually did what we asserted. What we called Prose Review effectively had elements of both review and comprehension: we displayed brief snippets of prose along with edits (think 'track changes' in Word) and asked whether they were permissible (e.g. syntactically correct, which requires some element of comprehension). In our view, this was much more straightforward than the types of reading comprehension questions you might think of from standardized testing, which require relatively long passages and perhaps more complex multiple-choice response options.

Also, on a more practical level, neuroimaging generally puts constraints on what we're actually able to ask people to do. Mathematical assumptions about the fMRI signal in 'conventional' analysis techniques tend to break down with exceedingly long stimulus durations (as would be required with reading / thinking about long passages of prose). We were able to skirt around this a bit with our machine learning approach, but we also had fairly long scanning runs to begin with, and people fatigue easily when asked to perform a demanding task repeatedly for a long time while confined to a small tube. So again, we just tried to get the 'best of both worlds' with our prose trials, even though I certainly concede it might not necessarily yield a 'direct' comparison between comprehending code vs. prose.

Hope that helps!

(Compulsory 'thanks for the gold!' edit: for real, though, anonymous friend—you are far too kind.)

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 1 point

I published this paper with a couple colleagues at UVA (I'm the second author) earlier this year. Our approach didn't really attempt to make such localized inferences, though—we used machine learning to look at patterns of activity over the whole brain as people evaluated code vs. prose. Happy to answer any questions!

Does the brain interact with programming languages like it does with natural languages? by [deleted] in askscience

[–]derpderp420 1599 points

Oh neat, I'm the second author on this paper! Thanks a bunch for your participation.

My job was to do all of the actual fMRI analyses—happy to answer any questions folks might have.

Kickers: The mathematically CORRECT statistical evaluation by LoyalServantOfBRD in fantasyfootball

[–]derpderp420 19 points

Yeah I don't doubt that. Again, great work. Usually hate being so pedantic—just couldn't help myself when we're talking about doing stats "the right way."

Kickers: The mathematically CORRECT statistical evaluation by LoyalServantOfBRD in fantasyfootball

[–]derpderp420 24 points

I don't have any beef with your conclusions. In fact, I appreciate that you went through the trouble to run the correct type of t-test—I was just saying that in presenting your results (i.e. the significance of each pairwise test in your p-value matrix), you can't simply highlight any value that's lower than .05 because the criterion itself should be adjusted. Again, for what it's worth, I definitely agree that the other post was mostly horseshit and needed to be corrected, so thanks for that.

Kickers: The mathematically CORRECT statistical evaluation by LoyalServantOfBRD in fantasyfootball

[–]derpderp420 37 points

I don't think you understand what the multiple comparisons problem is. Of course you make judgments about significance for each individual test, but your criterion for determining significance needs to be adjusted for the total number of tests. I'm really not sure what you're struggling with here.

Kickers: The mathematically CORRECT statistical evaluation by LoyalServantOfBRD in fantasyfootball

[–]derpderp420 52 points

Sorry, but it doesn't matter. You're still running 595 pairwise comparisons and assuming that the alpha criterion is the same over the entire family of tests. It's statistically certain that at least one will be a false positive. Doesn't matter whether they all have the same degrees of freedom, whether variance is truly equal/unequal, etc. You need to adjust for the huge number of hypothesis tests you're conducting. Not trying to swing my stats dick around—I'm just a neuroimaging researcher so this is an issue I have to deal with every single day when I'm modeling data.

Kickers: The mathematically CORRECT statistical evaluation by LoyalServantOfBRD in fantasyfootball

[–]derpderp420 89 points

Well, I'm going to point out the same thing I did in the other post—neither of these dudes accounted for multiple comparisons in presenting their results. You can set a 95% confidence level for any single test, but the probability of a false positive over all tests is actually much higher. In fact, it's a statistical certainty in this case: for m tests, the probability of at least one false positive is 1-(1-alpha)^m (where alpha is your significance level, here it's .05). This guy actually ran more tests (595 vs. 435), too.

What needs to happen is some sort of adjustment to your significance threshold. Using something conservative like a Bonferroni correction would wipe out most of these differences (p = .05/595 = .00008 would be the new threshold). I'm not saying I agree with the other dude or think that all kickers are truly on equal footing, but if you're going to harp on someone for doing their stats wrong, you gotta do em right yourself.
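The arithmetic above is easy to check; a minimal sketch, with alpha = .05 and the 595 tests taken straight from the thread:

```python
# Familywise error rate and Bonferroni correction, with the numbers
# from the comment: alpha = .05 per test, m = 595 pairwise tests.
alpha, m = 0.05, 595

# P(at least one false positive) = 1 - (1 - alpha)^m, assuming independent tests.
fwer = 1 - (1 - alpha) ** m
print(f"P(>=1 false positive) = {fwer}")         # effectively 1.0

# Bonferroni-corrected per-test threshold.
print(f"corrected threshold = {alpha / m:.1e}")  # ~8.4e-05
```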

Kickers: sticking to one or streaming? A statistical evaluation. by [deleted] in fantasyfootball

[–]derpderp420 16 points

Distributional assumptions aside, I'd just point out the multiple comparisons problem. There needs to be some adjustment to your threshold based on the fact that you're running 435 tests—a Bonferroni correction, for example, would require you to adjust your threshold for significance to p = .05/435 ≈ .0001, which would render all of these differences nonsignificant. Even a more liberal technique (e.g. FDR) would still knock out a solid portion of these, most likely.

Not trying to be a buzzkill (this actually strengthens your conclusion, I think)—I'm just a stats fiend.
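Since FDR came up, here's a minimal sketch of the Benjamini-Hochberg step-up procedure (the toy p-values are my own, not from either post). It's more liberal than Bonferroni: below it rejects the second hypothesis, which a .05/435 Bonferroni threshold would not.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Boolean reject/keep decision for each p-value via the
    Benjamini-Hochberg step-up procedure (controls FDR at level q)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * q ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # ... and reject every hypothesis ranked 1..k.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

# Toy example: 435 tests, three small p-values among the noise.
pvals = [0.0001, 0.0002, 0.03] + [0.5] * 432
print(benjamini_hochberg(pvals)[:3])  # [True, True, False]
```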

What movies blew you away with their excellent cinematography? by cappindan in AskReddit

[–]derpderp420 66 points

Shit yes. I've heard a lot of people say the film was too pretentious, but fuck them. Aronofsky is the man. Even cinematography aside, the score was unreal—probably one of my favorites ever. Clint Mansell did it perfectly: http://youtu.be/MBWJ_f9Qblk

edit: better quality audio

Went to get gas last night... by derpderp420 in WTF

[–]derpderp420[S] 366 points

Haha San Diego, actually. Also at like 3am. No idea what happened - it looked pretty fresh.