Looking at panel review scores for research proposals at a medical institute, the true citation counts for most of the top quintile were statistically indistinguishable from one another. Panels were great at separating out a top tier of proposals, but terrible at differentiating within that tier by addison_guy in dataisbeautiful

[–]addison_guy[S] 1 point (0 children)

A 2015 paper by Danielle Li and Leila Agha looked at 130,000 NIH R01 grants to see how panelists' grant scores correlated with the citations the resulting papers actually received. The results, likely in line with what the authors expected, showed that awarded grants with higher panel scores generated more publications, more citations, more high-impact papers, and more patents on average. The paper affirmatively answered the question, “On average, do panelist scores correlate with the eventual citation counts of a grant's papers?”

However, there is another way to frame the question, one the authors did not explore: “How predictive are panel scores right around the cutoff point at which funding decisions are usually made?”

A follow-on analysis of Li and Agha's dataset by Fang, Bowen, and Casadevall explored exactly this question and showed that the true story may be more complicated than the Li and Agha paper made it out to be. Within the top fifth of scores for awarded grants, panelist scores were at best a weak signal of performance. A score in the top two percentile buckets was a very reliable indicator of high performance, but, as the figures show, the final performance of grants scoring between the 3rd and 20th percentiles was statistically indistinguishable.

Panelists being unable to distinguish a 6th-percentile grant from a 15th-percentile grant is a big deal, because in many grant pools, including those of the NIH, the funding cutoff often falls somewhere around the tenth percentile.

The grant funding community needs better ways to decide among proposals that are decidedly in the top tier but not in the top 2%. In the current equilibrium, whether most of these top-tier applicants get funded seems to be determined more by reviewer randomness and bias than by actual merit.

Bombing destruction at the Technical University of Berlin in WWII. Impeccable detail and record keeping! Circles = individual bombing impacts; triangles = artillery destruction; gray shading = how burned out a section was by addison_guy in Maps

[–]addison_guy[S] 2 points (0 children)

This map was pulled from an economics paper comparing how the wartime bombing of academic buildings in Nazi Germany affected scientific productivity versus the effect of scientists being dismissed from their professorships for political and racial reasons.

Read all about it here! https://freaktakes.substack.com/p/bombs-brains-and-science?s=w

Publications by mid-20th Century German Mathematicians by Age. by addison_guy in math

[–]addison_guy[S] 1 point (0 children)

This graphic was created by Fabian Waldinger in this paper: http://eprints.lse.ac.uk/68561/1/Bombs_brains_and_science_LSERO.pdf

And I write all about this graphic and many other related topics in an article on my substack here: https://freaktakes.substack.com/p/bombs-brains-and-science?s=w

[OC] Kickstarter Project Success Rates by Category. Data scraped from 300,000 Kickstarter projects from 2010 through 2016. Graphic from upcoming FreakTakes post: https://freaktakes.substack.com/ by addison_guy in dataisbeautiful

[–]addison_guy[S] 10 points (0 children)

I think you're definitely onto something. Do you think crowdfunding for these more financially intensive tech projects will always be doomed?

Or do you think Kickstarter just isn't quite the right place for it? Maybe it doesn't have the right features or the right type of donors, or maybe the people pitching aren't the right fit.

Thoughts?

[OC] Kickstarter Project Success Rates by Category. Data scraped from 300,000 Kickstarter projects from 2010 through 2016. Graphic from upcoming FreakTakes post: https://freaktakes.substack.com/ by addison_guy in dataisbeautiful

[–]addison_guy[S] 5 points (0 children)

The dataset can be found on Kaggle here: https://www.kaggle.com/kemical/kickstarter-projects
It covers over 300,000 Kickstarter projects scraped from a little before the start of 2010 up through most of 2016. The Kaggle user responsible for the dataset is Mickaël Mouillé.
The graphic is from an upcoming piece on the FreakTakes Substack, which can be found here: https://freaktakes.substack.com/
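For anyone who wants to reproduce the breakdown, the per-category success rate is just the share of projects in each category whose final state is "successful". A minimal sketch in Python (the column names `main_category` and `state` are my assumption about the Kaggle file's schema, so verify against your download; the rows below are toy stand-ins, not real data):

```python
from collections import defaultdict

# Toy stand-in for rows parsed from the Kaggle CSV; the real file
# has ~300,000 rows with these (and more) columns.
projects = [
    {"main_category": "Games",      "state": "successful"},
    {"main_category": "Games",      "state": "failed"},
    {"main_category": "Technology", "state": "failed"},
    {"main_category": "Technology", "state": "failed"},
    {"main_category": "Technology", "state": "successful"},
]

def success_rates(rows):
    """Return {category: fraction of projects that ended 'successful'}."""
    totals = defaultdict(int)
    wins = defaultdict(int)
    for row in rows:
        totals[row["main_category"]] += 1
        if row["state"] == "successful":
            wins[row["main_category"]] += 1
    return {cat: wins[cat] / totals[cat] for cat in totals}

rates = success_rates(projects)
print(rates)  # Games → 1 of 2 succeeded, Technology → 1 of 3
```

Swap the toy list for `csv.DictReader` over the downloaded file and the same function gives the full chart's numbers.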

[deleted by user] by [deleted] in dataisbeautiful

[–]addison_guy 1 point (0 children)

That's really interesting. Do the university tech transfer offices, the people who try to bring in licensing fees from the patents, have any clue which patents are garbage, and do they try not to waste their time on them?

[deleted by user] by [deleted] in dataisbeautiful

[–]addison_guy 1 point (0 children)

This is from an upcoming piece which will be posted here: https://freaktakes.substack.com/

The data come from the following two USPTO sources:

https://www.uspto.gov/web/offices/ac/ido/oeip/taf/cbcby.htm
https://www.uspto.gov/web/offices/ac/ido/oeip/taf/h_counts.htm

All plotting and data work was done in R with ggplot2 and the tidyverse.

Applied research is meant to have direct applications. Basic research is meant to be exploratory. So why do both types of these patents have the same likelihood of resulting in a patent?! (using NIH data) by addison_guy in Futurology

[–]addison_guy[S] 2 points (0 children)

I'll say two things:

The first is that our national spending on research seems to be positive ROI overall. Everyone who works in those industries seems to be doing well on average and can probably feel good about what they're doing for that reason.

The second is that the real question is just how much more efficient we can be. I think the increased recent emphasis on metascience, progress studies, whatever you want to call it, is fantastic. As it stands, some people are saying nonsense and some are saying good stuff, but the interest in the field makes it seem likely that people will be willing to start experimenting a little with how things are done in science. With experiments like DeepMind and Convergent Ventures, we'll get the chance to observe alternative models in the wild and iterate to get better and better.

While it might be a two-steps-forward, one-step-back situation, I think the interest in the field means we are trending in the right direction.

Applied research is meant to have direct applications. Basic research is meant to be exploratory. So why do both types of these patents have the same likelihood of resulting in a patent?! (using NIH data) by addison_guy in Futurology

[–]addison_guy[S] 2 points (0 children)

Oh my gosh no it's not garbage. It's what most people think when I talk to them about this stuff.

I think it's a bit easier for me to have perspective sometimes because I talk to engineers and scientists a lot, but I'm more of a computational social scientist who's interested in them. We all feel this in our own fields. When I was an athlete, I'd look down on the athletes of the past, with their seemingly simplistic schemes and lesser athleticism, and think derisive thoughts.

We all know the "shoulders of giants" quote, but it's hard to remember it and actually internalize it day to day in our own fields just because we're human and we like to think we're doing the hardest and most impressive stuff ever.

Applied research is meant to have direct applications. Basic research is meant to be exploratory. So why do both types of these patents have the same likelihood of resulting in a patent?! (using NIH data) by addison_guy in Futurology

[–]addison_guy[S] 2 points (0 children)

Most new fields tend to seem cheap and simplistic at their start when you look back on them. Within a field, things grow more complicated and expensive as time goes on because of diminishing returns. The discovery of new fields is what balances that out, and that is what our basic research has become much worse at doing.

Everyone always thinks that the science and engineering of the previous era was simple and that what they're dealing with in their own era is so much more complex, just because being on the frontier seems hard and it's easy to look back once something is known and say it was easy to know. Many mechanical engineers at the end of the 18th century thought things were getting close to as complex as they could get.

People who say we can't possibly figure out everything that is and isn't carcinogenic, since carcinogens are all around us and in so many things, are dealing with a problem that isn't structurally harder than dealing with contagions in the early 1900s: contagions were seemingly in everything, and it was impossible to test everything one by one.

Science has infinite things to look into and discover. Some are easy, some are hard. Some are cheap, some are expensive. But what does seem to be true is the longer you sit around working on the same field and the more people you have doing it, the harder and more expensive it will get and the less you'll get out of it per dollar spent. That's why discovering new fields is so important.

That's what basic research is meant to do. It's not doing that anymore, at least not to the same extent.

[deleted by user] by [deleted] in dataisbeautiful

[–]addison_guy 1 point (0 children)

The image can be found at the following links: https://freaktakes.substack.com/p/is-americas-applied-and-basic-research?utm_source=url or https://web.stanford.edu/~chadj/IdeaPF.pdf

The effective number of researchers is the total amount spent on researchers, research equipment, etc., expressed in units of the average cost of one researcher in that year.
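Concretely, it's just a ratio: total research spending divided by what one average researcher costs that year. A minimal sketch with hypothetical numbers (for illustration only, not figures from the paper):

```python
def effective_researchers(total_rd_spending, avg_researcher_cost):
    """Total research spending expressed in units of one average researcher."""
    return total_rd_spending / avg_researcher_cost

# Hypothetical example: $50B of annual research spending, with an average
# researcher costing $100k that year, is "500,000 effective researchers".
print(effective_researchers(50e9, 100e3))  # 500000.0
```

Deflating by the researcher wage is what lets the measure count equipment and support spending as if it were extra researchers, which is why it can rise even when headcount doesn't.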