Re-posted (reblogged) from Jason Antrosio’s excellent blog Living Anthropologically, another bit pointing to some of the problems of academic valuation and valuing quantity over quality in academic publishing and prestige:
[Nassim Nicholas] Taleb[, author of the book “The Black Swan: Second Edition: The Impact of the Highly Improbable“] draws directly from Robert K. Merton:
Let’s say someone writes an academic paper quoting fifty people who have worked on the subject and provided background materials for his study; assume, for the sake of simplicity, that all fifty are of equal merit. Another researcher working on the exact same subject will randomly cite three of those fifty in his bibliography. Merton showed that many academics cite references without having read the original work; rather, they’ll read a paper and draw their own citations from among its sources. So a third researcher reading the second article selects three of the previously referenced authors for his citations. These three authors will receive cumulatively more and more attention as their names become associated more tightly with the subject at hand. The difference between the winning three and the other members of the original cohort is mostly luck: they were initially chosen not for their greater skill, but simply for the way their names appeared in the prior bibliography. Thanks to their reputations, these successful academics will go on writing papers and their work will be easily accepted for publication. Academic success is partly (but significantly) a lottery. (2007:217)
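The citation-copying process Taleb describes can be sketched as a toy simulation (the author counts, paper counts, and copying rule below are my own illustrative assumptions, not figures from the text): fifty equally meritorious authors, and each new paper drawing its three citations from some earlier paper's bibliography rather than from the full cohort.

```python
import random

random.seed(42)

def citation_lottery(n_authors=50, n_papers=200, cites_per_paper=3):
    """Toy model of Merton-style cumulative advantage in citations.

    The first bibliography cites all n_authors equally meritorious
    authors. Each later paper copies `cites_per_paper` citations at
    random from an earlier paper's bibliography (rather than reading
    the originals), so early random picks snowball.
    """
    counts = {a: 0 for a in range(n_authors)}
    bibliographies = [list(range(n_authors))]  # paper 0 cites everyone
    for _ in range(n_papers):
        source = random.choice(bibliographies)          # pick a paper to crib from
        picks = random.sample(source, cites_per_paper)  # copy 3 of its citations
        for a in picks:
            counts[a] += 1
        bibliographies.append(picks)
    return counts

counts = citation_lottery()
ranked = sorted(counts.values(), reverse=True)
print("top 3 citation counts:", ranked[:3])
print("median citation count:", ranked[len(ranked) // 2])
```

Running this, a handful of authors end up cited far more than the median, even though all fifty were stipulated to be of equal merit — the gap is produced entirely by the copying rule and luck of the early draws.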
There’s a larger conversation to be had about Taleb’s skepticism of statistics, predictability, and our ability to draw causality from past events–he seems to believe the latter is near-nil–but what I’m more interested in at the moment is something Jason and Taleb discuss in terms of properly assessing the successes of the successful. That is–when we attempt to measure why successful people are successful, how frequently (if ever) do we really do the proper analysis, including correct sampling of the cohort “the successful” came from? Is it possible (if not likely) that while “the successful” may share traits of “courage, risk taking, optimism, and so on” (Taleb, pp. 105-106), the explanatory power of such traits may be nil? (Presumably because the number of successful people with these traits is actually no higher than what one would predict by pulling people out of the population at random; i.e., just as many or more courageous, risk-taking, optimistic people fail as succeed, and courage, risk-taking, and optimism may not statistically increase your chances of success.)
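That sampling worry can be made concrete with a hedged toy model (the trait base rate and success probability here are invented for illustration): make success pure luck, independent of a “courage” trait, and then sample only the winners the way a success-literature study would.

```python
import random

random.seed(0)

N = 100_000
BASE_RATE = 0.6      # assumed share of people who are "courageous risk-takers"
SUCCESS_PROB = 0.01  # success is pure luck, independent of the trait

population = [(random.random() < BASE_RATE,     # has the trait?
               random.random() < SUCCESS_PROB)  # succeeded?
              for _ in range(N)]

successful = [trait for trait, won in population if won]
failed = [trait for trait, won in population if not won]

rate_s = sum(successful) / len(successful)
rate_f = sum(failed) / len(failed)
print(f"trait rate among the successful: {rate_s:.2f}")
print(f"trait rate among the failed:     {rate_f:.2f}")
```

If you only interview the successful, most of them do have the trait (about the base rate, 60% here), which looks like an explanation — but the trait is just as common among the failed, and far more trait-holders fail than succeed, so its explanatory power is nil by construction.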
The point of my ruminations on the possible Just-So-Storiness of success is that, whether or not success (or citations) is basically stochastic (more “noise” than “signal”), much of academic life is certainly based on Just-So-Stories. A prime example is what I’ve called the “unfortunate mysteries at the heart of peer review”. There are also assumptions about meritocracy, assumptions about tenure and academic freedom, assumptions about the (proper) roles of intellectuals and academics, assumptions about the validity of published results (most/many of which are, arguably, false), and on and on. In my experience at least, the chance that a scientist will use an anecdote to illustrate the strengths of, say, peer review approaches 100%. (For extra irony, this may take place after they have admonished someone that “the plural of anecdote is not data.” I’m QUITE sure that the singular of anecdote is not data, either.) The “New Scientism”, which condescends toward whatever group holds whatever deemed-to-be-irrational beliefs a given Champion Skeptic dislikes, is rarely turned in on itself. That is, the Skeptic (“Scientismist”?) rarely faces, or is forced to acknowledge, that social factors are as much at work in scientific “beliefs that are held to be true or rational” as they are in beliefs that are false. The vast majority of things we believe (scientist, scientismist, or “civilian”), we believe because of social cues and our judgments about the reliability of our interlocutors–reliability judgments that are themselves usually informal and socially informed–such that the social processes behind believing untrue things and believing true things are “symmetrical”. In practice, in my experience (anecdotes!), the processes behind learning a “true fact” are pretty close to those behind learning a “goodfact” or an utterly incorrect factoid.
Science is an important and powerful way to winnow rational, true beliefs from irrational, false beliefs, but scientists tend to underappreciate the extent to which most of our human lives (even those of scientific materialists!) are likely to be shrouded in mystery, and to depend as much on irrational (not personally empirically tested) true beliefs and rational false beliefs as on their converses.
Confused? Me too. That’s enough STS-ing without a license for today…