Re-posted (reblogged) from Jason Antrosio’s excellent blog Living Anthropologically, another piece pointing to some of the problems of academic valuation–and of valuing quantity over quality–in academic publishing and prestige:
[Nassim Nicholas] Taleb[, author of the book “The Black Swan: Second Edition: The Impact of the Highly Improbable“] draws directly from Robert K. Merton:
Let’s say someone writes an academic paper quoting fifty people who have worked on the subject and provided background materials for his study; assume, for the sake of simplicity, that all fifty are of equal merit. Another researcher working on the exact same subject will randomly cite three of those fifty in his bibliography. Merton showed that many academics cite references without having read the original work; rather, they’ll read a paper and draw their own citations from among its sources. So a third researcher reading the second article selects three of the previously referenced authors for his citations. These three authors will receive cumulatively more and more attention as their names become associated more tightly with the subject at hand. The difference between the winning three and the other members of the original cohort is mostly luck: they were initially chosen not for their greater skill, but simply for the way their names appeared in the prior bibliography. Thanks to their reputations, these successful academics will go on writing papers and their work will be easily accepted for publication. Academic success is partly (but significantly) a lottery. (2007:217)
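The cascade Taleb describes is simple enough to sketch as a toy simulation. To be clear, the parameters below (pool size, citations copied per paper, number of later papers) are my own illustrative assumptions, not Taleb’s; the mechanism–each new researcher copying citations from the previous paper’s bibliography rather than returning to the full pool of equally meritorious authors–is the one in the quote:

```python
import random

def citation_lottery(pool_size=50, picks=3, later_papers=200, seed=0):
    """Toy version of the Merton/Taleb cascade:
    - pool_size equally meritorious authors exist on a subject;
    - the second researcher cites `picks` of them at random;
    - each later researcher copies citations (with replacement) from the
      previous paper's bibliography instead of sampling the full pool.
    Returns per-author citation counts."""
    rng = random.Random(seed)
    counts = [0] * pool_size
    # the initial lucky few, chosen purely at random
    bibliography = rng.sample(range(pool_size), picks)
    for author in bibliography:
        counts[author] += 1
    for _ in range(later_papers):
        # later citations are drawn from the prior bibliography only
        bibliography = [rng.choice(bibliography) for _ in range(picks)]
        for author in bibliography:
            counts[author] += 1
    return counts

counts = citation_lottery()
cited = [c for c in counts if c > 0]
print(f"authors ever cited: {len(cited)} of 50")
print(f"citations captured by them: {sum(cited)} of {sum(counts)}")
```

Because later bibliographies can only draw on the previous one, the set of cited authors never grows: all of the hundreds of citations end up concentrated on (at most) the three authors who got lucky in the very first draw, exactly the lottery dynamic in the quote.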
There’s a larger conversation to be had about Taleb’s skepticism of statistics, of predictability, and of our ability to draw causality from past events–he seems to believe the latter is near-nil–but what I’m more interested in at the moment is something Jason and Taleb discuss in terms of properly assessing the successes of the successful. That is: when we attempt to measure why successful people are successful, how frequently (if ever) do we really do the proper analysis, including correct sampling of the cohort “the successful” came from? Is it possible (if not likely) that while “the successful” may share traits of “courage, risk taking, optimism, and so on” (Taleb, pp. 105-106), the explanatory power of such traits is nil? (Presumably because the number of successful people with these traits is actually no higher than what one would predict by pulling people out of the population at random, i.e., just as many or more courageous, risk-taking, optimistic people fail as succeed; and courage, risk-taking, and optimism may not statistically increase your chances of success.)
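That sampling point can be made concrete with a minimal simulation (the rates here are invented for illustration): give everyone a “risk-taking” trait independently of whether they succeed, then look only at the winners. The trait will look like a hallmark of the successful even though, by construction, it predicts nothing–because it is just as common among the failures nobody samples:

```python
import random

def trait_among_outcomes(n=100_000, trait_rate=0.6, success_rate=0.05, seed=1):
    """Assign a trait ('risk-taking') and success independently, then
    report how common the trait is among winners vs. losers.
    All rates are made-up illustrative assumptions."""
    rng = random.Random(seed)
    winners_with = winners = losers_with = losers = 0
    for _ in range(n):
        trait = rng.random() < trait_rate
        success = rng.random() < success_rate  # independent of the trait
        if success:
            winners += 1
            winners_with += trait
        else:
            losers += 1
            losers_with += trait
    return winners_with / winners, losers_with / losers

w, l = trait_among_outcomes()
print(f"risk-takers among the successful:   {w:.2%}")
print(f"risk-takers among the unsuccessful: {l:.2%}")
```

A study that samples only the successful would find that most of them are risk-takers and be tempted to credit the trait; only by also sampling the far larger cohort of failures–who are risk-takers at the same rate–does its nil explanatory power show up.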
The point of my ruminations on the possible Just-So-Storiness of success is that, whether or not success (or a citation record) is basically stochastic (more “noise” than “signal”), much of academic life certainly is based on Just-So-Stories. A prime example is what I’ve called the “unfortunate mysteries at the heart of peer review“. There are also assumptions about meritocracy, assumptions about tenure and academic freedom, assumptions about the (proper) roles of intellectuals and academics, assumptions about the validity of published results (most/many of which are, arguably, false), and on and on. In my experience at least, the chance a scientist will use an anecdote to illustrate the strengths of, say, peer review approaches 100%. (For extra irony, this may take place after they have admonished someone that “the plural of anecdote is not data.” I’m QUITE sure that the singular of anecdote is not data, either.) The “New Scientism“, which condescends to whatever group possesses whatever deemed-to-be-irrational beliefs a given Champion Skeptic dislikes, is rarely turned in on itself. That is, the Skeptic (“Scientismist”?) rarely faces, or is forced to acknowledge, that social factors are as much at work behind scientific “beliefs that are held to be true or rational” as they are behind beliefs that are false. The vast majority of things we believe (scientist, scientismist, or “civilian”), we believe because of social cues and our judgments about the reliability of our interlocutors–reliability judgments that are themselves usually informal and socially informed–such that the social processes behind believing untrue things and believing true things are “symmetrical“. In practice, in my experiences (anecdotes!), the processes behind learning a “true fact” are pretty close to those behind learning a “goodfact” or an utterly incorrect factoid.
Science is an important and powerful way to winnow rational, true beliefs from irrational, false ones, but scientists tend to underappreciate the extent to which most of our human lives (even those of scientific materialists!) are likely to be shrouded in mystery and to depend on irrational (i.e., not personally, empirically tested) but true beliefs, and on rational but false beliefs, as much as on their converses.
Confused? Me too. That’s enough STS-ing without a license for today…
Ah, I recall one point that I was trying to make before I got myself lost in… myself. It is that our use of citations and citation indexes to indicate quality for academics may be subject to the same sorts of biases/mistakes Taleb talks about, rewarding the lucky instead of the good. The thing I find odd about this is that we certainly could, as academics, *choose* to reward quality over quantity, and one way to take that seriously would be to spend more time learning and understanding the work of our colleagues, such that (say) we could review tenure files from colleagues in unrelated disciplines by spending the time to become informed enough to make… erm, an informed opinion. I mean, of course, we couldn’t become experts/peers in every field, and I’m not suggesting we replace external letters–but it just boggles my mind that scholars, who should be experts on learning difficult concepts, rely so heavily on proxies like citation indexes rather than spending the (yes, I know, VERY scarce!) time to understand what the major issues in a field are, and make up their own minds (with external letters and other context, including citation indexes as supporting information) as to whether a fellow scholar is doing something innovative, interesting, and tenure-worthy.
Now, even if we grant that such a thing is possible (learning enough of other fields to make informed decisions that rely less on proxies), the major barriers appear to be, essentially, that a) many academics are academics expressly so that they can spend as much time as possible on their own specialties and as little time as possible on… anything else; b) all of us academics have more responsibilities than we have time for anyway, even setting aside provincialist inclinations; and c) we’re not rewarded, at all, for breadth of knowledge or interest, or even for being well-informed about the work and innovations of close colleagues–or for being “good academic citizens” at all.
This reminds me of the tale told to me by my advisor about people cutting and pasting bibliographies. Someone traced the propagation of a typo in the citation of a famous paper through a series of subsequent papers. I can’t recall if it was Gould’s Spandrels of San Marco paper, or Raven and Ehrlich’s co-evolution paper, or another paper entirely, but the citation had clearly been copied and pasted from one bibliography to the next—and the typo traveled from paper to paper and even ended up in a textbook. So who’s to say whether a high citation rate is a recognition of merit or a lottery? Clearly, we do cut corners, and those shortcuts have consequences.
@AgroEcoProf, I find the more I delve into the subjects that interest me, the more I am drawn to know more about how other fields impact my area of interest. I often find myself expanding well beyond my own specialty to inform my specialty.
Well, I’m not surprised that you end up drawn to a “broader view” the more you work in your interests, D, but I would still say that’s a minority approach–and that the incentives to focus ever more narrowly also align well with a large number of self-selected people who probably feel that same draw but find themselves able to resist it!
What I’m learning is that the way that those other fields inform my research is without parallel. There are things I just wouldn’t understand without stepping outside my comfort zone. And trust me…there are days that I am way outside my comfort zone and struggling like an undergrad fresh off the farm.
🙂 Well of course — that means you’re doing it right! This is an ancillary problem, I find, for academics, and for adults in general. We hate feeling stupid, like an undergrad or someone even less advanced, all over again. Kids’ brains are definitely more plastic than adults’–they learn plain quicker–but I think we also cramp ourselves because of our unwillingness/hesitation to just be openly, uncomfortably ignorant about something again. It’s just like (in my mind) foreign languages–it’s much easier to speak the one you’re already familiar with if other people are around who understand, and no one past drinking age (or even before) terribly wants to have to say the equivalent of “I… room need where go ladies and gentlemen to eliminate waste, please” (or “Wrong you are but I do not have the words need to use to explain you just are me think”) in front of a room of adults. Having to say this in its academic equivalent (“I’m sorry, I have no idea what you just said, but I am going to invest several weeks/months/years in being reminded of this ignorant state and working my way out of it”) is no fun for most of us.
It’s not the struggle that I’m uncomfortable with. It’s the time investment. But it is easier for me to learn on my own (and possibly seek out help when I encounter difficult-for-me material now and again) than it is to start at the beginning and take a course or something. I guess that’s the beauty of being a trained researcher: you know how to learn.
In theory, at least… though one has to let go of ego to do that. I still hypothesize that you are not typical in these aspects, D 🙂
I have always been far too tolerant of my curiosity.