Evolutionary Psychology

Evolutionary Psychology is moving to SAGE. The new address is evp.sagepub.com.
Robert Kurzban

The Evolutionary Psychology Blog

By Robert Kurzban

Robert Kurzban is an Associate Professor at the University of Pennsylvania and author of Why Everyone (Else) Is A Hypocrite. Follow him on Twitter: @rkurzban

On Checking References, Part Two

Published 19 November, 2013

I received a number of offline comments about my last post, in addition to the commenters on the post itself. By and large, it seems that there is an interest in some sort of mechanism to check citations, so I have put a little pilot program into effect at Evolution and Human Behavior.

Here’s how I’m currently structuring it. I received a paper that is in its third iteration. After studying the new manuscript, I decided to accept it. I have delayed officially accepting it, however, and sent the manuscript to someone who has agreed to check the citations in exchange for an hourly wage. My employee is not a graduate student – so I am not taking her away from scholarly pursuits – but does have an undergraduate degree and has already shown competence, so I trust her to do a good job.

The economics are somewhat alarming. Suppose that I’m paying $12 per hour. She estimates it takes about twenty minutes to track down and verify a citation. So each citation costs $4 to check. If a paper has, say, 50 citations – which is often on the low side – then this process, conservatively, costs $200 per paper. If the journal publishes 50 papers per year, the annual budget would be $10,000, a tidy sum. Add in review papers, which might have three times as many cites, and the figure potentially increases substantially.
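The arithmetic above can be sketched in a few lines (the dollar figures and counts are the estimates from this paragraph; the variable names are just for illustration):

```python
# Back-of-envelope cost of citation checking, using the estimates above.
hourly_wage = 12                 # dollars per hour
minutes_per_citation = 20        # estimated time to verify one citation
citations_per_paper = 50         # often on the low side
papers_per_year = 50

cost_per_citation = hourly_wage * minutes_per_citation / 60
cost_per_paper = cost_per_citation * citations_per_paper
annual_budget = cost_per_paper * papers_per_year

print(cost_per_citation, cost_per_paper, annual_budget)  # 4.0 200.0 10000.0
```

Review papers at three times as many citations would run roughly $600 each under the same assumptions.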

Despite this expense, I decided it would be worthwhile to conduct a pilot program along these lines. My expectation is to have a few papers checked in this way every month, as opposed to having all of them checked. There is some money in one of my budgets for this, so I can at least fund the pilot program. One question is how many cites are questionable. Maybe it’ll turn out that the answer is basically zero, and so the whole thing was more or less a tempest in a teapot. (This would still leave open the issue of whether the field should introduce pincites. I’m genuinely curious about this. Drop me a line to tell me if you’d prefer the present world, or the world in which you have to supply page numbers in your citations. A pain for authors, but a boon for readers.)

There is another cost beyond the labor. Instead of the paper in question going right to production, there will now be a lag between when the paper is accepted and when it goes to press. If my cite-checker finds a number of potentially incorrect cites, then there is another iteration of the manuscript introduced into the process, which might well be vexing for the author or authors.

Throwing money at the problem is not unreasonable, I think, for the moment, but I’m not sure that it’s sustainable over the long term or, even if it were, if the money spent this way would not be better spent on other things.

Having said that, my sense is that the law school solution is not really an option. I don’t think that it will, any time soon, improve a student’s reputation or job prospects to perform such tasks. The law school regime – work on the law review to get a clerkship – has no obvious equivalent in psychology as currently constituted, as far as I can tell.

Other ideas? A bond system? Should authors put up a $300 bond when they submit, which they get back if all the citations are correct? Would authors be willing to do this, or simply migrate to another outlet? A simple fee, as in the PLoS family of journals, to defray the cost? This seems to add insult to injury, given the existing resentment about Elsevier and its unseemly profits. A tax on members of the Human Behavior and Evolution Society? E&HB is the journal of the Society, and the Society benefits to the extent that the journal is better. Consider it a public good produced by the membership? What about coercing advanced undergrads? We already more or less force them to participate in experiments in the name of pedagogy. Checking citations is educational, right? When people come to me asking to volunteer in my lab, which happens not infrequently, do I stop turning them away, and say, yes, as a matter of fact, there is something you can do to help…? It might take them longer, and they might not be as adept as a grad student would be, but the price sounds great…

Still open to ideas and discussion. The pilot program is under way…

  • Gil

    Hi Rob, I am wondering: what do you mean by checking citations? I assume you mean something beyond making sure that the actual reference is correctly cited, right? In that case, 20 minutes could be a very low estimate of the amount of time it takes to verify a claim cited in the article. If a reference is 30 pages long, and the author of the original article does not provide page numbers, you might need to read large parts of the cited article to verify the claim. Also, what do you do with books that are less available (i.e., you actually have to go to the library to get them) and might require more time to evaluate?

    Another potential problem that I can see is that sometimes, it’s a matter of interpretation. A fact-checker might think a certain claim isn’t supported, but the author might disagree. That can make your life as an editor much tougher as you will have to also serve as an arbitrator between the student and the author.

    • rkurzban

      Right, I mean checking to ensure that the cited source says what the authors say it says. In some cases, this might be quite quick, so I think a 20 minute average isn’t too crazy, bearing in mind some will take longer. Books are relatively rarely cited, but I take your point. I think we’ll probably burn that bridge when we come to it. On your second point, I think that my policy will be to defer to authors in such cases. Mostly I see the student’s role as calling authors’ attention to issues that merit it. Thanks for the comments, Gil.

      • Gil

        I agree with you on the last point and I think it’s a good idea. Just knowing that your article is being scrutinized will get authors to pay more attention to details. In fact, you don’t need to check all articles and can choose them at random. This is like having random drug tests for athletes, which makes it difficult to cheat.

  • Brent D.

    What about doing random checks of, say, 25% of the citations in a given paper instead of all citations? (A smart person could figure out the optimal number.) If the paper passes with no errors, fine. If any errors are detected, the paper is then passed back to the authors with a stern request to check each and every citation and provide a clear annotation?

    I like the idea of using advanced undergraduates. Perhaps it would be possible to create a collaboratory of labs that could each dedicate X% of RA hours to the task per week. This could be educational and useful. I know there are times when we scramble to find meaningful tasks for RAs.

    Regardless, I like the idea of checking the scholarship of a paper. I will be interested in learning about the results of your pilot program.
    PS – I like pincites. I think it is useful to know where to look for the supporting information. This will not work all of the time, but some effort toward this level of precision would be valuable to the field.

    • Chris Martin

      I wrote a comment here. Did it get deleted?

  • Robert Deaner

    Rob, you’re considering two related but distinct questions:

    #1: Should there be pincites or page numbers for every reference?

    #2: Should references be externally fact-checked?

    #2 is expensive, and the cost is entirely assumed by the journal. And there may be little tangible benefit for readers (in most cases, we hope).

    #1 is lower-cost, and the cost is entirely assumed by the authors. And there should be a big benefit for reviewers and readers (i.e., internally motivated fact-checkers!). And by forcing authors to do this, you will probably promote more accurate citing, because authors will realize their references can be easily checked. So it seems to me you should focus on #1.

    However, for #1 it’s more difficult to run a pilot program (i.e., you must actually change the journal’s standards) and perhaps more difficult to evaluate its success.

  • Lee_Kirkpatrick

    I like the idea of this pilot program, if only to get a sense of how big the problem might or might not be. If inaccurate citations are sufficiently rare, it probably isn’t worth the costs to reduce them to zero.

    Apart from that, I’m inclined to think that citation accuracy is a considerably less important issue in our field than in, say, law. Sure, it’s important as a matter of scholarship to give credit where credit is due, but I see a fundamental difference. The validity of a legal argument depends crucially on the accuracy of citations of precedents: if one of those precedents doesn’t in fact exist, the argument crumbles and is consequently not worthy of publication. By analogy, I can imagine mathematics journals hiring legions of graduate students to pore over proofs offered in manuscripts, looking for errors, because an error in the proof potentially invalidates the conclusion of the paper. In both cases, the publishability of a manuscript hinges crucially on the question of whether the conclusion follows logically from the premises, so it is

  • Pingback: Citation Update | Evolutionary Psychology
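Brent D.'s spot-check suggestion in the thread above invites a quick calculation of how likely a partial audit is to catch a problem. A minimal hypergeometric sketch (the function name and the particular numbers are illustrative assumptions, not figures proposed in the thread):

```python
from math import comb

def detection_prob(n_citations, n_checked, n_bad):
    """Chance that a random spot-check of n_checked citations (out of
    n_citations total) catches at least one of n_bad incorrect ones."""
    if n_bad == 0:
        return 0.0
    # Probability the audit misses every bad citation (hypergeometric).
    missed = comb(n_citations - n_bad, n_checked) / comb(n_citations, n_checked)
    return 1 - missed

# A paper with 50 citations, 25% checked (13), two of them incorrect:
print(round(detection_prob(50, 13, 2), 2))  # 0.46
```

Under these assumptions a 25% audit misses a paper with two bad citations more than half the time, which suggests the "optimal number" to check depends heavily on how rare errors turn out to be.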

Copyright 2013 Robert Kurzban, all rights reserved.

Opinions expressed in this blog do not reflect the opinions of the editorial staff of the journal.