## Moderating principles

### July 25th, 2022

Sometime around April 1994, I founded the Computation and Language E-Print Archive, the first preprint repository for a subfield of computer science. It was hosted on Paul Ginsparg’s arXiv platform, which at the time hosted only physics papers, having been built out from the original arXiv repository for high-energy physics theory, hep-th. The repository, cmp-lg (as it was then called), was superseded in 1999 by an open-access preprint repository for all of computer science, the Computing Research Repository (CoRR), which covered a broad range of subject areas, including computation and language. The CoRR organizing committee also decided to host CoRR on arXiv. I switched over from moderating cmp-lg to moderating for the CoRR repository, and have continued to do so for the last – oh my god – 22 years.[1]

Articles in the arXiv are classified with a single primary subject class, and may have other subject classes as secondary. The switchover folded cmp-lg into the arXiv as articles tagged with the cs.CL (computation and language) subject class. I thus became the moderator for cs.CL.

A preprint repository like the arXiv is not a journal. There is no peer review applied to articles. There is essentially no quality control. That is not the role of a preprint repository. The role of a preprint repository is open distribution, not vetting. Nonetheless, some kind of control is needed to make sure that, at the very least, the documents being submitted are in fact scholarly articles and are appropriately tagged as to subfield, and that need has expanded with the dramatic increase in submissions to CoRR over the years. The primary duty of a moderator is to perform this vetting and triage: verifying that a submission meets the minimum standards for being characterized as a scholarly article, and that it falls within the purview of, say, cs.CL, as a primary or secondary subject class.

I am (along with the other arXiv moderators) thus regularly in the position of having to make decisions as to whether a document is a scholarly article or not. To a large extent, Justice Potter Stewart’s approach works reasonably well for scholarly articles: you know them when you see them. But over time, as more marginal cases come up, I’ve felt that tracking my thinking on the matter would be useful for maintaining consistency in my own practice. And now that I’ve done that for a while, I thought it might be useful to share my approach more broadly. That is the goal of this post.

The following thus constitutes (some of) the de facto policies that I use in making decisions as the moderator for the cs.CL collection in the CoRR part of the arXiv repository. I emphasize that these are my policies, not those of CoRR or the moderators of other CoRR subjects. (The arXiv folks themselves provide a more general guide for arXiv moderators.)

## Upcoming in Tromsø

### November 11th, 2015

*Northern lights over Tromsø*

I’ll be visiting Tromsø, Norway to attend the Tenth Annual Munin Conference on Scholarly Publishing, which is being held November 30 to December 1. I’m looking forward to the talks, including keynotes from Randy Schekman and Sabine Hossenfelder and an interview by Caroline Sutton of my colleague Peter Suber, director of Harvard’s Office for Scholarly Communication. My own keynote will be on “The role of higher education institutions in scholarly publishing and communication”. Here’s the abstract:

Institutions of higher education are in a double bind with respect to scholarly communication: On the one hand, they need to support the research needs of their students and researchers by providing access to the journals that comprise the archival record of scholarship. Doing so requires payment of substantial subscription fees. On the other hand, they need to provide the widest possible dissemination of works by those same researchers — the fruits of that very research — which itself incurs costs. I address how these two goals, each of which demands outlays of substantial funds, can best be honored. In the course of the discussion, I provide a first look at some new results on predicting journal usage, which allows for optimizing subscriptions.

Update: The video of my talk at the Munin conference is now available.

## In support of behavioral tests of intelligence

### May 7th, 2015

*“Blockhead by Paul McCarthy @ Tate Modern” image from Flickr user Matt Hobbs. Used by permission.*

Alan Turing proposed the best-known criterion for attributing intelligence, the capacity for thinking, to a computer. We call it the Turing Test, and it involves comparing the computer’s verbal behavior to that of people. If the two are indistinguishable, the computer passes the test. This might be cause for attributing intelligence to the computer.

Or not. The best argument against a behavioral test of intelligence (like the Turing Test) is that maybe the exhibited behaviors were just memorized. This is Ned Block’s “blockhead” argument in a nutshell. If the computer just had all its answers literally encoded in memory, then parroting those memorized answers is no sign of intelligence. And how are we to know from a behavioral test like the Turing Test that the computer isn’t just such a “memorizing machine”?

In my new(ish) paper, “There can be no Turing-Test–passing memorizing machines”, I address this argument directly. My conclusion can be found in the title of the article. By careful calculation of the information and communication capacity of space-time, I show that any memorizing machine could pass only a Turing Test lasting no more than a few seconds, which is no Turing Test at all. Crucially, I make no assumptions beyond the brute laws of physics. (One distinction of the article is that it is one of the few philosophy articles in which a derivative is taken.)
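The flavor of the argument can be conveyed with a crude back-of-envelope sketch. This is my illustration, not the paper’s actual derivation: the alphabet size, typing rate, and storage bound below are all assumed round numbers. The idea is that a memorizing machine must store a canned response for every possible interrogator input, and physics bounds how much it can store.

```python
import math

# Illustrative sketch only, NOT the paper's derivation. All constants
# are assumptions chosen for order-of-magnitude plausibility.
ALPHABET = 27           # assumed: letters plus space
CHARS_PER_SECOND = 5    # assumed: interrogator typing rate
CAPACITY_BITS = 1e120   # assumed: rough bound on bits storable
                        # in the observable universe

def max_test_seconds(alphabet=ALPHABET,
                     rate=CHARS_PER_SECOND,
                     capacity=CAPACITY_BITS):
    """Longest test duration t for which every one of the
    alphabet**(rate*t) possible interrogator input histories can get
    at least one stored bit of response:
    alphabet**(rate*t) <= capacity
    =>  t <= log2(capacity) / (rate * log2(alphabet))
    """
    return math.log2(capacity) / (rate * math.log2(alphabet))

print(f"{max_test_seconds():.0f} seconds")  # prints "17 seconds"
```

Even granting the machine every bit the universe could hold, exhaustive memorization runs out after seconds, not the minutes or hours a meaningful Turing Test requires; the paper makes this precise from physical law rather than from these toy numbers.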

The article is published in the open access journal Philosophers’ Imprint, and is available here along with code to computer-verify the calculations.

## Inaccessible writing, in both senses of the term

### September 29th, 2014

My colleague Steven Pinker has a nice piece up at the Chronicle of Higher Education on “Why Academics Stink at Writing”, accompanying the recent release of his new book The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century, whose pre-ordered copy I’m awaiting. The last sentence of the Chronicle piece summarizes it well:

In writing badly, we are wasting each other’s time, sowing confusion and error, and turning our profession into a laughingstock.

The essay provides a diagnosis of many of the common symptoms of fetid academic writing: Pinker lists metadiscourse, professional narcissism, apologizing, shudder quotes, hedging, metaconcepts, and nominalizations. It’s not breaking new ground, but these problems are well worth reviewing.

I fall afoul of these myself, of course. (Nasty truth: I’ve used “inter alia” all too often, inter alia.) But one issue on which I disagree with Pinker is his condemnation of the particular style of metadiscourse that provides a roadmap of a paper. Here’s an example from a recent paper of mine.

After some preliminaries (Section 2), we present a set of known results relating context-free languages, tree homomorphisms, tree automata, and tree transducers to extend them for the tree-adjoining languages (Section 3), presenting these in terms of restricted kinds of functional programs over trees, using a simple grammatical notation for describing the programs. We review the definition of tree-substitution and tree-adjoining grammars (Section 4) and synchronous versions thereof (Section 5). We prove the equivalence between STSG and a variety of bimorphism (Section 6).

This certainly smacks of the first metadiscourse example Pinker provides:

“The preceding discussion introduced the problem of academese, summarized the principal theories, and suggested a new analysis based on a theory of Turner and Thomas. The rest of this article is organized as follows. The first section consists of a review of the major shortcomings of academic prose. …”

Who needs that sort of signposting in a 6,000-word essay? But in the context of a 50-page article, giving a kind of table of contents such as this doesn’t seem out of line. Much of the metadiscourse that Pinker excoriates is unneeded, but appropriate advance signposting can ease the job of the reader considerably. Sometimes, as in the other examples Pinker gives, “metadiscourse is there to help the writer, not the reader, since she has to put more work into understanding the signposts than she saves in seeing what they point to.” But anything that helps the reader to understand the high-level structure of an object as complex as a long article seems like a good thing to me.

The penultimate sentence of Pinker’s piece places poor academic writing in context:

Our indifference to how we share the fruits of our intellectual labors is a betrayal of our calling to enhance the spread of knowledge.

That sentiment applies equally well – arguably more so – to the venues where we publish. By placing our articles in journals that lock up access tightly we are also betraying our calling. And it doesn’t matter how good the writing is if it can’t be read in the first place.

## How universities can support open-access journal publishing

### To university administrators and librarians:

*“Shelf of journals” image from Flickr user University of Illinois Library. Used by permission.*

As a university administrator or librarian, you may see the future in open-access journal publishing and may be motivated to help bring that future about.[1] I would urge you to establish or maintain an open-access fund to underwrite publication fees for open-access journals, but to do so in a way that follows the principles that underlie the Compact for Open-Access Publishing Equity (COPE). Those principles are two:

Principle 1: Our goal should be to establish an environment in which publishers are enabled[2] to change their business model from the unsustainable closed access model based on reader-side fees to a sustainable open access model based on author-side fees.

If publishers could and did switch to the open-access business model, in the long term the moneys saved in reader-side fees would more than cover the author-side fees, with open access added to boot.

But until a large proportion of the funded research comes with appropriately structured funds usable to pay author-side fees, publishers will find themselves in an environment that disincentivizes the move to the preferred business model. Only when the bulk of research comes with funds to pay author-side fees underwriting dissemination will publishers feel comfortable moving to that model. Principle 1 argues for a system where author-side fees for open-access journals should be largely underwritten on behalf of authors, just as the research libraries of the world currently underwrite reader-side fees on behalf of readers.[3] But who should be on the hook to pay the author-side fees on behalf of the authors? That brings us to Principle 2.

Principle 2: Dissemination is an intrinsic part of the research process. Those that fund the research should be responsible for funding its dissemination.

Research funding agencies, not universities, should be funding author-side fees for research funded by their grants. There’s no reason for universities to take on that burden on their behalf.[4] But universities should fund open-access publication fees for research that they fund themselves.

We don’t usually think of universities as research funders, but they are. They hire faculty to engage in certain core activities – teaching, service, and research – and faculty job performance and career advancement typically depend on all three. Sometimes researchers obtain outside funding for the research aspect of their professional lives, but where research is not funded from outside, it remains a central part of faculty members’ responsibilities, and it is therefore being implicitly funded by the university itself. In some fields, the sciences in particular, outside funding is the norm; in others, the humanities and most social sciences, it is the exception. Regardless of the field, faculty research that is not funded from outside is university-funded research, and the university ought to be responsible for funding its dissemination as well.

The university can and should place conditions on funding that dissemination. In particular, it ought to require that if it is funding the dissemination, then that dissemination be open – free for others to read and build on – and that it be published in a venue that provides openness sustainably – a fully open-access journal rather than a hybrid subscription journal.

Organizing a university open-access fund consistent with these principles means that the university will, at present, fund few articles, for reasons detailed elsewhere. Don’t confuse slow uptake with low impact. The import of the fund is not to be measured by how many articles it makes open, but by how it contributes to the establishment of the enabling environment for the open-access business model. The enabling environment will have to grow substantially before enablement becomes transformation. It is no less important in the interim.

What about the opportunity cost of open-access funds? Couldn’t those funds be better used in our efforts to move to a more open scholarly communication system? Alternative uses of the funds are sometimes proposed, such as university libraries establishing and operating new open-access journals or paying membership fees to open-access publishers to reduce the author-side fees for their journals. But establishing new journals does nothing to reduce the need to subscribe to the old journals. It adds costs with no anticipation, even in the long term, of corresponding savings elsewhere. And paying membership fees to certain open-access publishers puts a finger on the scale so as to preemptively favor certain such publishers over others and to let funding agencies off the hook for their funding responsibilities. Such efforts should at best be funded after open-access funds are established to make good on universities’ responsibility to underwrite the dissemination of the research they’ve funded.

1. It should go without saying that efforts to foster open-access journal publishing are completely consistent with, in fact aided by, fostering open access through self-deposit in open repositories (so-called “green open access”). I am a long and ardent supporter of such efforts myself, and urge you as university administrators and librarians to promote green open access as well. [Since it should go without saying, comments recapitulating that point will be deemed tangential and attended to accordingly.]
2. I am indebted to Bernard Schutz of Max Planck Gesellschaft for his elegant phrasing of the issue in terms of the “enabling environment”.
3. Furthermore, as I’ve argued elsewhere, disenfranchising readers through subscription fees is a more fundamental problem than disenfranchising authors through publication fees.
4. In fact, by being willing to fund author-side fees for grant-funded articles, universities merely delay the day that funding agencies do their part by reducing the pressure from their fundees.

## Public underwriting of research and open access

### April 4th, 2014

*Title page of the first octavo edition of Rousseau’s Social Contract*

[This post is based loosely on my comments on a panel on 2 April 2014 for Terry Fisher’s CopyrightX course. Thanks to Terry for inviting me to participate and provoking this piece, and to my Berkman colleagues for their wonderful contributions to the panel session.]

Copyright is part of a social contract: You the author get a monopoly to exploit rights for a while in return for us the public gaining “the progress of Science and the Useful Arts”. The idea is that the direct financial benefit of exploiting those rights provides incentive for the author to create.

But this foundation for copyright ignores the fact that there are certain areas of creative expression in which direct financial benefit is not an incentive to create: in particular, academia. It’s not that academics who create and publish their research don’t need incentives, even financial incentives, to do so. Rather, the financial incentives are indirect. They receive no direct payment for the articles that they publish describing their research. They benefit instead from the personal uplift of contributing to human knowledge and seeing that knowledge advance science and the useful arts. Plus, their careers depend on the impact of their research, which is a result of its being widely read; it’s not all altruism.

In such cases, a different social contract can be in force without reducing creative expression. When the public underwrites the research that academics do – through direct research grants for instance – they can require in return that the research results must be made available to the public, without allowing for the limited period of exclusive exploitation. This is one of the arguments for the idea of open access to the scholarly literature. You see it in the Alliance for Taxpayer Access slogan “barrier-free access to taxpayer-funded research” and the White House statement that “The Obama Administration agrees that citizens deserve easy access to the results of research their tax dollars have paid for.” It is implemented in the NIH public access policy, requiring all articles funded by NIH grants to be made openly available through the PubMed Central website, where millions of visitors access millions of articles each week.

But here’s my point, one that is underappreciated even among open access supporters. The penetration of the notion of “taxpayer-funded research”, of “research their tax dollars have paid for”, is far greater than you might think. Yes, it includes research paid for by the $30 billion invested by the NIH each year, the $7 billion in research funded by the NSF, and the $150 million funded by the NEH. But all university research benefits from the social contract with taxpayers that makes universities tax-exempt.[1] The Association of American Universities makes this social contract clear:

The educational purposes of universities and colleges – teaching, research, and public service – have been recognized in federal law as critical to the well-being of our democratic society. Higher education institutions are in turn exempted from income tax so they can make the most of their revenues…. Because of their tax exemption, universities and colleges are able to use more resources than would otherwise be available to fund: academic programs, student financial aid, research, public extension activities, and their overall operations.

It’s difficult to estimate the size of this form of support to universities. The best estimate I’ve seen puts it at something like $50 billion per year for the income tax exemption. That’s more than the NIH, NSF, and (hardly worth mentioning) the NEH put together. It’s on par with the total non-defense federal R&D funding.
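As a sanity check, the comparison can be spelled out with the round figures quoted above (all rough annual estimates from this post, not authoritative budget data):

```python
# Annual US figures quoted in the post, in billions of dollars
# (rough estimates, not authoritative budget data)
nih = 30.0
nsf = 7.0
neh = 0.15
income_tax_exemption = 50.0  # best available estimate cited above

agencies_total = nih + nsf + neh
print(agencies_total)                         # 37.15
print(income_tax_exemption > agencies_total)  # True
```

The estimated value of the income-tax exemption alone exceeds the three agencies’ research budgets combined by roughly a third.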

And it’s not just exemption from income tax that universities benefit from. They are also exempt from property taxes for their campuses. Their contributors are exempt from tax for their charitable contributions to the university, which results ceteris paribus in larger donations. Their students are exempt from taxes on educational expenses. They receive government funding for scholarships, freeing up funds for research. Constructing an estimate of the total benefit to universities from all these sources is daunting. One study places the total value of all direct tax exemptions, federal, state, and local, for a single university, Northeastern University, at $97 million, accounting for well over half of all government support to the university. (Even this doesn’t count several of the items noted above.)

All university research, not just the grant-funded research, benefits from the taxpayer underwriting implicit in the tax-exemption social contract. It would make sense, then, for taxpayers to require open access to all university research in return for continued tax-exempt status. Copyright is the citizenry paying authors with a monopoly in return for social benefit. But where the citizenry pays authors through some other mechanism, like $50 billion worth of tax exemption, it’s not a foregone conclusion that we should pay with the monopoly too.

Some people point out that just because the government funds something doesn’t mean that the public gets a free right of access. Indeed, the government funds various things that the public doesn’t get access to, or at least, not free access. The American Publishers Association points out, for instance, that although taxpayers pay for the national park system “they still have to pay a fee if they want to go in, and certainly if they want to camp.” On the other hand, you don’t pay when the fire department puts out a fire in your house, or to access the National Weather Service forecasts. It seems that the social contract is up for negotiation.

And that’s just the point. The social contract needs to be designed, and designed keeping in mind the properties of the goods being provided and the sustainability of the arrangement. In particular, funding of the contract can come from taxpayers or users or a combination of both. In the case of national parks, access to real estate is an inherently limited resource, and the benefit of access redounds primarily to the user (the visitor), so getting some of the income from visitors puts in place a reasonable market-based constraint.

Information goods are different. First, the benefits of access to information redound widely. Information begets information: researchers build on it, journalists report on it, products are based on it. The openness of NWS data means that farms can generate greater yields to benefit everyone (one part of the fourth of six goals in the NWS Strategic Plan). The openness of MBTA transit data means that a company can provide me with an iPhone app to tell me when my bus will arrive at my stop. Second, access to information is not an inherently limited resource. As Jefferson said, “He who receives an idea from me, receives instruction himself without lessening mine.” If access is to be restricted, it must be done artificially, through legal strictures or technological measures. The marginal cost of providing access to an academic article is, for all intents and purposes, zero. Thus, it makes more sense for the social contract around distributing research results to be funded exclusively from the taxpayer side rather than the user side, that is, funding agencies requiring completely free and open access for the articles they fund, and paying to underwrite the manifest costs of that access. (I’ve written in the past about the best way for funding agencies to organize that payment.)

It turns out that we, the public, are underwriting directly and indirectly every research article that our universities generate. Let’s think about what the social contract should provide us in return. Blind application of the copyright social contract would not be the likely outcome.

1. Underappreciated by many, but as usual, not by Peter Suber, who anticipated this argument, for instance, in his seminal book Open Access:

All scholarly journals (toll access and OA) benefit from public subsidies. Most scientific research is funded by public agencies using public money, conducted and written up by researchers working at public institutions and paid with public money, and then peer-reviewed by faculty at public institutions and paid with public money. Even when researchers and peer reviewers work at private universities, their institutions are subsidized by publicly funded tax exemptions and tax-deductible donations. Most toll-access journal subscriptions are purchased by public institutions and paid with taxpayer money. [Emphasis added.]

## A true transitional open-access business model

### March 28th, 2014

*“The Temple of Transition, Burning Man 2011” photo by Flickr user Michael Holden, used by permission*

David Willetts, the UK Minister for Universities and Research, has written a letter to Janet Finch responding to her committee’s “A Review of Progress in Implementing the Recommendations of the Finch Report”. Notable in Minister Willetts’s response is this excerpt:

Government wants [higher education institutions] to fully participate in the take up of Gold OA and create a better functioning market. Hence, Government looks to the publishing industry to develop innovative and sustainable solutions to address the ‘double-dipping’ issue perceived by institutions. Publishers have an opportunity to incentivise early adoption of Gold OA by moderating the total cost of publication for individual institutions. This would remove the final obstacle to greater take up of Gold OA, enabling universal acceptance of ‘hybrid’ journals.

It is important for two reasons: its recognition, first, that the hybrid journal model as currently implemented has inherent obstacles (consistent with a previous post of mine), and second, that the solution is to ensure that individual institutions (as emphasized in the original) are properly incentivized to underwrite hybrid fees.

This development led me to dust off a pending post that has been sitting in my virtual filing cabinet for several years now, being updated every once in a while as developments motivated. It addresses exactly this issue in some detail.

## A model OA journal publication agreement

### February 19th, 2014


In a previous post, I proposed that open-access journals use the CC-BY license for their scholar-contributed articles.

Recently, a journal asked me how to go about doing just that. What should their publication agreement look like? It was a fair question, and one I didn’t have a ready answer for. The “Online Guide to Open Access Journals Publishing” provides a template agreement that is refreshingly minimalist, but by my lights misses some important aspects. I looked around at various journals to see what they did, but didn’t find any agreements that seemed ideal either. So I decided to write my own. Herewith is my proposal for a model OA publication agreement.

## Thoughts on founding open-access journals

### November 21st, 2013

*“reference” image by Flickr user Sara S. Used by permission.*

Precipitated by a recent request to review some proposals for new open-access journals, I spent some time gathering my own admittedly idiosyncratic thoughts on some of the issues that should be considered when founding new open-access journals. I make them available here. Good sources for more comprehensive information on launching and operating open-access journals are SPARC’s open-access journal publishing resource index and the Open Access Directory’s guides for OA journal publishers.

Unlike most of my posts, I may augment this post over time, and will do so without explicit marking of the changes. Your thoughts on additions to the topics below—via comments or email—are appreciated. A version number (currently version 1.0) will track the changes for reference.

### It is better to flip a journal than to found one

The world has enough journals. Adding new open-access journals as alternatives to existing ones may be useful if there are significant numbers of high quality articles being generated in a field for which there is no reasonable open-access venue for publication. Such cases are quite rare, especially given the rise of open-access “megajournals” covering the sciences (PLoS ONE, Scientific Reports, AIP Advances, SpringerPlus, etc.), and the social sciences and humanities (SAGE Open). Where there are already adequate open-access venues (even if no one journal is “perfect” for the field), scarce resources are probably better spent elsewhere, especially on flipping journals from closed to open access.

Admittedly, the world does not have enough open-access journals (at least high-quality ones). So if it is not possible to flip a journal, founding a new one may be a reasonable fallback position, but it is definitely the inferior alternative.

### It’s all about the editorial board

The main product that a journal is selling is its reputation. A new journal with no track record needs high-quality submissions to bootstrap that reputation, and at the start, nothing is more effective in convincing authors to submit high-quality work to the journal than its editorial board. Getting high-profile names somewhere on the masthead at the time of the official launch is the most important thing for the journal to do. (“We can add more people later” is a risky proposition. You may not get a second chance to make a first impression.)

Getting high-profile names on your board may occur naturally if you use the expedient of flipping an existing closed-access journal, thereby stealing the board, which also has the benefit of acquiring the journal’s previous reputation and eliminating one more subscription journal.

Another good idea for jumpstarting a journal’s reputation is to prime the article pipeline by inviting leaders in the field to submit their best articles to the journal before its official launch, so that the journal announcement can provide information on forthcoming articles by luminaries.

Adherence to the codes of conduct of the Open Access Scholarly Publishers Association (OASPA) and the Committee on Publication Ethics (COPE) should be fundamental. Membership in the organizations is recommended; the fees are extremely reasonable.

### You can outsource the process

Many institutions are interested in founding new open-access journals despite having no particular expertise in operating journals. A good solution is to outsource the operation of the journal to an organization that does have that expertise, namely, a journal publisher. There are several such publishers who have experience running open-access journals effectively and efficiently. Some are exclusively open-access publishers, for example, Co-Action Publishing, Hindawi Publishing, Ubiquity Press. Others handle both open- and closed-access journals: HighWire Press, Oxford University Press, ScholasticaHQ, Springer/BioMed Central, Wiley. This is not intended as a complete listing (the Open Access Directory has a complementary offering), nor in any sense an endorsement of any of these organizations, just a comment that shopping the journal around to a publishing partner may be a good idea. Especially given the economies of scale that exist in journal publishing, an open-access publishing partner may allow the journal to operate much more economically than establishing a whole organization in-house.

### Certain functionality should be considered a baseline

Geoffrey Pullum, in his immensely satisfying essays “Stalking the Perfect Journal” and “Seven Deadly Sins in Journal Publishing”, lists his personal criteria in journal design. They are a good starting point, but need updating for the era of online distribution. (There is altogether too much concern with the contents of the journal’s spine text for instance.)

• Reviewing should be anonymous (with regard to the reviewers) and blind (with regard to the authors), except where a commanding argument can be given for experimenting with alternatives.
• Every article should be preserved in one (or better, more than one) preservation system. CLOCKSS, Portico[1], and a university or institutional archival digital repository are good options.
• Every article should have complete bibliographic metadata on the first page, including license information (a simple reference to CC-BY; see above), and (as per Pullum) first and last page numbers.
• The journal should provide DOIs for its articles. CrossRef membership is an inexpensive way to acquire the ability to assign DOIs. An article’s DOI should be included in the bibliographic metadata on the first page.

There’s additional functionality beyond this baseline that would be ideal, though the tradeoff against the additional effort required would have to be evaluated.

• Provide access to the articles in multiple formats in addition to PDF: HTML, XML with the NLM DTD.
• Encourage authors to provide the underlying data to be distributed openly as well, and provide the infrastructure for them to do so.

### Take advantage of the networked digital era

Many journal publishing conventions of long standing are no longer well motivated in the modern era. Here are a few examples. They are not meant to be exhaustive. You can probably think of others. The point is that certain standard ideas can and should be rethought.

• There is no longer any need for “issues” of journals. Each article should be published as soon as it is finished, no later and no sooner. If you’d like, an “issue” number can be assigned that is incremented for each article. (Volumes, incremented annually, are still necessary because many aspects of the scholarly publishing and library infrastructure make use of them. They are also useful for the purpose of characterizing a bolus of content for storage and preservation purposes.)
• Endnotes, a relic of the day when typesetting was a complicated and fraught process that was eased by a human being not having to determine how much space to leave at the bottom of a page for footnotes, should be permanently retired. Footnotes are far easier for readers (which is the whole point really), and computers do the drudgery of calculating the space for them.
• Page limits are silly. In the old physical journal days, page limits had two purposes. They were necessary because journal issues came in quanta of page signatures, and therefore had fundamental physical limits to the number of pages that could be included. A network-distributed journal no longer has this problem. Page limits also serve the purpose of constraining the author to write relatively succinctly, easing the burden on reviewer and (eventually) reader. But for this purpose the page is not a robust unit of measurement of the constrained resource, the reviewers’ and the readers’ attention. One page can hold anything from a few hundred to a thousand or more words. If limits are to be required, they should be stated in appropriate units such as the number of words. The word count should not include figures, tables, or bibliography, as they impinge on readers’ attention in a qualitatively different way.
• Author-date citation is far superior to numeric citation in every way except for the amount of space and ink required. Now that digital documents use no physical space or ink, there is no longer an excuse for numeric citations. Similarly, ibid. and op. cit. should be permanently retired. I appreciate that different fields have different conventions on these matters. That doesn’t change the fact that those fields that have settled on numeric citations or ibidded footnotes are on the wrong side of technological history.
• Extensive worry about and investment in fancy navigation within and among the journal’s articles is likely to be a waste of time, effort, and resources. To a first approximation, all accesses to articles in the journal will come from sites higher up in the web food chain—the Googles and Bings, the BASEs and OAIsters of the world. Functionality that simplifies navigation among articles across the whole scholarly literature (cross-linked DOIs in bibliographies, for instance, or linked open metadata of various sorts) is a different matter.

### Think twice

In the end, think long and hard about whether founding a new open-access journal is the best use of your time and your institution’s resources in furthering the goals of open scholarly communication. Operating a journal is not free, in cash and in time. Perhaps a better use of resources is making sure that the academic institutions and funders are set up to underwrite the existing open-access journals in the optimal way. But if it’s the right thing to do, do it right.

1. A caveat on Portico’s journal preservation service: The service is designed to release its stored articles when a “trigger event” occurs, for instance, if the publisher ceases operations. Unfortunately, Portico doesn’t release the journal contents openly, but only to its library participants, even for OA journals. However, if the articles were licensed under CC-BY, any participating library could presumably reissue them openly.

## Lessons from the faux journal investigation

### October 15th, 2013

“Scams upon scammers” image by flickr user Daniel Mogford, used by permission.

Investigative science journalist John Bohannon[1] published a news piece in Science earlier this month about the scourge of faux open-access journals. I call them faux journals (rather than predatory journals), since they are not real journals at all. They display the trappings of a journal, promising peer review and other services, but do not deliver: they perform no peer review and provide no services beyond posting papers and cashing checks for the publication fees. They are to scholarly journal publishing what 419 scams are to banking.

We’ve known about this practice for a long time, and Jeffrey Beall has done yeoman’s work codifying it informally. He has noted a recent dramatic increase in the number of publishers that appear to be engaged in the practice, growing by an order of magnitude in 2012 alone.

In the past, I’ve argued that the faux journal problem, while unfortunate, is oversold. My argument was that the existence of these faux journals costs clued-in researchers, research institutions, and the general public nothing. The journals don’t charge subscription fees, and we don’t submit articles to them, so we don’t pay their publication fees. Caveat emptor ought to handle the problem, I would have thought.

But I’ve come to understand over the past few years that the faux journal problem is important to address. The number of faux journals has exploded, and despite the fact that the faux journals tend to publish few articles, their existence crowds out attention to the many high-quality open-access journals. Their proliferation provides a convenient excuse to dismiss open-access journals as a viable model for scholarly publishing. It is therefore important to get a deeper and more articulated view of the problem.

My views on Bohannon’s piece, which has attracted a lot of interest, may therefore be a bit contrarian among OA aficionados, who have been quick to dismiss the effort as a stunt or to attribute hidden agendas to it. Despite some flaws (which have been widely noted and are discussed in part below), the study characterizes the faux OA journal problem well, providing far more texture to our understanding than the anecdotal and unsystematic approaches taken in the past.

The study shows that even in these early days of open-access publishing, many OA journals are doing an at least plausible job of peer review. In total, 98 of the 255 journals that came to a decision on the bogus paper (about 38%) rejected it. The study also makes clear that identifying faux journals may not be as simple as looking at superficial properties of journal web sites: about 18% of the journals from Beall’s list of predatory publishers performed sufficient peer review to reject the bogus article outright.
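These headline percentages follow directly from the counts reported in the Science article; a trivial sketch of the arithmetic (the 255 and 98 figures are Bohannon's):

```python
# Counts reported in Bohannon's Science article: journals that came
# to an accept/reject decision on the bogus paper.
decided = 255   # journals reaching a decision
rejected = 98   # journals that rejected the paper

reject_rate = rejected / decided
accept_rate = 1 - reject_rate

print(f"reject rate: {reject_rate:.0%}")  # ~38%
print(f"accept rate: {accept_rate:.0%}")  # ~62%
```

The 62% complement is the publisher-level acceptance figure discussed further below.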

Just as clearly, the large and growing problem of faux journals — so easy to set up and so inexpensive to run — requires all scholars to pay careful attention to the services that journals provide. This holds especially for open-access journals, which are generally newer, with shorter track records, and among which the faux journal fraud has proliferated much faster than appropriate countermeasures can be deployed. The experiment provides copious data on where the faux journals tend to operate from, where they bank, and where their editors are.

Bohannon should also be commended for providing open access to his underlying data, which will allow others to do even more detailed analysis.

As with all studies, there are some aspects that require careful interpretation.

First, the experiment did not test subscription journals. All experimenters, Bohannon included, must decide how to deploy scarce resources; his concentrating on OA journals, where the faux journal problem is well known to be severe, is reasonable for certain purposes. However, as many commentators have noted, it does prevent drawing specific conclusions comparing OA with subscription journals. Common sense might indicate that OA journals, whose revenues rely more directly on the number of articles published, have more incentive to fraudulently accept articles without review, but the study unfortunately can’t directly corroborate this, and as in so many areas, common sense may be wrong. We know, for instance, that many OA journals operate without the rapacity to accept every article that comes over the transom, and that there are countervailing economic incentives for OA journals to maintain high quality. Journals from 98 publishers — including the “big three” OA publishers Public Library of Science, Hindawi, and BioMed Central — rejected the bogus paper, and more importantly, a slew of high-quality journals in many fields of scholarship conduct exemplary peer review on every paper they receive. (Good examples are the several OA journals in my own research area of artificial intelligence — JMLR, JAIR, CL — which are all at the top of the prestige ladder in their fields.) Conversely, subscription publishers may also have perverse incentives to accept papers: management typically establishes goals for the number of articles to be published per year; article count statistics figure in marketing efforts; the regular founding of new journals engenders a need for a supply of articles to establish their contribution to the publisher’s stable of bundled journals; and many subscription journals, especially in the life sciences, charge author-side fees as well.

Nonetheless, it would be unsurprising if the acceptance rate for the bogus articles were lower for subscription journal publishers, given what we know about the state of faux journals. (Since there are many times more subscription journals than OA journals, it’s unclear how the problem would have compared in terms of absolute numbers of articles.) Hopefully, future work can clear up this question with proper controls.

Second, the experiment did not test journals that charge no author-side fees, which are currently the norm among OA journals. That eliminates about 70% of the OA journals, none of which have any incentive whatsoever to accept articles for acceptance’s sake. Ditto for journals that gain their revenue through submission fees instead of publication fees, a practice that I have long been fond of.

Third, the result holds only for journal publishing in the life sciences. (Some people in the life sciences need occasional reminding that science research is not coextensive with life sciences research, and that scholarly research is not coextensive with science research.) I suspect the faux journal problem is considerably less severe outside of the life sciences. It is really only in the life sciences that there is a long precedent for author-side charges, and deep pockets to pay those charges in much of the world, so that legitimate OA publishers can rely on being paid for their services. This characteristic of legitimate life sciences OA journals provides the cover for the faux journals to pretend to operate in the same way. In many other areas of scholarship, OA journals tend not to charge publication fees, as the researcher community does not have the same precedent.

Finally, and most importantly, since the study reports percentages by publisher, rather than by journal or by published article, the results may overrepresent the problem from the reader’s point of view. Just because 62% of the tested publishers[2] accepted the bogus paper doesn’t mean the problem covers that percentage of OA publishing, or even of life sciences APC-charging OA publishing. The faux publishers may publish a smaller percentage of the journals (though the faux publishers’ tactic of listing large numbers of unstaffed journals may lead to the opposite conclusion). More importantly, those publishers may cover a much smaller fraction of OA-journal-published papers. (Anyone who has spent any time surfing the web sites of faux journal publishers knows their tendency to list many journals with very few articles. Even fewer if you eliminate the plagiarized articles that faux publishers like to use to pad their journals.) So the vast majority of OA-published articles are likely to be from the 38% “good” journals. This should be determinable from Bohannon’s data — again thanks to his openness — and it would be useful to carry out the calculation, to show that the total number of articles published by the faux publishers accounts for a small fraction of the articles published in all of the OA journals of all of the publishers in the study. I expect that’s highly likely.[3]
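The gap between publisher-level and article-level rates is worth making concrete. A minimal sketch with invented illustrative numbers (Bohannon's released data would supply the real per-publisher figures); it shows how a majority of accepting publishers can still account for a small minority of published articles:

```python
# Hypothetical per-publisher records: (accepted_bogus_paper, article_count).
# The numbers are invented for illustration only.
publishers = [
    (True, 12),    # faux publishers: many journal titles, few articles
    (True, 8),
    (True, 5),
    (False, 400),  # legitimate publishers: many articles
    (False, 350),
]

# Fraction of publishers that accepted the bogus paper.
publisher_rate = sum(1 for accepted, _ in publishers if accepted) / len(publishers)

# Fraction of all published articles coming from accepting publishers.
articles_accepting = sum(n for accepted, n in publishers if accepted)
articles_total = sum(n for _, n in publishers)
article_rate = articles_accepting / articles_total

print(f"publishers accepting: {publisher_rate:.0%}")              # 60%
print(f"articles from accepting publishers: {article_rate:.0%}")  # ~3%
```

With these made-up counts, 60% of publishers accepted the paper, yet they account for only about 3% of articles: a reminder that per-publisher percentages say little about a reader's chance of encountering a faux-published article.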

Bohannon has provided a valuable service, and his article is an important reminder, like the previous case of the faux Australasian Journals, that journal publishers do not always operate under selfless motivations. It behooves authors to take this into account, and it behooves the larger scientific community to establish infrastructure for helping researchers by systematically and fairly tracking and publicizing information about journals that can help its members with their due diligence.

1. In the interest of full disclosure, I mention that I am John Bohannon’s sponsor in his role as an Associate (visiting researcher) of the School of Engineering and Applied Sciences at Harvard. He conceived, controlled, and carried out his study independently, and was in no sense under my direction. Though I did have discussions with him about his project, including on some of the topics discussed above, the study and its presentation were his alone.
2. It is also worth noting that by actively searching out lists of faux journals (Beall’s list) to add to a more comprehensive list (DOAJ), Bohannon may have introduced skew into the data collection. The attempt to be even more comprehensive than DOAJ is laudable, but the method chosen means that even more care must be taken in interpreting the results. If we look only at the DOAJ-listed journals that were tested, the acceptance rate drops from 62% to 45%. If we look only at OASPA members subject to the test, who commit to a code of conduct, then by my count the acceptance rate drops to 17%. That’s still too high, of course, but it does show that the choice of cohort matters; adding in Beall’s list but not OASPA membership (for instance) could have an effect.
3. In a videotaped live chat, Michael Eisen has claimed that this is exactly the case.