In avoiding the question, Sir Crispin is really saying that the relevant metric here is not cost but cost per download, and that given enough downloads, any subscription price (even Brain Research's) could be justified.
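The arithmetic behind that argument can be sketched with hypothetical figures (the price and download counts below are illustrative, not real data for any journal): the same subscription fee yields an arbitrarily low unit cost as the raw download count grows.

```python
# Cost per download falls as raw download counts rise,
# regardless of whether anyone actually reads the articles.
subscription_price = 20_000  # hypothetical annual price, in dollars

for downloads in (100, 1_000, 10_000, 100_000):
    cost_per_download = subscription_price / downloads
    print(f"{downloads:>7} downloads -> ${cost_per_download:,.2f} per download")
```

At 100 downloads the journal looks wildly overpriced; at 100,000 it looks like a bargain, even though nothing about the journal itself has changed.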
On the surface, a normative price model makes sense. We purchase cheese by the pound (or kilogram), apples by the peck, wine by the bottle, and gold by the ounce. But information functions very differently from other goods. Producing journal articles incurs high fixed costs and very low marginal costs. Sending another PDF over the Internet costs the publisher virtually nothing, which is why Elsevier wants to send you more. Lots more.
Elsevier has recently unveiled a new feature on its Science Direct platform, the Document Download Manager, which allows a reader to download multiple articles simultaneously. If you don't pre-select any articles, they'll just send you the first 20.
But why stop at 20? As computer storage gets bigger and cheaper and bandwidth grows, there is no reason why they couldn't send you entire collections. I can just imagine the press release: "Don't waste a second of your precious time waiting for a download! We've just dumped our entire journal contents on your machine!" One could then sign up for an RSS-like feed that would automatically update one's computer with new issues as they become available.
Measuring and comparing the unit cost of a download becomes meaningless in an environment where bulk downloading is not only facilitated but actively encouraged. In addition, a publisher's interface can produce different usage patterns, making comparison of journal usage across publishers — the explicit goal of Project COUNTER — very difficult.
While there are many reasons to consider usage-based metrics, the development of a Usage Factor — a project undertaken by the United Kingdom Serials Group (UKSG) — has unintended consequences. By focusing on usage metrics, it reifies the article download. Article downloads cease to be a measure of readership and become a goal in and of themselves, as publishers become transfixed on maximizing the number of documents they send out into the ether. Not only does this create a new type of spam (a Tragedy of the Commons on the Internet), it obfuscates any meaning one derives from usage reports, making it impossible to distinguish the intention of a single human click from a bulk download.
One could argue that what the UKSG is doing is no different than what the Institute for Scientific Information (ISI) did to the citation in creating the Impact Factor. But there is a difference. Citations are public, transparent, and can be validated. If I suspect that a journal is artificially inflating its numbers, I can go back to the articles and start counting myself.
The UKSG is relying on the honesty of the publisher to send usage reports that reflect true download counts. While I don't question the honesty of most publishers, I do question some, and there is no way for a skeptic to validate the numbers. Even if a publisher were willing to send raw transaction logs upon request, few have the resources or ability to digest the data. It is a system built on blind faith.
When rewards are high and risk is low, any opaque system is open to gaming and abuse. Usage Factor will be no different.