A little over 300 years ago, Antonie van Leeuwenhoek, a dry goods seller from Delft in Holland, learned to grind glass into lenses and fashion the best microscopes the world had ever seen. In those days, the idea of being a "scientist" as a profession was ludicrous. Natural philosophy was a pastime for nobility, or at least for those with considerable disposable income. Leeuwenhoek was a successful businessman, and in his spare time, he pointed his lenses at pond water (among other things). As Paul de Kruif recounted in his brilliant book Microbe Hunters:
[Leeuwenhoek] peeped into a fantastic sub-visible world of little things, creatures that had lived, had bred, had battled, had died, completely hidden from and unknown to men from the beginning of time. Beasts these were of a kind that ravaged and annihilated whole races of men ten million times larger than they were themselves. Beings these were, more terrible than fire-spitting dragons or hydra-headed monsters. They were silent assassins that murdered babies in warm cradles and kings in sheltered places. It was this invisible, insignificant, but implacable world that Leeuwenhoek had looked into for the first time of all men in all countries.
I don't think it's an exaggeration to say that the discovery of "little animals," the wee beasties from which this blog derives its name, has radically changed the course of humanity. But how did humanity learn of this monumental news? Leeuwenhoek wrote a letter.
Since then, the world of science communication has changed radically. These days, there's an entire industry of academic publishers that have become so fully integrated into the research system that many scientists don't realize that there's any distinction between doing science and publishing in journals. However, these journals cost an enormous amount of money (mostly public tax dollars), yet add little value to scientific research, while simultaneously slowing the pace of discovery and limiting the dissemination of knowledge. Recently, some of these journals have backed a new law that would further inhibit public dissemination of science in an effort to prop up their already massive profit margins.
But before I get to that, some history.
In Leeuwenhoek's day, there were learned men who set out to interrogate the world around them rather than trust the accounts of the ancients. These early scientists mostly communicated amongst themselves in person, in letters, or in books. They shared discoveries freely, and it was possible for an individual human to be aware of almost the entire sum of human knowledge. Leeuwenhoek's description of the wee beasties was sent to the Royal Society of London, and quickly disseminated to all interested parties in Britain and the rest of Europe.
Of course, the pace of research has accelerated dramatically since then, and it rapidly became untenable for simple correspondence and word of mouth to transmit new discoveries. The earliest scientific journals - collections of discoveries assembled and printed for distribution - began in 1665. Today, the number of scientific journals is in the thousands, and the people publishing in those journals are largely professional scientists. The funding of science also changed. Rather than being a pastime of the rich, who funded the research on their own, research is now the purview of highly trained professionals, funded largely from the government purse. Because the rise of journals occurred in tandem with the rise of professional, publicly funded science, the two are now inextricably linked, to the point where the publication of discoveries in journals is necessary to maintain a career in academic science. Job prospects, grants, and promotions all depend on the quality and quantity of publications.
Bora Zivkovic brilliantly documented the rise of professional science and modes of science communication (it's long, but well worth reading if you haven't already). His main thesis in this piece concerns the relationship between science and science journalism, but I also think there are insights into the way we as scientists communicate in our professional capacity. The modern link between the publication of science in journals, the funding of science based on publication record, and what publications mean for your scientific reputation means that when I talk to my colleagues, most don't really accept that there's any other way it can be done. Doing science means publishing in journals. Full stop.
But what if there is another way?
Recently, a prominent mathematician named Timothy Gowers started a boycott of one of the largest academic publishers (Elsevier), and it's gaining steam. Some news outlets have even taken to calling this the beginning of an "academic spring" - an uprising fueled by discontent about a powerful cabal that controls many aspects of our lives. I wouldn't equate the concerns of scientists to the suffering of the people in the Middle East under oppressive dictators, but I think we can and we should take that spirit of revolution to our little corner of the world. We don't need gatekeepers, and we can use the internet to build a movement and replace antiquated and crumbling institutions.
In his blog post announcing his boycott, Gowers said:
I don't think it is helpful to accuse Elsevier of immoral behaviour: they are a big business and they want to maximize their profits, as businesses do. I see the argument as a straightforward practical one. Yes, they are like that, as one would expect, but we have much greater bargaining power than we are wielding at the moment, for the very simple reason that we don't actually need their services. That is not to say that morality doesn't come into it, but the moral issues are between mathematicians and other mathematicians rather than between mathematicians and Elsevier. In brief, if you publish in Elsevier journals you are making it easier for Elsevier to take action that harms academic institutions, so you shouldn't.
[Italics in original, boldface mine]
I agree, but I would go further. We don't need any academic journal's services anymore. If you publish in any journal, you are making it easier for them to take action that harms academic institutions, so you shouldn't. Unfortunately, as I mentioned earlier, any academic scientist who took such a principled stand against all publishers would be ineligible for promotions or tenure and would have a much more difficult time securing grants to continue funding their research. But the truth is, journals add very little value to science, and impose huge monetary costs, as well as costs in terms of delayed publication and limited distribution.
Defenders of journals most often point to peer review as an essential bar that journals set, keeping good science filtered from the garbage. There are several problems with this argument, however. First, peer review only became a staple of science publication in the last 50 years. The paper describing the double-helix structure of DNA was not peer reviewed, but that didn't make it any less correct. Second, peer review doesn't actually prevent crap science from getting published. Take Andrew Wakefield's 1998 paper linking a vaccine with autism (now retracted), or more recently, a theory of everything based on a principle of "gyers" that has no math and fails to mention quarks or bosons. Even papers that are filled with good experiments and good science sometimes turn out to be wrong - peer review is not the stamp of authority many people seem to think it is. Finally, and I think this is the most important point: journals don't do peer review. Academic scientists do peer review for journals, and they do it for free. Journals merely manage this process.
Another defense of journals that I find even more tenuous is their filtering capacity. As scientists, we know something published in Nature is probably of higher quality than something published in the European Journal of Immunology. However, this rule of thumb doesn't always hold, and the more prestigious journals actually have higher rates of retraction than lower-tier journals. Besides, when I'm looking for papers on a particular subject, I don't browse old issues of Nature and Science; I do a search on Google Scholar or PubMed, and I actually read the papers and determine for myself whether or not I believe their data. One of the first skills we learn in graduate school is to critically evaluate a paper - you're not supposed to believe the authors just because they got published in a well-respected journal.
So, what are the alternatives? In my idyllic world, every lab has its own blog and publishes its results in real time, sharing them on a site like ResearchGate. Individual figures can be indexed on something like FigShare. Scientists can post their negative or confusing data and ask the entire world for help, or talk about their research plans and get critiqued. Meanwhile, altmetrics are being generated in real time to assess the validity of data, and scientists peer review on their own blogs or at some central location. The distribution of scientific knowledge returns to the model of earlier centuries - free and openly distributed - but now also instantly and globally distributed at the same time. If you don't like my model, that's fine - come up with your own, but we at least need a situation where other models can compete.
If we don't have other models, and we allow academic journals the monopoly on content that they currently enjoy, they will use their power to continue stifling free access and enriching themselves. Elsevier and other academic publishers are currently supporting the Research Works Act in the US, which would end a current policy that mandates open access to any scientific data that was funded by public money within one year of publication. Considering Elsevier's profit margins are higher than Google's, I have trouble understanding why they feel the need to be more restrictive.
Science benefits when the flow of information is unrestricted and everyone benefits when scientific knowledge advances. Journals no longer assist in the distribution of knowledge, they only impede it, and no one benefits from this arrangement except the journals themselves. It's time for something new.
The links for this post, as well as some others I bookmarked while researching it, can be found on Delicious.
Considering Elsevier's profit margins are higher than Google's, I have trouble understanding why they feel the need to be more restrictive.
They feel the need to maintain those high profit margins, and they cannot continue to do so without the government mandating their monopoly status. Businesses are expected to work in their own best interest. In an ideal world, governments would balance those selfish interests against the common good. We live in a far from ideal world.
In 2010, there was a paper published in "Neurotoxicology" which followed the health outcomes of baby primates vaccinated like humans. The results were alarming. Elsevier had this paper unpublished. Look at the conflicts of interest involving Elsevier and vaccine manufacturers. It's quite alarming. BTW, a simple PubMed search for "autism vaccines" will give you references to 540 published papers. Not just one by Wakefield.
The editing, reviewing, and organisation can be (and most of it already is) done by academics themselves. With electronic publishing, there are almost no overheads. It just takes someone to start the ball rolling.
The biggest barrier I see is the one you mention: the way we are locked into journal publishing for career advancement. How would you say we could start getting around this?
@ Mr. Obvious - That's true, I hoped my sarcasm would come through in the text...
BTW, a simple PubMed search for "autism vaccines" will give you references to 540 published papers. Not just one by Wakefield.
What's your point? I looked at the first 20 papers returned by that search and all of them are talking about the failure of the hypothesis. There is no link between vaccines and autism.
@ Scientist -
The editing, reviewing, and organisation can be (and most of it already is) done by academics themselves. With electronic publishing, there are almost no overheads.
The biggest barrier I see is the one you mention: the way we are locked into journal publishing for career advancement. How would you say we could start getting around this?
I'm struggling with this question myself. I think it might take a few high profile scientists in secure positions to get it started. But the trouble is, those people by definition have had success under the traditional model, so they don't have much incentive to change it. Perhaps some labs could take one project out of many and try out some other models, or maybe it will take convincing institutions (universities or government agencies) to promote it and recognize alternative metrics.
On the other hand, the physics and math folks have other structures in place that have gained some measure of institutional success (arXiv and Polymath), so maybe it will just take a grass-roots effort.
If you publish in any journal, you are making it easier for them to take action that harms academic institutions, so you shouldn't.
"Any"? I've spent some time in not-for-profit, fee-for-service academic publishing in the physical sciences. It's not a lucrative gig for the actual hands swabbing the decks and trying to avoid the need for errata as flotation devices. Talking down such jumpers may be more common than you think; I managed a few a year.
In this model, the question is whether there is value added in the production process, which includes manuscript editing (itself including a host of matters in assisting authors without a firm grasp of the target language to communicate their results clearly), deborking of amateurish art, fixing carried-forward erroneous references, sifting table-hurl, and so on. There is certainly an argument to be made that the level of service provided in this realm nowadays is so low as to be not worth paying for, and I'm sympathetic to it, but I'd argue that this is a predictable result of erring in the better-faster-cheaper balancing act.
On the other hand, if one simply takes the position that it's all going to be superseded in short order, well, yeah, it should be treated as fish-wrapping.
@ Narad - Thanks for your perspective. I realize that my stance is a bit radical, and that some good people providing real value might get thrown under the bus by the sort of revolution that I would like to see happen. Unfortunately, I think that the entrenched interests are such that gradual change that would protect the sort of outfits you describe won't be enough to change the system.
In this model, the question is whether there is value added in the production process, which includes manuscript editing (itself including a host of matters in assisting authors without a firm grasp of the target language to communicate their results clearly), deborking of amateurish art, fixing carried-forward erroneous references, sifting table-hurl, and so on.
I believe there is value in these services; I just don't think the value is commensurate with the cost of journals. If, for instance, universities retained all the money they spend on institutional subscription fees, they could hire full-time copy editors to provide those services for labs in their system. Or maybe individual labs could hire freelancers with the money they save on submission fees. If we unbundle all the value-added services journals do provide (managing peer review, filtering, editing, etc.), then other models could compete to provide the same services better and cheaper. Maybe the current model of a one-stop shop would end up on top, but I think journals would be forced to do it better themselves, and science as a whole would benefit.
It's weird for me to write out such a free-market centric view when I'm quite politically liberal, but there it is.
I think cutting peer review out of the loop is potentially dangerous.
Peer review has one killer feature: it introduces a second person into the publication loop who has a disincentive to publish crap research. If a researcher bypasses that person (e.g. by going for a fringe journal), this signals that the research is in fact crap.
To replace this system, we're going to need two things:
1) A scalable way for people who already have a good reputation to confer that on others. This could be as simple as a Facebook-style "like" button or as complex as a Google-style reputation network.
2) A way of selecting competent but disinterested reviewers for a given paper, and indicating when such a review has been performed. One reason the political debate gets so polarised is that people only comment on something when they strongly agree or strongly disagree. I have no desire to see science go the same way.
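The "Google-style" reputation idea in point 1 resembles PageRank: reputation flows along endorsements, so an endorsement from an established researcher confers more standing than one from an unknown. A toy sketch of that idea (the names, graph, and damping value are purely illustrative, not a real system):

```python
def reputation(endorsements, damping=0.85, iters=50):
    """PageRank-style scores. endorsements maps each person to the
    list of people they endorse."""
    # Collect everyone who appears as an endorser or an endorsee.
    people = set(endorsements)
    for targets in endorsements.values():
        people.update(targets)
    rep = {p: 1.0 / len(people) for p in people}
    for _ in range(iters):
        # Everyone keeps a small baseline; the rest flows along edges.
        new = {p: (1 - damping) / len(people) for p in people}
        for src, targets in endorsements.items():
            if targets:
                share = damping * rep[src] / len(targets)
                for t in targets:
                    new[t] += share
        rep = new
    return rep

# Hypothetical endorsement graph: a senior PI vouches for two
# trainees, and the postdoc also vouches for the grad student.
graph = {
    "senior_pi": ["postdoc", "grad_student"],
    "postdoc": ["grad_student"],
    "grad_student": [],
}
scores = reputation(graph)
# The grad student, endorsed by two people (one of them already
# endorsed), ends up with the highest score.
```

The point of the iteration is exactly the "conferral" Corkscrew describes: scores are not just vote counts, because who endorses you matters as much as how many do.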
Interested laymen like myself rely strongly on the journal system as a basic sanity-check - if research appears in a journal whose name I recognise (e.g. PLoS) then it's unlikely to be obviously wrong at time of writing. Without some sort of delegation of trust, learning about a new field of science would be effectively impossible.
@ Corkscrew - I don't disagree with anything you wrote. I don't dispute the importance of peer review; I just think there's a better way to accomplish it, and it won't happen unless we can weaken the stranglehold of traditional publication.
I'm surprised you didn't discuss the PLoS model--free access, non-profit, low cost to the publishing author to take care of expenses for editing, administration, etc.
@ Chris V - I didn't discuss it because I'm a bit ambivalent about it. I think it's a step in the right direction (open access is better than not) and I'm thrilled with the success that they've had. However, they're still playing with the same model, which might just reinforce it. Traditional publishers can point to PLoS and say, "see, there are other models!" but in reality it's the same model with a slightly different source of revenue. That said, I certainly prefer non-profit to for-profit in an area where the monopoly is so strong, but ultimately I would prefer more competition, even amongst entities that are doing things for profit.
Let's not overlook the relation between these information-control freaks and the intellectual property maximalists from the copyright cartel. They are the same people who have been stealing from the public domain for decades, culture and science no less. They have gathered an immense amount of power and have managed to buy politicians and create the necessary laws to perpetuate their outdated business practices (the Copyright Term Extension Act, among many others). Their latest round of ammunition has been SOPA, PIPA and ACTA, all of them very dangerous to freedom of speech and technological innovation. It's about time we start questioning the role of copyright, patents, IP and information control. If these people do to scientific knowledge what they have done to culture, the results could be catastrophic.
1) Peer review is important. Although there was less formal peer review in the past, there was the implied peer review of replication.
2) Until perhaps the mid- to late-nineties, there was a real need for expensively printed journals. That was the best way to preserve information. There weren't enough stable computer resources for the idea of primarily digital transmission and storage to make sense. It isn't true that science is "superseded" - science is built on past science. It's often of great historical value, and sometimes of scientific value, to view very old papers, so high-quality paper and ink made sense. There were far fewer journals, and far fewer publishing houses per capita. It made sense to use existing publishing infrastructure.
3) The development of the "publish or perish" model of academia caused the demand for more and more hyper-specialized journals, which publishers gladly provided for high cost, leading to severe inflation in the cost of journal subscriptions for academic libraries.
4) Primarily print journals aren't needed anymore. I think the PLoS model is great. But the bottom line is that whatever exact digital system is used, it should retain very strong peer review and very strong timely central searchability (those strike me as difficult goals to achieve with a "lab blog" system, although certainly a system of preliminary reports on lab blogs is easy to imagine).
5) However, there are major barriers. One less important one is that the cost of journals is borne indirectly by scientists. Money that goes to publishing house profits is money that can't go to other things, but to put it crudely, the institution's medical and science libraries usually pay the direct cost of reading the material, tempering outrage at costs at the level of the individual investigator. Much more importantly, an extremely well-known hierarchy of journal prestige tends to exist within every field. The highest circulation general journals, e.g. Science, Nature, NEJM, have the very highest prestige, then there is the next level down (often represented by the "Cell" family of journals in the biomedical sciences) and then the more specialized journals - and in every field, everyone usually knows which are the most prestigious.
This hierarchy is useful, but also corruptible and potentially inflexible. My point here is that it exists, and the next generation of publishing will have to take it into account.
Still, I think a switch to low cost, digital reporting of results is inevitable. The days of massively expensive subscriptions to glossy print journals are limited.
@ Harold - At first I thought you were disagreeing with me, but I got to the end and realized that I agree with everything you said.
I just want to say that of all the comments to various items that allow commenting on the web, these were probably the most civil, well-written, thoughtful collection that also included differences of opinion. I have hope.