When the college revised the general education requirements a few years ago, one of the new courses created had as one of its key goals to teach students the difference between primary and secondary sources. Which, again, left me feeling like it didn't really fit our program-- as far as I'm concerned, the "primary source" in physics is the universe. If you did the experiment yourself, then your data constitute a primary source. Anything you can find in the library is necessarily a secondary source, whether it's the original research paper, a review article summarizing the findings in some field, or a textbook writing about it years later.
In many cases, students are much better off reading newer textbook descriptions of key results than going all the way back to the "primary source" in the literature. Lots of important results in science were initially presented in a form much different than the fuller modern understanding. Going back to the original research articles often requires deciphering cumbersome and outdated notation, when the same ideas are presented much more clearly in newer textbooks.
That's not really what they're looking for in the course in question, though-- they don't want it to be a lab course. But then it doesn't feel like a "research methods" class at all-- while we do occasional literature searches, for the most part that's accomplished by tracing back direct citations from recent articles. When I think about teaching students "research methods," I think of things like teaching basic electronics, learning to work an oscilloscope, basic laser safety and operation, and so on. The library is a tiny, tiny part of what I do when I do research, and the vast majority of the literature searching I do these days can be done from my office computer.
I'm going to share some observations which maybe complicate Chad's "two cultures" framing of research (and of what sorts of research methods one might reasonably impart to undergraduates in a course focused on research methods in a particular discipline).
First, on the science side, I think there's value for students in digging up, and digging through, the primary literature of the discipline in which they are learning to do research. Sure, you don't want the primary literature to seem more authoritative as a source of knowledge than the actual experimental system a student is studying*. But neither, I think, does it make sense to just throw an experimental system at a student and say "Here, come up with a good research question and figure out how to answer it." Having some sensible starting point, in terms of both theoretical background for the phenomena being studied and experimental methods**, makes it more likely that the student learning how to do research will make some progress rather than being paralyzed by an overabundance of possible approaches.
And, at least for some scientific fields (like chemistry), what's in even the most recent textbooks may not be enough to give insight into how to tackle a particular experimental system. Textbooks may give very good explanations of the general principles underlying particular kinds of behavior without giving specific empirical results for an instance of that behavior you want to explore further. Plus, textbooks are largely silent about the specific details of experiments that have been useful in building the knowledge we have of either specific phenomena or large classes of related phenomena.
Thinking back on my undergraduate studies in chemistry, I'm actually having a hard time recalling a research experience in which some piece of the primary literature didn't give important information about how to start poking at the chemical phenomenon that was the focus of the research.
Indeed, I think engaging with the primary scientific literature can also help students get a fuller understanding of what the activity of scientific research is about. In a post of yore about undergraduate research, I wrote:
1. Making knowledge is different from learning knowledge.
One of the important things undergraduate research can do is give a student insight into the activities behind the production of the knowledge she has been learning in class. There's a way in which this can be unexpectedly frustrating -- it's hard to get research to work, and thus your attempts to build a wee chunk of new knowledge may founder, something that can especially bug the student who has an easy time learning stuff from textbooks or class. However, a research experience can also make you attentive to the labor, ingenuity, and moments of good luck behind all that solid knowledge in your textbooks.
2. Scientific research can turn on creativity.
Figuring out how to approach a problem in the lab -- one that no one else has solved -- can be fun. For the student who thought being good at science was a matter of having a good memory and a solid sense of the underlying principles of a subject, seeing the role creativity can play is frequently a joyful awakening.
How could the primary literature contribute to conveying these lessons?
One way is by exposing some intermediate moments in the process of building the scientific knowledge deemed "finished" enough to appear in the textbooks -- showing students that even results robust enough to publish are not necessarily a clear and comprehensive window into just what's going on with the phenomenon being studied. In other words, scientific knowledge is hardly ever built in one shot. Rather, it takes a bunch of "finished" research projects -- plus some insight into how they fit together. Maybe understanding this point is not the primary goal of most undergraduate research experiences in the sciences, but I'd hope it's at least a secondary goal.
On a related note, watching the evolution of experimental approaches to a particular system through different articles in the primary literature can help students grapple with the fact that scientific research involves both trial and error and creativity. When you're trying to find out facts about the world that no one knows yet, the best method for how to find those facts (or how to recognize them when you've found them) is not obvious up front. Seeing the range of things that grown-up scientists have tried to get at new knowledge makes this uncertainty more vivid -- and may also help students come up with some new approaches inspired by those described in the primary literature.
Finally, to the extent that undergraduates learning research methods in science may be on a track to becoming grown-up scientists themselves, it's good for them to learn the norms of the tribe of science. These include expectations about what sorts of things they should communicate to other members of the tribe when reporting their results, and norms about giving other members of the tribe credit for their discoveries and methodological innovations. Getting comfortable with some exemplars from the primary literature helps them see what scientific papers in their field look like and gives them some experience in extracting the information they need. As well, "reading around" in the area of their research projects can help students identify (and cite) where our background knowledge as it stands now came from, and who set out the experimental methods that serve as the starting point for the students' tinkering. Plus, facility working through the details of research papers seems like an essential skill for a future scientist who wants to keep up with the literature -- waiting around for these details to make the next edition of the textbook is bound to slow down the progress of one's own research.
It's also worth noting that research in the humanities is not necessarily as different from research in physics as Chad imagines. In philosophy, when reading around in our research area, we also use the trick of tracing back direct citations from recent articles. Our searches of the literature can also be conducted from our office computers using electronic databases (and, increasingly, electronic journals). And the literature search is, as in the sciences, a relatively small component of the research.
Say your project is to figure out the right analysis of a concept like "cause". Reading how other philosophers have analyzed that concept is where you start, but not where you end. Rather, you have to work out what's wrong (and what's right) with those analyses. You have to work out a methodology for distinguishing reasonable accounts of that concept from unreasonable ones. Maybe you have to come up with a set of plausible test-cases, or evidence to counter some of the assumptions on which other analyses of the concept have been grounded.
It's not exactly mixing up solutions or twiddling oscilloscope knobs, but research in the humanities involves something like engagement with the phenomenon you're trying to understand. It's more than just synthesizing all of what others have already said about that phenomenon. Here too, even in the humanities, the primary literature can provide an array of methodological exemplars, giving you some ideas about how to move forward and break new ground.
None of this is to say that every pass at teaching undergraduates about research methods in a field needs to be focused on information literacy (as we're now describing the skills involved in doing a good literature search). But I don't think we should write off the pedagogical value of poking around in the primary literature, even for science students.
*This is where the Scholastics went overboard in the era before the scientific revolution, the joke being that they thought the right way to determine how many teeth a horse had was to see what Aristotle had said on the subject, rather than heading over to the stable to look at the dental endowment of actual horses.
**Or analytic and/or computational methods, for those learning something about the research theoreticians do.
I completely agree that digging up the primary research can be very valuable for a science grad student. In my field (engineering/math), it's often the 20-30 yr-old ground-breaking papers that I find most clear/useful. That is because these papers present material that, by definition, was new and perhaps even counterintuitive to their audience, so they usually go the extra mile to work everything out. For a grad student, new to a particular (sub)field and not conversant with most of the prior years' development, such detailed explanations can really help you appreciate the nuances of results that these days are talked about in very casual terms.
In chemistry, some days it felt like the division was:
Primary literature: your Advisor's advisor's (or at least your advisor's heroes') papers.
Secondary literature: the other stuff.
In some ways that was good - getting grounded in the methods tried 20 or 30 years before you started often lets you read a bit more critically when they are published again as "new" by someone else. Especially when someone else is promising their incarnation of it is more versatile than a swiss army knife. (It's not.)
FWIW - a series of articles by David Ellis found that chaining (following citations backward and forward) was a common tactic to all areas of scholarship he studied including humanities, social sciences, and various areas of the natural sciences. Only slightly less effective in areas like engineering where there are many fewer citations.
Also, I got through an entire undergrad physics degree without ever reading a real scientific article (my undergrad is from the same time and place as Chad's PhD). I think the lack of that, and of undergrad research, really hurt my experience as well as my understanding of physics.
Over the years, not just people but entire fields seem to forget things. The paper I am currently working on is a case in point. There is a related phenomenon which was considered important enough to rate a couple of review papers in the late 1980s. The two review papers in question are among the least cited of any reviews I have come across. There was a follow-on theoretical study in 1989, and after that research into the phenomenon pretty much stopped dead for no obvious reason.
In some cases the forgotten phenomenon then gets rediscovered by somebody (grad student or more senior scientist) who was unaware of the previous work, and the paper gets past reviewers who were also unaware of the previous work. There may also be a Matthew effect* in play which steers researchers toward particular highly cited references at the expense of other more relevant references.
*"For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath."--Matthew 25:29.
I have thought that learning how we came to know what we know, and why we thought it important to learn, is more fun than just knowing what we know. I'm a fish taxonomist, and a taxonomist must have a very good knowledge of the literature in the group back to 1758, as well as the literature on how one does taxonomy. Perhaps it is that taxonomy is part science and part scholasticism.