There is a new paper, just coming out in Proceedings of the National Academy of Sciences, that explores the idea that humans have undergone an increased rate of evolution over the last several tens of thousands of years.
By an increased rate of evolution, the authors mean an increased rate of adaptive change in the genome. By recent times, the authors mean various things, depending on which part of the analysis you examine, and depending on what is meant by "increased." ... In other words, the timing of an event that is not really an event (but rather a change in rate of something) is hard to specify. The time scale we are talking about here is several tens of thousands of years.
The authors attribute the increase in the rate of evolutionary change mainly to an increase in population size over the last 50,000 years, but they also point out that the biggest change in the rate of population increase would have come with the origin of agriculture, beginning about 10,000 years ago. This partly underscores the difficulty of talking about vague (in time and space) events, but it also points out a potential problem with the analysis.
But before I delve into what I think is wrong with the analysis, let's make clear what they are saying, and point out what is probably very valid and important.
Essentially, evolutionary change, and the amount of evolutionary change that happens in a population, begins with mutation (happening at a certain rate) and continues through either random processes or selection, which cause a mutation to become more or less common over short to medium time scales. If the mutation is deleterious, it disappears quickly, and when looking at long time scales, we expect to see very few deleterious mutations that are old. If the mutation is neutral (does not have an effect one way or the other), then we expect to see it become more common over time, then less common, then more common, in a kind of random walk. If there are two different forms (alleles) of a gene (the original one and a mutation) and both have the same adaptive effects (in other words, the mutation was neutral), then we expect these two alleles to increase and decrease in relation to each other randomly, until eventually one of them accidentally bumps into "zero" and disappears, leaving the other represented at 100%. Any neutral mutation that arises will by definition start off at a very low percentage, so the new mutation is usually the one that bumps into zero first, thus disappearing.
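That random walk is easy to see in a simulation. Here is a minimal sketch of my own (an illustration of the standard Wright-Fisher idea, not anything from the paper): each generation, the next pool of 2N gene copies is drawn at random from the current allele frequency, and we track a brand-new neutral mutation until it is lost or fixed.

```python
import random

def neutral_drift(n_individuals, max_gens, rng):
    """Track one brand-new neutral mutation (a single copy among 2N gene
    copies) until it is lost (0.0), fixed (1.0), or time runs out."""
    n_copies = 2 * n_individuals
    p = 1 / n_copies  # a new mutation starts at the lowest possible frequency
    for _ in range(max_gens):
        # next generation: 2N copies sampled at random from the current pool
        p = sum(rng.random() < p for _ in range(n_copies)) / n_copies
        if p in (0.0, 1.0):
            break  # the allele has bumped into zero, or taken over entirely
    return p

rng = random.Random(42)
fates = [neutral_drift(100, 10_000, rng) for _ in range(500)]
print(f"lost: {fates.count(0.0)}, fixed: {fates.count(1.0)}")
```

As the paragraph above predicts, nearly all of the 500 new mutations bump into zero; standard theory says a new neutral mutation fixes with probability 1/(2N), here 1 in 200.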
Geneticists have done a lot of work modeling the math of change over time in frequencies of alleles that are either deleterious or neutral. The neutral part is pretty easy, because that is simple probability. The deleterious side is a little more difficult, because "deleterious" is both a quantitative and a qualitative thing ... just how deleterious is a particular allele? On the other hand, it is pretty easy to insert a deleterious allele into a population of laboratory critters (bacteria, mice, etc.) and see what happens. Therefore, the statistical models that predict the behavior of deleterious mutations over time are grounded in a good sense of reality, and as a result are pretty good too.
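The "just how deleterious" question maps onto a selection coefficient, s. A quick deterministic sketch (my own illustrative numbers, not from the paper) shows how even a modest fitness cost drives an allele down, generation by generation:

```python
def next_freq(p, s):
    """Deterministic one-generation change for an allele whose carriers
    have relative fitness 1 - s (genic selection, no drift)."""
    return p * (1 - s) / (1 - p * s)  # denominator is the population's mean fitness

p = 0.10             # start the deleterious allele at 10%
history = [p]
for _ in range(100):
    p = next_freq(p, 0.05)   # a 5% fitness cost
    history.append(p)

print(f"after 100 generations: {p:.5f}")
```

With s = 0.05, a hundred generations is enough to push the allele from 10% to well under a tenth of a percent; the decline is strictly monotonic, which is why old deleterious alleles are rare.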
So, when studying the genetics of populations, geneticists have the ability to predict what genetic variation should look like given the null conditions of a particular mutation rate, a particular population size and structure over time, and no positive selection. The distribution and nature of genetic variants (alleles) ... their distribution both in the genome and across a population ... should look a certain way, and when they don't, you are probably looking at positive (adaptive) selection.
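That null-model logic can be sketched in a toy simulation (again my own illustration; the paper's actual statistics are far more sophisticated than this): simulate many neutral loci to see the expected spread of allele frequencies, then watch a positively selected locus end up far outside that spread.

```python
import random

def simulate(p0, s, n_copies, gens, rng):
    """One locus under drift plus (optional) selection; returns the
    final allele frequency."""
    p = p0
    for _ in range(gens):
        if not 0.0 < p < 1.0:
            break                                  # allele lost or fixed
        p_sel = p * (1 + s) / (1 + p * s)          # selection nudges the frequency
        p = sum(rng.random() < p_sel for _ in range(n_copies)) / n_copies  # drift resamples it
    return p

rng = random.Random(7)
COPIES, GENS, P0 = 400, 150, 0.05

# Null expectation: neutral loci wander individually, but on average
# their frequency stays right where it started.
neutral = [simulate(P0, 0.0, COPIES, GENS, rng) for _ in range(60)]

# A locus with a 20% selective advantage sweeps toward fixation --
# a frequency far outside what the neutral null model produces.
selected = simulate(P0, 0.20, COPIES, GENS, rng)

print(f"mean neutral: {sum(neutral) / len(neutral):.3f}, selected: {selected:.3f}")
```

The selected locus sits in a place the null model says it should essentially never be, and that kind of departure from null expectations is, in spirit, how adaptive selection gets flagged.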
I will leave it to others who know more about the statistics of population genetics than I do to evaluate the research presented in this paper. Here, in fact, I will rely on the authority of the pretty bad-ass population geneticists and evolutionary scientists who wrote the paper. Nonetheless, I eagerly await a critical analysis by my colleagues.
Going on the assumption that this research is OK, or at least, if flawed, not utterly wrong, there are two conclusions of special interest. One of these conclusions supports ideas that have already been suggested about human evolution, but in a new way, with new and more precise information, and the other contradicts a commonly held belief that those of us who think about these things a lot have long known to be a fallacy.
First, the rate of human evolution is higher now, and has been higher for tens of thousands of years, than the rate of evolution is expected to be for, say, a typical ape, and higher than we believe it may have been previous to, say, 50,000 years ago. In other words, higher than expectations, with this increase being relatively recent.
Yea! We evolve fast! Good for us. Of course, just remember that the ultimate outcome of evolution so far seems to be extinction, at least this has been the case for most species, so don't you get all full of yourself, human!
The other conclusion is this: Yes, you hear all the time that "culture overrides biology" or similar sentiments. Well, yes it can, but it is also very often not true, and I can think of many examples of culture very much NOT overriding biology. This study indicates that as the range, intensity, and ubiquity of various cultural adaptations (read: technology of all sorts, from agriculture to cell phones) increase over time, so does the rate of genetic evolution. We are probably adapting to our culture. Makes sense.
Here is what I do not like about the paper. The researchers make some seriously important assumptions about population size and change in human population over time. In so doing, they model population as an ever-increasing value. There is no part of their model that includes a population crash. This is based on a number of papers that are individually potentially weak in this area, as well as, I think, a general assumption that archaeologists and others often make about the past. I've written and given talks about this phenomenon in the past, but apparently my wisdom has not yet been understood (damn them!)... We tend to assume that changes we see happening today, in a certain direction, always happened in that direction in the past. We also tend to assume that a given feature of human endeavor... writing, agriculture, whatever... is tied by an unbroken line to an origin evinced in some record (or assumption) in the past. Both of these assumptions are invalid, yet powerful in shaping our view of prehistory and history.
Indeed, the idea that agriculture was invented once (in each of the several areas in which it was invented) and continued to the present is an assumption that has not been tested. How do we know agriculture was not invented a few times over the last 100,000 years, but fell totally out of use in many areas?
This one-wayness and simplicity imposed on the past very much applies, inappropriately, to the population model used in this paper. The authors are well aware of population crashes and bottlenecks, but probably do not adequately take them into account in this work. For instance, if you go into the archaeological record and look at the Last Glacial Maximum (about 18,000 years ago), it is actually pretty hard to find evidence of people living anywhere but in a few locations. The assumption of a steady increase is unfounded.
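The worry can be made concrete with a toy calculation (every number here is my own rough, hypothetical illustration, not the paper's model): the supply of new mutations each generation scales with population size, roughly 2N copies times the mutation rate times the number of potentially adaptive sites. A trajectory with an LGM-style crash therefore delivers fewer total new mutations than one of smooth, ever-increasing growth.

```python
# Hypothetical, back-of-the-envelope numbers -- not the paper's model.
MU = 1.1e-8        # rough per-site, per-generation human mutation rate
TARGET = 1_000     # assumed count of sites where a new mutation could be adaptive

def new_adaptive_mutations(pop_sizes):
    """Expected number of new adaptive mutations entering the population
    over a trajectory of per-generation population sizes: each generation
    contributes 2N gene copies * mutation rate * target sites."""
    return sum(2 * n * MU * TARGET for n in pop_sizes)

GENS = 2_000       # roughly 50,000 years at 25 years per generation

# Smooth exponential growth, versus the same curve with a deep crash
# lasting 200 generations (a stand-in for a glacial bottleneck).
steady = [int(10_000 * 1.002 ** t) for t in range(GENS)]
crash = [n // 20 if 800 <= t < 1_000 else n for t, n in enumerate(steady)]

print(f"steady growth: {new_adaptive_mutations(steady):.1f} expected new adaptive mutations")
print(f"with a crash:  {new_adaptive_mutations(crash):.1f} expected new adaptive mutations")
```

The point is not the specific numbers but the shape of the dependence: if the mutational input driving the acceleration scales with N, then crashes that the model omits directly reduce that input.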
Nonetheless, I liked the paper. Look for it to be widely cited and frequently abused, like all good papers.
Hawks, John, Eric T. Wang, Gregory M. Cochran, Henry C. Harpending, and Robert K. Moyzis. (2007) Recent acceleration of human adaptive evolution. Proceedings of the National Academy of Sciences. Forthcoming.
I wouldn't wonder that we have an increase in the rate of evolution given the differing circumstances people have been exposed to over the past 10K years as opposed to the past 150K. When you consider living in larger, settled populations, often in conjunction with animals, I would think the unprecedented disease/parasite load would require humans to evolve faster (the Red Queen effect). It would be interesting to look at rates among different populations (although I would fear some groups claiming 'we're more evolved than them'). I'm sure there are scads of prehistoric plagues that would have forced the situation, and that this would also have resulted in multiple population bottlenecks through time.
So, uh...when are we scheduled for blowing stuff up with our minds? I think that's the real question here...
I have read your article on evolution rates as well as numerous others, and has anyone tried to estimate the number of mutations required to get a Homosapien evolved from a "Lucy"? Austrailopithicus.... sorry for the spelling.
It seems to me we know what we had 6.3 million years ago and we know what we have now so what are the deltas to get to where we are now and is there enough time?
I see no discussion on this matter...