A reader writes:
I was in a PhD program in materials science, in a group that did biomedical research (biomaterials end of the field) and was appalled at the level of misconduct I saw. Later, I entered an MD program. I witnessed some of the ugliest effects of ambition in the lab there.
Do you think biomedical research is somehow "ethically worse" than other fields?
I've always wanted to compare measurable instances of unethical behavior across different fields. As an undergraduate, I remember never hearing or seeing anything strange from the folks who worked in metallurgy, and it never seemed to be an issue with my colleagues in those areas in graduate school. Whenever there is trouble, it seems to come from the biomedical field. I'd love to see you write about that.
Thank you for doing what you do. Since that time I have had so many regrets; your blog keeps me sane.
First, I must thank this reader for the kind words. I am thrilled (although still a bit bewildered) that what I write here is of interest and use to others, and if I can contribute to someone's sanity while I'm thinking out loud (or on the screen, as the case may be), then I feel like this whole "blogging" thing is worthwhile.
Next, on the question of whether biomedical research is somehow "ethically worse" than research in other areas of science, the short answer is: I don't know.
Certainly there are some high profile fraudsters -- and scientists whose misbehavior, while falling short of official definitions of misconduct, also fell well short of generally accepted ethical standards -- in the biomedical sciences. I've blogged about the shenanigans of biologists, stem cell researchers, geneticists, cancer researchers, researchers studying the role of hormones in aging, researchers studying immunosuppression, anesthesiologists, and biochemists.
But the biomedical sciences haven't cornered the market on ethical lapses, as we've seen in discussions of mechanical engineers, nuclear engineers, physicists, organic chemists, paleontologists, and government geologists.
There are, seemingly, bad actors to be found in every scientific field. Of course, it is reasonable to assume that there are also plenty of honest and careful scientists in every scientific field. Maybe the list of well-publicized bad actors in biomedical research is longer, but given the large number of biomedical researchers compared to the number of researchers in all scientific fields (and also the extent to which the public might regard biomedical research as more relevant to their lives than, say, esoteric questions in organic synthesis), is it disproportionately long?
Again, that's hard to gauge.
However, my correspondent's broad question strikes me as raising a number of related empirical questions that it would be useful to try to answer:
1. Which areas of science have the highest incidence of cheating?
Here, the easiest numbers to get would be the incidence of misconduct (usually defined as fabrication, falsification, and/or plagiarism) as determined by rulings from federal oversight organizations (like the Office of Inspector General at the National Science Foundation, the Office of Research Integrity, or corresponding bodies outside the U.S.). These numbers could then be compared to estimates of the number of working scientists in the various fields of research (or the numbers of scientists who have secured grants in these fields, etc.).
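The comparison described here amounts to a simple normalization. As a minimal sketch (with entirely invented counts -- none of these figures are real statistics), the arithmetic would look like this:

```python
# A minimal sketch of the comparison described above: raw misconduct
# findings normalized by the size of each field's research workforce.
# Every number here is invented for illustration, not real data.

findings = {"biomedical": 48, "chemistry": 9, "physics": 6}   # hypothetical counts
workforce = {"biomedical": 310_000, "chemistry": 90_000, "physics": 55_000}

def findings_per_10k(field):
    """Misconduct findings per 10,000 working researchers in a field."""
    return 10_000 * findings[field] / workforce[field]

for field in findings:
    print(f"{field}: {findings_per_10k(field):.2f} findings per 10,000 researchers")
```

The point of the normalization is that a field with many more researchers can rack up a longer list of scandals while having the same (or a lower) underlying rate.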
Such data, though, will necessarily only give a partial picture. For one thing, they will only reflect instances of misconduct that have been detected, not those that remain undetected. Indeed, they may not even give completely accurate numbers as far as detected misconduct, given that not all institutions to which such misconduct is reported alert federal oversight organizations (even in cases where internal inquiries and/or investigations determine that allegations of misconduct are supported by the evidence). "Dealing with it" locally may help an institution protect its reputation, but such gaps in uniform reporting also mean that it's really hard to know just how many cheaters are out there, and how much cheating they have done.
It's worth mentioning, too, that there is a good bit of anecdotal discussion of ethical lapses people have seen -- which, presumably, are often left unreported (sometimes because the people who have observed the lapses are at the wrong end of a power gradient relative to the researcher committing the lapses). As with all data, if it's not routinely reported (and assessed), it's hard to include it in an accurate picture of a scientific field's ethical health.
There has been some research -- at least some of which we've examined here -- on what kinds of ethical lapses scientists notice in their professional environments (and regard as dangerous to the integrity of science), on how well they reckon their colleagues are doing at living up to the norms of science, and on what kinds of questionable behaviors in which scientists admit to indulging themselves. Such research offers glimpses of problematic behavior beyond that which results in official misconduct rulings. However, in the absence of more data, it's hard to draw solid conclusions.
2. In which areas of science is it easiest to get away with cheating?
This is a tricky question to answer empirically, since those who really get away with cheating are those whose cheating remains undetected -- so, we haven't identified it as cheating. I suppose a clever team of researchers might come up with conditions where they could inject more or less obvious instances of cheating into the pools of grant proposals or manuscripts submitted to journals in order to test whether peer reviewers are able to identify them as fraudulent, but I don't know whether such a study could get IRB approval.
Such deception might arguably do some harm to the subjects of the experiment.
Another option might be to collect data on the length of time it took for various instances of cheating to be discovered and adjudicated. Again, this has the limitation of giving us information on cases where cheating has been discovered, but not on cases where the cheaters succeeded in getting away with it. Still, trends in the time-to-detection data from various scientific fields might give us some insight.
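That time-to-detection comparison could be computed quite simply. Here is a sketch, assuming one has a record of when each fraudulent result was published and when the misconduct finding was issued (the cases below are invented purely for illustration):

```python
# A sketch of the time-to-detection comparison suggested above: given
# the year a fraudulent result was published and the year the misconduct
# finding was issued, compute each field's median detection lag.
# The cases below are invented purely for illustration.
from statistics import median

cases = [
    # (field, year published, year misconduct finding issued)
    ("biomedical", 2001, 2006),
    ("biomedical", 2003, 2005),
    ("physics", 2000, 2001),
    ("physics", 1999, 2002),
]

def median_lag(field):
    """Median years between publication and a misconduct finding."""
    lags = [found - published for f, published, found in cases if f == field]
    return median(lags)

for field in ("biomedical", "physics"):
    print(f"{field}: median detection lag of {median_lag(field)} years")
```

A longer median lag in one field would be weak but suggestive evidence that cheating there is harder to catch -- weak because, again, the data include only the cheaters who were eventually caught.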
I have heard the argument that cheating in the biomedical sciences is easier to get away with on account of the nature of biological systems (as compared, say, to physical or chemical systems). The natural variations in organisms, the theory goes, are sufficient that no one ever really expects to exactly reproduce the results other researchers have reported. There's also the possibility that certain lines of research lack rigorous statistical analyses, so that researchers may be satisfied getting results that kind of look close enough.
Believe me, though, replication of experimental results is plenty hard in the "hard" sciences. And, there's not much in the way of grant money to support attempts at replication, nor career rewards for succeeding in such attempts. Moreover, it seems likely that our evolving understanding of the best experimental methods and measurement techniques changes the kinds of results subsequent experiments produce, as well as changing what the experimenters are inclined to identify as "good data" versus noise.
Finally, if a cheater fabricates data that ends up being close enough to correct, this is not the sort of thing other scientists are likely to discover in their own efforts to replicate the reported results.
So, this may be a difficult question to answer empirically, but it seems like an important question, at least to the extent that scientists don't want to be too vulnerable to deception.
3. In which areas of science does there exist the most pressure to cheat?
One way to try to answer this question might be to collect data on the perceptions of researchers.
Additionally, we might examine the details of the reward structures in place or of the available resources for which scientists in different fields are in competition. This might include comparing the number of postdoc and faculty slots available to the number of Ph.D.s vying for them, or comparing costs to pursue a line of research to the amount of funding available (both in terms of the size of grants and the number of grants given compared to the number of researchers competing for them), or tracking the average number of publications researchers need on their CV to get hired or tenured.
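The reward-structure comparison lends itself to a couple of crude indicators. As a sketch (all figures here are made up for illustration; real values would come from workforce and funding-agency data):

```python
# Hypothetical pressure indicators of the sort described above: the
# ratio of new Ph.D.s to open positions, and the grant success rate.
# All figures are invented for illustration.

pressure = {
    # field: (new Ph.D.s per year, open postdoc/faculty slots, grant success rate)
    "biomedical": (9000, 1500, 0.18),
    "chemistry": (2500, 600, 0.25),
}

def competition_ratio(field):
    """New Ph.D.s competing for each open position."""
    phds, slots, _success = pressure[field]
    return phds / slots

for field, (_, _, success) in pressure.items():
    print(f"{field}: {competition_ratio(field):.1f} Ph.D.s per slot, "
          f"{success:.0%} grant success rate")
```

More Ph.D.s per slot and lower grant success rates would, on this rough accounting, mean more external pressure -- though, as noted below, individual dispositions matter as much as the ambient pressure.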
It's also possible that the frequency of misconduct findings, and the severity of sanctions, in a particular field compared to the researcher population in that field might shed some light on the pressure to cheat in the field. However, I suspect that there are some researchers who won't cheat even when faced with high pressure to do so, and others who will cheat if there's any reasonable hope of getting away with it, even if the external pressure to do so is really low.
4. In which areas of science does the professional culture do the least to discourage ethically questionable practices?
Think of this as the flip-side of question #3: what kind of pressure is there in a field to be good? What do the grown-up scientists in a field teach the trainees about the best practices and the pitfalls to be avoided? Are such issues even explicitly discussed?
Who are the sources of information about how scientists in a given field are supposed to behave? Is it just the example set by one's PI, or can a trainee count on multiple mentors (either at their institution or throughout their profession) when it comes to responsible conduct of research? Are trainees in a field getting a fairly uniform message about how scientists ought to behave, or widely divergent messages?
Of course, finding data that bears on this question might be quite informative in working out effective strategies to teach responsible conduct of research (and to make sure the lessons are taken to heart).
Maybe something like good answers to these empirical questions would support my correspondent's hunch that the biomedical sciences have bigger ethical problems to reckon with than do other scientific fields. Or maybe biomedical research will end up being in better shape than other fields on some of these dimensions and in worse shape on others.
Or, maybe, the data could reveal that all the scientific fields face more or less the same ethical challenges.
I do think it's worth actually trying to get a handle on these questions, though -- to find out how one's scientific field is doing as far as the prevalence of ethical and unethical behaviors, and to take steps to address the problems that the data reveal.
...and I will add another question,
5. Is cheating among scientists more prevalent than that observed among the general population? After all, scientists are also human and might be expected to behave in similar ways.
One might argue that cheating by scientists may have more severe consequences than cheating by other members of the public... Maybe... But then one has to only look at the reasons for the current financial meltdown to find an obvious exception to this rule.
In every human endeavor, including science, there will be bad apples.
All we can do is educate our children as to the dramatic consequences such behavior can have for themselves and other members of the public, and find ways to detect the bad apples and treat them accordingly.
You contribute to my... functionally balanced dynamic equilibrium of insanity. If that counts for anything.
Many of our conceptions (and misconceptions) about cheating in science, and the degree to which scientists would dare to get involved in cheating, are rooted in the fuzzy idea that science is a search for the truth and its practitioners are thus truthful. Morally, scientists are no different from the rest of the population, and there's no evidence that they are. The percentage of criminals in the general population is probably similar to their percentage among scientists. The personal motives for getting involved in criminal activity are also similar. And the degree of severity of such activity is correlated with the size of the reward the activity offers, the ease with which it can be executed and, to a lesser extent, the severity of the punishment that would follow should the criminal be caught (most criminals strongly believe that they'll never be caught).

The biomedical sciences offer the greatest rewards for cheaters (funding for one's research career that comes with greater respect and power) compared to all other sciences. Cheating in the biomedical sciences is the easiest to execute (some of the reasons have been spelled out by Janet), and the punishment, when a cheater is caught, is ridiculously light (one prison cell out of all the prisons in the U.S. would suffice to house all the convicted cheating scientists who have received jail terms in this country).

Just take note of the language used by the scientific community to describe crimes in science: scientific misconduct; misbehavior; cheating (probably the most severe terminology). Maybe when we start using the same terminology for scientific crimes that is used for any other crime in our society, the punishment for these crimes will also fit their severity.
Until then, criminals in science will continue to perpetrate their criminal activity, and they'll do it most in the neighborhood of science where there is the most money (the biomedical sciences) and where the police force is the smallest, involved in cover-ups, or nonexistent.
In pure mathematics, cheating is rare because it is very easy to detect. It is usually either plagiarism or claiming to be able to prove something you cannot, hiding this fact by presenting only a sketch of said proof.
Both techniques (the latter often combined with editor-supported lousy refereeing) end up being discovered in a very consistent way.
Perhaps it would be useful to compare the ethics of biomedical science to the ethics of driving a motor vehicle.
Most of us learn the rules of the road well enough to pass the exam and receive our permit to drive. After a few years of driving, most of us disregard speed limits and travel at the speed of other traffic. By mutual and unspoken consent the speed of traffic almost always exceeds the actual legal limit. Most of us, after years of driving, create our own set of personal rules to drive by, and are linked to other drivers mainly by our mutual disregard for the actual traffic laws.
Most of us feel traffic laws are not terribly important. As long as we have not caused an accident or been pulled off the road by the police, anything goes. We all know that the police do not have the resources or the time to enforce all the traffic laws.
So I would tender, tongue somewhat in cheek, that we are as we drive. Biomedical scientists no less than any other person.
I decided not to be a geologist in part because of the corrupting influence of money. There is enough uncertainty in geology that it was easy to make up maps, etc., and sell them to people looking to drill. By chance one might occasionally be right and thus build a reputation.
I think scientists are above average in honesty simply because the work we do is subject to review and criticism. I am inclined to think that climate scientists are a more honest group than their non-scientist critics, for example.
It seems to me that dishonesty in science is so uncommon as to be newsworthy when uncovered. And, yes, I do know of specific instances where scientists have been dishonest, so I do not claim perfection.
To your reader: Remember you are a moving and changing point of view. What may have struck you as an undergrad may not be what strikes you as a grad student or in medical school. Our ability to evaluate depends not only on the context (which is incredibly different in a Ph.D. vs. an M.D. program), but also on where we are in our life history.
"I think scientists are above average in honesty simply because the work we do is subject to review and criticism."
A trained cheating scientist would have no problem falsifying data that can easily pass the review process by a journal editorial board and also by members of an NIH study section. The review process simply cannot detect a good falsifying job. Moreover, there is no proof that cheating scientists are somehow being deterred by the review process and thus being filtered out of science before they can get away with their cheating. On average, scientists are probably more sophisticated than the average joe, and one could expect the cheating scientist to be more sophisticated than the regular cheating joe.
The empirical study I would like to see, is what happens to the cheats in science when they do get caught.
The typical ORI "settlement" goes something like, don't work for the PHS for two years and have all your papers overseen by someone for the same length of time. In other words, the degree to which this amounts to punishment/deterrence is left up to the community: if the offender can find someone to employ them on the terms of the settlement, they experience essentially no consequences.
So the trick, when cheating in science, is to think big: if you bring in a lot of grant money before you get caught, there will be institutions lining up to employ you and put you right back on that cheating, money-generating horse.
Outright data falsification is not the only sort of unethical practice that you can find in science. That sort of thing is obviously wrong, but there are plenty of other things that people do that aren't as blatantly wrong but nonetheless shouldn't be done.
I don't know if any field is worse than any other, but my observation as a physicist who often dips his toe into biomedical realms is that biomedical research seems to be a uniquely awful pressure cooker compared to the rest of academic science (which is not to say that everything else is fine and dandy). That would seem to give more incentives for cutting corners. On the other hand, I think biomedical people also have more discussions of ethics (and not just in regard to human subjects and animals) than other fields. Whether those discussions prevent problems or merely cover asses in response to problems is not something I know enough to comment on.
On a more serious note...
Bruce G. Charlton, Editor-in-Chief of Medical Hypotheses and Professor of Theoretical Medicine at the University of Buckingham, UK, wrote the following in an editorial back in 2009:
"In a nutshell, the inducements to dishonesty have come from outside of science -- from politics, government administration and the media (for example), all of whom are continually attempting to distort science to the needs of their own agendas and convert real science to Zombie science. But whatever the origin of the pressures to corrupt science, it is sadly obvious that scientific leaders have mostly themselves been corrupted by these pressures rather than courageously resisting them. And these same leaders have degraded hypothesis-testing real science into an elaborate expression of professional opinion ('peer review') that is formally indistinguishable from bureaucratic power-games." (p. 634)
It is an interesting statement. Science has been corrupted and distorted by outside forces that want to use science to serve their various agendas, and scientists have done little to resist that corruption, resulting in "Zombie science."
I would have thought that the larger part of the problem would have to do with the funding and sponsorship of scientific research. Research that is funded to serve a particular interest, or to establish scientific credentials for a sponsor's particular product or treatment or device -- in my humble view, it is these arrangements that would corrupt science.
But perhaps I am saying the same thing as Professor Charlton, just in a different way...?
The honest answer from any scientist, of course, is "some field that I'm not involved in."
And frankly I think that IS an honest perception. Most ethical rules are restrictions or extra burdens, and you're far more aware of the ones that apply to you and your peers than you are of those that apply to other fields. So it looks like those other guys are just waltzing along willy-nilly while WE have to jump through all the dang hoops.
Or, we could just say the field with the most cheating is the one that rewards cheaters the best. Boy, now there is a whole other argument.
The differences between science and Wall Street, as far as unethical behavior goes, are in the details. Basically, it's a question of incentives. Where are you most likely to find research that is funded or performed by people with a financial interest in the outcome? Where do you find the highest pressure to publish in GlamourMags and other high-impact journals? Those are the areas where the temptation to cheat will be greatest. The biomedical sciences have both factors present and strong, but there are other such fields. Pure mathematics ranks low on both counts. Physics depends on subfield, with some areas (notably solid state, which has industrial applications) and some labs having high amounts of both factors while other areas are low.
There is also the question of how likely cheaters are to be caught, as Estraven says above. To follow the traffic law analogy, I am more careful to obey school zone speed limits out of a perception (which may or may not be accurate) that speeders are more likely to be caught there (compared to other stretches of road which are not known to be speed traps) and face stronger penalties if they are caught. Similarly, in fields where fraud is relatively easy to detect you will find less of it. These days, unfortunately, the effort involved in duplicating someone else's work (which by definition won't get you a GlamourPub) means that it's easier to get away with small-scale fraud in any Big Science endeavor, whether biomedical or physics.
Another question is whether cheating is the main form of misconduct of interest. Psychiatry, for instance, is infamous for countless highly unethical experiments. While data wasn't fudged -- as far as I know, anyway -- reading the psychiatric literature is often an exercise in trying to figure out how various studies got past an IRB.
For instance, there's an entire literature in which researchers attempt to experimentally induce or worsen psychosis in patients, many of whom were stable. The informed consent process was often ignored or subverted in these experiments.
On the other hand, they didn't fake data. Call it what you will.
Is a required disclosure of all potential conflicts of interest prior to publication of a research report a new thing?
As, for instance, is asked for here:
I'm a mathematician, and I agree with the comment above that cheating is rare (and perhaps even nonexistent) in pure mathematics. My usual area of research is in applied physics, and my peers are mathematicians, physicists, and engineers. I have observed some questionable behavior within that group, but even the worst examples pale in comparison to what I have seen during my involvement in a problem in ornithology. I have witnessed examples of failing to acknowledge mistakes, making scientifically dishonest comments in referee reports, refusing to openly discuss data that doesn't support a certain point of view, fostering an atmosphere of fear that discourages others from discussing such data, ignoring the contributions of others, and failing to take a stand against these unethical practices.
I think you can (simply) categorize those areas susceptible to unethical behavior using two conditions:
(1) How easy (or difficult) is it to get caught?
(2) What are the rewards for unethical behavior?