Are Mendel’s laws of genetics based on falsified data?

Image from Google Doodle, celebrating Mendel’s 189th birthday.

You may have first heard about Gregor Mendel back in high school biology class. Mendel, an Augustinian monk, published a paper in 1866 that laid the foundation for modern genetics. Although we now recognize the importance of his paper, Mendel died before his findings were recognized as one of the most important discoveries in genetics.

Mendel’s paper included data from experiments carried out over eight years, statistical analysis, and mathematical models, and as Hartl and Fairbanks (2007) put it, “Mendel’s paper appears to reflect the author’s simplicity, modesty and guilelessness”. Yet the paper still stirs controversy so many years after its publication. Fisher, the famous statistician, studied Mendel’s paper and came to the conclusion that the results were too good to be true, suggesting Mendel fiddled with the data (Fisher 1936). Many other papers followed that questioned Mendel’s honesty. It is a pity that Mendel’s original records are lost, presumably burned around the time of his death; the missing original data only adds to the polemic. But in the last 10 years, a series of papers focusing on Mendel’s work has been published, and they all reached the same conclusion: Mendel did not fabricate his data. Unfortunately, allegations of data falsification still follow Mendel’s legacy. His paper does have a few features that would not be considered best practice today, such as vagueness about methods (referring only to “further investigations”), reporting partial data from his experiments, repeating experiments whose results did not match expectations, and pooling data.
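
For readers curious what a “too good to be true” check looks like in practice, here is a minimal sketch of a chi-square goodness-of-fit test against the expected 3:1 Mendelian ratio. The counts are invented for illustration and are not Mendel’s actual data; Fisher’s 1936 analysis went further and aggregated such statistics across all of Mendel’s experiments.

```python
# Minimal sketch: chi-square goodness-of-fit test of hypothetical F2 counts
# against the expected 3:1 Mendelian segregation ratio.
# The observed counts below are made up for illustration only.
from scipy.stats import chisquare

observed = [705, 224]                      # e.g. dominant vs. recessive phenotype (invented numbers)
total = sum(observed)
expected = [total * 3 / 4, total * 1 / 4]  # counts expected under a 3:1 ratio

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.3f}, p = {p_value:.3f}")

# A single high p-value only means the data are consistent with 3:1.
# Fisher's argument was that, pooled across all of Mendel's experiments,
# the fit was closer to expectation than chance sampling should allow.
```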

Based on extensive investigation of the plant material available at the time, historical facts, and data from experiments conducted during Mendel’s era, Fairbanks and Rytting (2001) addressed the allegations and concluded that there is no justification for claiming Mendel falsified his data.

The allegations that Mendel committed scientific misconduct are widespread, and his reputation as a scientist has been tainted. These allegations do not change Mendel’s laws or Mendelian genetics, which remain the foundation of transmission genetics.

Literature

Fairbanks, D. and B. Rytting. 2001. Mendelian controversies: a botanical and historical review. Amer. J. Bot. 88(5):737-752.

Fisher, R. A. 1936. Has Mendel’s work been rediscovered? Annals of Science 1(2):115-137.

Hartl, D. and D. Fairbanks. 2007. Mud sticks: on the alleged falsification of Mendel’s data. Genetics 175:975-979.

 

 


The year in review: the 2011 top 10 cases involving misconduct in science

Top 10 lists are ubiquitous this time of year. The media loves to compile lists and we love to read them. One can find lists to please every taste, from popular subjects such as the top 10 movies, music albums, and books, to less conventional topics, like National Geographic’s editors’ picks of the top 10 weirdest life-forms of 2011.
The past year was, sadly, a big year for research misconduct. Here is our contribution to the top 10 lists: the major acts of scientific misconduct in 2011.

10. Sangiliyandi Gurunathan, a professor at Kalasalingam University in India, was asked to resign after being accused of data manipulation.

9. Salon, an award-winning online news and entertainment website, retracted a controversial 2005 story by Robert F. Kennedy, Jr. that linked a mercury-based preservative formerly used in vaccines to autism in children.

8. Oklahoma University placed a professor accused of scientific malpractice on an unpaid leave of absence.

7. Charles Monnett, a wildlife ecologist who studied polar bears, came under investigation for scientific misconduct.

6. A scientist at Erasmus Medical Center in Rotterdam was fired for violating academic integrity.

5. Judy Mikovits, formerly the research director at the Whittemore Peterson Institute for Neuro-Immune Disease in Reno, Nevada, was fired after duplicated data was found in a study linking a virus to chronic fatigue syndrome.

4. Former MIT researcher Luk van Parijs was sentenced to house arrest, community service, and financial restitution.

3. Marc D. Hauser, a psychology professor found guilty of eight counts of scientific misconduct, will not return to teach at Harvard University.

2. Editors of 16 peer-reviewed journals retracted Joachim Boldt’s papers based on “unethical” research.

1. Diederik Stapel, of Tilburg University, admitted falsifying data for dozens of papers published over the last decade.

The past year was a bad one for scientific integrity. Coverage of scientific misconduct used to be confined to academic media, but this past year many cases caught the attention of popular media outlets, and the whole scientific community (researchers, journals, and editors) took the heat.
We hope cases of scientific misconduct are reduced in 2012. Concern about research integrity is growing, and many institutions and research foundations are working diligently to spread best practices in research and to prepare scientists to be responsible and ethical contributors to the world of science. Never before have we seen so many people involved in promoting scientific integrity. Let’s make 2012 the year of ethics in science.


The Dutch Debacle Continues

Last week Science retracted a 2011 paper written by a prominent social psychologist who has been accused of fabricating massive amounts of data. Dr. Stapel is a prolific author, and according to Retraction Watch, many more of his papers may be retracted as the case develops.

It seems that Dr. Stapel is the protagonist of the biggest case of scientific misconduct in recent years, so it is no wonder it is getting so much attention from the media. I am sure his tainted reputation is well known to everybody working in the field of social psychology, and researchers will think twice before citing any of Dr. Stapel’s papers in their own work. But what about papers that were retracted but didn’t draw much attention? Flagging a retracted paper is not so difficult nowadays, when most journals are available online, but back in 1999 Budd et al. analyzed retracted articles and found they had received more than 2,000 post-retraction citations, and fewer than 8% of those citations acknowledged the retraction in any way. I would like to believe those numbers look better today. Still, as Dr. Sheldon Tobe of the University of Toronto said, a study can be retracted, but “it can be hard to make its effects go away.”

More and more papers have been retracted in recent years, some for honest mistakes and others for scientific misconduct, and the time between publication and retraction has also increased. The Wall Street Journal used the PubMed database to analyze 742 retracted articles in medicine and biology. Take the year 2000, for example: only 4 papers were retracted, with an average of 5.25 months between publication and retraction. In 2009, 179 papers were retracted, with an average of 32.62 months between publication and retraction.
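
As a side note, this kind of tally is easy to attempt yourself. The sketch below is not the Journal’s actual method; it simply counts PubMed records carrying the “Retracted Publication” type for a given year using Biopython’s Entrez utilities, with a placeholder email address you would replace.

```python
# Rough sketch: count PubMed records tagged "Retracted Publication" for a given year.
# Requires Biopython (pip install biopython); NCBI asks E-utilities users to identify themselves.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; use your real contact address

def retracted_count(year: int) -> int:
    """Number of PubMed records with publication type 'Retracted Publication' dated in `year`."""
    query = f'"Retracted Publication"[Publication Type] AND {year}[PDAT]'
    handle = Entrez.esearch(db="pubmed", term=query, retmax=0)
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for year in (2000, 2009):
    print(year, retracted_count(year))
```

Counts obtained this way are keyed to the publication date of the retracted article rather than the date of the retraction notice, so they will not line up exactly with the figures quoted above.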

If this exponential increase in retractions is a trend, maybe in the future we will have a library of disgraced papers to consult before citing a paper.

 

Literature Cited

Budd, J. M., M. Sievert, T. R. Schultz, and C. Scoville. 1999. Effects of article retraction on citation and practice in medicine. Bulletin of the Medical Library Association 87(4):437-443.

 


Peer review: the quality control of science

Could we scientists do a better job of reviewing manuscripts to catch papers with fabricated data or plagiarized text? Is this the duty of peer reviewers? Or should journals be primarily responsible for discovering research misconduct?

I know we are all very busy and have so many things to read, but, as the American Chemical Society states, “every scientist has an obligation to do a fair share of reviewing” (http://pubs.acs.org/userimages/ContentEditor/1218054468605/ethics.pdf).

Peer review is an old system that has safeguarded the quality of publications for centuries. It started in the 1600s, when the secretary of the Royal Society of London had the idea of asking other scientists to check the quality of manuscripts prior to publication in the Society’s journal.

Obviously, editors and staff of peer-reviewed journals also play a big role in the quality control of manuscripts. I am sure I am not the only one out there who has received manuscripts for review that the authors clearly did not bother to proofread and polish. ‘Hey, Mr. Editor, this manuscript should go back to the authors; send it back to me when it reads better!’, I once wrote to an editor.

When I think of the cases of prolific and well-known scientists publishing papers based on fake and fabricated data, I wonder whether a good, critical reviewer could have spotted the problem. The other day in our research ethics class we were discussing whether the review process should be blind. Some people in class preferred to know who the authors are, but others argued that review should be totally anonymous so there is no bias. Would you be less critical if you reviewed a manuscript from a researcher at a prestigious university or institution? How about reviewing a paper by your friend from grad school? What if you were aware that one of the authors had been found guilty of research misconduct in the past? Would any of these factors change how you reviewed a paper? Should they?

Of course, the peer review system is far from perfect. Reviewers don’t have access to the raw data. Some reviewers may take advantage of unpublished data for personal gain, some manuscripts are rejected because the reviewer failed to recognize a breakthrough idea, and some reviewers deliberately reject a manuscript out of personal interest. Still, with all its flaws, peer review is the best way we have to ensure rigorous science.


How come nobody saw it?

I refer here to Dr. Diederik Stapel, a Dutch social psychologist who admitted to having published papers based on fabricated data for more than a decade. In hindsight it is easy to spot the problems: data collection for the experiments was clearly not done correctly, and the experiments were clearly designed to make a big media impact. Now everybody is wondering “how this could have happened and at this proportion”, said a social psychologist at the University of Amsterdam.

Dr. Stapel is well known in his field, with a long list of papers published in prestigious journals and featured in popular newspapers like The New York Times. His studies attracted public attention: while it is difficult to see the effects of particle collisions in our daily lives, papers in social psychology have broad appeal because they deal with situations we encounter almost every day, and some studies can even affect public policy. In one of his studies, Dr. Stapel reported that an untidy, cluttered environment promotes stereotyping and makes people more prone to discrimination. After reading this, I feel like organizing my desk! Many of his papers showed impressive correlations we can all relate to; hence his popularity.

This case will have deep consequences. Even though the investigating commission concluded that Dr. Stapel acted alone, the case could affect 21 PhD dissertations that were written using questionable data. I can imagine the damage it will inflict on the field of social psychology, and on science in general.

The damage is done and we need to learn from it. Next time we can talk about what we can do to prevent blatant cases of scientific fraud like this one from happening.


Have your paper written and published for you for US$ 17,250

I know most of us scientists are not fond of writing. That’s why we are in science! It is so much more fun to do the experiments and analyze the data than to draft a manuscript. So, if you don’t want to write (and have the money), you can hire a communication agency and they will do it for you.

It looks like the practice of ghostwriting in clinical trials is quite common. What’s more, many doctors agree to have their names on manuscripts based on experiments designed and performed by medical writing companies and pharmaceutical companies. That is when things get really scary.

McHenry and Jureidini (2008) reported the case of a paper published in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP) in 2001. It was a study of antidepressants for adolescents, and the paper had 22 authors. The study was commissioned by SmithKline Beecham, which manufactures the drug in the study. The agency in charge of writing the paper provided six drafts of the manuscript, which were reviewed only by the sponsor, the primary author, and two other authors; review by more authors would have added extra cost.

Then there is the data manipulation. The study had only two primary efficacy variables, but six other variables were added, and that change supported the claim that the drug was effective. It looks like a case of data manipulation, and when a ghostwriter is involved, it is not obvious who is responsible for manipulating the data.

It seems to me everything is huge in this case: the number of authors, the stakes involved, and even the length of the clinical study report (1,400 pages). Yet things were kept simple: the ghostwriter worked only from a synopsis of the study report, and at least 10 of the authors made no apparent contribution to the article.

Simon Stern and Trudo Lemmens, from the University of Toronto, are proposing that medical “guest writers” might be sued for fraud. That may be one way to discourage the practice, but I think there are other things we researchers can do to keep it at bay. Do you have any suggestions? I do.

McHenry, L. and J. Jureidini. 2008. Industry-sponsored ghostwriting in clinical trial reporting: a case study. Accountability in Research 15:152-167.

Stern, S. and T. Lemmens. 2011. Opinion: ghost writing is fraudulent. The Scientist, November 2.


House arrest for researcher found guilty of scientific misconduct

Photo / Donna Coveney

Allegations of data falsification that began 7 years ago came to an end a few months ago, and not a happy one for Luk van Parijs, a former MIT researcher. He was found guilty of data manipulation and sentenced to six months of house arrest, community service, and financial restitution.

It is a sad story: the career of a highly trained person who attended top institutions (van Parijs was a graduate student at Harvard Medical School and a postdoc at Caltech) went to waste because of scientific misconduct.

Back in February, criminal charges were filed against van Parijs, and prosecutors asked for a six-month jail term to “discourage other researchers from engaging in similar behavior”. A less harsh sentence was issued, probably because van Parijs confessed to the misconduct and agreed to cooperate, and also because several prominent scientists pleaded for clemency. Among them was David Baltimore, a Nobel laureate who had been van Parijs’s supervisor at Caltech and who was himself involved in a case of research misconduct some years ago. Do I see a pattern here?

It makes me wonder what leads highly educated, smart people to misbehave scientifically. Of course, there is a lot of pressure to win dwindling grant money to keep research going and to pay the salaries of lab staff, graduate students, and postdocs, and that money needs to be turned into publications. But failing to maintain integrity and risking everything is not a good alternative, at least from my point of view.

Maybe these science superstars (as Neal Stewart calls them in his book Research Ethics for Scientists) feel they are above the rules, and their self-proclaimed demigod status allows them to cheat for the good of science. Perhaps they start with a minor falsification they think will never be caught, and things escalate. Or they become addicted to publishing in very prestigious journals, as Stewart suggests. Who knows?

The fact is that research misconduct costs money, lots of it, and it negatively affects the lives of many people. Those involved in the fraudulent research usually get hurt directly in one way or another. What about those who tried to reproduce the faked results? And those who cited them? One of van Parijs’s papers had more than 200 citations!

Fraudulent science is bad for the research community, bad for the taxpayers who foot the bills, and bad for science itself, because it undermines the pillars on which science is built.
