
Technology has made it easier to fake scientific results. Is a cultural shift required to fix the problem?

Paper retractions and image duplications are symptoms of a much larger problem

Bhavya Singh

Microbiology

McMaster University

Cases of scientific misconduct are on the rise. For every 10,000 papers on PubMed, 2.5 are retracted, and more than half of these retractions are attributed to scientific misconduct, which includes data mismanagement and plagiarism.

"Papers from twenty or thirty years ago were fairly simple – they [had] maybe one or two photos," says Elisabeth Bik, a microbiologist who now works as a scientific integrity consultant. "That’s around the time that I did my PhD. If we wanted to submit papers with photos, we had to make an actual appointment with a photographer! It was very hard to fake anything."

Tasks like photographing results and constructing academic figures were once specialized, requiring designated experts who had nothing to do with the data collection process. That's not the case in the 21st century. As technology has advanced, not only has the amount of data increased exponentially, but so has our ability to record and report this data. With more people competing for fewer academic jobs, scientists are under constant pressure to acquire more data, publish in high-impact journals, and secure more external funding.

One study from Arizona State University found that mounting professional pressure and low odds of getting caught are among the reasons scientific misconduct is so prevalent. The ready availability of image-editing tools and the ease of cutting and pasting text also make it far less challenging to misrepresent findings.

In 2016, Bik and colleagues analyzed over 20,000 papers from 40 biomedical research journals, finding that about one in 25 papers contained inappropriately duplicated images. In the journal Molecular and Cellular Biology alone, 6.1% of papers showed signs of inappropriate image alteration.

Elisabeth Bik is a microbiologist who now works as a scientific integrity consultant. (Photo: Michel & Co., San Jose, CA, USA)

One of the organizations looking for solutions to this growing issue of scientific misconduct is the International Life Sciences Institute (ILSI). Founded in 1978, the ILSI is an organization of scientists working in food safety and nutritional science. One of their major aims is to ensure scientific integrity in nutrition-related research, especially since research findings in this field often inform public health policy decisions. To find a solution, ILSI's North American branch (ILSI North America) co-founded the Scientific Integrity Consortium to evaluate the extent of scientific misconduct, and to broaden the scope of this conversation beyond food science. In 2019, the consortium published their findings, which included guidelines on how to define research misconduct and detrimental research practices, in addition to a comprehensive list of recommendations to tackle the issue.

These recommendations include encouraging scientists to connect their work to a broader social context and to consider the implications of their work for the general public. To nurture this culture, a number of steps need to be taken by both institutions and scientists. Institutions must provide the necessary educational resources, infrastructure, and quality-maintenance support for equipment and research, alongside better training and standardized, universal expectations for integrity. Scientists, in turn, need to follow standardized procedures for research design and publication, engage in transparent and honest communication, and be mindful of the ethical implications of their work.

The consortium acknowledged that the training scientists receive is insufficient to help them deal with the different stages of their careers, and that the "publish-or-perish" mentality only makes it harder to create the cultural shift they recommend. For example, practices like "p-hacking", where a researcher selectively analyzes data until non-significant results appear significant, are more likely to occur under the pressure to secure funding.
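
To see how little it takes to manufacture "significance" from noise, here is a minimal, hypothetical simulation (mine, not the consortium's or Bik's; it assumes Python with NumPy and SciPy available). It mimics a researcher who measures many outcomes but reports only the one with the smallest p-value:

```python
# Sketch of why p-hacking works: if both groups are drawn from the SAME
# distribution (no real effect) but we test many outcomes and keep only
# the "best" one, spurious significant results become the norm.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_studies = 1_000   # simulated studies, each with no true effect
n_outcomes = 20     # outcomes the researcher could choose among
spurious = 0

for _ in range(n_studies):
    # Every outcome is pure noise: 30 controls vs. 30 "treated" per outcome.
    control = rng.normal(size=(n_outcomes, 30))
    treated = rng.normal(size=(n_outcomes, 30))
    p_values = [ttest_ind(control[i], treated[i]).pvalue
                for i in range(n_outcomes)]
    if min(p_values) < 0.05:  # report only the most "significant" outcome
        spurious += 1

# With 20 independent tests, roughly 1 - 0.95**20 ~ 64% of null studies
# will yield at least one p < 0.05.
print(f"Null studies with a 'significant' finding: {spurious / n_studies:.0%}")
```

Pre-registering which outcome will be tested, or correcting for multiple comparisons, removes exactly this inflation.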

To foster a culture of integrity in the scientific process, changes are needed. (Photo: Shopify Partners)

Some ways to foster this change are to provide better ethics training for scientists and to introduce a scientific integrity "checklist" for them to follow. The proposed checklist would lay out best practices for designing studies and writing papers, such as ensuring that methods are reproducible and that ethical data-analysis standards are upheld. On the institutional level, journals should be encouraged to value rigorous research that may not always yield conventionally "exciting" findings. Currently, many journals prefer to publish positive findings over negative or null findings, which are equally important for scientific progress. One way the consortium would like to nurture this culture is by changing the vocabulary we use to communicate findings: instead of referring to results as "positive" or "negative," they suggest terms such as "anticipated" and "unanticipated" findings.

Another important point made by the consortium concerned the role and importance of mentorship. Cases of scientific misconduct can put students in a difficult position: trainees often face the dilemma of reporting the misconduct at the risk of losing their positions. Staying silent normalizes scientific misconduct and can lead to further instances of academic dishonesty. And once misconduct is caught, trainees can also suffer the eventual backlash, including difficulty finding future positions.

The consortium also endorsed open science practices, in which data and methods are shared openly. Not only can open science reduce the chances of misconduct, but it can also be an excellent resource for fellow scientists and a way to increase public trust in the scientific process. In line with these efforts, some scientists are suggesting that individuals should be able to offer post-publication comments on papers, rather than having a static review process that ends at publication. This would allow every reader to give feedback, keeping the paper under constant "live" review. Some journals, such as eLife, currently allow post-publication feedback, while platforms like PubPeer let scientists search and comment on papers from any journal.

When asked what policy changes she would like to see to reduce scientific misconduct, Bik highlighted the importance of open communication and clear guidelines. "Every journal and every institute should have a contact person that anyone can contact – I cannot report cases if I can’t find e-mail addresses," says Bik. "There should be guidelines for when a paper should be retracted, versus when a paper should be corrected."

In addition, it's worth noting that many errors are honest mistakes that don't necessarily warrant a retraction. Bik says that "90% of the scientists are very honest. We all make errors – the bigger our datasets get, the harder it becomes. There is so much data now!"

A recent study following 12 retracted publications found that of the 68 papers citing them, only one had been re-assessed and corrected to account for the retraction. Even after retraction, findings from flawed papers can live on. One example is the frenzy that continues to surround the 1998 paper that falsely claimed a link between vaccines and autism, despite its retraction in 2010.

Furthermore, the social stigma that follows a retraction due to scientific misconduct can spill over to collaborators who had nothing to do with the misconduct: former collaborators of dishonest scientists can see an 8-9% drop in paper citations. Sadly, this means that potential whistleblowers may be less likely to report cases of misconduct for fear of jeopardizing their own careers by association with the perpetrator.

"I hope I can make people aware how much damage it can do to fake results – it can lead other people to pursue results that did not happen," says Bik. 

Knowing the consequences of scientific misconduct, Bik quit her full-time job to tackle this problem as a scientific integrity consultant. "My mission is to make sure that science is reliable."

Peer Commentary

We ask other scientists from our Consortium to respond to articles with commentary from their expert perspective.

Kamila Kourbanova

Neuroscience and Molecular Biology

Johns Hopkins University

Bhavya expands on an important point here: not everything we encounter in scientific journal articles is reliable. There is a big distinction between replicable, accurate results and misleading, statistically contrived ones. Yes, some experiments are difficult to replicate because of resources (i.e., very specific or difficult-to-engineer organisms), but results nonetheless need to be recorded with limited bias and with scientific accuracy. Regarding bias, “P-hacking” is a version of “cherry-picking” – a researcher massaging their data to “look nicer” (not difficult to do with statistics). There are many ways of reducing this bias, for example conducting research blinded, or better yet double-blinded, to the experimental or subject conditions. Another method is having very stringent, pre-specified rules on outliers in your data, so as to limit “cherry-picking”.

Luckily, journals are getting better at publishing negative findings, because the cost of not publishing is greater in the long term: time, money, and resources are wasted when a group repeats research that an earlier group already did and “failed” to find significant results. This article in JAMA shows that of all the phase 3 clinical drug trials started between 1998 and 2008, only 40% of the failed trials’ results were published. Think of all the effort and funding that could be saved if this information were made accessible!

Bhavya Singh responds:

Thank you very much for your peer comment, and thank you for reading! The positive-publication bias is most definitely an issue - though I am surprised that 60% of unanticipated/null findings go unpublished! That is a lot higher than I would have expected, and is quite concerning. Furthermore, with biases like P-hacking, it seems that conventionally “positive” findings are reported as overly significant, while unanticipated/null findings are severely under-reported.

You also make a very good point about how much funding could be going towards research that has already been conducted. This got me thinking. Before embarking on a new research project, most PIs will certainly do a literature search to see what has been previously found. However, maybe doing a grant search would be a better idea to see whether similar research has gotten funding in previous years, but has remained unpublished. They could then contact the investigators to get more information, if required. It’s an extra step, but would save time/resources.

Marnie Willman

Virology

University of Manitoba Bannatyne and National Microbiology Laboratory

Great piece, Bhavya. This is a dive into the deep end of one of the biggest, most contentious issues in all STEM fields. With technology producing bigger and bigger datasets, and with analysis and publication of those datasets happening under extreme pressure from funding agencies, it’s a perfect storm. From the graduate student to the PI, financial pressure to achieve positive results (as articulated in this article) is so overwhelming that more scientists are crossing over to the dark side of data manipulation. Bhavya makes many good suggestions throughout the piece for remedying the situation, some of which are already in progress (including this Consortium!). This article serves as both a solemn reminder of the world researchers are currently living in, and a glimpse at a more positive future for all. Well said, concisely stated, and overall a very good read.

Bhavya Singh responds:

Thank you very much for your thoughtful comments!! The financial pressure is certainly one of the biggest drivers of data manipulation, especially since the entire funding/grant system is based on how “exciting” people find your research. However, as people are becoming aware of this bias towards certain kinds of results, I hope that we will start to see a shift!

Hannah Thomasy

Neuroscience

University of Washington

Such a timely piece Bhavya! I thought it was really interesting that you mentioned the backlash that trainees can face for reporting misconduct. It seems like a very important issue that often does not receive very much attention in discussions of research integrity. I’d love to learn more about how universities and other institutions are taking steps to protect lab members who report misconduct - does anyone’s program have protections for trainees in these situations?

Bhavya Singh responds:

Thank you so much for reading and providing commentary. Yes, this is most definitely something that’s important and quite concerning! From what I know, universities and institutions tackle these issues on a case-by-case basis. While “officially”/“legally” there are rules in place that should shield trainees who report misconduct, I fear that the cultural backlash might be a lot worse. If a trainee reports misconduct, the most likely scenario is that they will ask to switch labs to complete their degree, for fear of backlash from the PI and lab-mates. However, considering that all other faculty members are the PI’s colleagues, how it plays out depends on the culture of the place. If misconduct, or something like “P-hacking” or duplicating images, is treated as “not that big of a deal”, there could be subtle repercussions for the trainee, and they could have difficulty finding a new lab. Oftentimes, despite all of the rules and regulations being in place, the retaliation is invisible, and the whistleblower might find it easier to leave than to continue working or studying in that environment.

Teresa Ambrosio

Chemistry

University of Nottingham

This is a great article and a reminder for all scientists to be objective and peer-review their own work before submitting it for formal publication. Prof. Frances Arnold gave a great lesson earlier this month by publicly announcing that her work wasn’t reproducible, taking the blame for not caring enough about double-checking the data. It was a courageous decision to retract a paper from Science.
Unfortunately, science has become a sad business where competition is fierce and metrics count when applying for grants or promotions. We should all probably go back to the time when research meant advancing science and contributing to knowledge rather than just being a machine for publication. Scientific integrity should be valued as highly as IF [impact factor] and other metrics when competing for grants and promotions.

Bhavya Singh responds:

I actually had no idea about Dr. Frances Arnold - I just looked into the paper and the retraction, and it is absolutely a very courageous and honest thing to do. Thank you for sharing! You are correct in saying that the increasing competition over the past few decades has really contributed to our “publish or perish” culture. Over time, I am hopeful that this will change.