No scientist is excited when a hypothesis turns out to be wrong. However, it is far worse to make a breakthrough finding and then have it discredited or denied publication due to flaws in the research.
The path to scientific discovery is riddled with potential pitfalls—each with the power to render enormous efforts, time, and resources fruitless. While some circumstances are beyond the control of the researcher, a number of common missteps can be avoided through careful planning and vigilance.
As with any experiment, the most important first step is choosing a strong hypothesis.
At the beginning of their careers, young scientists may forget that most of the questions they are interested in have already been explored. To avoid duplicating previous experiments, it is critical to conduct an exhaustive review of past work before pursuing any research topic.
Once an inventory of all past studies has been recorded, the question can be tailored to address a new variation on the subject matter. Whether it’s an innovative methodology or a different population sample, there are plenty of ways to take a new twist on an old question. The trick is to assess the previous inquiries and devise a strategic hypothesis based on the outcomes of those studies. Researchers should ask themselves, “What can I do better?”
In addition to selecting a unique question or approach to answering that question, emerging scientists must avoid compromising the direction of their research at the behest of outside pressures.
The interests of an institutional initiative may at times overwhelm the goals of a novice researcher. Stephen R. Hammes, MD, PhD, professor of medicine and chief of the Division of Endocrinology and Metabolism at the University of Rochester, worries about the compromising effects of such influences.
“Young investigators are sometimes forced to modify their ideas and goals to meld with research programs of more advanced scientists,” he says. “This can lead to a number of brilliant ideas being left on the cutting room floor.”
Understandably, the desire to please and impress more established researchers can motivate a change in direction. But young scientists need to exercise discernment when considering changes to their research—no matter who may be suggesting a new focus.
“When young investigators have to include salary support for senior principal investigators (PIs) on their grants, it takes essential funds from them at the very start of their careers,” Hammes continues.
He advises up-and-coming scientists to work for organizations that provide time and resources for them to seek answers to the questions that they are truly interested in.
“Finally, don’t be afraid to change your hypothesis if the experiments suggest that you should do so,” Hammes says. “The great thing about science is that things don’t always turn out like you predicted.”
Reproduce, Redo, Repeat
After selecting a hypothesis, scientists then face the challenges involved in actually testing their theory and substantiating results. Hammes points out the importance of consistency when it comes to producing reliable research.
“Reproducibility is a big issue with the National Institutes of Health (NIH) right now, and I agree with this,” he says. “There is no excuse for poorly conducted research.”
The Washington Post recently reported on a study of 100 experiments that had been published in three leading psychology journals, among which only 39 experiments were found to be reproducible. The review took four years and 270 researchers to complete.
For an experiment to be reproducible, it must have an extensive written protocol. Young scientists should spell out every procedure in the study and then present it to a peer group of experienced researchers. Using the group’s constructive criticism, the researcher can refine the study’s processes. These steps help ensure that a project is set up for success rather than riddled with avoidable landmines.
“It is also critical that papers explain in great detail how studies were done and the PIs remain very open to helping other labs reproduce and validate their work,” Hammes explains.
Recently, NIH released the “Principles and Guidelines for Reporting Preclinical Research” to help resolve issues related to reproducibility. The recommendations focus on the establishment of more rigorous guidelines for journal submissions and grant applications, such as requiring investigators to “report how often each experiment was performed and whether the results were substantiated by repetition under a range of conditions.”
The memorandum goes on to say that, “Sufficient information about sample collection must be provided to distinguish between independent biological data points and technical replicates.”
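The distinction the memorandum draws can be made concrete with a short sketch. The sample names and measurement values below are invented for illustration: technical replicates re-measure the same biological sample, so they are averaged first, and only the per-sample means count as independent data points.

```python
from statistics import mean

# Hypothetical assay: three biological samples, each measured in
# technical triplicate. The independent n here is 3, not 9.
technical_reads = {
    "sample_1": [4.1, 4.3, 4.2],
    "sample_2": [5.0, 4.8, 4.9],
    "sample_3": [3.7, 3.9, 3.8],
}

# Collapse technical replicates to one value per biological sample.
biological_points = [mean(reads) for reads in technical_reads.values()]

print(len(biological_points))               # independent n for statistics
print(round(mean(biological_points), 2))    # grand mean over samples
```

Treating all nine numbers as independent would inflate the apparent sample size and understate the variance, which is exactly the ambiguity the NIH guidance asks authors to rule out.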
NIH is also pursuing a number of other avenues to bolster reproducibility, including recognizing sex as a biological variable, isolating misidentified cell lines, and enhancing data sharing, public access, PubMed Commons, and more.
As part of the same initiative, NIH launched online training videos to facilitate reproducibility goals (www.nih.gov/science/reproducibility/training.htm). The topics covered range from transparency to exclusion criteria.
Even investigators who are well versed in the principles of reproducibility may struggle to replicate results if they fail to closely supervise the activities of their lab, to say nothing of the countless other research errors that can occur in a lawless laboratory.
“A common mistake that senior researchers make is to lose focus over the work being done in the lab,” Hammes says. “It is crucial to look over all primary data and require that members of the lab have organized notebooks with all data easy to find and interpret.”
Despite good intentions, some students and postdocs do not have extensive experience in maintaining a thorough lab notebook. Extra oversight might be necessary at the beginning of a project until the team is in the habit of keeping detailed and accurate records.
Hammes also warns scientists to be on the lookout for bias. “Young researchers want to please their bosses and may only show the experiment that worked as predicted to their mentor. Most disputes over publications that I see are due to PIs who lose track of their labs and trainees and allow sloppy or non-reproducible work to be put into figures,” he explains.
Bias can come in many forms, of course. According to a review in the Journal of Prosthodontic Research titled “Fifteen common mistakes encountered in clinical research,” the most critical misstep that a researcher can make is failing to employ satisfactory bias control measures.
“Bias control is what distinguishes good from bad research,” the authors wrote.
Examples of methods for limiting bias include: randomizing subjects to intervention and control conditions; keeping investigators blind to subject status during measurement and analysis; maintaining a credible control condition; and ensuring, at the outset and throughout the study, that subjects remain truly blind to the group they were assigned.
On top of these measures, researchers must also work to provide the cleanest possible data in the analysis phase. An article in the International Journal of Endocrinology and Metabolism states, “about 50% of the published articles have at least one [statistical] error.”
The review authors emphasize the importance of checking data “normality”: verifying that the distribution of the data follows a normal, bell-shaped pattern, since many common statistical tests assume that it does. Researchers must also report missing data, subject attrition, and the use of power calculations, along with explicitly stating the statistical assumptions made during the analysis.
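One quick, informal screen for non-normality is to estimate a sample’s skewness, which is near zero for symmetric, bell-shaped data. The sketch below uses only the Python standard library and invented data; it is a rough heuristic, not a substitute for formal tests such as Shapiro–Wilk.

```python
from statistics import mean, stdev

def sample_skewness(data):
    """Adjusted sample skewness: roughly 0 for symmetric data."""
    m, s, n = mean(data), stdev(data), len(data)
    return sum(((x - m) / s) ** 3 for x in data) * n / ((n - 1) * (n - 2))

symmetric = [2, 3, 3, 4, 4, 4, 5, 5, 6]   # roughly bell-shaped (toy data)
skewed = [1, 1, 1, 2, 2, 3, 5, 9, 20]     # long right tail (toy data)

print(abs(sample_skewness(symmetric)) < 0.5)   # True: looks normal-ish
print(sample_skewness(skewed) > 1)             # True: clearly non-normal
```

A strongly skewed result is a signal to transform the data or switch to a non-parametric test rather than proceed with normality-assuming statistics.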
Publish or Perish
The peer reviewers of medical journals take all of these factors into account. As the editor-in-chief of Molecular Endocrinology, Hammes has plenty of tips for improving publication submissions.
“When I talk to young authors, I tell them that the most important thing is to put together a well-written, very clear paper that anybody can follow—even somebody who is not an expert in your field,” Hammes says. “There is no greater turn off for a reviewer than a poorly written manuscript, or a poorly written grant proposal for that matter.”
Even brilliant research often cannot survive clumsy data organization and lousy writing. Manuscripts with meaningful findings are frequently rejected simply because of slapdash composition.
But scientists who struggle with writing should not feel disheartened. After carefully proofreading their own paper, authors should seek critical feedback from mentors and colleagues. If an author still lacks confidence in the craft of the article, or if English is not their first language, a professional editor can be hired to help.
“Compared to the cost of doing your research, these editors are inexpensive and can help sell your science,” Hammes explains.
Another part of the equation is the decision of when to submit—a quandary that scientists of all ages face.
“Should you wait until the story is perfect and shoot for the highest end journal that you can? Or should you put together a great story for a slightly lesser journal and then start working on the next paper?” Hammes asks.
He believes that there is no correct answer and determinations must be made on a case-by-case basis, but also encourages young researchers to avoid waiting too long before publishing. “I think that young investigators need to get their name out there.”
Senior investigators with established labs can generally afford to be pickier, but trainees need to carefully weigh their options and decide what is best for their budding careers.
Building a Lab
Once an investigator has built a reputation and demonstrated a track record of scientific success, they will likely start thinking about a lab of their own. The pitfall that can accompany this ambition is the miscalculation of “too much too soon.”
According to Hammes, one of the most common mistakes made by early investigators is trying to build up the lab too quickly. “When first starting out, most investigators are not equipped to take on a graduate student or postdoctoral fellow,” he says. “I was advised from the start to hire one to two technicians and then roll up my sleeves, teach them everything that they need to know to complete the necessary experiments, and then get to work side-by-side until the papers start coming out.”
After a couple of years, the investigator may be better equipped to expand the lab and train students and postdoctoral fellows. This is the plan that Hammes followed, and he endorses it as a general rule. Those who bite off more than they can chew at the inception of their lab will likely struggle to maintain a high level of quality in their research—largely due to a supervision learning curve.
Operating a laboratory does not necessarily get easier over time, however.
“Running a lab becomes more complicated every year,” Hammes says. “There are myriads of paperwork and on-line courses required by institutions that take time away from research.” To stay on top of everything, he believes that organization is key.
Across all of these possible missteps, organization is a common theme. Systems and procedures play a central role in seeking an original thesis, staying on top of lab activities, testing reproducibility, and beyond. Investigators who are diligent in these regards will be far more likely to avert research disasters and find success in their professional pursuits.