When I decided to pursue a PhD five years ago, I was ready to face the challenges of finding the right laboratory, securing funding and designing the perfect project. I knew the analysis and interpretation of results would produce hard questions and sleepless nights, and I dutifully reserved some mental capacity for worrying about the eventual task of thesis writing. Once I’d joined a lab in the bioengineering department at the Massachusetts Institute of Technology in Cambridge, where I study the self-assembling proteins that form powerful lenses in the eyes of squid, I quickly realized that getting data would be much less challenging than interpreting them — or trusting them.
For example, I once spent several weeks trying to measure the size of protein particles I was working with in an effort to determine whether they self-assembled into larger aggregates. I applied the same technique to the same protein under the same conditions and got three vastly different results over three days — each supporting a different hypothesis. Data are data, I thought, until I had a pile of them, and realized I had no idea whether any of them were meaningful.
A perfectly executed experiment might produce data that are utterly inconclusive, or you might get what look like beautiful, exciting data from a botched experiment. How can you tell the difference? Below are some tips on how I learnt to distinguish between the two.
Balance independence with mentorship
When I finally got my own project and bench as a PhD student, after years of lab practicals and closely supervised internships, I was desperate to do my own thing. I also thought the senior PhD students and postdoctoral researchers around me would be too busy with their own work to help with mine, and that I’d been hired to show I could do independent work.
I started trying to piece together protocols from various papers and wrestling new lab equipment into operation by trial and error. It took me five times longer to get data than it would have done if I’d just asked someone to teach me the experiment at the start, and I never trusted the data — how could I be sure I hadn’t missed a crucial step or standard control, or that the protocols I’d cobbled together constituted the most scientific approach? If I could go back, I’d ask a labmate to teach me protein protocols the right way — from the start.
Be open about your results — and to feedback
I used to hide all my negative data and failed experiments. I wasn’t confident enough in my experiments to distinguish between the two, and I thought people would consider anything I produced that didn’t have a positive result to be a failure. Then I started casually mentioning the frustration or bottleneck of the week to anyone who would listen, and always ended up getting incredible advice, either for resolving the problem or for knowing when to quit and move on. All the same, it took years for me to drop the proteins that clearly weren’t working for the intended applications, despite everyone telling me repeatedly to move on.
Postdocs and other more experienced researchers around you will have invaluable experience that could help you to disentangle human error, background noise and false positives or negatives from your actual experimental results. If nothing else, it might serve as a reminder that you’re trying to solve truly difficult problems.
Practise your protocols
No one excels at a new experiment on their first try. When I was first taught how to extract plasmid DNA from bacteria, arguably one of the simplest protocols in my field, I think I messed up every possible step — there are ten — at least once.
You won’t be able to trust your data until you trust your execution of the protocol, and it’s easy to make mistakes. Repeat experiments on different days, with different reagents; don’t trust your first negative data set; and don’t trust your first positive data set, either. When I got those vastly divergent protein size measurements, it was tempting to run with the first set of results because they were the best fit for my hypothesis. When I repeated the experiment with a more experienced hand, the results were disappointing — but, I realized, probably correct. Research is messy, and random errors can compound exponentially; once you’re confident that you have performed an experiment correctly, you can start to trust what you’re seeing.
Separate the planning from the execution
Another factor in getting replicable and trustworthy data is making sure your work is fully planned and checked before you start. Yes, you can Google protocols on the fly, or try to substitute equipment if you realize a key machine is booked minutes before you planned to use it, but even if you get data with these adaptations, a single mistake or short cut can introduce fatal uncertainty into the final results. I plan experiments at my desk, carefully adapting methods and printing my own final protocols, then perform them at my bench. Little improvisations can be impossible to remember and reproduce, and can lead to unpleasant surprises when you try to repeat the work later on. Innocent short cuts can also have unforeseen and unintended effects on the science.
Learn to accept negative data
Finally, don’t let negative results keep you awake at night — you can only control how well you execute the experiment, not what it tells you. Do your best, perform experiments with confidence, and share the results as honestly as you can. I was once told to imagine myself as an agnostic conduit between science and my adviser; I can communicate only the conclusions I observe, even if we’d both rather the data looked different.
Although in research it is crucial that you don’t fully trust your data until they have been triple-proven and peer-reviewed, we do have to gain some operational confidence in our methods and results. Otherwise, crippled by self-doubt, we’d never bring any new research into the world.
This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged.