5 Important Tips For Analyzing Your Data

Depending on the experimental design, many researchers will be doing complex assays that require statistical analysis to determine whether the results are statistically significant. Unfortunately, many researchers go about this analysis the wrong way, resulting in spurious conclusions. The following points are a guide to thinking through the steps necessary in flow cytometry data analysis.

1. Before you start.

Define your hypothesis. This may sound simplistic, but understanding the purpose of the experiments is the first step in performing good statistical analysis. Stating the hypothesis allows the researcher to choose the correct statistical test BEFORE the experiments are begun and, more importantly, to define the null hypothesis. The null hypothesis is what is assumed to be true until the evidence shows otherwise.

For example, if the experimental question is “does treatment of patients with drug X increase the number of mature B cells in circulation?”, then the null hypothesis would be that “treatment of patients with drug X causes no change in the number of mature B cells in circulation.” With the null hypothesis in place, the type of test to be performed becomes clear. In this case, a t-test would be the logical choice for testing the null hypothesis.
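To make this concrete, here is a minimal sketch in Python using scipy, assuming the kind of data described above; the B cell counts are made-up illustrative values, not real patient data:

```python
from scipy import stats

# Hypothetical mature B cell counts (cells/µL) for control vs. drug-X-treated patients.
# These numbers are illustrative only.
control = [180, 195, 210, 175, 200, 190, 185, 205]
treated = [230, 215, 250, 240, 225, 235, 245, 220]

# Unpaired two-sample t-test of the null hypothesis that the group means are equal.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

If the same patients were measured before and after treatment, a paired test (stats.ttest_rel) would be the more appropriate choice.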

2. Set your threshold.

The threshold value is an estimate of the probability that the result occurred by statistical accident, that is, at random. It is widely accepted in science to set this at 0.05, which is interpreted as a 5% chance that a significant result occurred by accident. There is no magic to 0.05; it is more an accepted convention first proposed by R.A. Fisher. Many scientists are moving to a threshold of 0.01 or even 0.001, indicating a smaller chance that a significant result is due to accident.
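As a small sketch of how the threshold works in practice (the p-value below is a placeholder; use the one returned by your own test, and choose alpha before running the experiment):

```python
# Compare a p-value against significance thresholds (alpha) chosen before the experiment.
p_value = 0.03  # placeholder value for illustration

for alpha in (0.05, 0.01, 0.001):
    decision = "reject" if p_value < alpha else "fail to reject"
    print(f"alpha = {alpha}: {decision} the null hypothesis")
```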

3. Know the numbers.

All populations can be described by two numbers: the central tendency and the spread of the data. Depending on the type of data being examined, different measures should be used. For expression data, the fluorescence intensity is best represented by the median value. The median represents the midpoint of the data and is robust: it does not require the complete dataset, it does not assume a normal distribution, and it is resistant to outliers. For the question above, a change in the percentage of a sub-population of cells, the mean value is a better choice, assuming the data follow a normal distribution.
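A quick sketch shows why the median is the more robust summary; the intensity values below are made up, with one artificially bright outlier included:

```python
import numpy as np

# Hypothetical fluorescence intensities for one population, with a single bright outlier.
intensities = np.array([480, 510, 495, 502, 515, 490, 505, 25000])

print(f"mean   = {np.mean(intensities):.1f}")    # dragged far upward by the outlier
print(f"median = {np.median(intensities):.1f}")  # barely affected by the outlier
```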

The spread of the data is measured using the standard deviation (SD). The smaller the SD, the tighter the data are clustered around the mean. In the case of the median, the robust SD (rSD) is one of several measures used to describe the deviation around the median. Another measure of deviation around the median is the MAD, or median absolute deviation, which is the median of the absolute values of the deviations from the median.
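For illustration, here is how these spread statistics could be computed on the same made-up intensities; the 1.4826 scaling of the MAD is one common way to define a robust SD (it matches the SD for normally distributed data), though your analysis software may use a different formula:

```python
import numpy as np

intensities = np.array([480, 510, 495, 502, 515, 490, 505, 25000])

sd  = np.std(intensities, ddof=1)                              # sample standard deviation (around the mean)
mad = np.median(np.abs(intensities - np.median(intensities)))  # median absolute deviation (around the median)
rsd = 1.4826 * mad                                             # one common robust-SD definition

print(f"SD  = {sd:.1f}")
print(f"MAD = {mad:.1f}")
print(f"rSD = {rsd:.1f}")
```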

Confused? Don’t worry about how each statistic is calculated; the software can do that for you. Remembering the pairings mean/SD and median/rSD is a simple way to know which values should be used together.

4. Can’t forget controls.

Controls in flow cytometry are essential at many levels. These range from the FMO control (for gating), to the unstimulated control (for setting background in stimulation experiments), to the reference control (used to ensure the experiment is reproducible and to identify the biological variation in the experiment). Additionally, make sure that the correct control is being used for statistical analysis. The FMO control, for example, is not the control to use to identify the negatives in a statistical analysis. That role should be played by fully gated known negatives or background (unstimulated) cells.

5. Make sure you perform enough replicates.

This YouTube video has made the rounds (http://www.youtube.com/watch?v=PbODigCZqL8) and illustrates something to be careful of: don’t use just 3 patients! To ensure enough replicates are performed, consider an a priori power analysis. In this analysis, an estimate of the difference between the treatment and control groups is made, and the sample size needed to detect that difference is determined. The power of a statistical test is important in reducing Type II errors (false negatives). To improve the statistical power of a test, consider adding more samples: the larger the sample size, the more power the statistical test will have.
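As a sketch, an a priori power analysis for an unpaired t-test could be run with statsmodels; the effect size here (Cohen's d of 0.8) is an assumed value for illustration and should instead come from pilot data or the literature:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the sample size per group needed to detect an assumed effect size
# with a 5% significance threshold and 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05,
                                          power=0.8, alternative='two-sided')
print(f"Samples needed per group: {n_per_group:.1f}")  # roughly 26 per group for these inputs
```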

Statistical analysis is a powerful tool in flow cytometry and should be considered as part of the initial experimental design, rather than at the end, after the data are completely collected. Identifying the proper null hypothesis will lead to identifying the correct statistical test. Setting the proper threshold in advance, rather than running the test and then deciding based on the returned p-value, is an essential way to ensure the significance of the data is properly measured and understood. Finally, collect enough events and enough patient samples to ensure adequate power and minimize the chance of a false negative error.

ABOUT TIM BUSHNELL, PHD

Tim Bushnell holds a PhD in Biology from Rensselaer Polytechnic Institute. He is a co-founder of, and the didactic mind behind, ExCyte, the world’s leading flow cytometry training company, which offers a library of in-the-lab resources on sequencing, microscopy, and related topics in the life sciences.
