4 Steps To Implementing a QC Program For Your Flow Cytometry Experiments
Written By: Tim Bushnell, PhD
With the current emphasis on reproducibility in science, it is worth examining the processes of quality control and quality assurance and how they can help improve the reproducibility of data.
First, what is the difference between quality control (QC) and quality assurance (QA)? For the purposes of this article, we will define QC as the activities performed to ensure that the product, in this case flow cytometry data, meets the standard. QA, on the other hand, focuses on the processes used to develop the product, how those processes can be improved, and establishing methods that prevent a poor-quality product.
A lot of time is spent talking about QC, especially with regard to how the instrumentation performs. Less time is focused on QA. Looking at these two processes in the context of a flow cytometry experiment, QA is focused on the development of the panel, while QC is focused on how well and consistently the panel performs once it is put into production. This blog will focus on one way of implementing a flow cytometry panel and how QA and QC should be integrated into the process.
1. Start at the end:
To effectively develop a flow cytometry panel, it is critical to know what the biological question is. This, in turn, will lead to an understanding of what the panel will be designed to prove or refute. The way this is done in hypothesis-driven science is to perform a statistical test to determine whether the data support refuting the null hypothesis (H0). This requires the investigator to determine the type of data that will be extracted from the experiment and how it will be analyzed. The statistical analysis plan, which is developed before the panel itself, drives the rest of the process. Writing this plan up front, rather than leaving it to the end, prevents the temptation of HARKing or p-hacking.
2. Build the panel:
This topic has been given lots of attention in other blogs, such as the one linked above. Briefly, there are three considerations to balance:
- Target antigen: Clearly, it is important to use the correct antibody targeting the antigen of interest. This can be tricky, since there can be many different clones targeting the same antigen. Websites such as the Human CD Antigen Table, BenchSci.com, and the Antibody Registry can help narrow the choices. In this analysis of which CD3 clone to use, two sources were consulted: BenchSci.com and the OMIPs published in Cytometry A.
Figure 1: As this table shows, there is no specific consensus as to which CD3 clone to use, so checking the primary literature is critical.
Furthermore, as was pointed out by Bradbury and Plückthun, not all antibodies work well. This can be due to a variety of issues, so deciding which vendor's reagent to purchase can also be a challenge. Sites such as Antibodypedia and Antybuddy are great resources for getting an idea of which clone should be used.
Finally, the expression level of the target needs to be considered. This can be divided into three categories: low/unknown, medium, and high. Primary lineage markers fall into the high category. The medium category includes markers used to refine the subset of interest, while the low/unknown category covers markers with low or unknown expression, which are often the very targets of interest.
- Fluorochrome brightness: Next, consider the brightness of a given fluorochrome. This can be measured by the Staining Index (SI). Briefly, the SI takes the distance between the positive and negative signals and divides it by two times the standard deviation of the negative.
Figure 2: Measuring the Staining Index.
These data can be generated on the investigator's instrument, or it is possible to turn to one of the published tables, like this one at BioLegend or this one from BD Biosciences. These provide a guideline to relative brightness, but the rankings may shift on the investigator's instrument due to differences in laser power, laser lines, or filters.
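The Staining Index calculation described above can be sketched in a few lines of Python. The intensity values below are invented for illustration, not measurements from any real instrument:

```python
# Staining Index: (mean positive - mean negative) / (2 * SD of negative).
# All numbers here are hypothetical, for illustration only.
from statistics import mean, stdev

def staining_index(positive, negative):
    """Compute the Staining Index from lists of fluorescence intensities."""
    return (mean(positive) - mean(negative)) / (2 * stdev(negative))

# Hypothetical intensities for stained (positive) and unstained (negative) cells.
pos = [5200, 4800, 5100, 4950, 5050]
neg = [180, 220, 200, 210, 190]

si = staining_index(pos, neg)
print(round(si, 1))  # -> 152.4
```

A higher SI means better separation between positive and negative populations, which is why the SI is reused below for both titration and voltration decisions.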
- Instrument sensitivity: The third part of the panel design triangle is identifying where the instrument is most sensitive. While there are several ways to measure sensitivity, one useful tool is the SSM, or spillover spreading matrix. This allows for the assessment and ranking of both fluorochromes and detectors.
Figure 3: SSM for an instrument. Across the top are the fluorochromes and down the side are the detectors. Areas that are high (red) are areas of potential concern.
Figure 4: Ranking the detectors that receive the most error and fluorochromes that contribute the most error in a polychromatic panel.
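The ranking in Figure 4 can be approximated by simple row and column sums over the SSM. This is a hedged sketch: the matrix values, fluorochrome names, and detector names below are invented, and a real SSM would be computed from single-stained controls on your instrument:

```python
# Hypothetical SSM: ssm[i][j] is the spreading error that fluorochrome j
# contributes into detector i. All names and values are invented.
fluorochromes = ["FITC", "PE", "APC"]
detectors = ["B530", "B585", "R670"]
ssm = [
    [0.0, 4.2, 0.3],  # error received by B530
    [3.1, 0.0, 0.5],  # error received by B585
    [0.2, 0.8, 0.0],  # error received by R670
]

# Detectors receiving the most error: sum across each row.
detector_error = {d: sum(row) for d, row in zip(detectors, ssm)}
# Fluorochromes contributing the most error: sum down each column.
fluor_error = {f: sum(row[j] for row in ssm) for j, f in enumerate(fluorochromes)}

worst_detector = max(detector_error, key=detector_error.get)
worst_fluor = max(fluor_error, key=fluor_error.get)
print(worst_detector, worst_fluor)
```

In this toy matrix, B530 receives the most spreading error and PE contributes the most, so those are the places to scrutinize when assigning dim markers.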
With these three pieces of information, the researcher can begin developing the panel. This requires some searching of the databases for available fluorochrome/antibody combinations. The search can be automated with one of the many tools available, ranging from vendor-specific options such as BioLegend, Bio-Rad, and ThermoFisher to vendor-independent tools such as FluoroFinder and Chromocyte.
3. Optimize the panel:
With the panel developed, now come the optimization and validation steps. These include titration, voltration, and panel testing. In this optimization phase, the controls necessary for identifying the populations of interest are also identified. This is also where Standard Operating Procedures are developed and approved: critical tools for ensuring that samples and experiments are processed consistently.
- Titration: The goal of titration is to identify the best concentration for the assay being performed. Each antibody to be used needs to be optimized. Cells are labeled with serial dilutions of the antibody and the staining index is calculated. The concentration that yields the best signal to noise (as measured by the staining index) is the one that should be used. Don't forget that the antibody should be titrated as it will be used: if your samples will be fixed, so should the titration samples.
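Choosing the titration point then reduces to picking the dilution with the best staining index. A minimal sketch, with a hypothetical dilution series and made-up SI values:

```python
# Hypothetical titration series: antibody concentration (ug/mL) -> staining
# index measured at that concentration. Values are invented for illustration.
titration = {
    4.0: 110.0,
    2.0: 145.0,
    1.0: 152.0,
    0.5: 138.0,
    0.25: 95.0,
}

# The working concentration is the one with the best signal to noise.
best_conc = max(titration, key=titration.get)
print(best_conc)  # -> 1.0
```

Note that in this toy series the SI peaks below the highest concentration, which is the usual reason titration saves both reagent and data quality.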
- Voltration: Determining the optimal voltage can be achieved in several ways. The peak 2 method has been discussed before. The optimal voltage can be further refined by performing a voltage optimization (voltration) experiment. In this experiment, cells are labeled with the titrated amount of antibody and run on the instrument, starting at the recommended voltage and increasing the voltage until the positive events fall outside the linear dynamic range of the detector. The staining index is plotted, and the data look something like this.
Figure 5: Voltration experiment.
As can be seen from these data, as the voltage is increased for the Cy7-APC detector, the signal quickly falls out of the linear range of the detector and off-scale. Looking at the curve, the peak 2 recommended voltage is best for this fluorochrome. The BV650 detector, on the other hand, shows a continuing improvement in the SI up to 700 volts, an increase of about 15%, so 700 volts becomes the recommended voltage.
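The voltration decision can be framed as: among voltages where the positives are still on the linear scale, take the one with the best SI. This sketch assumes hypothetical voltages, SI values, and a made-up linearity ceiling:

```python
# Hedged sketch of a voltration decision. Voltages, staining indices, and
# the linear-range ceiling below are invented, not instrument data.
LINEAR_MAX = 262144  # hypothetical top of the detector's linear range

# (voltage, staining index, MFI of the positive peak at that voltage)
voltration = [
    (450, 98.0, 60000),
    (500, 120.0, 110000),
    (550, 131.0, 180000),
    (600, 138.0, 250000),
    (650, 140.0, 300000),  # positives are now off the linear scale
]

# Keep only voltages where the positive peak stays in the linear range,
# then pick the one with the highest staining index.
in_range = [(v, si) for v, si, mfi in voltration if mfi <= LINEAR_MAX]
best_voltage = max(in_range, key=lambda t: t[1])[0]
print(best_voltage)  # -> 600
```

The raw maximum SI occurs at 650 V, but that point is excluded because the positives are off-scale, mirroring the Cy7-APC behavior described above.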
Once these voltages are established, it is important to monitor them over time. To do this, use a relatively bright bead, like the 6th peak of the 8-peak bead set (or your other favorite bead), and run at these voltages to establish the MFI value for each detector. From there you can set up a template, like the one shown in Figure 6.
Figure 6: QC tracking template.
When sitting down to run the instrument, open the template and check the voltages on the system so that the bead hits the target value. Once that is set, run the experiment. This is a level of QC that the investigator can add to the experimental workflow to improve the quality and consistency of the data.
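The daily bead check boils down to a tolerance test against the established target MFI. A minimal sketch, where the target value and the 5% tolerance are assumptions for illustration (your acceptance criteria come from your own QC template):

```python
# Hedged sketch: daily bead check against an established target MFI.
# Target and tolerance are invented; use your own acceptance criteria.
TARGET_MFI = 50000
TOLERANCE = 0.05  # accept the detector if the bead MFI is within +/- 5%

def bead_in_spec(measured_mfi, target=TARGET_MFI, tol=TOLERANCE):
    """Return True if the measured bead MFI is within tolerance of target."""
    return abs(measured_mfi - target) / target <= tol

print(bead_in_spec(51000))  # within 5% of target: OK to run samples
print(bead_in_spec(46000))  # too low: adjust voltage and recheck first
```

Logging each day's measured MFI alongside this pass/fail result is what feeds the Levey-Jennings tracking discussed later in the post.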
- Optimization: The optimization process is the final step before the panel is put into use. It consists of a series of tests to identify the optimal conditions for running the experiment. After titration and voltration, optimization should cover the sample preparation process, running the experiment on the flow cytometer, and the data analysis workflow.

This is where standard operating procedures (SOPs) are developed and refined. These SOPs provide detailed directions on how to run the experiment from beginning to end. The SOP helps ensure the experiment is run the correct way each time, and each person who will be performing the experiment should be trained on the SOP. Changes to SOPs should not be taken lightly, and it is strongly recommended that a change process be developed in case an SOP must be modified.

Additionally, during optimization, various controls should be run to determine how useful they are in identifying the population of interest. These include FMO controls, unstimulated and stimulated controls, unstained controls, and more. This is also the place to identify and document internal negative controls, and to develop the reference control: a standard sample that is run each time the assay is performed to ensure that the process worked.

The reference control should be used to establish acceptance criteria so that when the experiment is performed, the process can be tracked and issues identified (more on that later).
- Validation (optional, but recommended): Validation is the process whereby the performance of the assay is assessed. The optimization step determines the best conditions for running the assay; validation uses those conditions to assess how well the assay performs. In regulated environments this is an important step, but many research labs skip it. It is a good place to make sure everything is working correctly. Experiments such as the same sample run by N different people on the same day, or the same sample run by the same person over X days, reveal researcher-to-researcher and day-to-day variation and provide additional limits for the assay.
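Those validation experiments are typically summarized as a coefficient of variation (%CV). A minimal sketch, assuming hypothetical %positive results from the same sample run by five different operators:

```python
# Hedged sketch: %CV as a simple precision metric for assay validation.
# The results below are invented for illustration.
from statistics import mean, stdev

def percent_cv(values):
    """%CV = 100 * SD / mean."""
    return 100 * stdev(values) / mean(values)

# Hypothetical %positive results: same sample, five different operators.
operator_results = [42.1, 43.5, 41.8, 42.9, 42.7]

cv = percent_cv(operator_results)
print(round(cv, 1))  # -> 1.6
```

The same function applied to one operator's results across several days gives the day-to-day variation, and together these CVs set realistic limits for what counts as a deviation once the assay is in production.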
4. Implement the Panel
This is where the fun begins. The experimental design is set, the metrics are established and the SOPs are written. The team is trained on the SOPs and the samples are getting lined up. This is where the QC information comes into play. Take the peak 6 bead for tracking voltage. Since the beads were used to set target values, tracking the voltage over time for each detector should be implemented.
Figure 7: Levey-Jennings plot for one PMT
The Levey-Jennings plot is one way to track changes over time. In this figure, the green line represents the average of the data, the yellow lines +/- 1 standard deviation, and the red lines +/- 2 standard deviations. How tightly the limits are set determines when an issue is flagged. As can be seen from these data, the voltages vary within approximately 1 standard deviation of the mean.
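Computing the Levey-Jennings control limits and flagging out-of-control points is straightforward. In this sketch the tracked values are hypothetical detector voltages, and the +/- 2 SD rule matches the red lines described above:

```python
# Hedged sketch: Levey-Jennings control limits from historical QC data,
# flagging new points outside +/- 2 SD. The history values are invented.
from statistics import mean, stdev

history = [450, 452, 449, 451, 448, 450, 453, 447, 451, 449]
m, s = mean(history), stdev(history)
upper_2sd, lower_2sd = m + 2 * s, m - 2 * s  # the "red lines"

def flag(value):
    """Return True if a new QC point falls outside the 2 SD control limits."""
    return not (lower_2sd <= value <= upper_2sd)

print(flag(451))  # in control
print(flag(460))  # out of control: investigate before running samples
```

In practice the limits would be locked in from the optimization runs (as described for the reference control below) rather than recomputed from a rolling window, so that a drifting instrument cannot quietly widen its own acceptance range.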
This analysis can also be applied to the reference control. In the data shown below, three different populations are tracked over time. In this case, the Levey-Jennings control lines are based on the ranges established during the optimization runs.
Figure 8: Tracking changes over time.
This QC tracking provides a level of quality that goes directly to the reproducibility of the data. When running the experiment with the reference control, issues with the experimental process can be detected, providing grounds to exclude data where the controls fall outside the acceptable range.
Additionally, all the paperwork associated with each experiment should be reviewed and initialed by a supervisor.
Quality control is the hallmark of improving reproducibility. QC programs are designed to help determine when the process in question goes off the expected path. The degree of deviation from the established acceptance criteria dictates the level of intervention required. This can be as simple as cleaning the instrument and rerunning the QC, or as extreme as removing the data from the final analysis. Since the deviations are documented, there is a clear rationale for excluding data.
At the end of the day, improving reproducibility is a critical step in improving confidence in data. With reports of 25% or less of data being reproducible, it is essential that researchers adopt a mindset that will reduce these concerns and improve the quality of the scientific product. Consider adding a few additional steps to the process of developing your flow cytometry experiments that will provide that peace of mind.
To learn more about the 4 Steps To Implementing a QC Program For Your Flow Cytometry Experiments, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.