5 Best Practices For Accurate Flow Cytometry Results
How do you follow best practices in flow cytometry to improve reproducibility?
Reproducibility is in the science spotlight these days. With the growing body of evidence showing how much translational research is not reproducible, funding agencies and journals are taking note.
Flow cytometry, as a technique, has changed and developed over the years, with researchers constantly evolving and evaluating best practices based on technological developments.
However, in the dark recesses of old lab notebooks, there still exist the time-worn protocols of yesteryear that come back to haunt the next generation of graduate students. The lure of getting a head start by using an already written protocol drives them to perform experiments using obsolete and outdated methods, dooming their research to the bin of irreproducible results.
It’s time to shine the light of modern cytometry on these bygone practices, and in doing so, provide tools for researchers to improve their experiments with current best practices.
1. Manual Data Compensation
In the days of analog flow cytometers, data was processed (transformed) before it was displayed and saved in the file.
Researchers had limited tools available to them, so experiments were compensated by manipulating a series of sliders to remove spillover from secondary channels. This became a bit of a guessing game, as shown in Figure 1.
Figure 1: Manual data compensation is a trial and error method.
The best way to perform proper compensation is to take advantage of automated compensation.
In most acquisition and analysis software packages, there is an automated compensation algorithm. To use these built-in algorithms, the user must collect a series of single-color controls, identify the positive and negative populations for each control, and let the computer do the heavy lifting (Figure 2).
- The control must be at least as bright as the experimental sample the compensation will be applied to.
- The backgrounds of the positive and negative samples must be identical.
- The control must match the experimental fluorochrome. This means the control tube must be acquired at the same voltage, and the exact same fluorochrome must be used.
Within these 3 rules are some inherent assumptions:
- That the fluorescent signal is within the linear range of the detector.
- That a sufficient number of events is collected.
- That the controls were treated identically to the experimental sample.
If the controls meet these rules and assumptions, automated compensation will be accurate.
Figure 2: How single stained controls are used to determine the compensation.
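To make the "heavy lifting" concrete: automated compensation treats spillover as a linear system. A minimal sketch in Python/NumPy, assuming each single-stained control is summarized by its background-subtracted median signal in every detector (function names and the row-normalization convention here are illustrative, not a specific vendor's implementation):

```python
import numpy as np

def spillover_matrix(single_stain_medians):
    """Build a spillover matrix from single-stained controls.

    Each row holds one control's median signal across all detectors
    (background-subtracted). Rows are normalized by the primary
    detector's signal, assumed to be the row maximum, so the
    diagonal becomes 1.
    """
    m = np.asarray(single_stain_medians, dtype=float)
    return m / m.max(axis=1, keepdims=True)

def compensate(raw_events, spill):
    """Remove spillover: compensated = raw @ inverse(spillover)."""
    return raw_events @ np.linalg.inv(spill)
```

With a two-color example, a FITC control reading 1000 in FL1 and 200 in FL2 yields a 20% spillover coefficient; multiplying raw events by the inverted matrix recovers the true per-fluorochrome signals. This is exactly why the controls must be bright and within the linear range: dim or saturated controls produce inaccurate medians and therefore an inaccurate matrix.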
2. Isotype controls
After antigen exposure, B cells undergo class-switch recombination, which results in the constant region of the heavy chain being swapped to a different type (e.g. from an IgM to an IgG1).
The variable region remains unchanged, so the biological result of CSR is that the antibody interacts with different effector molecules.
In flow cytometry, one of the earliest controls used to detect background binding and determine positivity was termed the “isotype control”: an antibody of the same isotype as the antibody being tested, but with binding specificity to an irrelevant antigen.
The theory was that 2 antibodies of the same isotype would show similar non-specific binding to the target cells, and thus the background binding on the cells could be identified. This requires several assumptions about antibody binding including:
- The variable region of the isotype control has similar affinity for secondary targets as the target antibody.
- There are no primary targets for the isotype antibody to bind to.
- The fluorochrome to protein ratio is the same on both antibodies.
For example, consider the mouse IgG2a κ clone MOPC-173. This clone was first produced in the early 1970s, and the variable region has an “unknown” target.
Reading the technical specifications on this clone, vendors claim it is routinely tested against a suite of normal cells from various organisms to show it doesn’t bind.
Who is to say the target is not on a rare subset of cells that have not been discovered because they were always excluded when this reagent was used?
As for the F/P ratio, it is well-known that different antibodies, even of the same isotype, have different binding capacities for fluorochromes and one may never know if the F/P ratios are matched.
One potential exception to this is in the case of large protein-based fluorochromes and their tandem derivatives. Due to steric issues, most of these conjugates have an F/P ratio of 1:1, but even that is not universally guaranteed.
These limitations lead us to discourage the use of an isotype control, as it provides no additional information and may even lead to erroneous conclusions. For those looking for more discussion, here is a list of references to read:
- Keeney et al. (1998) Cytometry 34:280-283
- Baumgarth and Roederer (2000) J Immunol Methods 243:77-97
- Maecker and Trotter (2006) Cytometry A 69A:1037-1042
- Hulspas et al. (2009) Cytometry B 76B:355-364
- Andersen et al. (2016) Cytometry A 89:1001-1009
There is no perfect control for nonspecific binding. Rather, it must be procedurally minimized at several levels, including using high-quality antibodies, proper blocking (see Andersen et al. for excellent experiments on this topic), titration to ensure the appropriate concentration of antibody, and the use of proper controls such as the FMO (discussed below), biological controls, internal negative populations, and more.
3. Absence of the Fluorescence Minus One (FMO) control
If the isotype control can’t be used to set positivity, the question is, “How can a researcher do it?”
The answer is that there is no one specific control that should be relied upon to determine positive from negative events.
Rather, controls addressing spectral spreading of panel fluorochromes into the channel of interest, known positive and negative control samples, stimulation controls, and more need to be consulted.
The Fluorescence Minus One, or FMO, is a control that addresses the loss of sensitivity in a given channel caused by the spread of the other fluorochromes in the panel.
The FMO is critical when accurate discrimination is necessary, such as for rare events or dimly expressed markers. During the panel development phase, it is recommended to test all possible FMOs and keep those that are critical to determine the proper gate placement.
Figure 3 shows how a typical FMO is used to set a gate. In this experiment, PBMCs were stained with 5 fluorochromes (DAPI, FITC, PE, Cy5.5PE, and APC), and acquired on a flow cytometer with 405, 488, and 633 nm excitation sources.
On the far left is the unstained control, and the fully stained sample is shown on the right. The FMO control is in the middle, stained with all the fluorochromes except for PE.
Using an unstained control to determine positivity results in the red dashed line, and it appears there are PE positive cells in the FMO. However, since there is no PE in this tube, the signal must come from somewhere else, such as spillover spreading.
This is how the FMO control helps establish positivity in the fully stained sample, and addresses the spread of the data due to fluorescence spillover into the channel of interest.
Figure 3: FMO control for a 5-color experiment.
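In software, setting a gate from an FMO control often amounts to placing the threshold just above the spread observed in the FMO for the channel of interest. A minimal sketch, assuming a simple percentile-based threshold (the 99.9th percentile is a common but illustrative choice; function names are hypothetical):

```python
import numpy as np

def fmo_gate(fmo_channel_values, percentile=99.9):
    """Place the positivity threshold just above the spread seen in
    the FMO control for the channel of interest."""
    return np.percentile(fmo_channel_values, percentile)

def percent_positive(sample_channel_values, threshold):
    """Fraction of events in the fully stained sample above the gate."""
    vals = np.asarray(sample_channel_values, dtype=float)
    return 100.0 * np.mean(vals > threshold)
```

Applied to Figure 3, the threshold computed from the PE-FMO tube would sit above the spillover-spread events, so only genuinely PE-positive cells in the fully stained sample fall above the gate.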
4. Not optimizing the PMTs
Setting the voltage on a PMT can be a daunting task.
In the days of analog cytometers, people were taught to put the negative population in the first log decade. So the researcher would draw a quadrant on the plot, and adjust the voltage to put the negatives in the bottom left without giving it any further thought.
But, what really makes a good voltage? A good PMT voltage should meet the following criteria:
- The dimmest cells are in a region where electronic noise (EN) contributes no more than 10-20% of the variance
- The positive signal is on scale and in the linear region of the detector
With digital cytometers, and advancements in signal processing and data transformation, it became possible to more fully appreciate the true sensitivity of the PMT. At this time, the concept of determining an optimal voltage started to take hold in the cytometry community.
This has been formalized for BD instruments using the Cytometer Setup and Tracking (CS&T) protocols.
However, there is a simple way to do this on other machines that was published by Maecker and Trotter in 2006, termed the “peak 2” method. In this method, a dim particle is run over a voltage series, and the spread of the data, as measured by CV, is plotted against the voltage, generating a curve that looks something like the one in Figure 4.
Figure 4: PMT optimization using the peak 2 method.
This curve shows that at low voltages the CVs are very broad. As the voltage increases, CVs decrease until an inflection point is reached and the slope of the curve changes.
This inflection point represents the point where increasing voltage does not decrease the CV, and is the best starting point for setting voltage. This can be fine-tuned for a given fluorochrome with a voltration experiment, but that’s a subject for another post.
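The inflection point can be estimated programmatically from the voltage series. A minimal sketch, assuming the plateau is detected where the fractional drop in CV between consecutive voltage steps falls below a tolerance (the 5% tolerance and function name are illustrative assumptions, not part of the published method):

```python
import numpy as np

def peak2_voltage(voltages, cvs, tol=0.05):
    """Estimate the peak-2 inflection point from a voltage series.

    Scans a dim bead's CV measured at increasing PMT voltages and
    returns the first voltage at which raising the voltage further no
    longer reduces the CV by more than `tol` (fractional change).
    """
    v = np.asarray(voltages, dtype=float)
    c = np.asarray(cvs, dtype=float)
    # Fractional CV improvement between consecutive voltage steps
    rel_drop = (c[:-1] - c[1:]) / c[:-1]
    for i, drop in enumerate(rel_drop):
        if drop < tol:
            return v[i]
    return v[-1]
```

For a series where the CV halves at each step and then flattens, the function returns the voltage at the start of the plateau, matching the visual read of the curve in Figure 4.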
Optimizing the PMT voltage in this way eliminates the issues caused by incorrectly set voltages.
In this methodology, the researcher applies the voltage and acquires the sample. Exactly where the negatives fall is less critical.
5. Lack of experiment-specific QC protocols
Many researchers consider Quality Control (QC) the domain of the team that is supporting and maintaining the instrument. This is true for one aspect of the QC process, ensuring the instrument is performing consistently.
This QC is usually performed in the morning; however, instrument status may change over the course of the day.
How many researchers add quality control protocols for their experiments to catch these variations?
Experiment-specific QC can be a very simple addition to the experimental workflow, but provides an invaluable resource in determining how the system is behaving when the experiment is performed and how well the staining process was performed.
Two such controls, a beadset to track instrument performance and a reference control to track staining variation, give the researcher an added level of confidence in the performance of instrument and protocol.
An important thing to remember about any QC protocol is that not only is it performed, but the results are written down somewhere. The adage of “if it isn’t written down, it didn’t happen” is especially true here.
Take, for example, the data in Figure 5. Here, the researchers optimized the instrument and used a bead (the 6th peak of the Spherotech 8-peak beadset) to establish target values and acceptable variation. Before any samples were collected, the investigators ran a peak 6 bead and adjusted voltages to achieve the target value established during experimental optimization.
To assess how well the instrument performed over time, the data (in this case PMT voltage) was analyzed using a Levey-Jennings plot. This analysis plots the data over time, and adds lines to indicate the running average, and +/- 1 and 2 standard deviations from the mean.
A representative Levey-Jennings plot is shown in Figure 5. This plot is interpreted by examining the position of each new data point. Should a point fall outside the quality control level (here that is +/- 2 SD), it is an indication to troubleshoot the issue before collecting actual samples.
Figure 5: Tracking QC in a Levey-Jennings plot. If it isn’t written down, it didn’t happen.
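The Levey-Jennings check itself is easy to automate once the QC values are recorded. A minimal sketch, assuming control limits are computed from the accumulated history and points beyond ±2 SD are flagged for troubleshooting (the function name and the choice of sample standard deviation are illustrative):

```python
import numpy as np

def levey_jennings_flags(values, n_sd=2):
    """Flag QC values outside mean +/- n_sd standard deviations.

    `values` is the recorded history of a QC metric (e.g. the PMT
    voltage needed to hit the peak 6 bead target). Returns a boolean
    array marking points that breach the control limits.
    """
    vals = np.asarray(values, dtype=float)
    mean = vals.mean()
    sd = vals.std(ddof=1)  # sample standard deviation
    return np.abs(vals - mean) > n_sd * sd
```

Run against each day's entry, a flagged point is the signal to stop and troubleshoot before collecting actual samples, exactly as the plot in Figure 5 is read by eye.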
There you have it, 5 lessons, from the trenches of flow cytometry, looking at important aspects of how best practices have changed over time, which practices need to be adopted, and which are outdated. Put those old, coffee-stained protocols away and take advantage of the best practices for digital instruments to write new and improved ones (coffee stains optional). Your data will thank you.
To learn more about the 5 Best Practices For Accurate Flow Cytometry Results, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.