How To Use Flow Cytometry To Analyze Rare Cells Within Heterogeneous Samples
The power of flow cytometry is its ability to analyze thousands of events rapidly, so that the proverbial needle in the haystack of heterogeneous cells can be found relatively easily.
The discoveries that flow cytometry has enabled because of this ability cannot be overstated. Before flow cytometry, finding rare cells, let alone studying them, was a massive undertaking.
Consider the comparison (and warning) that the great inventor Nikola Tesla once expressed…
“If he had a needle to find in a haystack, he would not stop to reason where it was most likely to be found, but would proceed at once with the feverish diligence of a bee, to examine straw after straw until he found the object of his search… just a little theory and calculation would have saved him ninety percent of his labor.”
Flow cytometry provides one with the theory, calculation, and procedure they need to find a needle (rare cell population) in a haystack (whole blood, cell sample, etc.).
If you’re getting into rare event analysis, note that analyzing a population that makes up 0.1% (or less) of the starting sample involves additional considerations that don’t apply when purifying a population of around 10%.
Here are 4 keys to sorting rare cells by flow cytometry…
1. Reverse engineer your experiment.
When starting to plan for a rare event sort, it is important to consider the downstream application and begin the calculations to see how many cells you need to start with.
For example, if the downstream assay needs 10,000 cells and the population of interest is 0.02% of the sample, one will need to start with 50 million cells.
If one assumes a 50% recovery, the starting population becomes 100 million cells.
Continuing on with the math, sorting at 10,000 events per second, this sort would take about 10,000 seconds, or about 2 hours and 47 minutes — if nothing goes wrong, and not counting the setup time.
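The reverse-engineering arithmetic above can be captured in a small helper. A minimal sketch (the function name and defaults are ours; the numbers mirror the example):

```python
def plan_rare_sort(cells_needed, target_fraction, recovery=0.5, sort_rate=10_000):
    """Work backwards from the cells needed downstream to the
    starting cell count and the estimated time on the sorter."""
    # Cells required before accounting for sort losses
    before_losses = cells_needed / target_fraction
    # Inflate by the expected recovery of the sort
    starting_cells = before_losses / recovery
    # Time on the instrument at the given event rate (seconds)
    sort_seconds = starting_cells / sort_rate
    return starting_cells, sort_seconds

start, seconds = plan_rare_sort(cells_needed=10_000, target_fraction=0.0002)
print(f"Start with {start:,.0f} cells; ~{seconds / 3600:.1f} h on the sorter")
```

Running this with the example numbers reproduces the 100 million starting cells and roughly 2.8 hours of sort time.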
Table 1: Time (in hours) to sort the number of cells, given a sort rate of 10,000 events/second.
2. Enrich your flow cytometer sample when possible.
Time on the cell sorters is often expensive.
Additionally, the longer the time it takes to get the sorted populations back, the longer one has to stay to complete any downstream processing.
This is where an enrichment step becomes quite valuable.
Consider depletion using magnetic beads.
This process requires identifying one or more targets present on the non-target cells.
Cells are labeled with antibodies against that target conjugated to a magnetic bead.
The whole solution is exposed to a magnetic field, and labeled cells stick to the side of the tube (or column, depending on the protocol).
The cells of interest are recovered in the supernatant.
Figure 1: Illustrating the workflow for magnetic enrichment.
Looking at the math, with our 100 million cells, a depletion of 90% of the non-target cells will yield 10 million cells.
This also enriches the target population from 0.02% to ~0.2%, while cutting the time on the sorter from ~3 hours to under 20 minutes.
Most magnetic enrichment processes take an hour or less.
So the whole process saves over an hour!
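The enrichment math above can be checked the same way. A sketch (function name and defaults are ours), assuming depletion removes only non-target cells:

```python
def enrich_by_depletion(total_cells, target_fraction, depletion=0.9, sort_rate=10_000):
    """Estimate post-depletion cell count, target purity, and sort time."""
    targets = total_cells * target_fraction
    non_targets = total_cells - targets
    # Depletion removes a fraction of the NON-target cells only
    remaining = targets + non_targets * (1 - depletion)
    new_fraction = targets / remaining
    return remaining, new_fraction, remaining / sort_rate

cells, frac, secs = enrich_by_depletion(100_000_000, 0.0002)
print(f"{cells:,.0f} cells, {frac:.2%} target, ~{secs / 60:.0f} min on the sorter")
```

With the example numbers, this gives roughly 10 million cells at ~0.2% purity and a sort time of about a quarter hour, matching the estimates in the text.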
3. Avoid common antibody panel design problems.
The more complex the panel, the greater potential for problems in the downstream sorting process.
This is especially true for rare event analysis.
The first level of panel design focuses on pairing the bright fluorochromes with low or unknown expression targets.
When rare events are in play, brighter is better.
The other consideration in panel design is to minimize the measurement error caused by spillover spreading.
To do this, one must optimize the instrument to determine the error contribution in each detector based on the fluorochromes being used.
The data below comes from optimizing a BD LSR-II using the protocols outlined in this paper from Nguyen et al. (2013), Cytometry A 83:306.
To simplify, two columns of data are presented, the left showing the amount of error a given detector receives, and the right showing the amount of error a given fluorochrome contributes to the panel.
Figure 2: Measurements of the error contributed by different fluorochromes and error received by different detectors on an 18-fluorescent channel BD LSR-II
This data shows that the Blue-A detector (PerCP-Cy5.5) receives the most error of all detectors, making it a poor choice for a sensitive measurement.
The Blue-B detector (FITC), on the other hand, receives very little error, making it a good channel for sensitive measurements.
On the right side, PE-Cy5 is not a good choice either, as it contributes a lot of error to the panel, while FITC again contributes a much smaller amount.
Of course, drilling into the data will reveal additional information as to which detectors are more affected by a given fluorochrome.
Taken in total, this data is critical for designing a panel, especially so for rare event analysis.
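The spreading values in this kind of analysis come down to comparing how much wider a positive population is than a negative one in an off-target detector, normalized by the signal in the primary detector, in the spirit of the Nguyen et al. (2013) metric. A simplified sketch (function name and the illustrative numbers are ours, not from the paper's dataset):

```python
import math

def spillover_spread(sd_pos, sd_neg, signal_pos, signal_neg):
    """Spreading of one fluorochrome into one off-target detector:
    the extra width of the positive population in the receiving
    detector, normalized by the square root of the signal
    difference in the primary detector."""
    added_variance = sd_pos**2 - sd_neg**2
    return math.sqrt(max(added_variance, 0.0)) / math.sqrt(signal_pos - signal_neg)

# Illustrative (made-up) numbers: standard deviations of the positive
# and negative populations in the receiving detector, and median
# signals in the primary detector.
print(spillover_spread(sd_pos=250.0, sd_neg=80.0, signal_pos=50_000.0, signal_neg=500.0))
```

Summing such values across detectors (per fluorochrome) or across fluorochromes (per detector) yields the two columns of data described above.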
Other panel design considerations have to include the addition of a viability dye.
Cells die during processing, and dead cells bind antibodies nonspecifically, so excluding them from the sort is important.
Likewise, optimizing the panel reagents by titration is a critical step to reduce nonspecific binding and thus improve sensitivity.
One can also optimize the voltages using a similar method.
In this case, cells are stained with the optimal amount of antibody and run across a voltage range, starting below the manufacturer's published PMT values.
The data, as shown below, are plotted and the optimal voltage is identified.
Figure 3: Results of voltage optimization for two different fluorochromes.
4. Follow the four rules of flow cytometer optimization.
In addition to the information above, details about the fluidics and electronics of the sorter are an important consideration.
Rule 1: The sort nozzle should be 4-5 times the diameter of the largest average cell diameter that will pass through the instrument.
Rule 2: The size of the nozzle will dictate the pressure of the sort (larger nozzle = lower pressure), which in turn will affect the frequency of droplet generation (larger nozzle = lower frequency), which in turn affects the number of cells that can be sorted (larger nozzle = slower sort rate).
Figure 4: Relationship between nozzle diameter, sheath pressure, and frequency. From Arnold and Lannigan (2010) Current Protocols in Cytometry 1.24
Rule 3: The event rate should be ¼ of the droplet generation frequency.
Since what is being sorted are droplets, it is important that the correct droplet be sorted.
To do that, it is often ideal to sort a two-drop window, which means that we don’t want a second (contaminating) cell in the second droplet.
If one looks at the Poisson distribution for the number of cells in a drop at various cell-to-drop ratios, one finds data that look like this (courtesy of Dr. Rui Gardner, head of the Flow Cytometry Core Facility at Memorial Sloan Kettering Cancer Center):
Figure 5: Relationship between number of events in a given drop with different cell/drop ratios.
Increasing the ratio to 1 cell per 2 drops will increase the speed of the sort, but also increase the chance of coincident events (2 cells in one drop), as well as the chance of two cells in successive drops.
Decreasing to 1 cell per 6 drops slows the sort down without an appreciable improvement over 1 cell per 4 drops.
Speed also increases the spread of the data, thus decreasing the resolution of the signal.
Slower is better.
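The Poisson reasoning behind Rule 3 can be checked directly: with an average of λ cells per drop, the chance that a drop holds two or more cells is 1 − e^(−λ) − λe^(−λ). A quick sketch comparing the ratios discussed above (function name is ours):

```python
import math

def p_coincident(cells_per_drop):
    """Poisson probability that a single drop contains 2+ cells."""
    lam = cells_per_drop
    return 1 - math.exp(-lam) - lam * math.exp(-lam)

for drops in (2, 4, 6):
    p = p_coincident(1 / drops)
    print(f"1 cell per {drops} drops: {p:.2%} of occupied drops risk a coincident cell")
```

The numbers bear out the text: going from 1-in-4 to 1-in-2 roughly triples the coincidence rate, while going from 1-in-4 to 1-in-6 buys only a modest improvement.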
Rule 4: Pulse size matters.
The width of the electronic pulse is the sum of the cell diameter and the laser beam height.
If we were to take a theoretical 5 micron cell and a cell sorter with a 20 micron beam height, the pulse width would be 25 microns.
The velocity for this sorter is 30 meters per second, resulting in a pulse time of 0.83 microseconds (calculations courtesy of Ryan Duggan).
During this time, if a second cell enters the interrogation point, it must be discarded (‘aborted’), as the electronics cannot process the signal.
The faster the event rate, the greater the chance of these aborted events.
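The transit-time arithmetic above, plus a Poisson estimate of how often a second cell lands inside the processing window, can be sketched as follows (the abort estimate is our own extension, assuming events arrive at random):

```python
import math

def pulse_time_us(cell_um, beam_um, velocity_m_s):
    """Time (in µs) a cell's pulse occupies the interrogation point."""
    pulse_width_m = (cell_um + beam_um) * 1e-6
    return pulse_width_m / velocity_m_s * 1e6

def p_abort(event_rate_hz, window_us):
    """Poisson chance that another event arrives within the window."""
    return 1 - math.exp(-event_rate_hz * window_us * 1e-6)

t = pulse_time_us(cell_um=5, beam_um=20, velocity_m_s=30)
print(f"Pulse time: {t:.2f} µs")  # matches the 0.83 µs worked out above
print(f"Abort chance per event at 10,000 events/s: {p_abort(10_000, t):.2%}")
```

Doubling the event rate roughly doubles the per-event abort chance, which is why watching the abort counter and throttling the event rate matters for rare event sorts.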
In addition to this processing time, the system can be programmed to interrogate a longer window.
This has the advantage of gathering more light with the consequence of possibly increasing these aborted events.
On an analyzer, aborts may be less critical, but in a rare event sort we want to minimize them. It is therefore critical to watch the abort rate on the sort counter and reduce the event rate to bring it down.
Before charging ahead, you may want to use this chart, prepared by Dr. Rui Gardner, to discuss and plan any sorting experiments.
It covers many more topics that will come up in preparing for such a sort.
Figure 6: Topics to cover when considering cell sorting experiments. Prepared by Dr. Rui Gardner.
In conclusion, journeys into the realm of rare event analysis and sorting are fraught with peril.

Poorly designed panels, failure to plan for the end result, and instrument characteristics left unaccounted for can all result in a failed sort, a lost opportunity, and a delay in obtaining the necessary data.

Take the time at the beginning, before starting down the path of rare event sorting, to understand the different issues that could affect your outcome, and develop a plan to address each of them.

With that forethought, it becomes possible to identify the needle in your cellular haystack consistently and reproducibly.
To learn more about how to use flow cytometry to analyze rare cells within heterogeneous samples and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.