Flow cytometry bioinformatics is the application of bioinformatics to flow cytometry data, which involves storing, retrieving, organizing and analyzing flow cytometry data using extensive computational resources and tools. Flow cytometry bioinformatics requires extensive use of and contributes to the development of techniques from computational statistics and machine learning. Flow cytometry and related methods allow the quantification of multiple independent biomarkers on large numbers of single cells. The rapid growth in the multidimensionality and throughput of flow cytometry data, particularly in the 2000s, has led to the creation of a variety of computational analysis methods, data standards, and public databases for the sharing of results.
Computational methods exist to assist in the preprocessing of flow cytometry data, identifying cell populations within it, matching those cell populations across samples, and performing diagnosis and discovery using the results of previous steps. For preprocessing, this includes compensating for spectral overlap, transforming data onto scales conducive to visualization and analysis, assessing data for quality, and normalizing data across samples and experiments. For population identification, tools are available to aid traditional manual identification of populations in two-dimensional scatter plots (gating), to use dimensionality reduction to aid gating, and to find populations automatically in higher-dimensional space in a variety of ways. It is also possible to characterize data in more comprehensive ways, such as the density-guided binary space partitioning technique known as probability binning, or by combinatorial gating. Finally, diagnosis using flow cytometry data can be aided by supervised learning techniques, and discovery of new cell types of biological importance by high-throughput statistical methods, as part of pipelines incorporating all of the aforementioned methods.
Open standards, data and software are also key parts of flow cytometry bioinformatics. Data standards include the widely adopted Flow Cytometry Standard (FCS) defining how data from cytometers should be stored, but also several new standards under development by the International Society for Advancement of Cytometry (ISAC) to aid in storing more detailed information about experimental design and analytical steps. Open data is slowly growing with the opening of the CytoBank database in 2010, and FlowRepository in 2012, both of which allow users to freely distribute their data, and the latter of which has been recommended as the preferred repository for MIFlowCyt-compliant data by ISAC. Open software is most widely available in the form of a suite of Bioconductor packages, but is also available for web execution on the GenePattern platform.
Flow cytometers operate by hydrodynamically focusing suspended cells so that they separate from each other within a fluid stream. The stream is interrogated by one or more lasers, and the resulting fluorescent and scattered light is detected by photomultipliers. By using optical filters, particular fluorophores on or within the cells can be quantified by peaks in their emission spectra. These may be endogenous fluorophores such as chlorophyll or transgenic green fluorescent protein, or they may be artificial fluorophores covalently bonded to detection molecules such as antibodies for detecting proteins, or hybridization probes for detecting DNA or RNA.
The ability to quantify these fluorophores has led to flow cytometry being used in a wide range of applications, including cell counting and sorting, biomarker detection, and cell cycle analysis.
Until the early 2000s, flow cytometry could only measure a few fluorescent markers at a time. Through the late 1990s into the mid-2000s, however, rapid development of new fluorophores resulted in modern instruments capable of quantifying up to 18 markers per cell. [7] More recently, the new technology of mass cytometry replaces fluorophores with rare-earth elements detected by time of flight mass spectrometry, achieving the ability to measure the expression of 34 or more markers. [8] At the same time, microfluidic qPCR methods are providing a flow cytometry-like method of quantifying 48 or more RNA molecules per cell. [9] The rapid increase in the dimensionality of flow cytometry data, coupled with the development of high-throughput robotic platforms capable of assaying hundreds to thousands of samples automatically have created a need for improved computational analysis methods. [7]
Flow cytometry data takes the form of a large matrix of intensities over M wavelengths by N events. Most events correspond to a single cell, although some may be doublets (pairs of cells that pass through the laser close together). For each event, the measured fluorescence intensity over a particular wavelength range is recorded.
The measured fluorescence intensity indicates the amount of that fluorophore in the cell, which in turn indicates the amount that has bound to detector molecules such as antibodies. Therefore, fluorescence intensity can be considered a proxy for the amount of detector molecules present on the cell. A simplified, if not strictly accurate, way of considering flow cytometry data is as a matrix of M measurements by N cells, where each element corresponds to the amount of a particular molecule on a particular cell.
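Under this simplified view, a flow cytometry dataset is just a numeric matrix of events by channels. A synthetic sketch (the channel names and values below are invented for illustration):

```python
import numpy as np

# Hypothetical example: N = 5 events (cells) measured on M = 3 channels.
# Rows are events, columns are channels, as in most analysis software.
rng = np.random.default_rng(0)
events = rng.lognormal(mean=4.0, sigma=1.0, size=(5, 3))

channels = ["FSC-A", "SSC-A", "FL1-A"]  # illustrative channel names
print(events.shape)                     # (5, 3): N events x M channels
```

Real data is read from FCS files rather than simulated, but downstream methods operate on exactly this kind of matrix.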
The process of moving from primary FCM data to disease diagnosis and biomarker discovery involves four major steps: preprocessing the data, identifying cell populations within it, matching those populations across samples, and relating the results to external variables for diagnosis and discovery.
Saving the steps taken in a particular flow cytometry workflow is supported by some flow cytometry software, and is important for the reproducibility of flow cytometry experiments. However, saved workspace files are rarely interchangeable between software. [10] An attempt to solve this problem is the development of the Gating-ML XML-based data standard (discussed in more detail under the standards section), which is slowly being adopted in both commercial and open source flow cytometry software. [11] The CytoML R package also addresses this gap by importing and exporting Gating-ML compatible with the FlowJo, CytoBank and FACS Diva software.
Prior to analysis, flow cytometry data must typically undergo pre-processing to remove artifacts and poor quality data, and to be transformed onto an optimal scale for identifying cell populations of interest. Below are various steps in a typical flow cytometry preprocessing pipeline.
When more than one fluorochrome is used with the same laser, their emission spectra frequently overlap. Each particular fluorochrome is typically measured using a bandpass optical filter set to a narrow band at or near the fluorochrome's emission intensity peak. The result is that the reading for any given fluorochrome is actually the sum of that fluorochrome's peak emission intensity, and the intensity of all other fluorochromes' spectra where they overlap with that frequency band. This overlap is termed spillover, and the process of removing spillover from flow cytometry data is called compensation. [12]
Compensation is typically accomplished by running a series of representative samples each stained for only one fluorochrome, to give measurements of the contribution of each fluorochrome to each channel. [12] The total signal to remove from each channel can be computed by solving a system of linear equations based on this data to produce a spillover matrix, which when inverted and multiplied with the raw data from the cytometer produces the compensated data. [12] [13] The processes of computing the spillover matrix, or applying a precomputed spillover matrix to compensate flow cytometry data, are standard features of flow cytometry software. [14]
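The algebra of compensation can be illustrated with a minimal Python sketch. The spillover fractions and signal levels below are synthetic; in practice the spillover matrix is estimated from single-stained controls as described above:

```python
import numpy as np

# Illustrative 2-channel spillover matrix: S[i, j] is the fraction of
# fluorochrome i's signal detected in channel j (diagonal = own channel).
S = np.array([[1.00, 0.15],    # fluorochrome 1 spills 15% into channel 2
              [0.08, 1.00]])   # fluorochrome 2 spills 8% into channel 1

# True (normally unknown) per-cell fluorochrome amounts, used to simulate data.
true_signal = np.array([[100.0,   0.0],
                        [  0.0, 200.0],
                        [ 50.0,  50.0]])

observed = true_signal @ S                   # what the detectors record
compensated = observed @ np.linalg.inv(S)    # invert the spillover to undo it

print(np.allclose(compensated, true_signal)) # True
```

Multiplying the observed events by the inverse of the spillover matrix recovers the per-fluorochrome signals, which is exactly the linear-algebra step flow cytometry software performs.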
Cell populations detected by flow cytometry are often described as having approximately log-normal expression. [15] As such, they have traditionally been transformed to a logarithmic scale. In early cytometers, this was often accomplished even before data acquisition by use of a log amplifier. On modern instruments, data is usually stored in linear form, and transformed digitally prior to analysis.
However, compensated flow cytometry data frequently contains negative values due to compensation, and cell populations do occur which have low means and normal distributions. [16] Logarithmic transformations cannot properly handle negative values, and poorly display normally distributed cell types. [16] [17] Alternative transformations which address this issue include the log-linear hybrid transformations Logicle [16] [18] and Hyperlog, [19] as well as the hyperbolic arcsine and the Box–Cox. [20]
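A minimal sketch of the hyperbolic arcsine transformation, which remains defined for the negative values that compensation can introduce (the cofactor below is illustrative; it controls the width of the near-linear region around zero):

```python
import numpy as np

# Compensated data can contain small negative values; a logarithm is
# undefined there, while arcsinh is log-like for large magnitudes and
# approximately linear near zero.
compensated = np.array([-25.0, -1.0, 0.0, 10.0, 1000.0, 100000.0])

cofactor = 150.0   # illustrative scale cofactor
transformed = np.arcsinh(compensated / cofactor)

print(transformed)  # defined for every value, including the negatives
```

The Logicle and Hyperlog transformations mentioned above serve the same purpose with different functional forms.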
A comparison of commonly used transformations concluded that the biexponential and Box–Cox transformations, when optimally parameterized, provided the clearest visualization and least variance of cell populations across samples. [17] However, a later evaluation of the flowTrans package used in that comparison indicated that it did not parameterize the Logicle transformation in a manner consistent with other implementations, potentially calling those results into question. [21]
Particularly in newer, high-throughput experiments, there is a need for visualization methods to help detect technical errors in individual samples. One approach is to visualize summary statistics, such as the empirical distribution functions of single dimensions of technical or biological replicates, to ensure they are similar. [22] For more rigor, the Kolmogorov–Smirnov test can be used to determine whether individual samples deviate from the norm. [22] Grubbs's test for outliers may be used to detect samples deviating from the group.
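The Kolmogorov–Smirnov comparison can be sketched directly from empirical distribution functions. All data below is simulated, with one replicate deliberately given a shifted distribution to play the role of a sample with a technical problem:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical distribution functions of a and b."""
    grid = np.sort(np.concatenate([a, b]))
    ecdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    ecdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(ecdf_a - ecdf_b))

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 2000)    # pooled replicate samples
ok_sample = rng.normal(0.0, 1.0, 2000)    # technically sound replicate
bad_sample = rng.normal(1.5, 1.0, 2000)   # replicate with a staining problem

print(ks_statistic(reference, ok_sample))   # small
print(ks_statistic(reference, bad_sample))  # large: flags the deviating sample
```

In practice one would use a library implementation with proper p-values, but the statistic itself is this simple.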
A method for quality control in higher-dimensional space is to use probability binning with bins fit to the whole data set pooled together. [23] Then the standard deviation of the number of cells falling in the bins within each sample can be taken as a measure of multidimensional similarity, with samples that are closer to the norm having a smaller standard deviation. [23] With this method, higher standard deviation can indicate outliers, although this is a relative measure as the absolute value depends partly on the number of bins.
All of these methods measure cross-sample variation. However, that variation combines technical variation introduced by instruments and handling with the biological information that is actually of interest. Disambiguating the technical and biological contributions to between-sample variation can be difficult or impossible. [24]
Particularly in multi-centre studies, technical variation can make biologically equivalent populations of cells difficult to match across samples. Normalization methods to remove technical variance, frequently derived from image registration techniques, are thus a critical step in many flow cytometry analyses. Single-marker normalization can be performed using landmark registration, in which peaks in a kernel density estimate of each sample are identified and aligned across samples. [24]
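Landmark registration can be sketched in a simplified one-marker form. Real implementations (such as those in the flowStats Bioconductor package) align peaks with warping functions rather than the constant shift used here, and all values below are synthetic:

```python
import numpy as np

def density_peak(x, grid, bandwidth=0.3):
    """Location of the highest peak of a Gaussian kernel density estimate."""
    kde = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2).sum(axis=1)
    return grid[np.argmax(kde)]

rng = np.random.default_rng(2)
grid = np.linspace(-5, 10, 600)

reference = rng.normal(2.0, 0.5, 1000)   # landmark (peak) near 2.0
shifted = rng.normal(3.2, 0.5, 1000)     # same population, technical shift

# Align the sample's landmark to the reference landmark.
offset = density_peak(reference, grid) - density_peak(shifted, grid)
normalized = shifted + offset

print(abs(density_peak(normalized, grid) - density_peak(reference, grid)))  # near 0
```

After alignment, equivalent populations sit at comparable intensities across samples, which is the prerequisite for cross-sample population matching.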
The complexity of raw flow cytometry data (dozens of measurements for thousands to millions of cells) makes answering questions directly using statistical tests or supervised learning difficult. Thus, a critical step in the analysis of flow cytometric data is to reduce this complexity to something more tractable while establishing common features across samples. This usually involves identifying multidimensional regions that contain functionally and phenotypically homogeneous groups of cells. [27] This is a form of cluster analysis. There are a range of methods by which this can be achieved, detailed below.
The data generated by flow cytometers can be plotted in one or two dimensions to produce a histogram or scatter plot. The regions on these plots can be sequentially separated, based on fluorescence intensity, by creating a series of subset extractions, termed "gates". These gates can be produced using software, e.g. FlowJo, [28] FCS Express, [29] WinMDI, [30] CytoPaint (also known as Paint-A-Gate), [31] VenturiOne, Cellcion, CellQuest Pro, Cytospec, [32] Kaluza, [33] or flowCore.
In datasets with a low number of dimensions and limited cross-sample technical and biological variability (e.g., clinical laboratories), manual analysis of specific cell populations can produce effective and reproducible results. However, exploratory analysis of a large number of cell populations in a high-dimensional dataset is not feasible. [34] In addition, manual analysis in less controlled settings (e.g., cross-laboratory studies) can increase the overall error rate of the study. [35] In one study, several computational gating algorithms performed better than manual analysis in the presence of some variation. [26] However, despite the considerable advances in computational analysis, manual gating remains the main solution for the identification of specific rare cell populations that are not well-separated from other cell types.
The number of scatter plots that need to be investigated increases with the square of the number of markers measured (or faster since some markers need to be investigated several times for each group of cells to resolve high-dimensional differences between cell types that appear to be similar in most markers). [36] To address this issue, principal component analysis has been used to summarize the high-dimensional datasets using a combination of markers that maximizes the variance of all data points. [37] However, PCA is a linear method and is not able to preserve complex and non-linear relationships. More recently, two dimensional minimum spanning tree layouts have been used to guide the manual gating process. Density-based down-sampling and clustering was used to better represent rare populations and control the time and memory complexity of the minimum spanning tree construction process. [38] More sophisticated dimension reduction algorithms are yet to be investigated. [39]
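Principal component analysis reduces each cell to a small number of coordinates chosen to maximize variance. A minimal sketch on synthetic two-population data, computing PCA via singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic 8-marker data: two populations of 500 cells, separated in
# the first two markers only.
pop1 = rng.normal(0.0, 1.0, (500, 8)) + np.array([3, 3, 0, 0, 0, 0, 0, 0])
pop2 = rng.normal(0.0, 1.0, (500, 8))
data = np.vstack([pop1, pop2])

# PCA via singular value decomposition of the centered data matrix.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:2].T   # project each cell onto the first two PCs

explained = (s ** 2) / (s ** 2).sum()
print(scores.shape)            # (1000, 2): one 2-D point per cell
print(explained[:2])           # fraction of variance carried by PC1 and PC2
```

The resulting two-dimensional scores can be gated like an ordinary scatter plot, though, as noted above, a linear projection cannot preserve complex non-linear structure.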
Developing computational tools for identification of cell populations has been an area of active research only since 2008. Many individual clustering approaches have been developed, including model-based algorithms (e.g., flowClust [41] and FLAME [42] ), density-based algorithms (e.g., FLOCK [43] and SWIFT), graph-based approaches (e.g., SamSPECTRAL [44] ), and most recently, hybrids of several approaches (flowMeans [45] and flowPeaks [46] ). These algorithms differ in their memory and time complexity, their software requirements, their ability to automatically determine the required number of cell populations, and their sensitivity and specificity. The FlowCAP (Flow Cytometry: Critical Assessment of Population Identification Methods) project, with active participation from most academic groups with research efforts in the area, is providing a way to objectively cross-compare state-of-the-art automated analysis approaches. [26] Other surveys have also compared automated gating tools on several datasets. [47] [48] [49] [50]
Probability binning is a non-gating analysis method in which flow cytometry data is split into quantiles on a univariate basis. [51] The locations of the quantiles can then be used to test for differences between samples (in the variables not being split) using the chi-squared test. [51]
This was later extended into multiple dimensions in the form of frequency difference gating, a binary space partitioning technique where data is iteratively partitioned along the median. [52] These partitions (or bins) are fit to a control sample. Then the proportion of cells falling within each bin in test samples can be compared to the control sample by the chi squared test.
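A simplified sketch of frequency difference gating: bins are fit to a control sample by recursive median splits along alternating dimensions, and a test sample's bin counts are then compared against the control's. The chi-squared statistic below uses the control counts directly as expected values, which is a simplification of the published method:

```python
import numpy as np

def median_bins(data, depth):
    """Recursively split data at the median of alternating dimensions,
    returning axis-aligned bin boundaries (binary space partitioning)."""
    def split(points, bounds, level):
        if level == depth:
            return [bounds]
        dim = level % points.shape[1]
        cut = np.median(points[:, dim])
        lo, hi = [b.copy() for b in bounds], [b.copy() for b in bounds]
        lo[dim][1] = cut   # left child: upper bound becomes the median
        hi[dim][0] = cut   # right child: lower bound becomes the median
        left = points[points[:, dim] <= cut]
        right = points[points[:, dim] > cut]
        return split(left, lo, level + 1) + split(right, hi, level + 1)
    start = [np.array([-np.inf, np.inf]) for _ in range(data.shape[1])]
    return split(data, start, 0)

def bin_counts(data, bins):
    counts = []
    for bounds in bins:
        inside = np.ones(len(data), dtype=bool)
        for dim, (lo, hi) in enumerate(bounds):
            inside &= (data[:, dim] > lo) & (data[:, dim] <= hi)
        counts.append(inside.sum())
    return np.array(counts)

rng = np.random.default_rng(4)
control = rng.normal(0, 1, (2000, 2))
test = rng.normal(0.8, 1, (2000, 2))    # test sample with a shifted population

bins = median_bins(control, depth=4)    # 2**4 = 16 bins of equal control occupancy
observed, expected = bin_counts(test, bins), bin_counts(control, bins)
chi2 = ((observed - expected) ** 2 / expected).sum()
print(len(bins), chi2)  # 16 bins; chi2 far above the df=15 critical value
```

Because the bins equalize control occupancy rather than volume, dense regions are finely partitioned while sparse regions are not, which is the point of density-guided partitioning.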
Finally, cytometric fingerprinting uses a variant of frequency difference gating to set bins and measure for a series of samples how many cells fall within each bin. [23] These bins can be used as gates and used for subsequent analysis similarly to automated gating methods.
High-dimensional clustering algorithms are often unable to identify rare cell types that are not well separated from other major populations. Matching these small cell populations across multiple samples is even more challenging. In manual analysis, prior biological knowledge (e.g., biological controls) provides guidance to reasonably identify these populations. However, integrating this information into the exploratory clustering process (e.g., as in semi-supervised learning) has not been successful.
An alternative to high-dimensional clustering is to identify cell populations using one marker at a time and then combine them to produce higher-dimensional clusters. This functionality was first implemented in FlowJo. [28] The flowType algorithm builds on this framework by allowing markers to be excluded. [53] This enables the development of statistical tools (e.g., RchyOptimyx) that can investigate the importance of each marker and exclude high-dimensional redundancies. [54]
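Combinatorial gating in the flowType style, in which each marker may be positive, negative, or excluded from a phenotype, can be sketched as follows. The marker names, thresholds, and data are all illustrative:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
markers = ["CD4", "CD8", "CD45"]
data = rng.lognormal(1.0, 1.0, (1000, 3))
thresholds = np.median(data, axis=0)   # illustrative per-marker gates

positive = data > thresholds           # one boolean +/- call per marker

# Enumerate every phenotype: each marker is +, -, or ignored.
counts = {}
for states in product(["+", "-", "ignore"], repeat=len(markers)):
    cells = np.ones(len(data), dtype=bool)
    name = []
    for m, state in enumerate(states):
        if state == "+":
            cells &= positive[:, m]
            name.append(markers[m] + "+")
        elif state == "-":
            cells &= ~positive[:, m]
            name.append(markers[m] + "-")
    counts["".join(name) or "all"] = int(cells.sum())

print(len(counts))     # 27 phenotypes (3 states for each of 3 markers)
print(counts["CD4+"])  # 500: exactly half the cells exceed the median gate
```

With k markers this enumerates 3^k phenotypes, which is why tools such as RchyOptimyx are needed to prune the redundant ones.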
After identification of the cell population of interest, a cross sample analysis can be performed to identify phenotypical or functional variations that are correlated with an external variable (e.g., a clinical outcome). These studies can be partitioned into two main groups:
In these studies, the goal usually is to diagnose a disease (or a sub-class of a disease) using variations in one or more cell populations. For example, one can use multidimensional clustering to identify a set of clusters, match them across all samples, and then use supervised learning to construct a classifier for prediction of the classes of interest (e.g., this approach can be used to improve the accuracy of the classification of specific lymphoma subtypes [55] ). Alternatively, all the cells from the entire cohort can be pooled into a single multidimensional space for clustering before classification. [56] This approach is particularly suitable for datasets with a high amount of biological variation (in which cross-sample matching is challenging) but requires technical variations to be carefully controlled. [57]
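The cluster-then-classify approach can be sketched with synthetic per-sample cluster frequencies and a deliberately minimal nearest-centroid classifier standing in for the supervised learning step (real studies use stronger classifiers and held-out validation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-sample features: the fraction of cells in each of 4
# matched cell populations, for 10 healthy and 10 diseased samples.
healthy = rng.dirichlet([10, 10, 5, 5], size=10)
disease = rng.dirichlet([10, 10, 1, 9], size=10)   # population 3 depleted

X = np.vstack([healthy, disease])
y = np.array([0] * 10 + [1] * 10)

# Minimal supervised classifier: assign each sample to the nearest class mean.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(samples):
    d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

print((predict(X) == y).mean())  # training accuracy; high when classes separate
```

The essential idea is that once populations are matched across samples, each sample collapses to a short feature vector, and standard supervised learning applies.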
In a discovery setting, the goal is to identify and describe cell populations correlated with an external variable (as opposed to the diagnosis setting, in which the goal is to combine the predictive power of multiple cell types to maximize the accuracy of the results). Similar to the diagnosis use-case, cluster matching in high-dimensional space can be used for exploratory analysis, but the descriptive power of this approach is very limited, as it is hard to characterize and visualize a cell population in a high-dimensional space without first reducing the dimensionality. [56] [58] Finally, combinatorial gating approaches have been particularly successful in exploratory analysis of FCM data. Simplified Presentation of Incredibly Complex Evaluations (SPICE) is a software package that can use the gating functionality of FlowJo to statistically evaluate a wide range of different cell populations and visualize those that are correlated with the external outcome. flowType and RchyOptimyx (discussed above) extend this technique by adding the ability to explore the impact of independent markers on the overall correlation with the external outcome. This enables the removal of unnecessary markers and provides a simple visualization of all identified cell types. In a recent analysis of a large (n=466) cohort of HIV+ patients, this pipeline identified three correlates of protection against HIV, only one of which had been previously identified through extensive manual analysis of the same dataset. [53]
Flow Cytometry Standard (FCS) was developed in 1984 to allow recording and sharing of flow cytometry data. [59] Since then, FCS became the standard file format supported by all flow cytometry software and hardware vendors. The FCS specification has traditionally been developed and maintained by the International Society for Advancement of Cytometry (ISAC). [60] Over the years, updates were incorporated to adapt to technological advancements in both flow cytometry and computing technologies with FCS 2.0 introduced in 1990, [61] FCS 3.0 in 1997, [62] and the most current specification FCS 3.1 in 2010. [63] FCS used to be the only widely adopted file format in flow cytometry. Recently, additional standard file formats have been developed by ISAC.
ISAC is considering replacing FCS with a flow cytometry specific version of the Network Common Data Form (netCDF) file format. [64] netCDF is a set of freely available software libraries and machine independent data formats that support the creation, access, and sharing of array-oriented scientific data. In 2008, ISAC drafted the first version of netCDF conventions for storage of raw flow cytometry data. [65]
The Archival Cytometry Standard (ACS) is being developed to bundle data with different components describing cytometry experiments. It captures relations among data, metadata, analysis files and other components, and includes support for audit trails, versioning and digital signatures. [66] The ACS container is based on the ZIP file format with an XML-based table of contents specifying relations among files in the container. The XML Signature W3C Recommendation has been adopted to allow for digital signatures of components within the ACS container. An initial draft of ACS was designed in 2007 and finalized in 2010. Since then, ACS support has been introduced in several software tools, including FlowJo and Cytobank.
The lack of gating interoperability has traditionally been a bottleneck preventing reproducibility of flow cytometry data analysis and the use of multiple analytical tools. To address this shortcoming, ISAC developed Gating-ML, an XML-based mechanism to formally describe gates and related data (scale) transformations. [10] The draft recommendation version of Gating-ML was approved by ISAC in 2008, and it is partially supported by tools like FlowJo, the flowUtils and CytoML libraries in R/BioConductor, and FlowRepository. [66] It supports rectangular gates, polygon gates, convex polytopes, ellipsoids, decision trees and Boolean collections of any of the other types of gates. In addition, it includes dozens of built-in public transformations that have been shown to be potentially useful for display or analysis of cytometry data. In 2013, Gating-ML version 2.0 was approved by ISAC's Data Standards Task Force as a Recommendation. This new version offers slightly less flexibility in terms of the power of gating description; however, it is also significantly easier to implement in software tools. [11]
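Applying a rectangular gate of the kind Gating-ML describes amounts to a per-dimension bounds check on each event. A minimal sketch with synthetic data (the gate coordinates are arbitrary, and this illustrates only the semantics, not the XML encoding):

```python
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(0, 1, (1000, 2))   # columns: two fluorescence channels

# A rectangular gate: per-channel (min, max) bounds, open-ended allowed.
gate = {0: (-1.0, 1.0), 1: (0.0, np.inf)}   # channel index -> (min, max)

inside = np.ones(len(data), dtype=bool)
for channel, (lo, hi) in gate.items():
    inside &= (data[:, channel] >= lo) & (data[:, channel] < hi)

gated = data[inside]
print(len(gated), "of", len(data), "events fall inside the gate")
```

Polygon gates and ellipsoids follow the same pattern with geometric membership tests in place of the bounds check.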
The Classification Results (CLR) File Format [67] has been developed to exchange the results of manual gating and algorithmic classification approaches in a standard way, in order to be able to report and process the classification. CLR is based on the commonly supported CSV file format, with columns corresponding to different classes and cell values containing the probability of an event being a member of a particular class. These are captured as values between 0 and 1. Simplicity of the format and its compatibility with common spreadsheet tools have been the major requirements driving the design of the specification. Although it was originally designed for the field of flow cytometry, it is applicable in any domain that needs to capture either fuzzy or unambiguous classifications of virtually any kind of object.
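A CLR-style table can be produced with any CSV tool. The class names and probabilities below are invented for illustration; each row is one event and each column one class, with values in [0, 1]:

```python
import csv
import io

# Hypothetical CLR-style table: one row per event, one column per class.
header = ["T cells", "B cells", "Monocytes"]
rows = [
    [0.95, 0.03, 0.02],   # event classified as a T cell with high probability
    [0.10, 0.85, 0.05],   # fuzzy classification, mostly B cell
    [0.00, 0.00, 1.00],   # unambiguous classification
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(header)
writer.writerows(rows)
clr_text = buffer.getvalue()

print(clr_text)
```

Hard (unambiguous) classifications are simply the special case where every probability is 0 or 1, which is why one format serves both manual gates and probabilistic clustering output.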
As in other bioinformatics fields, development of new methods has primarily taken the form of free open source software, and several databases have been created for depositing open data.
AutoGate [68] performs compensation, gating, cluster previewing, exhaustive projection pursuit (EPP), multidimensional scaling and phenogram generation, and produces a visual dendrogram to express HiD readiness. It is free to researchers and clinicians at academic, government, and non-profit institutions.
The Bioconductor project is a repository of free open source software, mostly written in the R programming language. [69] As of July 2013, Bioconductor contained 21 software packages for processing flow cytometry data. [70] These packages cover most of the range of functionality described earlier in this article.
GenePattern is a predominantly genomic analysis platform with over 200 tools for analysis of gene expression, proteomics, and other data. A web-based interface provides easy access to these tools and allows the creation of automated analysis pipelines, enabling reproducible research. Recently, a GenePattern Flow Cytometry Suite has been developed to bring advanced flow cytometry data analysis tools to experimentalists without programming skills. It contains close to 40 open source GenePattern flow cytometry modules covering methods from basic processing of Flow Cytometry Standard (FCS) files to advanced algorithms for automated identification of cell populations, normalization and quality assessment. Internally, most of these modules leverage functionality developed in BioConductor.
Much of the functionality of the Bioconductor flow cytometry packages is thereby also accessible through the GenePattern [71] workflow system, in the form of the GenePattern Flow Cytometry Suite. [72]
FACSanadu [73] is an open source portable application for visualization and analysis of FCS data. Unlike Bioconductor, it is an interactive program aimed at non-programmers for routine analysis. It supports standard FCS files as well as COPAS profile data.
hema.to is a web service for the classification of flow cytometry data from patients suspected to have lymphoma. [74] The artificial intelligence within the tool uses a deep convolutional neural network to recognize patterns of distinct subtypes. All data and code are open access. [75] It processes raw data, which makes gating unnecessary. For best performance on new data, fine-tuning via transfer learning is required. [76]
The Minimum Information about a Flow Cytometry Experiment (MIFlowCyt) standard requires that any flow cytometry data used in a publication be available, although this does not include a requirement that it be deposited in a public database. [77] Thus, although the journals Cytometry Part A and B, as well as all journals from the Nature Publishing Group, require MIFlowCyt compliance, there is still relatively little publicly available flow cytometry data. Some efforts have been made towards creating public databases, however.
Firstly, CytoBank, which is a complete web-based flow cytometry data storage and analysis platform, has been made available to the public in a limited form. [78] Using the CytoBank code base, FlowRepository was developed in 2012 with the support of ISAC to be a public repository of flow cytometry data. [79] FlowRepository facilitates MIFlowCyt compliance, [80] and as of July 2013 contained 65 public data sets. [81]
In 2012, the flow cytometry community began to release a set of publicly available datasets. A subset of these datasets, representing the existing data-analysis challenges, is described below. For comparison against manual gating, the FlowCAP-I project released five datasets manually gated by human analysts, two of them gated by eight independent analysts. [26] The FlowCAP-II project included three datasets for binary classification, and also reported several algorithms that were able to classify these samples perfectly. FlowCAP-III included two larger datasets for comparison against manual gates, as well as one more challenging sample classification dataset. As of March 2013, public release of FlowCAP-III was still in progress. [82] The datasets used in FlowCAP-I, II, and III have either a low number of subjects or a low number of parameters. However, several more complex clinical datasets have recently been released, including a dataset of 466 HIV-infected subjects that provides both 14-parameter assays and sufficient clinical information for survival analysis. [54] [83] [84] [85]
Another class of datasets are higher-dimensional mass cytometry assays. A representative of this class of datasets is a study which includes analysis of two bone marrow samples using more than 30 surface or intracellular markers under a wide range of different stimulations. [8] The raw data for this dataset is publicly available as described in the manuscript, and manual analyses of the surface markers are available upon request from the authors.
Despite rapid development in the field of flow cytometry bioinformatics, several problems remain to be addressed.
Variability across flow cytometry experiments arises from biological variation among samples, technical variations across instruments used, as well as methods of analysis. In 2010, a group of researchers from Stanford University and the National Institutes of Health pointed out that while technical variation can be ameliorated by standardizing sample handling, instrument setup and choice of reagents, solving variation in analysis methods will require similar standardization and computational automation of gating methods. [86] They further opined that centralization of both data and analysis could aid in decreasing variability between experiments and in comparing results. [86]
This was echoed by another group of researchers from Pacific Biosciences and Stanford University, who suggested that cloud computing could enable centralized, standardized, high-throughput analysis of flow cytometry experiments, [87] and emphasised that ongoing development and adoption of standard data formats could continue to reduce variability across experiments. [87] They further proposed that new methods will be needed to model and summarize results of high-throughput analysis in ways that biologists can interpret, [87] as well as ways of integrating large-scale flow cytometry data with other high-throughput biological information, such as gene expression, genetic variation, metabolite levels and disease states. [87]
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is often referred to as computational biology, though the distinction between the two terms is often disputed.
Proteomics is the large-scale study of proteins. Proteins are vital macromolecules of all living organisms, with many functions such as the formation of structural fibers of muscle tissue, enzymatic digestion of food, or synthesis and replication of DNA. In addition, other kinds of proteins include antibodies that protect an organism from infection, and hormones that send important signals throughout the body.
Flow cytometry (FC) is a technique used to detect and measure the physical and chemical characteristics of a population of cells or particles.
Hoechst stains are part of a family of blue fluorescent dyes used to stain DNA. These bis-benzimides were originally developed by Hoechst AG, which numbered all their compounds so that the dye Hoechst 33342 is the 33,342nd compound made by the company. There are three related Hoechst stains: Hoechst 33258, Hoechst 33342, and Hoechst 34580. The dyes Hoechst 33258 and Hoechst 33342 are the ones most commonly used and they have similar excitation–emission spectra.
In microbiology, a colony-forming unit is a unit which estimates the number of microbial cells in a sample that are viable, able to multiply via binary fission under the controlled conditions. Counting with colony-forming units requires culturing the microbes and counts only viable cells, in contrast with microscopic examination which counts all cells, living or dead. The visual appearance of a colony in a cell culture requires significant growth, and when counting colonies, it is uncertain if the colony arose from a single cell or a group of cells. Expressing results as colony-forming units reflects this uncertainty.
GenePattern is a freely available computational biology open-source software package originally created and developed at the Broad Institute for the analysis of genomic data. Designed to enable researchers to develop, capture, and reproduce genomic analysis methodologies, GenePattern was first released in 2004. GenePattern is currently developed at the University of California, San Diego.
Cell sorting is the process through which a particular cell type is separated from others contained in a sample on the basis of its physical or biological properties, such as size, morphological parameters, viability and both extracellular and intracellular protein expression. The homogeneous cell population obtained after sorting can be used for a variety of applications including research, diagnosis, and therapy.
FlowJo is a software package for analyzing flow cytometry data. Files produced by modern flow cytometers are written in the Flow Cytometry Standard format with an .fcs file extension. FlowJo will import and analyze cytometry data regardless of which flow cytometer is used to collect the data.
LabKey Server is a software suite available for scientists to integrate, analyze, and share biomedical research data. The platform provides a secure data repository that allows web-based querying, reporting, and collaborating across a range of data sources. Specific scientific applications and workflows can be added on top of the basic platform and leverage a data processing pipeline.
Cell cycle analysis by DNA content measurement is a method, most frequently employing flow cytometry, that distinguishes cells in different phases of the cell cycle. Before analysis, the cells are usually permeabilised and treated with a fluorescent dye that stains DNA quantitatively, such as propidium iodide (PI) or 4′,6-diamidino-2-phenylindole (DAPI). The fluorescence intensity of the stained cells correlates with the amount of DNA they contain. Because DNA content doubles during the S phase, the DNA content (and thereby the fluorescence intensity) distinguishes cells in the G0 and G1 phases (before S), in the S phase, and in the G2 and M phases (after S), identifying position in the major phases (G0/G1 versus S versus G2/M) of the cell cycle. The DNA content of individual cells is often plotted as a frequency histogram to provide information about the relative frequency (percentage) of cells in the major phases of the cell cycle.
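The histogram-based phase assignment described above can be sketched with synthetic data. The intensity values and gating thresholds here are hypothetical; in practice thresholds are placed around the observed 2N and 4N peaks of the DNA-content histogram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic DNA-stain intensities (arbitrary units): G0/G1 cells near the
# 2N peak, G2/M cells near the 4N peak, S-phase cells spread in between.
g0g1 = rng.normal(100, 5, 600)
s = rng.uniform(115, 185, 250)
g2m = rng.normal(200, 8, 150)
intensity = np.concatenate([g0g1, s, g2m])

# Gate on intensity: these thresholds are illustrative only.
phase = np.where(intensity < 115, "G0/G1",
                 np.where(intensity < 185, "S", "G2/M"))

counts = {p: int((phase == p).sum()) for p in ("G0/G1", "S", "G2/M")}
fractions = {p: c / len(intensity) for p, c in counts.items()}
```

The resulting fractions correspond to the relative frequencies that would be read off the major-phase regions of the DNA-content histogram.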
Cytometry by time of flight, or CyTOF, is an application of mass cytometry used to quantify labeled targets on the surface and interior of single cells. CyTOF allows the quantification of multiple cellular components simultaneously using an ICP-MS detector.
Neuronal tracing, or neuron reconstruction, is a technique used in neuroscience to determine the pathway of the neurites or neuronal processes (the axons and dendrites) of a neuron. From a sample preparation point of view, it may be accomplished using a variety of genetic and other neuron labeling techniques.
Mass cytometry is a mass spectrometry technique based on inductively coupled plasma mass spectrometry and time of flight mass spectrometry used for the determination of the properties of cells (cytometry). In this approach, antibodies are conjugated with isotopically pure elements, and these antibodies are used to label cellular proteins. Cells are nebulized and sent through an argon plasma, which ionizes the metal-conjugated antibodies. The metal signals are then analyzed by a time-of-flight mass spectrometer. The approach overcomes limitations of spectral overlap in flow cytometry by utilizing discrete isotopes as a reporter system instead of traditional fluorophores which have broad emission spectra.
The EuroFlow consortium was founded in 2005 as an EU FP6-funded project and launched in spring 2006. At first, EuroFlow comprised 18 diagnostic research groups and two SMEs from eight European countries with complementary knowledge and skills in the field of flow cytometry and immunophenotyping. During 2012, both SMEs left the project, and the consortium thereby gained full scientific independence. The goal of the EuroFlow consortium is to innovate and standardize flow cytometry, leading to global improvement in the diagnosis of haematological malignancies and the individualization of treatment.
An imaging cycler microscope (ICM) is a fully automated (epi)fluorescence microscope that overcomes the spectral resolution limit, enabling parameter- and dimension-unlimited fluorescence imaging. The principle and robotic device were described by Walter Schubert in 1997 and have been further developed with his co-workers within the human toponome project. The ICM runs robotically controlled, repetitive incubation–imaging–bleaching cycles with dye-conjugated probe libraries that recognize target structures in situ (biomolecules in fixed cells or tissue sections). Because the same fluorescence channel can be re-used after bleaching, with the same dye conjugated to a different specific probe in each cycle, an arbitrarily large number of distinct biological signals can be transmitted, yielding noise-reduced, quasi-multichannel fluorescence images with reproducible physical, geometrical, and biophysical stability. The resulting power of combinatorial molecular discrimination (PCMD) per data point is 65,536^k, where 65,536 is the number of grey-value levels (the output of a 16-bit CCD camera) and k is the number of co-mapped biomolecules and/or subdomains per biomolecule. High PCMD has been demonstrated for k = 100 and can in principle be extended to much larger k. In contrast to traditional multichannel, few-parameter fluorescence microscopy (panel a in the figure), the high PCMD of an ICM leads to high functional and spatial resolution (panel b in the figure). Systematic ICM analysis of biological systems reveals the supramolecular segregation law, which describes the ordering principle of large, hierarchically organized biomolecular networks in situ (the toponome). The ICM is the core technology for systematically mapping the complete protein network code in tissues (the human toponome project). The original ICM method includes any modification of the bleaching step.
Corresponding modifications have been reported for antibody retrieval and for chemical dye quenching, which have been debated recently. The Toponome Imaging Systems (TIS) and multi-epitope-ligand cartographs (MELC) represent different stages of the ICM's technological development. Imaging cycler microscopy received the American ISAC best paper award in 2008 for the three-symbol code of organized proteomes.
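The PCMD figure quoted above (65,536^k) grows combinatorially with the number of co-mapped biomolecules, which a few lines of arithmetic make concrete. The constant and exponent are taken from the text; the function name is illustrative.

```python
# Power of combinatorial molecular discrimination (PCMD) per data point:
# the grey-value levels of a 16-bit CCD camera raised to the number k of
# co-mapped biomolecules, as stated in the entry above.
GREY_LEVELS = 2 ** 16  # 65,536


def pcmd(k: int) -> int:
    return GREY_LEVELS ** k


# For k = 100 co-mapped biomolecules the state space per data point is
# a number with several hundred decimal digits.
digits = len(str(pcmd(100)))
```

For k = 1 this reduces to the 65,536 intensity levels of a single channel; for k = 100 it is 2^1600, a 482-digit number.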
Flow Cytometry Standard (FCS) is a data file standard for the reading and writing of data from flow cytometry experiments. The FCS specification has traditionally been developed and maintained by the International Society for Advancement of Cytometry (ISAC). FCS used to be the only widely adopted file format in flow cytometry. Recently, additional standard file formats have been developed by ISAC.
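An FCS file begins with a fixed-width ASCII header: a 6-byte version string, 4 spaces, and six 8-byte right-justified byte offsets locating the TEXT, DATA, and ANALYSIS segments (layout per the FCS 3.1 specification). A minimal sketch of parsing that header, using a synthetic header rather than a real file:

```python
def parse_fcs_header(raw: bytes) -> dict:
    """Parse the 58-byte fixed-width FCS header into a version string and
    the begin/end byte offsets of the TEXT, DATA and ANALYSIS segments."""
    version = raw[0:6].decode("ascii")
    names = ["text_begin", "text_end", "data_begin", "data_end",
             "analysis_begin", "analysis_end"]
    offsets = {}
    for i, name in enumerate(names):
        # Six 8-byte, space-padded ASCII integer fields start at byte 10.
        field = raw[10 + 8 * i: 18 + 8 * i].decode("ascii").strip()
        offsets[name] = int(field) if field else 0
    return {"version": version, **offsets}


# A synthetic, hypothetical header (version + 4 spaces + six offsets):
header = b"FCS3.1    " + b"".join(
    f"{n:>8d}".encode("ascii") for n in [58, 1024, 1025, 40960, 0, 0])
info = parse_fcs_header(header)
```

Real-world parsers (e.g. the Bioconductor flowCore package or FlowJo) additionally handle very large files, where the header offsets may be zero and the true offsets must be read from the TEXT segment.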
Minimum information standards are sets of guidelines and formats for reporting data derived by specific high-throughput methods. Their purpose is to ensure the data generated by these methods can be easily verified, analysed and interpreted by the wider scientific community. Ultimately, they facilitate the transfer of data from journal articles into databases in a form that enables data to be mined across multiple data sets. Minimal information standards are available for a vast variety of experiment types including microarray (MIAME), RNAseq (MINSEQE), metabolomics (MSI) and proteomics (MIAPE).
Tissue image cytometry, or tissue cytometry, is a method of digital histopathology that combines classical digital pathology and computational pathology into one integrated approach, applicable across diseases, tissue and cell types, molecular markers, and the staining methods used to visualize those markers. Tissue cytometry uses virtual slides, as generated by a number of commercially available slide scanners, together with dedicated image analysis software, preferably including machine learning and deep learning algorithms. Tissue cytometry enables cellular analysis within thick tissues, retaining morphological and contextual information, including spatial information on defined cellular subpopulations.
Single-cell multi-omics integration describes a suite of computational methods used to harmonize information from multiple "omes" to jointly analyze biological phenomena. This approach allows researchers to discover intricate relationships between different chemical-physical modalities by drawing associations across various molecular layers simultaneously. Multi-omics integration approaches can be broadly categorized as early, intermediate, or late integration methods. Multi-omics integration can enhance experimental robustness by providing independent sources of evidence to address hypotheses, by leveraging modality-specific strengths to compensate for another modality's weaknesses through imputation, and by offering cell-type clustering and visualizations that are better aligned with reality.
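As a minimal sketch of the simplest of these categories, early integration concatenates per-modality feature matrices, measured on the same cells, into one joint matrix before analysis. The data here are synthetic and the per-feature scaling step is one common choice, not the only one.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 200

# Two hypothetical modalities measured on the same 200 cells,
# e.g. protein abundances and transcript counts (synthetic data).
proteins = rng.normal(size=(n_cells, 10))
transcripts = rng.poisson(5.0, size=(n_cells, 50)).astype(float)


def zscore(x: np.ndarray) -> np.ndarray:
    # Per-feature standardization so no modality dominates by scale.
    return (x - x.mean(axis=0)) / x.std(axis=0)


# Early integration: one cell-by-feature matrix for joint downstream
# analysis such as clustering or dimensionality reduction.
joint = np.hstack([zscore(proteins), zscore(transcripts)])
```

Intermediate and late integration instead combine the modalities during or after per-modality modeling, for example by merging per-modality cluster assignments rather than raw features.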
This article was adapted from the following source under a CC BY 4.0 license (2013) (reviewer reports): Kieran O'Neill; Nima Aghaeepour; Josef Spidlen; Ryan Brinkman (5 December 2013). "Flow cytometry bioinformatics". PLOS Computational Biology. 9 (12): e1003365. doi:10.1371/journal.pcbi.1003365. ISSN 1553-734X. PMC 3867282. PMID 24363631. Wikidata Q21045422.