In statistics, the strictly standardized mean difference (SSMD) is a measure of effect size. It is the mean divided by the standard deviation of the difference between two random values, each drawn from one of two groups. It was initially proposed for quality control [1] and hit selection [2] in high-throughput screening (HTS) and has become a statistical parameter for measuring effect size in the comparison of any two groups with random values. [3]
In high-throughput screening (HTS), quality control (QC) is critical. An important QC characteristic of an HTS assay is how much the positive controls, test compounds, and negative controls differ from one another, which can be evaluated by comparing two well types in the assay. The signal-to-noise ratio (S/N), signal-to-background ratio (S/B), and the Z-factor have been adopted to evaluate the quality of HTS assays through such comparisons. However, the S/B does not take into account any information on variability, and the S/N can capture the variability in only one group, so neither can assess assay quality when the two groups have different variabilities. [1] Zhang JH et al. proposed the Z-factor. [4] The advantage of the Z-factor over the S/N and S/B is that it takes into account the variabilities in both compared groups, and as a result it has been broadly used as a QC metric in HTS assays. However, the absolute-value sign in the Z-factor makes it inconvenient to derive its statistical inference mathematically.
To derive a more readily interpretable parameter for measuring the differentiation between two groups, Zhang XHD [1] proposed SSMD to evaluate the differentiation between a positive control and a negative control in HTS assays. SSMD has a probabilistic basis due to its strong link with the d+-probability (i.e., the probability that the difference between two groups is positive). [2] To some extent, the d+-probability is equivalent to the well-established probabilistic index P(X > Y), which has been studied and applied in many areas. [5][6][7][8][9] Supported by its probabilistic basis, SSMD has been used for both quality control and hit selection in high-throughput screening. [1][2][10][11][12][13][14][15][16][17][18][19][20][21]
As a statistical parameter, SSMD (denoted $\beta$) is defined as the ratio of the mean to the standard deviation of the difference between two random values, one from each of two groups. Assume that one group of random values has mean $\mu_1$ and variance $\sigma_1^2$ and another group has mean $\mu_2$ and variance $\sigma_2^2$, with covariance $\sigma_{12}$ between the two groups. Then the SSMD for the comparison of these two groups is defined as [1]

$$\beta = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2 - 2\sigma_{12}}}.$$
If the two groups are independent,

$$\beta = \frac{\mu_1 - \mu_2}{\sqrt{\sigma_1^2 + \sigma_2^2}}.$$
If the two independent groups have equal variances $\sigma_1^2 = \sigma_2^2 = \sigma^2$,

$$\beta = \frac{\mu_1 - \mu_2}{\sqrt{2}\,\sigma}.$$
In the situation where the two groups are correlated, a commonly used strategy to avoid the calculation of $\sigma_{12}$ is first to obtain paired observations from the two groups and then to estimate SSMD based on the paired observations. Based on a paired difference $D$ with population mean $\mu_D$ and variance $\sigma_D^2$, SSMD is

$$\beta = \frac{\mu_D}{\sigma_D}.$$
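As a worked example with assumed values: for two independent groups with $\mu_1 = 3$, $\mu_2 = 1$, and common variance $\sigma^2 = 2$, the SSMD is $\beta = (3 - 1)/\sqrt{2 \cdot 2} = 1$; under normality, this corresponds to a d+-probability of $\Phi(1) \approx 0.84$ that a random draw from group 1 exceeds a random draw from group 2.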
In the situation where the two groups are independent, Zhang XHD [1] derived the maximum-likelihood estimate (MLE) and method-of-moment (MM) estimate of SSMD. Assume that groups 1 and 2 have sample means $\bar{X}_1, \bar{X}_2$ and sample variances $s_1^2, s_2^2$. The MM estimate of SSMD is then [1]

$$\hat{\beta} = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{s_1^2 + s_2^2}}.$$
When the two groups have normal distributions with equal variance, the uniformly minimal variance unbiased estimate (UMVUE) of SSMD is [10]

$$\hat{\beta} = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{2}{K}\left((n_1 - 1)s_1^2 + (n_2 - 1)s_2^2\right)}},$$

where $n_1, n_2$ are the sample sizes in the two groups and $K \approx n_1 + n_2 - 3.48$. [3]
In the situation where the two groups are correlated, based on a paired difference with sample size $n$, sample mean $\bar{D}$ and sample variance $s_D^2$, the MM estimate of SSMD is

$$\hat{\beta} = \frac{\bar{D}}{s_D}.$$
The UMVUE estimate of SSMD is [22]

$$\hat{\beta} = \frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n-2}{2}\right)}\sqrt{\frac{2}{n-1}}\,\frac{\bar{D}}{s_D}.$$
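As a hedged illustration (not code from the cited papers; function names are invented), the estimators above can be written in Python as:

```python
# Sketch: method-of-moment (MM) and UMVUE estimates of SSMD,
# following the formulas above. Standard library only.
import math
from statistics import mean, variance

def ssmd_mm_independent(x1, x2):
    # MM estimate for two independent groups.
    return (mean(x1) - mean(x2)) / math.sqrt(variance(x1) + variance(x2))

def ssmd_umvue_independent(x1, x2):
    # UMVUE under normality and equal variances; K ~ n1 + n2 - 3.48.
    n1, n2 = len(x1), len(x2)
    k = n1 + n2 - 3.48
    pooled = (n1 - 1) * variance(x1) + (n2 - 1) * variance(x2)
    return (mean(x1) - mean(x2)) / math.sqrt(2.0 / k * pooled)

def ssmd_mm_paired(d):
    # MM estimate from paired differences d.
    return mean(d) / math.sqrt(variance(d))

def ssmd_umvue_paired(d):
    # UMVUE from paired differences d; requires len(d) >= 3.
    n = len(d)
    c = math.gamma((n - 1) / 2) / math.gamma((n - 2) / 2)
    return c * math.sqrt(2 / (n - 1)) * mean(d) / math.sqrt(variance(d))
```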
SSMD looks similar to the t-statistic and Cohen's d, but all three differ from one another, as illustrated in [3].
SSMD is the ratio of the mean to the standard deviation of the difference between two groups. When the data are preprocessed using a log-transformation, as is standard in HTS experiments, SSMD is the mean of the log fold change divided by the standard deviation of the log fold change with respect to a negative reference. In other words, SSMD is the average fold change (on the log scale) penalized by the variability of the fold change (on the log scale). [23] For quality control, one index of the quality of an HTS assay is the magnitude of the difference between a positive control and a negative reference in an assay plate. For hit selection, the size of the effect of a compound (i.e., a small molecule or an siRNA) is represented by the magnitude of the difference between the compound and a negative reference. SSMD directly measures the magnitude of the difference between two groups; therefore, SSMD can be used for both quality control and hit selection in HTS experiments.
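As a small hedged illustration of this interpretation (readout numbers invented), SSMD can be computed directly from log fold changes with respect to a negative reference:

```python
# Illustration only: SSMD as the mean of log fold changes divided by
# their standard deviation (example numbers invented).
import math
from statistics import mean, stdev

compound = [820.0, 910.0, 760.0]   # raw readouts for one compound
negative_ref = 400.0               # plate negative-reference level
log_fc = [math.log2(x / negative_ref) for x in compound]
ssmd = mean(log_fc) / stdev(log_fc)  # fold change penalized by its variability
```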
The number of wells for the positive and negative controls in a plate in the 384-well or 1536-well platform is normally designed to be reasonably large. [24] Assume that the positive and negative controls in a plate have sample means $\bar{X}_P, \bar{X}_N$, sample variances $s_P^2, s_N^2$, and sample sizes $n_P, n_N$. Usually the assumption that the controls have equal variance in a plate holds, in which case the SSMD for assessing quality in that plate is estimated as [10]

$$\hat{\beta} = \frac{\bar{X}_P - \bar{X}_N}{\sqrt{\frac{2}{K}\left((n_P - 1)s_P^2 + (n_N - 1)s_N^2\right)}},$$

where $K \approx n_P + n_N - 3.48$. When the assumption of equal variance does not hold, the SSMD for assessing quality in that plate is estimated as [1]

$$\hat{\beta} = \frac{\bar{X}_P - \bar{X}_N}{\sqrt{s_P^2 + s_N^2}}.$$
If there are clear outliers in the controls, the SSMD can be estimated as [23]

$$\hat{\beta} = \frac{\tilde{X}_P - \tilde{X}_N}{1.4826\sqrt{\tilde{s}_P^2 + \tilde{s}_N^2}},$$

where $\tilde{X}_P, \tilde{X}_N$ and $\tilde{s}_P, \tilde{s}_N$ are the medians and median absolute deviations in the positive and negative controls, respectively.
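A hedged Python sketch of the three plate-QC estimates above (function names invented for illustration):

```python
# Sketch: plate quality-control SSMD estimates from positive- and
# negative-control readouts (lists of floats).
import math
from statistics import mean, median, variance

def scaled_mad(x):
    # Median absolute deviation, scaled by 1.4826 to be consistent
    # with the standard deviation under normality.
    m = median(x)
    return 1.4826 * median([abs(v - m) for v in x])

def ssmd_qc_equal_var(pos, neg):
    # UMVUE-based estimate assuming equal control variances.
    n_p, n_n = len(pos), len(neg)
    k = n_p + n_n - 3.48
    pooled = (n_p - 1) * variance(pos) + (n_n - 1) * variance(neg)
    return (mean(pos) - mean(neg)) / math.sqrt(2.0 / k * pooled)

def ssmd_qc_unequal_var(pos, neg):
    # MM estimate when the equal-variance assumption fails.
    return (mean(pos) - mean(neg)) / math.sqrt(variance(pos) + variance(neg))

def ssmd_qc_robust(pos, neg):
    # Median/MAD version for plates with outliers in the controls.
    return (median(pos) - median(neg)) / math.hypot(scaled_mad(pos), scaled_mad(neg))
```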
The Z-factor-based QC criterion is popularly used in HTS assays. However, it has been demonstrated that this criterion is most suitable for an assay with a very or extremely strong positive control. [10] In an RNAi HTS assay, a strong or moderate positive control is usually more instructive than a very or extremely strong positive control, because the effectiveness of such a control is more similar to that of the hits of interest. In addition, the positive controls in these two types of HTS experiments theoretically have different effect sizes; consequently, the QC thresholds for a moderate control should differ from those for a strong control. Furthermore, it is common for two or more positive controls to be adopted in a single experiment. [11] Applying the same Z-factor-based QC criterion to both controls leads to inconsistent results, as illustrated in the literature. [10][11]
The SSMD-based QC criteria listed in the following table [20] take into account the effect size of a positive control in an HTS assay where the positive control (such as an inhibition control) theoretically has values less than the negative reference.
Quality Type | A: Moderate Control | B: Strong Control | C: Very Strong Control | D: Extremely Strong Control
---|---|---|---|---
Excellent | β ≤ −2 | β ≤ −3 | β ≤ −5 | β ≤ −7
Good | −2 < β ≤ −1 | −3 < β ≤ −2 | −5 < β ≤ −3 | −7 < β ≤ −5
Inferior | −1 < β ≤ −0.5 | −2 < β ≤ −1 | −3 < β ≤ −2 | −5 < β ≤ −3
Poor | β > −0.5 | β > −1 | β > −2 | β > −3
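A hedged lookup sketch of the table (thresholds transcribed from it; the criterion letter encodes the known strength of the positive control):

```python
# Sketch: map an estimated SSMD to a QC rating for an inhibition
# control (values below the negative reference), per the table above.
QC_THRESHOLDS = {  # criterion -> (excellent, good, inferior) upper bounds
    "A": (-2.0, -1.0, -0.5),   # moderate control
    "B": (-3.0, -2.0, -1.0),   # strong control
    "C": (-5.0, -3.0, -2.0),   # very strong control
    "D": (-7.0, -5.0, -3.0),   # extremely strong control
}

def qc_rating(ssmd, criterion):
    excellent, good, inferior = QC_THRESHOLDS[criterion]
    if ssmd <= excellent:
        return "excellent"
    if ssmd <= good:
        return "good"
    if ssmd <= inferior:
        return "inferior"
    return "poor"

print(qc_rating(-6.2, "D"))  # -> "good" under criterion D
```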
In application, if the effect size of a positive control is known biologically, adopt the corresponding criterion from this table. Otherwise, the following strategy should help determine which QC criterion to apply: (i) in many small-molecule HTS assays with one positive control, criterion D (and occasionally criterion C) should usually be adopted, because the control usually has very or extremely strong effects; (ii) for RNAi HTS assays in which cell viability is the measured response, criterion D should be adopted for controls without cells (namely, wells to which no cells are added) or background controls; (iii) in a viral assay in which the amount of virus in host cells is of interest, criterion C is usually used, and criterion D is occasionally used, for a positive control consisting of siRNA from the virus. [20]
Similar SSMD-based QC criteria can be constructed for an HTS assay where the positive control (such as an activation control) theoretically has values greater than the negative reference. More details about how to apply SSMD-based QC criteria in HTS experiments can be found in a book. [20]
In an HTS assay, one primary goal is to select compounds with a desired size of inhibition or activation effect. The size of a compound's effect is represented by the magnitude of the difference between the test compound and a negative reference group with no specific inhibition/activation effect. A compound with a desired size of effect in an HTS screen is called a hit, and the process of selecting hits is called hit selection. There are two main strategies for selecting hits with large effects. [20] One is to use certain metric(s) to rank and/or classify the compounds by their effects and then to select the largest number of potent compounds that is practical for validation assays. [17][19][22] The other strategy is to test whether a compound has effects strong enough to reach a pre-set level; in this strategy, false-negative rates (FNRs) and/or false-positive rates (FPRs) must be controlled. [14][15][16][25][26]
SSMD can not only rank the size of effects but also classify effects, as shown in the following table based on the population value ($\beta$) of SSMD. [20][27]
Effect subtype | Thresholds for negative SSMD | Thresholds for positive SSMD
---|---|---
Extremely strong | β ≤ −5 | β ≥ 5
Very strong | −5 < β ≤ −3 | 3 ≤ β < 5
Strong | −3 < β ≤ −2 | 2 ≤ β < 3
Fairly strong | −2 < β ≤ −1.645 | 1.645 ≤ β < 2
Moderate | −1.645 < β ≤ −1.28 | 1.28 ≤ β < 1.645
Fairly moderate | −1.28 < β ≤ −1 | 1 ≤ β < 1.28
Fairly weak | −1 < β ≤ −0.75 | 0.75 ≤ β < 1
Weak | −0.75 < β < −0.5 | 0.5 < β < 0.75
Very weak | −0.5 ≤ β < −0.25 | 0.25 < β ≤ 0.5
Extremely weak | −0.25 ≤ β < 0 | 0 < β ≤ 0.25
No effect | β = 0 | β = 0
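A hedged sketch classifying an SSMD value into these subtypes (boundary handling simplified to the absolute value, per the table's near-symmetric thresholds):

```python
# Sketch: classify an SSMD population value into the effect subtypes
# tabulated above.
def classify_effect(beta):
    a = abs(beta)
    if a == 0:
        return "no effect"
    for name, lower in [("extremely strong", 5), ("very strong", 3),
                        ("strong", 2), ("fairly strong", 1.645),
                        ("moderate", 1.28), ("fairly moderate", 1),
                        ("fairly weak", 0.75), ("weak", 0.5),
                        ("very weak", 0.25)]:
        if a >= lower:
            return name
    return "extremely weak"

print(classify_effect(-1.7))  # -> "fairly strong"
```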
The estimation of SSMD for screens without replicates differs from that for screens with replicates. [20] [23]
In a primary screen without replicates, assume that the measured value (usually on the log scale) in a well for a tested compound is $X_i$ and that the negative reference in that plate has sample size $n_N$, sample mean $\bar{X}_N$, median $\tilde{X}_N$, standard deviation $s_N$ and median absolute deviation $\tilde{s}_N$. The SSMD for this compound is estimated as [20][23]

$$\hat{\beta} = \frac{X_i - \bar{X}_N}{s_N\sqrt{2(n_N - 1)/K}},$$

where $K \approx n_N - 2.48$. When there are outliers in an assay, which is common in HTS experiments, a robust version of SSMD [23] can be obtained by replacing the mean and standard deviation with the median and (scaled) median absolute deviation:

$$\hat{\beta} = \frac{X_i - \tilde{X}_N}{1.4826\,\tilde{s}_N\sqrt{2(n_N - 1)/K}}.$$
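Under the formulas above (with the constant $K \approx n_N - 2.48$ as stated), a minimal Python sketch of both the standard and robust no-replicate estimates might look like this (function names invented):

```python
# Sketch: SSMD for a single compound well x_i against the plate's
# negative-reference wells neg (no replicates).
import math
from statistics import mean, median, stdev

def ssmd_no_replicates(x_i, neg):
    n = len(neg)
    k = n - 2.48
    return (x_i - mean(neg)) / (stdev(neg) * math.sqrt(2 * (n - 1) / k))

def ssmd_no_replicates_robust(x_i, neg):
    # Median and scaled median absolute deviation resist outliers.
    n = len(neg)
    k = n - 2.48
    m = median(neg)
    s = 1.4826 * median([abs(v - m) for v in neg])
    return (x_i - m) / (s * math.sqrt(2 * (n - 1) / k))
```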
In a confirmatory or primary screen with replicates, for the $i$-th test compound with $n$ replicates, we calculate the paired difference between the measured value (usually on the log scale) of the compound and the median value of a negative control in a plate, then obtain the mean $\bar{d}_i$ and variance $s_i^2$ of the paired differences across replicates. The SSMD for this compound is estimated as [20]

$$\hat{\beta} = \frac{\Gamma\!\left(\frac{n-1}{2}\right)}{\Gamma\!\left(\frac{n-2}{2}\right)}\sqrt{\frac{2}{n-1}}\,\frac{\bar{d}_i}{s_i}.$$
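A minimal Python sketch of this replicate-based estimate, assuming the paired-UMVUE form above (helper name invented; requires at least three replicates):

```python
# Sketch: SSMD for one compound from paired differences between its
# replicate values and the plate negative-control medians (n >= 3).
import math
from statistics import mean, stdev

def ssmd_with_replicates(compound_values, neg_control_medians):
    d = [x - m for x, m in zip(compound_values, neg_control_medians)]
    n = len(d)
    c = math.gamma((n - 1) / 2) / math.gamma((n - 2) / 2)
    return c * math.sqrt(2 / (n - 1)) * mean(d) / stdev(d)
```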
In many cases, scientists may use both SSMD and average fold change for hit selection in HTS experiments. The dual-flashlight plot [28] can display both the average fold change and the SSMD for all test compounds in an assay and helps to integrate the two for selecting hits in HTS experiments. [29] The use of SSMD for hit selection in HTS experiments is illustrated step by step in [23].
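A minimal matplotlib sketch of such a dual-flashlight plot (assuming matplotlib is available; axis choices follow the description above):

```python
# Sketch: dual-flashlight plot with SSMD on the y-axis and average
# log fold change on the x-axis, one point per compound.
import matplotlib.pyplot as plt

def dual_flashlight(avg_log_fc, ssmd):
    plt.scatter(avg_log_fc, ssmd, s=8, alpha=0.5)
    plt.axhline(0, linewidth=0.5)
    plt.axvline(0, linewidth=0.5)
    plt.xlabel("average log fold change")
    plt.ylabel("SSMD")
    plt.title("Dual-flashlight plot")
    plt.show()
```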