N50, L50, and related statistics

In computational biology, N50 and L50 are statistics of a set of contig or scaffold lengths. The N50 is similar to a mean or median of lengths, but gives greater weight to the longer contigs. It is used widely in genome assembly, especially in reference to contig lengths within a draft assembly. There are also the related U50, UL50, UG50, UG50%, N90, NG50, and D50 statistics.

To provide a better assessment of assembly output for viral and microbial datasets, a metric called U50 has been proposed. The U50 identifies unique, target-specific contigs by using a reference genome as a baseline, aiming to circumvent some limitations inherent to the N50 metric. By analyzing only the unique, non-overlapping contigs, U50 allows a more accurate measure of assembly performance. Most viral and microbial sequencing datasets have high background noise (i.e., host and other non-target reads), which contributes to a skewed, misrepresentative N50 value; U50 corrects for this. [1]

Definition

N50

The N50 statistic defines assembly quality in terms of contiguity. Given a set of contigs, the N50 is defined as the sequence length of the shortest contig at 50% of the total assembly length. It can be thought of as the point of half of the mass of the distribution: the number of bases in all contigs longer than the N50 will be close to the number of bases in all contigs shorter than the N50. For example, consider 9 contigs with lengths 2, 3, 4, 5, 6, 7, 8, 9, and 10; their sum is 54, half of which is 27 (here the size of the genome also happens to be 54). Summing from the longest contig downward, 10 + 9 + 8 = 27, which is half the total length. Thus N50 = 8: the size of the contig which, together with the larger contigs, contains half the sequence of the assembly. Note: when comparing N50 values from different assemblies, the assemblies must be of similar total size for the comparison to be meaningful.

N50 can be described as a weighted median statistic such that 50% of the entire assembly is contained in contigs or scaffolds equal to or larger than this value.
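As an illustration of the accumulation just described, here is a minimal sketch in Python (the function name and style are illustrative, not taken from any particular tool): sort the contig lengths in decreasing order and return the first length at which the running sum reaches half of the total.

    def n50(lengths):
        """Shortest contig length at which the running sum, taken from the
        longest contig downward, reaches half of the total assembly length."""
        total = sum(lengths)
        running = 0
        for length in sorted(lengths, reverse=True):
            running += length
            if running >= total / 2:
                return length

    print(n50([2, 3, 4, 5, 6, 7, 8, 9, 10]))  # 8, matching the example above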

L50

Given a set of contigs, each with its own length, the L50 is defined as the smallest number of contigs whose summed length makes up half of the genome size. From the example above, L50 = 3.
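A matching sketch for the L50, counting contigs rather than returning a length (again, the name is illustrative):

    def l50(lengths):
        """Smallest number of contigs whose summed length, taken from the
        longest contig downward, reaches half of the total length."""
        total = sum(lengths)
        running = 0
        for count, length in enumerate(sorted(lengths, reverse=True), start=1):
            running += length
            if running >= total / 2:
                return count

    print(l50([2, 3, 4, 5, 6, 7, 8, 9, 10]))  # 3, matching the example above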

N90

The N90 statistic is less than or equal to the N50 statistic; it is the length for which the collection of all contigs of that length or longer contains at least 90% of the sum of the lengths of all contigs.
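N50 and N90 belong to the same family of statistics, so a single generalized sketch covers both; here x is the percentage threshold (a hedged illustration, not a standard library function):

    def nx(lengths, x):
        """Shortest contig length such that contigs of that length or longer
        contain at least x% of the total assembly length."""
        threshold = sum(lengths) * x / 100
        running = 0
        for length in sorted(lengths, reverse=True):
            running += length
            if running >= threshold:
                return length

    print(nx([2, 3, 4, 5, 6, 7, 8, 9, 10], 50))  # 8 (the N50)
    print(nx([2, 3, 4, 5, 6, 7, 8, 9, 10], 90))  # 4 (the N90)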

NG50

Note that N50 is calculated in the context of the assembly size rather than the genome size. Therefore, comparisons of N50 values derived from assemblies of significantly different lengths are usually not informative, even if for the same genome. To address this, the authors of the Assemblathon competition came up with a new measure called NG50. The NG50 statistic is the same as N50 except that it is 50% of the known or estimated genome size that must be of the NG50 length or longer. This allows for meaningful comparisons between different assemblies. In the typical case that the assembly size is not more than the genome size, the NG50 statistic will not be more than the N50 statistic.
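A sketch of the NG50 under the same conventions as above; the only change from the N50 computation is that the threshold is based on the known or estimated genome size. Returning None when the assembly covers less than half the genome is an implementation choice here, not part of the definition:

    def ng50(lengths, genome_size):
        """Shortest contig length such that contigs of that length or longer
        contain at least half of the known or estimated genome size."""
        running = 0
        for length in sorted(lengths, reverse=True):
            running += length
            if running >= genome_size / 2:
                return length
        return None  # the assembly covers less than half of the genome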

D50

The D50 statistic (also termed D50 test) is similar to the N50 statistic in definition though it is generally not used to describe genome assemblies. The D50 statistic is the lowest value d for which the sum of the lengths of the largest d lengths is at least 50% of the sum of all of the lengths. [2]
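Operationally, the D50 is the same accumulation as in the L50 sketch above, applied to a generic list of lengths (for example, clone counts in a T cell repertoire) rather than to the contig lengths of a genome assembly.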

U50

U50 is the length of the smallest contig such that 50% of the sum of all unique, target-specific contigs is contained in contigs of size U50 or larger. [1]

UL50

UL50 is the number of unique, target-specific contigs whose summed length produces the U50.

UG50

UG50 is the length of the smallest contig such that 50% of the reference genome is contained in unique, target-specific contigs of size UG50 or larger.

UG50%

UG50% expresses the UG50 as a percentage of the length of the reference genome: 100 × (UG50 / length of the reference genome). As a percentage-based metric, the UG50% can be used to compare assembly results from different samples or studies.
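Computing the U-family statistics first requires identifying the unique, target-specific contigs, typically by aligning the assembly against the reference genome and resolving overlaps; that mapping step is beyond the scope of a short sketch. Assuming a precomputed list of unique contig lengths, hedged sketches of U50, UG50, and UG50% follow, reusing the n50 function from above (all names are illustrative):

    def u50(unique_lengths):
        # U50 is the N50 computation restricted to unique, target-specific contigs.
        return n50(unique_lengths)

    def ug50(unique_lengths, reference_length):
        """Like NG50, but thresholded on the reference genome length and computed
        over unique, target-specific contigs only."""
        running = 0
        for length in sorted(unique_lengths, reverse=True):
            running += length
            if running >= reference_length / 2:
                return length
        return None  # unique contigs cover less than half of the reference

    def ug50_percent(unique_lengths, reference_length):
        # 100 * (UG50 / length of reference genome), per the formula above.
        value = ug50(unique_lengths, reference_length)
        return 100 * value / reference_length if value is not None else None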

Examples

Consider two fictional, highly simplified genome assemblies, A and B, derived from two different species. Assembly A contains six contigs of lengths 80 kbp, 70 kbp, 50 kbp, 40 kbp, 30 kbp, and 20 kbp. The total size of assembly A is 290 kbp, the N50 contig length is 70 kbp because 80 + 70 is greater than 50% of 290, and the L50 contig count is 2. The contig lengths of assembly B are the same as those of assembly A, except for two additional contigs with lengths of 10 kbp and 5 kbp. The total size of assembly B is 305 kbp, the N50 contig length drops to 50 kbp because 80 + 70 + 50 is greater than 50% of 305, and the L50 contig count is 3. This example illustrates that one can sometimes increase the N50 length simply by removing the shortest contigs or scaffolds from an assembly.

If the estimated or known size of the genome from the fictional species A is 500 kbp then the NG50 contig length is 30 kbp because 80 + 70 + 50 + 40 + 30 is greater than 50% of 500. In contrast, if the estimated or known size of the genome from species B is 350 kbp then it has an NG50 contig length of 50 kbp because 80 + 70 + 50 is greater than 50% of 350.
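These worked examples can be checked against the sketches given with the definitions above:

    assembly_a = [80, 70, 50, 40, 30, 20]      # contig lengths in kbp
    assembly_b = assembly_a + [10, 5]

    print(n50(assembly_a), l50(assembly_a))    # 70 2
    print(n50(assembly_b), l50(assembly_b))    # 50 3
    print(ng50(assembly_a, genome_size=500))   # 30
    print(ng50(assembly_b, genome_size=350))   # 50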

Alternate computation

N50 can be found mathematically for a list L of positive integers as follows:

  1. Create another list L' , which is identical to L, except that every element n in L has been replaced with n copies of itself.
  2. The median of L' is the N50 of L. (The 10% quantile of L' is the N90 statistic.)

For example: if L = (2, 2, 2, 3, 3, 4, 8, 8), then L' consists of six 2s, six 3s, four 4s, and sixteen 8s. That is, L' has twice as many 2s as L, three times as many 3s, four times as many 4s, and eight times as many 8s. The median of the 32-element list L' is the average of the 16th smallest element, 4, and the 17th smallest element, 8, so the N50 is 6. We can check that the sum of all values in L that are smaller than or equal to the N50 of 6 is 16 = 2 + 2 + 2 + 3 + 3 + 4, and the sum of all values in L that are larger than or equal to 6 is also 16 = 8 + 8. For comparison with the N50 of 6, note that the mean of the list L is 4 while the median is 3. To recapitulate in a more visual way, we have:

Values of the list      L  = (2, 2, 2, 3, 3, 4, 8, 8)

Values of the new list  L' = (2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8)

Ranks of the L' values: the 2s occupy ranks 1–6, the 3s ranks 7–12, the 4s ranks 13–16, and the 8s ranks 17–32; the median falls between ranks 16 and 17.
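This expansion is straightforward to write out; note that Python's statistics.median averages the two middle values of an even-length list, which is exactly how the example arrives at 6:

    from statistics import median

    def n50_by_expansion(lengths):
        # Replace each length n with n copies of itself, then take the median.
        expanded = [n for n in lengths for _ in range(n)]
        return median(expanded)

    print(n50_by_expansion([2, 2, 2, 3, 3, 4, 8, 8]))  # 6.0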

See also

- Median
- Shotgun sequencing
- Order statistic
- Contig
- Mann–Whitney U test
- Selection algorithm
- Wilcoxon signed-rank test
- Synthetic genomics
- DNA sequencing theory
- Velvet assembler
- RNA-Seq
- Reference genome
- Hybrid genome assembly
- De novo transcriptome assembly
- Scaffolding (bioinformatics)
- Median of medians
- SPAdes
- De novo sequence assemblers
- Plant genome assembly

References

  1. Castro, Christina J.; Ng, Terry Fei Fan (November 2017). "U50: A New Metric for Measuring Assembly Output Based on Non-Overlapping, Target-Specific Contigs". Journal of Computational Biology. 24 (11): 1071–1080. doi:10.1089/cmb.2017.0013. PMC 5783553. PMID 28418726.
  2. Han, J.; Sanders, C. M.; Wang, C.; Yang, Q.; Wimbish, J.; Boone, B. E.; Thomas, S. J.; Levy, S. E. (25 September 2012). Measurement of T cell repertoire diversity in the peripheral blood by novel multiplex PCR and high-performance sequencing methods. MipTec. Basel, Switzerland. Archived from the original on 5 October 2015. Retrieved 5 October 2015.