Genome skimming is a sequencing approach that uses low-pass, shallow sequencing of a genome (up to 5% of the genome) to generate fragments of DNA, known as genome skims. [1] [2] These genome skims contain information about the high-copy fraction of the genome. [2] The high-copy fraction of the genome consists of the ribosomal DNA, plastid genome (plastome), mitochondrial genome (mitogenome), and nuclear repeats such as microsatellites and transposable elements. [3] It employs high-throughput, next-generation sequencing technology to generate these skims. [1] Although these skims are merely 'the tip of the genomic iceberg', phylogenomic analysis of them can still provide insights into evolutionary history and biodiversity at a lower cost and larger scale than traditional methods. [2] [3] [4] Because genome skimming requires only a small amount of DNA, it can also be applied in fields other than genomics, including tracing products in the food industry, enforcing international regulations regarding biodiversity and biological resources, and forensics. [5]
In addition to the assembly of the smaller organellar genomes, genome skimming can also be used to uncover conserved ortholog sequences for phylogenomic studies. In phylogenomic studies of multicellular pathogens, genome skimming can be used to find effector genes, discover endosymbionts and characterize genomic variation. [6]
The internal transcribed spacers (ITS) are non-coding regions within the 18S–5.8S–28S rDNA cistron in eukaryotes and are one feature of rDNA that has been used in genome skimming studies. [7] Because of their high inter-species variability, ITS are used to distinguish species within a genus. [7] Their low intra-species variability, however, prevents the identification of distinct strains or individuals. [7] ITS are present in all eukaryotes, have a high evolutionary rate, and have been used in phylogenetic analyses between and across species. [7]
When targeting nuclear rDNA, it is suggested that a minimum final sequencing depth of 100X be achieved and that sequences with less than 5X depth be masked. [1]
The plastid genome, or plastome, has been used extensively in identification and evolutionary studies with genome skimming because of its high abundance in plant cells (~3-5% of cellular DNA), small size, simple structure, and greater conservation of gene structure than nuclear or mitochondrial genomes. [8] [9] Plastid studies were previously limited by the number of regions that could be assessed with traditional approaches. [9] Using genome skimming, the entire plastome can be sequenced at a fraction of the cost and time required for typical sequencing approaches such as Sanger sequencing. [3] Plastomes have been suggested as a replacement for traditional DNA barcodes in plants, [3] such as the rbcL and matK barcode genes. Compared with typical DNA barcoding, genome skimming produces plastomes at a tenth of the cost per base. [5] Recent uses of plastome genome skims have allowed greater resolution of phylogenies, higher differentiation of specific groups within taxa, and more accurate estimates of biodiversity. [9] Additionally, the plastome has been used to compare species within a genus and to examine evolutionary changes and diversity within a group. [9]
When targeting plastomes, it is suggested that a minimum final sequencing depth of 30X be achieved for single-copy regions to ensure high-quality assemblies, and that single nucleotide polymorphisms (SNPs) with less than 20X depth be masked. [1]
The mitochondrial genome, or mitogenome, is used as a molecular marker in a great variety of studies because of its maternal inheritance, high copy number in the cell, lack of recombination, and high mutation rate. It is often used for phylogenetic studies because it is very uniform across metazoan groups: a circular, double-stranded DNA molecule of about 15 to 20 kilobases containing 37 genes, comprising 13 protein-coding genes, 22 transfer RNA genes, and 2 ribosomal RNA genes. Mitochondrial barcode sequences, such as COI, NADH2, 16S rRNA, and 12S rRNA, can also be used for taxonomic identification. [10] The growing number of published complete mitogenomes allows robust phylogenies to be inferred across many taxonomic groups and can capture events such as gene rearrangements and the positioning of mobile genetic elements. Using genome skimming to assemble complete mitogenomes, the phylogenetic history and biodiversity of many organisms can be resolved. [4]
When targeting mitogenomes, there are no specific recommendations for minimum final sequencing depth, as mitogenomes are more variable in size and complexity, particularly in plant species, which increases the difficulty of assembling repeated sequences. However, highly conserved coding sequences and non-repetitive flanking regions can be assembled using reference-guided assembly. Sequences should be masked in the same way as when targeting plastomes and nuclear ribosomal DNA. [1]
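The depth-based masking recommended above for nuclear rDNA, plastomes, and mitogenomes can be expressed as a simple post-processing step. The following is a minimal sketch, assuming a single-sequence consensus FASTA and a per-base depth table such as that produced by samtools depth -a; the file names are hypothetical, and the 20X cutoff follows the plastome guidance and should be adjusted per target (e.g. 5X for nuclear rDNA).

```python
# Minimal sketch: mask low-depth positions in a draft organellar/rDNA consensus.
# Assumes a consensus FASTA with a single sequence and a per-base depth table
# (three columns, as from `samtools depth -a`): name, 1-based position, depth.
# File names and the cutoff are illustrative; adjust the threshold per target.

MIN_DEPTH = 20  # e.g. 20X for plastome SNP positions, 5X for nuclear rDNA

def read_single_fasta(path):
    """Return the sequence of the single record in a FASTA file as a list of bases."""
    seq = []
    with open(path) as handle:
        for line in handle:
            if not line.startswith(">"):
                seq.append(line.strip())
    return list("".join(seq))

def load_depths(path, length):
    """Return a per-base depth list parsed from a samtools depth-style table."""
    depths = [0] * length
    with open(path) as handle:
        for line in handle:
            if not line.strip():
                continue
            _, pos, depth = line.split()[:3]
            depths[int(pos) - 1] = int(depth)
    return depths

consensus = read_single_fasta("plastome_consensus.fasta")   # hypothetical input
depths = load_depths("plastome_depth.txt", len(consensus))  # hypothetical input

masked = [
    base if depth >= MIN_DEPTH else "N"   # mask under-supported positions
    for base, depth in zip(consensus, depths)
]

with open("plastome_consensus.masked.fasta", "w") as out:
    out.write(">plastome_consensus_masked\n")
    out.write("".join(masked) + "\n")
```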
Nuclear repeats in the genome are an underused source of phylogenetic data. When a genome is skimmed at about 5% coverage, thousands of copies of the nuclear repeats are sequenced. Although the sequenced repeats represent only a fraction of those in the entire genome, it has been shown that they accurately reflect genomic abundance. These repeats can be clustered de novo and their abundances estimated. The distribution and occurrence of these repeat types can be phylogenetically informative and provide information about the evolutionary history of various species. [1]
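The idea that repeat abundance can be estimated from a small read sample can be illustrated with a short sketch. The following is a simplified stand-in for de novo repeat clustering (dedicated tools such as RepeatExplorer cluster reads by sequence similarity): it merely counts k-mers in a skim and treats frequently observed k-mers as repeat-derived. The file name, k-mer length, and frequency cutoff are illustrative assumptions.

```python
# Minimal sketch: estimate the repetitive fraction of a genome skim from k-mer counts.
# K-mers seen many times in a low-coverage skim are taken as a proxy for repeat-derived
# sequence; this is not a substitute for graph-based repeat clustering.

from collections import Counter

K = 21              # k-mer length
REPEAT_CUTOFF = 10  # in a ~1X skim, k-mers seen this often likely come from repeats

def read_fastq_seqs(path):
    """Yield the sequence line of each FASTQ record."""
    with open(path) as handle:
        for i, line in enumerate(handle):
            if i % 4 == 1:
                yield line.strip()

counts = Counter()
total_kmers = 0
for seq in read_fastq_seqs("skim_reads.fastq"):   # hypothetical skim file
    for i in range(len(seq) - K + 1):
        counts[seq[i:i + K]] += 1
        total_kmers += 1

repeat_kmers = sum(c for c in counts.values() if c >= REPEAT_CUTOFF)
print(f"Estimated repetitive fraction: {repeat_kmers / total_kmers:.2%}")
```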
Low-copy DNA can prove useful for evolutionary, developmental, and phylogenetic studies. [11] It can be mined from high-copy fractions in several ways, such as developing primers from databases of conserved orthologous genes, single-copy conserved orthologous genes, and shared-copy genes. [11] Another method is designing novel probes that target low-copy genes identified through transcriptomics, via Hyb-Seq. [11] While nuclear genomes assembled from genome skims are extremely fragmented, some low-copy nuclear genes can be successfully assembled. [12]
Previous methods for recovering degraded DNA were based on Sanger sequencing, relied on large intact DNA templates, and were affected by contamination and the method of preservation. Genome skimming, on the other hand, can be used to extract genetic information from preserved specimens in herbaria and museums, where the DNA is often highly degraded and very little remains. [4] [13] Studies in plants show that DNA as old as 80 years, and in quantities as small as 500 pg of degraded DNA, can be used with genome skimming to infer genomic information. [13] In herbaria, even with low-yield and low-quality DNA, one study was still able to produce "high-quality complete chloroplast and ribosomal DNA sequences" at a large scale for downstream analyses. [14]
In field studies, invertebrates are stored in ethanol, which is usually discarded during DNA-based studies. [15] Genome skimming has been shown to detect the low quantity of DNA in this ethanol fraction and to provide information about the biomass of the specimens in a fraction, the microbiota of outer tissue layers, and the gut contents (such as prey) released by the vomit reflex. [15] Thus, genome skimming can provide an additional means of understanding ecology via this low-quantity DNA. [15]
DNA extraction protocols will vary depending on the source of the sample (e.g., plants or animals). The following DNA extraction protocols have been used in genome skimming:
Library preparation protocols will depend on a variety of factors: organism, tissue type, etc. In the case of preserved specimens, specific modifications to library preparation protocols may be required. [1] The following library preparation protocols have been used in genome skimming:
The choice between short-read and long-read sequencing depends on the target genome or genes. Microsatellites within nuclear repeats require longer reads. [23] The following sequencing platforms have been used in genome skimming:
The Illumina MiSeq platform has been chosen by some researchers for its comparatively long read length among short-read platforms. [6]
After genome skimming, high-copy organellar DNA can be assembled using a reference-guided approach or de novo, while high-copy nuclear repeats can be clustered de novo. [1] The assembler chosen depends on the target genome (plastome or mitogenome) and on whether short or long reads are used.
Annotation is used to identify genes in the genome assemblies. The annotation tool chosen depends on the target genome and on the target features of that genome, such as protein-coding genes, tRNAs, and rRNAs in plastomes and mitogenomes.
The assembled sequences are globally aligned, and phylogenetic trees are then inferred using phylogenetic reconstruction software. The software chosen for phylogeny reconstruction depends on whether a Maximum Likelihood (ML), Maximum Parsimony (MP), Bayesian Inference (BI), or other method is appropriate.
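As a minimal illustration of the alignment-to-tree step, the sketch below builds a quick distance-based neighbor-joining tree with Biopython. It is a stand-in for the dedicated ML, MP, and BI programs referred to above, not a replacement for them; the alignment file name is hypothetical, and in practice the assembled sequences would first be globally aligned (e.g. with MAFFT).

```python
# Minimal sketch: infer a neighbor-joining tree from an existing alignment.
# Uses Biopython's distance-based constructor purely to show the workflow;
# publication-quality trees would come from dedicated ML/MP/BI software.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("plastome_alignment.fasta", "fasta")  # aligned sequences

calculator = DistanceCalculator("identity")          # pairwise identity distances
distance_matrix = calculator.get_distance(alignment)

tree = DistanceTreeConstructor().nj(distance_matrix)  # neighbor-joining tree
Phylo.write(tree, "skim_tree.nwk", "newick")
```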
Various protocols, pipelines, and bioinformatic tools have been developed to help automate the downstream processes of genome skimming.
Hyb-Seq is a protocol for capturing low-copy nuclear genes that combines target enrichment and genome skimming. [29] Target enrichment of the low-copy loci is achieved through enrichment probes designed for specific single-copy exons, which requires a draft nuclear genome and a transcriptome of the targeted organism. The target-enriched libraries are then sequenced, and the resulting reads are processed, assembled, and identified. Using off-target reads, rDNA cistrons and complete plastomes can also be assembled. Through this process, Hyb-Seq is able to produce genome-scale datasets for phylogenomics.
GetOrganelle is a toolkit that assembles organellar genomes from genome skimming reads. [30] Organelle-associated reads are recruited using a modified "baiting and iterative mapping" approach. The reads aligning to the target genome, using Bowtie2, [31] are referred to as "seed reads". The seed reads are used as "baits" to recruit more organelle-associated reads via multiple iterations of extension. The read extension algorithm uses a hashing approach, in which the reads are cut into substrings of certain lengths, referred to as "words". At each extension iteration, these "words" are added to a hash table, referred to as a "baits pool", which dynamically increases in size with each iteration. Due to the low sequencing coverage of genome skims, non-target reads, even those with high sequence similarity to target reads, are largely not recruited. Using the final set of recruited organelle-associated reads, GetOrganelle conducts a de novo assembly using SPAdes. [32] The assembly graph is filtered and untangled, producing all possible paths of the graph and therefore all configurations of the circular organellar genomes.
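The baiting-and-iterative-extension idea can be sketched in a few lines. The following toy illustration recruits reads that share a "word" (k-mer) with a baits pool seeded from a related organellar reference, then grows the pool from the recruited reads; GetOrganelle itself uses Bowtie2 seeding, dynamic word sizes, and SPAdes assembly, so the file names, word length, and iteration count here are illustrative assumptions.

```python
# Toy sketch of "baiting and iterative mapping"-style read recruitment.
# Not the GetOrganelle implementation; purely to illustrate the idea described above.

WORD = 21        # "word" (k-mer) length used for the baits pool
ITERATIONS = 3   # number of extension rounds

def kmers(seq, k=WORD):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def read_fasta_seq(path):
    with open(path) as handle:
        return "".join(l.strip() for l in handle if not l.startswith(">"))

def read_fastq_seqs(path):
    with open(path) as handle:
        return [l.strip() for i, l in enumerate(handle) if i % 4 == 1]

reference = read_fasta_seq("related_plastome.fasta")  # hypothetical seed reference
reads = read_fastq_seqs("skim_reads.fastq")           # hypothetical skim reads

baits = kmers(reference)   # initial baits pool built from the seed reference
recruited = set()

for _ in range(ITERATIONS):
    newly_recruited = [
        i for i, r in enumerate(reads)
        if i not in recruited and not kmers(r).isdisjoint(baits)
    ]
    if not newly_recruited:
        break
    for i in newly_recruited:      # extend the baits pool with words from new reads
        recruited.add(i)
        baits |= kmers(reads[i])

print(f"Recruited {len(recruited)} putative organelle-associated reads; "
      f"these would then be passed to a de novo assembler such as SPAdes.")
```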
Skmer is an assembly-free and alignment-free tool for computing genomic distances between query and reference genome skims. [33] Skmer uses a two-stage approach to compute these distances. First, it generates k-mer frequency profiles using the tool JellyFish [34] and converts the k-mers into hashes. [33] A random subset of these hashes is selected to form a so-called "sketch". [33] In the second stage, Skmer uses Mash [35] to estimate the Jaccard index of two of these sketches. [33] The combination of these two stages is used to estimate the evolutionary distance. [33]
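The two-stage idea (sketching, then distance estimation) can be illustrated with a short sketch. The code below builds a fixed-size bottom-s MinHash sketch of the k-mer hashes of each skim, estimates the Jaccard index of two sketches, and converts it to a distance with the Mash formula. It is not the Skmer implementation, which additionally corrects for low coverage and sequencing error; the k-mer length, sketch size, and inputs are illustrative assumptions.

```python
# Minimal MinHash/Jaccard/Mash-distance sketch, for illustration only.

import hashlib
import math

K = 31
SKETCH_SIZE = 1000

def kmer_hashes(reads, k=K):
    """Hash every k-mer of every read to a 64-bit integer."""
    for read in reads:
        for i in range(len(read) - k + 1):
            digest = hashlib.sha1(read[i:i + k].encode()).digest()
            yield int.from_bytes(digest[:8], "big")

def sketch(reads, size=SKETCH_SIZE):
    """Keep the `size` smallest distinct k-mer hashes (a bottom-s MinHash sketch)."""
    return set(sorted(set(kmer_hashes(reads)))[:size])

def mash_distance(sketch_a, sketch_b, k=K, size=SKETCH_SIZE):
    """Estimate evolutionary distance from the Jaccard index of two sketches."""
    merged = sorted(sketch_a | sketch_b)[:size]        # bottom-s of the union
    jaccard = sum(1 for h in merged if h in sketch_a and h in sketch_b) / len(merged)
    if jaccard == 0:
        return 1.0
    return -math.log(2 * jaccard / (1 + jaccard)) / k

# Usage with two hypothetical skims already loaded as lists of read strings:
# distance = mash_distance(sketch(reads_species_a), sketch(reads_species_b))
```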
Geneious is an integrative software platform that allows users to perform various steps of bioinformatic analysis, such as assembly, alignment, and phylogenetics, by incorporating other tools within a GUI-based platform. [18] [28]
PhyloHerb is a bioinformatic pipeline written in Python. It uses a built-in database or a user-specified reference to extract orthologous sequences from plastid, mitochondrial, and nuclear ribosomal regions using a BLAST search. [36]
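BLAST-based extraction of orthologous regions of the kind PhyloHerb performs can be sketched roughly as below. This is not the PhyloHerb code: it assumes BLAST+ is installed on the PATH and uses hypothetical file names, comparing contigs assembled from a skim against a set of reference loci and reporting the best-matching region for each reference gene.

```python
# Rough illustration of BLAST-based ortholog extraction (not PhyloHerb itself).

import subprocess

result = subprocess.run(
    ["blastn",
     "-query", "skim_assembly.fasta",       # contigs assembled from the skim
     "-subject", "reference_genes.fasta",   # orthologous reference loci
     "-outfmt", "6",                        # tabular output
     "-evalue", "1e-10"],
    capture_output=True, text=True, check=True,
)

# Keep the highest-bitscore hit per reference gene (tabular column 2 = subject id).
best_hits = {}
for line in result.stdout.splitlines():
    cols = line.split("\t")
    contig, gene, bitscore = cols[0], cols[1], float(cols[11])
    start, end = int(cols[6]), int(cols[7])   # hit coordinates on the contig
    if gene not in best_hits or bitscore > best_hits[gene][3]:
        best_hits[gene] = (contig, start, end, bitscore)

for gene, (contig, start, end, _) in best_hits.items():
    print(f"{gene}: {contig}:{start}-{end}")
```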
Although genome skimming is usually chosen as a cost-effective method of sequencing organellar genomes, it can also be performed in silico when deep whole-genome sequencing data have already been obtained. Subsampling the reads of a whole-genome dataset, in silico genome skimming, has been demonstrated to simplify organellar genome assembly. [37] [38] Because organellar genomes are present in high copy number in the cell, in silico genome skimming essentially filters out nuclear sequences, leaving a higher ratio of organellar to nuclear sequence for assembly and reducing the complexity of the assembly problem. In silico genome skimming was first done as a proof of concept, optimizing the parameters for read type, read length, and sequencing coverage. [1]
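The subsampling step itself is straightforward. The following is a minimal sketch that keeps a random fraction of the read records from an existing deep sequencing run; the fraction and file names are illustrative, and dedicated tools (e.g. seqtk sample) perform the same task.

```python
# Minimal sketch of in silico genome skimming: randomly subsample reads from a deep
# whole-genome FASTQ file, enriching the relative share of high-copy organellar reads
# handed to the assembler.

import random

FRACTION = 0.05   # keep ~5% of reads
random.seed(42)   # reproducible subsample

with open("deep_wgs.fastq") as infile, open("in_silico_skim.fastq", "w") as outfile:
    record = []
    for line in infile:
        record.append(line)
        if len(record) == 4:              # one FASTQ record = 4 lines
            if random.random() < FRACTION:
                outfile.writelines(record)
            record = []
```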
Other than the uses listed above, genome skimming has also been applied to other tasks, such as quantifying pollen mixtures [19] and monitoring and conserving certain populations. [39] Genome skimming can also be used for variant calling, to examine single nucleotide polymorphisms across a species. [22]
Genome skimming is a cost-effective, rapid, and reliable method of generating large shallow datasets, [5] since several datasets (plastid, mitochondrial, nuclear) are generated per run. [3] It is simple to implement, requires less lab work and optimization, and does not require a priori knowledge of the organism or its genome size. [3] This provides a low-risk avenue for biological inquiry and hypothesis generation without a large commitment of resources. [6]
Genome skimming is an especially advantageous approach in cases where the genomic DNA may be old and degraded by chemical treatments, such as specimens from herbarium and museum collections, [4] a largely untapped genomic resource. Genome skimming allows the molecular characterization of rare or extinct species. [5] Preservation in ethanol often damages genomic DNA, which hinders the success of standard PCR protocols [3] and other amplicon-based approaches. [5] Genome skimming therefore presents an opportunity to sequence samples with very low DNA concentrations, without the need for DNA enrichment or amplification. Library preparation specific to genome skimming has been shown to work with as little as 37 ng of DNA (0.2 ng/µl), 135-fold less than recommended by Illumina. [1]
Although genome skimming is mostly used to extract high-copy plastomes and mitogenomes, it can also provide partial sequences of low-copy nuclear sequences. These sequences may not be sufficiently complete for phylogenomic analysis, but can be sufficient for designing PCR primers and probes for hybridization-based approaches. [1]
Genome skimming is not dependent on any specific primers and remains unaffected by gene rearrangements. [4]
Genome skimming only scratches the surface of the genome, so it will not suffice for biological questions that require gene prediction and annotation. [6] These downstream steps are required for deeper and more meaningful analyses.
Although plastid genomic sequences are abundant in genome skims, the presence of mitochondrial and nuclear pseudogenes of plastid origin can potentially pose issues for plastome assemblies. [1]
A combination of sequencing depth and read type, as well as genomic target (plastome, mitogenome, etc.), will influence the success of single-end and paired-end assemblies, so these parameters must be carefully chosen. [1]
Both the wet-lab and the bioinformatics parts of genome skimming face challenges with scalability. Although the cost of sequencing for genome skimming was affordable at about $80 per gigabase in 2016, library preparation remained expensive, at least ~$200 per sample (as of 2016). Additionally, most library preparation protocols have not yet been fully automated with robotics. On the bioinformatics side, large, complex databases and automated workflows need to be designed to handle the large amounts of data resulting from genome skimming. Automation of the following processes needs to be implemented: [40]
Some of these scalability challenges have already been addressed, as shown above in the "Tools and Pipelines" section.
Genomics is an interdisciplinary field of molecular biology focusing on the structure, function, evolution, mapping, and editing of genomes. A genome is an organism's complete set of DNA, including all of its genes as well as its hierarchical, three-dimensional structural configuration. In contrast to genetics, which refers to the study of individual genes and their roles in inheritance, genomics aims at the collective characterization and quantification of all of an organism's genes, their interrelations and influence on the organism. Genes may direct the production of proteins with the assistance of enzymes and messenger molecules. In turn, proteins make up body structures such as organs and tissues as well as control chemical reactions and carry signals between cells. Genomics also involves the sequencing and analysis of genomes through uses of high throughput DNA sequencing and bioinformatics to assemble and analyze the function and structure of entire genomes. Advances in genomics have triggered a revolution in discovery-based research and systems biology to facilitate understanding of even the most complex biological systems such as the brain.
A DNA sequencer is a scientific instrument used to automate the DNA sequencing process. Given a sample of DNA, a DNA sequencer is used to determine the order of the four bases: G (guanine), C (cytosine), A (adenine) and T (thymine). This is then reported as a text string, called a read. Some DNA sequencers can be also considered optical instruments as they analyze light signals originating from fluorochromes attached to nucleotides.
In bioinformatics, sequence assembly refers to aligning and merging fragments from a longer DNA sequence in order to reconstruct the original sequence. This is needed because DNA sequencing technology might not be able to 'read' whole genomes in one go, but rather reads small pieces of between 20 and 30,000 bases, depending on the technology used. Typically, the short fragments (reads) result from shotgun sequencing of genomic DNA or gene transcripts (ESTs).
DNA sequencing is the process of determining the nucleic acid sequence – the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, guanine, cytosine, and thymine. The advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery.
Metagenomics is the study of genetic material recovered directly from environmental or clinical samples by a method called sequencing. The broad field may also be referred to as environmental genomics, ecogenomics, community genomics or microbiomics.
ChIP-sequencing, also known as ChIP-seq, is a method used to analyze protein interactions with DNA. ChIP-seq combines chromatin immunoprecipitation (ChIP) with massively parallel DNA sequencing to identify the binding sites of DNA-associated proteins. It can be used to map global binding sites precisely for any protein of interest. Previously, ChIP-on-chip was the most common technique utilized to study these protein–DNA relations.
RNA-Seq is a technique that uses next-generation sequencing to reveal the presence and quantity of RNA molecules in a biological sample, providing a snapshot of gene expression in the sample, also known as transcriptome.
SOAP is a suite of bioinformatics software tools from the BGI Bioinformatics department enabling the assembly, alignment, and analysis of next generation DNA sequencing data. It is particularly suited to short read sequencing data.
The 1000 Plant Transcriptomes Initiative (1KP) was an international research effort to establish the most detailed catalogue of genetic variation in plants. It was announced in 2008 and headed by Gane Ka-Shu Wong and Michael Deyholos of the University of Alberta. The project successfully sequenced the transcriptomes of 1000 different plant species by 2014; its final capstone products were published in 2019.
Exome sequencing, also known as whole exome sequencing (WES), is a genomic technique for sequencing all of the protein-coding regions of genes in a genome. It consists of two steps: the first step is to select only the subset of DNA that encodes proteins. These regions are known as exons—humans have about 180,000 exons, constituting about 1% of the human genome, or approximately 30 million base pairs. The second step is to sequence the exonic DNA using any high-throughput DNA sequencing technology.
De novo transcriptome assembly is the de novo sequence assembly method of creating a transcriptome without the aid of a reference genome.
Illumina dye sequencing is a technique used to determine the series of base pairs in DNA, also known as DNA sequencing. The reversible terminated chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed by Shankar Balasubramanian and David Klenerman of Cambridge University, who subsequently founded Solexa, a company later acquired by Illumina. This sequencing method is based on reversible dye-terminators that enable the identification of single nucleotides as they are washed over DNA strands. It can also be used for whole-genome and region sequencing, transcriptome analysis, metagenomics, small RNA discovery, methylation profiling, and genome-wide protein-nucleic acid interaction analysis.
In molecular phylogenetics, relationships among individuals are determined using character traits, such as DNA, RNA or protein, which may be obtained using a variety of sequencing technologies. High-throughput next-generation sequencing has become a popular technique in transcriptomics, which represent a snapshot of gene expression. In eukaryotes, making phylogenetic inferences using RNA is complicated by alternative splicing, which produces multiple transcripts from a single gene. As such, a variety of approaches may be used to improve phylogenetic inference using transcriptomic data obtained from RNA-Seq and processed using computational phylogenetics.
Third-generation sequencing is a class of DNA sequencing methods which produce longer sequence reads, under active development since 2008.
Transcriptomics technologies are the techniques used to study an organism's transcriptome, the sum of all of its RNA transcripts. The information content of an organism is recorded in the DNA of its genome and expressed through transcription. Here, mRNA serves as a transient intermediary molecule in the information network, whilst non-coding RNAs perform additional diverse functions. A transcriptome captures a snapshot in time of the total transcripts present in a cell. Transcriptomics technologies provide a broad account of which cellular processes are active and which are dormant. A major challenge in molecular biology is to understand how a single genome gives rise to a variety of cells. Another is how gene expression is regulated.
H3K9me3 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the tri-methylation at the 9th lysine residue of the histone H3 protein and is often associated with heterochromatin.
A plant genome assembly represents the complete genomic sequence of a plant species, assembled into chromosome-level and organellar sequences by using DNA fragments obtained from different types of sequencing technology.
Clinical metagenomic next-generation sequencing (mNGS) is the comprehensive analysis of microbial and host genetic material in clinical samples from patients by next-generation sequencing. It uses the techniques of metagenomics to identify and characterize the genome of bacteria, fungi, parasites, and viruses without the need for a prior knowledge of a specific pathogen directly from clinical specimens. The capacity to detect all the potential pathogens in a sample makes metagenomic next generation sequencing a potent tool in the diagnosis of infectious disease especially when other more directed assays, such as PCR, fail. Its limitations include clinical utility, laboratory validity, sense and sensitivity, cost and regulatory considerations.
Linked-read sequencing, a type of DNA sequencing technology, uses a specialized technique that tags DNA molecules with unique barcodes before fragmenting them. Unlike traditional sequencing technology, in which DNA is broken into small fragments and sequenced individually, yielding short read lengths that make it difficult to accurately reconstruct the original DNA sequence, the unique barcodes of linked-read sequencing allow scientists to link together DNA fragments that come from the same DNA molecule. A pivotal benefit of this technology lies in the small quantities of DNA required for large genome information output, effectively combining the advantages of long-read and short-read technologies.