| Content | |
|---|---|
| Description | Whole-genome database |
| Contact | |
| Research center | Stanford University |
| Laboratory | Stanford Genome Technology Center: Cherry Lab; formerly: University of California, Santa Cruz |
| Authors | Eurie L. Hong and 17 others [1] |
| Primary citation | PMID 26980513 |
| Release date | 2010 |
| Access | |
| Website | encodeproject.org |
The Encyclopedia of DNA Elements (ENCODE) is a public research project which aims "to build a comprehensive parts list of functional elements in the human genome." [2]
ENCODE also supports further biomedical research by "generating community resources of genomics data, software, tools and methods for genomics data analysis, and products resulting from data analyses and interpretations." [3] [2]
The current phase of ENCODE (2016-2019) is adding depth to its resources by growing the number of cell types, data types, and assays, and now includes support for examination of the mouse genome. [3]
ENCODE was launched by the US National Human Genome Research Institute (NHGRI) in September 2003. [4] [5] [6] [7] [8] Intended as a follow-up to the Human Genome Project, the ENCODE project aims to identify all functional elements in the human genome. [9]
The project involves a worldwide consortium of research groups, and data generated from this project can be accessed through public databases. The initial release of ENCODE was in 2013, and it has since been evolving according to the recommendations of consortium members and the wider community of scientists who use the portal to access ENCODE data. The two-part goal for ENCODE is to serve as a publicly accessible database for "experimental protocols, analytical procedures and the data themselves," and for "the same interface [to] serve carefully curated metadata that record the provenance of the data and justify its interpretation in biological terms." [10] The project began its fourth phase (ENCODE 4) in February 2017. [11]
Humans are estimated to have approximately 20,000 protein-coding genes, which account for about 1.5% of DNA in the human genome. The primary goal of the ENCODE project is to determine the role of the remaining component of the genome, much of which was traditionally regarded as "junk". The activity and expression of protein-coding genes can be modulated by the regulome, a variety of DNA elements such as promoters, transcriptional regulatory sequences, and regions of chromatin structure and histone modification. It is thought that changes in the regulation of gene activity can disrupt protein production and cell processes and result in disease. Determining the location of these regulatory elements and how they influence gene transcription could reveal links between variations in the expression of certain genes and the development of disease. [12]
ENCODE is also intended as a comprehensive resource to allow the scientific community to better understand how the genome can affect human health, and to "stimulate the development of new therapies to prevent and treat these diseases". [5]
The ENCODE Consortium is composed primarily of scientists funded by the US National Human Genome Research Institute (NHGRI). Other participants who contribute to the project may be brought into the Consortium or the Analysis Working Group.
The pilot phase consisted of eight research groups, with twelve further groups participating in the ENCODE Technology Development Phase. Once the pilot phase officially ended in 2007, the number of participants expanded to 440 scientists based in 32 laboratories worldwide. The consortium now consists of different centers that perform different tasks.
ENCODE is a member of the International Human Epigenome Consortium (IHEC). [14]
NHGRI's main requirement is that the products of ENCODE-funded research be shared freely and in a highly accessible manner with all researchers, to promote genomic research. This openness supports the reproducibility, and thus the transparency, of the software, methods, data, and other tools related to genomic analysis. [3]
ENCODE has been implemented in four phases: the pilot phase and the technology development phase, which were initiated simultaneously, [15] followed by the production phase. The fourth phase is a continuation of the third and includes functional characterization and further integrative analysis for the encyclopedia.
The goal of the pilot phase was to identify a set of procedures that, in combination, could be applied cost-effectively and at high throughput to accurately and comprehensively characterize large regions of the human genome. The pilot phase was expected to reveal gaps in the existing set of tools for detecting functional sequences, and to show whether some methods then in use were inefficient or unsuitable for large-scale application. Some of these problems were to be addressed in the ENCODE technology development phase, which aimed to devise new laboratory and computational methods that would improve our ability to identify known functional sequences or to discover new functional genomic elements. The results of the first two phases determined the best path forward for analyzing the remaining 99% of the human genome in a cost-effective and comprehensive production phase. [5]
The pilot phase tested and compared existing methods to rigorously analyze a defined portion of the human genome sequence. It was organized as an open consortium and brought together investigators with diverse backgrounds and expertise to evaluate the relative merits of a diverse set of techniques, technologies and strategies. The concurrent technology development phase of the project aimed to develop new high-throughput methods to identify functional elements. The goal of these efforts was to identify a suite of approaches that would allow the comprehensive identification of all the functional elements in the human genome. Through the ENCODE pilot project, the NHGRI assessed whether different approaches could be scaled up to analyze the entire human genome, and identified gaps in the ability to detect functional elements in genomic sequence.
The ENCODE pilot project process involved close interactions between computational and experimental scientists to evaluate a number of methods for annotating the human genome. A set of regions representing approximately 1% (30 Mb) of the human genome was selected as the target for the pilot project and was analyzed by all ENCODE pilot project investigators. All data generated by ENCODE participants on these regions was rapidly released into public databases. [7] [16]
For use in the ENCODE pilot project, defined regions of the human genome, corresponding to 30 Mb (roughly 1% of the total human genome), were selected. These regions served as the foundation on which to test and evaluate the effectiveness and efficiency of a diverse set of methods and technologies for finding various functional elements in human DNA.
Prior to embarking upon the target selection, it was decided that 50% of the 30 Mb of sequence would be selected manually while the remaining sequence would be selected randomly. The two main criteria for manually selected regions were: 1) the presence of well-studied genes or other known sequence elements, and 2) the existence of a substantial amount of comparative sequence data. A total of 14.82 Mb of sequence was manually selected using this approach, consisting of 14 targets that range in size from 500 kb to 2 Mb.
The remaining 50% of the 30 Mb of sequence was composed of thirty 500 kb regions selected according to a stratified random-sampling strategy based on gene density and level of non-exonic conservation. The decision to use these particular criteria was made in order to ensure a good sampling of genomic regions varying widely in their content of genes and other functional elements. The human genome was divided into three parts (top 20%, middle 30%, and bottom 50%) along each of two axes: 1) gene density and 2) level of non-exonic conservation with respect to the orthologous mouse genomic sequence (see below), for a total of nine strata. From each stratum, three random regions were chosen for the pilot project. For those strata underrepresented by the manual picks, a fourth region was chosen, resulting in a total of 30 regions. For all strata, a "backup" region was designated for use in the event of unforeseen technical problems.
In greater detail, the gene-density and non-exonic-conservation scores used for stratification were computed within non-overlapping 500 kb windows of finished sequence across the genome, and each window was assigned to a stratum accordingly. [17]
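This design can be made concrete with a short sketch (a hypothetical illustration, not the consortium's actual code): windows are ranked along the two axes, binned into top 20%, middle 30%, and bottom 50%, crossed into nine strata, and sampled at random. The window names and scores below are invented; the real pipeline derived gene density from annotations and non-exonic conservation from human-mouse alignments.

```python
import random

random.seed(0)

# Hypothetical 500 kb windows with made-up scores for the two stratification axes.
windows = [
    {"id": f"win{i:04d}", "gene_density": random.random(), "conservation": random.random()}
    for i in range(5000)
]

def bin_for(rank_fraction):
    """Map a percentile rank to the pilot's three bins: top 20%, middle 30%, bottom 50%."""
    if rank_fraction < 0.20:
        return "top20"
    if rank_fraction < 0.50:
        return "mid30"
    return "bot50"

def ranks(score_key):
    """Percentile rank of every window for one score, highest score first."""
    ordered = sorted(windows, key=lambda w: -w[score_key])
    return {w["id"]: i / len(ordered) for i, w in enumerate(ordered)}

density_rank = ranks("gene_density")
conservation_rank = ranks("conservation")

# Cross the two binned axes to form the 3 x 3 = 9 strata.
strata = {}
for w in windows:
    key = (bin_for(density_rank[w["id"]]), bin_for(conservation_rank[w["id"]]))
    strata.setdefault(key, []).append(w["id"])

# Draw three regions per stratum plus a designated backup. (The pilot added a
# fourth region in strata underrepresented by the manual picks; omitted here.)
for key, members in sorted(strata.items()):
    *picks, backup = random.sample(members, 4)
    print(key, "picks:", picks, "backup:", backup)
```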
The pilot phase was successfully finished and the results were published in June 2007 in Nature [7] and in a special issue of Genome Research. [18] The results published in the first paper advanced the collective knowledge about human genome function in several major areas. [7]
In September 2007, the NHGRI began funding the production phase of the ENCODE project. In this phase, the goal was to analyze the entire genome and to conduct "additional pilot-scale studies". [19]
As in the pilot project, the production effort is organized as an open consortium. In October 2007, NHGRI awarded grants totaling more than $80 million over four years. [20] The production phase also includes a Data Coordination Center, a Data Analysis Center, and a Technology Development Effort. [21] At that point the project evolved into a truly global enterprise, involving 440 scientists from 32 laboratories worldwide. Once the pilot phase was completed, the project "scaled up" in 2007, benefiting greatly from next-generation sequencing machines. The data was indeed big: researchers generated around 15 terabytes of raw data.
By 2010, over 1,000 genome-wide data sets had been produced by the ENCODE project. Taken together, these data sets show which regions are transcribed into RNA, which regions are likely to control the genes that are used in a particular type of cell, and which regions are associated with a wide variety of proteins. The primary assays used in ENCODE are ChIP-seq, DNase I hypersensitivity, RNA-seq, and assays of DNA methylation.
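These data sets can be retrieved programmatically: the ENCODE portal exposes its search interface as JSON. The following minimal sketch assumes the parameter names `type`, `assay_term_name` and `limit` and the JSON-LD `@graph` result key; check the portal's current REST documentation if the query fails.

```python
import json
import urllib.request

# Search the ENCODE portal (https://www.encodeproject.org) for ChIP-seq experiments.
URL = ("https://www.encodeproject.org/search/"
       "?type=Experiment&assay_term_name=ChIP-seq&format=json&limit=5")

request = urllib.request.Request(URL, headers={"Accept": "application/json"})
with urllib.request.urlopen(request) as response:
    payload = json.load(response)

# Matching objects are returned under the JSON-LD "@graph" key.
for experiment in payload.get("@graph", []):
    print(experiment.get("accession"),
          experiment.get("assay_term_name"),
          experiment.get("biosample_summary"))
```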
In September 2012, the project released a much more extensive set of results, in 30 papers published simultaneously in several journals, including six in Nature, six in Genome Biology, and a special issue of Genome Research with 18 publications. [22]
The authors described the production and the initial analysis of 1,640 data sets designed to annotate functional elements in the entire human genome, integrating results from diverse experiments within cell types, related experiments involving 147 different cell types, and all ENCODE data with other resources, such as candidate regions from genome-wide association studies (GWAS) and evolutionarily constrained regions. Together, these efforts revealed important features about the organization and function of the human genome, which were summarized in an overview paper. [23]
The most striking finding was that the fraction of human DNA that is biologically active is considerably higher than even the most optimistic previous estimates. In an overview paper, the ENCODE Consortium reported that its members were able to assign biochemical functions to over 80% of the genome. [23] Much of this was found to be involved in controlling the expression levels of coding DNA, which makes up less than 1% of the genome.
The most important new elements of the "encyclopedia" included genome-wide maps of regions of transcription, transcription-factor association, chromatin structure, and histone modification.
Capturing, storing, integrating, and displaying the diverse data generated is challenging. The ENCODE Data Coordination Center (DCC) organizes and displays the data generated by the labs in the consortium, and ensures that the data meets specific quality standards when it is released to the public. Before a lab submits any data, the DCC and the lab draft a data agreement that defines the experimental parameters and associated metadata. The DCC validates incoming data to ensure consistency with the agreement, and also ensures that all data is annotated using appropriate ontologies. [28] It then loads the data onto a test server for preliminary inspection, and coordinates with the labs to organize the data into a consistent set of tracks. When the tracks are ready, the DCC Quality Assurance team performs a series of integrity checks, verifies that the data is presented in a manner consistent with other browser data, and, perhaps most importantly, verifies that the metadata and accompanying descriptive text are presented in a way that is useful to users. The data is released on the public UCSC Genome Browser website only after all of these checks have been satisfied.

In parallel, data is analyzed by the ENCODE Data Analysis Center, a consortium of analysis teams from the various production labs plus other researchers. These teams develop standardized protocols to analyze data from novel assays, determine best practices, and produce a consistent set of analytic methods such as standardized peak callers and signal generation from alignment pile-ups. [29]
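As a rough illustration of the "signal generation from alignment pile-ups" step mentioned above (a toy sketch, not the consortium's pipeline, which operates on full alignments and applies normalization), per-base coverage can be accumulated from read intervals with a difference array:

```python
def pileup(reads, chrom_length):
    """Per-base read depth from (start, end) intervals (0-based, end-exclusive),
    accumulated with a difference array and a running sum."""
    diff = [0] * (chrom_length + 1)
    for start, end in reads:
        diff[start] += 1
        diff[end] -= 1
    coverage, depth = [], 0
    for delta in diff[:chrom_length]:
        depth += delta
        coverage.append(depth)
    return coverage

# Hypothetical alignments on a 50 bp toy chromosome.
reads = [(0, 20), (5, 25), (10, 30), (32, 48)]
print(pileup(reads, 50))
```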
The National Human Genome Research Institute (NHGRI) has identified ENCODE as a "community resource project". This important concept was defined at an international meeting held in Ft. Lauderdale in January 2003 as a research project specifically devised and implemented to create a set of data, reagents, or other material whose primary utility will be as a resource for the broad scientific community. Accordingly, the ENCODE data release policy stipulates that data, once verified, will be deposited into public databases and made available for all to use without restriction. [29]
With the continuation of the third phase, the ENCODE Consortium has become involved with additional projects whose goals run parallel to the ENCODE project. Some of these projects were part of the second phase of ENCODE.
The MODel organism ENCyclopedia Of DNA Elements (modENCODE) project is a continuation of the original ENCODE project targeting the identification of functional elements in selected model organism genomes, specifically Drosophila melanogaster and Caenorhabditis elegans. [30] The extension to model organisms permits biological validation of the computational and experimental findings of the ENCODE project, something that is difficult or impossible to do in humans. [30] Funding for the modENCODE project was announced by the National Institutes of Health (NIH) in 2007 and included several different research institutions in the US. [31] [32] The project completed its work in 2012.
In late 2010, the modENCODE consortium unveiled its first set of results with publications on annotation and integrative analysis of the worm and fly genomes in Science. [33] [34] Data from these publications is available from the modENCODE web site. [35]
modENCODE was run as a Research Network, and the consortium was formed by 11 primary projects, divided between worm and fly, that together covered a range of functional-element types.
modERN, short for the model organism encyclopedia of regulatory networks, branched from the modENCODE project. The project has merged the C. elegans and Drosophila groups and focuses on the identification of additional transcription factor binding sites of the respective organisms. The project began at the same time as Phase III of ENCODE, and plans to end in 2017. [37] To date, the project has released 198 experiments, [38] with around 500 other experiments submitted and currently being processed by the DCC.
In early 2015, the NIH launched the Genomics of Gene Regulation (GGR) program. [39] The goal of the program, which will last for three years, is to study gene networks and pathways in different systems of the body, with the hope of further understanding the mechanisms controlling gene expression. Although the ENCODE project is separate from GGR, the ENCODE DCC has been hosting GGR data in the ENCODE portal. [40]
In 2008, NIH began the Roadmap Epigenomics Mapping Consortium, whose goal was to produce "a public resource of human epigenomic data to catalyze basic biology and disease-oriented research". [41] In February 2015, the consortium released an article titled "Integrative analysis of 111 reference human epigenomes" that fulfilled the consortium's goal. The consortium integrated information and annotated regulatory elements across 127 reference epigenomes, 16 of which were part of the ENCODE project. [42] Data for the Roadmap project can be found in either the Roadmap portal or the ENCODE portal.
fruitENCODE, an encyclopedia of DNA elements for fruit ripening, is a plant ENCODE project that aims to generate DNA methylation, histone modification, DNase I hypersensitive site (DHS), gene expression, and transcription-factor-binding datasets for all fleshy fruit species at different developmental stages. Prerelease data can be found in the fruitENCODE portal.
Although the consortium claims it is far from finished with the ENCODE project, many reactions to the published papers and the news coverage that accompanied the release were favorable. The Nature editors and ENCODE authors "... collaborated over many months to make the biggest splash possible and capture the attention of not only the research community but also of the public at large". [44] The ENCODE project's claim that 80% of the human genome has biochemical function [23] was rapidly picked up by the popular press, who described the results of the project as leading to the death of junk DNA. [45] [46]
However, the conclusion that most of the genome is "functional" has been criticized on the grounds that the ENCODE project used a liberal definition of "functional": namely, that anything that is transcribed must be functional. This conclusion was arrived at despite the widely accepted view, based on genomic conservation estimates from comparative genomics, that many transcribed DNA elements, such as pseudogenes, are nevertheless non-functional. Furthermore, the ENCODE project emphasized sensitivity over specificity, possibly leading to the detection of many false positives. [47] [48] [49] The somewhat arbitrary choice of cell lines and transcription factors, as well as a lack of appropriate control experiments, were additional major criticisms of ENCODE, since random DNA mimics ENCODE-like "functional" behavior. [50]
In response to some of the criticisms, other scientists argued that the widespread transcription and splicing observed in the human genome directly by biochemical testing is a more accurate indicator of genetic function than genomic conservation estimates, because conservation estimates are relative and difficult to align given the enormous variation in genome size even among closely related species, are partially tautological, and are not based on direct tests of functionality on the genome. [51] [52] Conservation estimates may be used to provide clues to identify possible functional elements in the genome, but they do not limit or cap the total amount of functional elements that could possibly exist in the genome. [52] Furthermore, much of the genome disputed by critics seems to be involved in epigenetic regulation of gene expression and appears to be necessary for the development of complex organisms. [51] [53] The ENCODE results were not necessarily unexpected, since increases in attributions of functionality were foreshadowed by previous decades of research. [51] [53] Additionally, others have noted that the ENCODE project from the very beginning had a scope based on seeking biomedically relevant functional elements in the genome, not evolutionarily functional elements, which are not necessarily the same thing, since evolutionary selection is neither sufficient nor necessary to establish a function. It is a very useful proxy for relevant functions, but an imperfect one and not the only one. [54]
ENCODE researchers have since reiterated that the project's main goal is identifying functional elements in the human genome. [55] In a follow-up paper in 2020, ENCODE stated that functional annotation of identified elements is "still in its infancy". [56]
In response to the complaints about the definition of the word "function", some have noted that ENCODE did define what it meant, and since the scope of ENCODE was seeking biomedically relevant functional elements in the genome, the conclusion of the project should be interpreted "as saying that 80% of the genome is engaging in relevant biochemical activities that are very likely to have causal roles in phenomena deemed relevant to biomedical research." [54] Ewan Birney, one of the ENCODE researchers, commented that "function" was used pragmatically to mean "specific biochemical activity", which included different classes of assays: RNA, "broad" histone modifications, "narrow" histone modifications, DNaseI hypersensitive sites, transcription factor ChIP-seq peaks, DNaseI footprints, transcription-factor-bound motifs, and exons. [57]
In 2014, ENCODE researchers noted that in the literature, functional parts of the genome have been identified differently in previous studies depending on the approaches used. There have been three general approaches used to identify functional parts of the human genome: genetic approaches (which rely on changes in phenotype), evolutionary approaches (which rely on conservation) and biochemical approaches (which rely on biochemical testing, and which ENCODE used). All three have limitations: genetic approaches may miss functional elements that do not manifest physically on the organism; evolutionary approaches have difficulty using accurate multispecies sequence alignments, since genomes of even closely related species vary considerably; and biochemical approaches, though highly reproducible, yield signatures that do not always automatically signify a function. They concluded that, in contrast to evolutionary and genetic evidence, biochemical data offer clues about both the molecular function served by underlying DNA elements and the cell types in which they act, and that ultimately all three approaches can be used in a complementary way to identify regions that may be functional in human biology and disease. Furthermore, they noted that the biochemical maps provided by ENCODE were the project's most valuable contribution, since they provide a starting point for testing how these signatures relate to molecular, cellular, and organismal function. [52]
The project has also been criticized for its high cost (~$400 million in total) and for favoring big science, which takes money away from highly productive investigator-initiated research. [58] The pilot ENCODE project cost an estimated $55 million; the scale-up was about $130 million; and the US National Human Genome Research Institute (NHGRI) could award up to $123 million for the next phase. Some researchers argue that a solid return on that investment has yet to be seen. Attempts to scour the literature for papers in which ENCODE plays a significant part have found about 300 papers since 2012, 110 of which came from labs without ENCODE funding. An additional problem is that "ENCODE" is not a name dedicated exclusively to the ENCODE project, so the word "encode" comes up throughout the genetics and genomics literature. [59]
Another major critique is that the results do not justify the amount of time spent on the project and that the project itself is essentially unfinishable. Although often compared to the Human Genome Project (HGP), and even termed the HGP's next step, the HGP had a clear endpoint, which ENCODE currently lacks.
The authors seem to sympathize with the scientific concerns while at the same time trying to justify their efforts by giving interviews and explaining ENCODE's details not just to the scientific public but also to the mass media. They also claim that it took more than half a century from the realization that DNA is the hereditary material of life to the sequencing of the human genome, so their plan for the next century is to truly understand the sequence itself. [59]
The analysis of transcription factor binding data generated by the ENCODE project is currently available in the web-accessible repository Factorbook. [60] Essentially, Factorbook.org is a Wiki-based database for transcription-factor-binding data generated by the ENCODE consortium. Its first release contained peak calls from hundreds of ENCODE ChIP-seq datasets, together with average profiles of histone modifications and nucleosome positioning around the binding sites and the sequence motifs enriched in the bound regions.
The human genome is a complete set of nucleic acid sequences for humans, encoded as DNA within the 23 chromosome pairs in cell nuclei and in a small DNA molecule found within individual mitochondria. These are usually treated separately as the nuclear genome and the mitochondrial genome. Human genomes include both protein-coding DNA sequences and various types of DNA that do not encode proteins. The latter is a diverse category that includes DNA coding for non-translated RNA, such as that for ribosomal RNA, transfer RNA, ribozymes, small nuclear RNAs, and several types of regulatory RNAs. It also includes promoters and their associated gene-regulatory elements, DNA playing structural and replicatory roles, such as scaffolding regions, telomeres, centromeres, and origins of replication, plus large numbers of transposable elements, inserted viral DNA, non-functional pseudogenes and simple, highly repetitive sequences. Introns make up a large percentage of non-coding DNA. Some of this non-coding DNA is non-functional junk DNA, such as pseudogenes, but there is no firm consensus on the total amount of junk DNA.
Non-coding DNA (ncDNA) sequences are components of an organism's DNA that do not encode protein sequences. Some non-coding DNA is transcribed into functional non-coding RNA molecules. Other functional regions of the non-coding DNA fraction include regulatory sequences that control gene expression; scaffold attachment regions; origins of DNA replication; centromeres; and telomeres. Some non-coding regions appear to be mostly nonfunctional, such as introns, pseudogenes, intergenic DNA, and fragments of transposons and viruses. Regions that are completely nonfunctional are called junk DNA.
Junk DNA is a DNA sequence that has no relevant biological function. Most organisms have some junk DNA in their genomes—mostly pseudogenes and fragments of transposons and viruses—but it is possible that some organisms have substantial amounts of junk DNA.
Functional genomics is a field of molecular biology that attempts to describe gene functions and interactions. Functional genomics makes use of the vast data generated by genomic and transcriptomic projects. It focuses on dynamic aspects such as gene transcription, translation, regulation of gene expression and protein–protein interactions, as opposed to static aspects of the genomic information such as DNA sequence or structure. A key characteristic of functional genomics studies is their genome-wide approach to these questions, generally involving high-throughput methods rather than a more traditional "candidate-gene" approach.
The 1000 Genomes Project (1KGP), which took place from January 2008 to 2015, was an international research effort to establish the most detailed catalogue of human genetic variation at the time. Scientists planned to sequence the genomes of at least one thousand anonymous healthy participants from a number of different ethnic groups within three years, using newly developed sequencing technologies. In 2010, the project finished its pilot phase, which was described in detail in a publication in the journal Nature. In 2012, the sequencing of 1092 genomes was announced in a Nature publication. In 2015, two papers in Nature reported the project's results and completion, along with opportunities for future research.
GENCODE is a scientific project in genome research and part of the ENCODE scale-up project.
H3K27ac is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates acetylation of the lysine residue at N-terminal position 27 of the histone H3 protein.
H3K9me3 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the tri-methylation at the 9th lysine residue of the histone H3 protein and is often associated with heterochromatin.
H3K4me1 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the mono-methylation at the 4th lysine residue of the histone H3 protein and is often associated with gene enhancers.
H3K36me3 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the tri-methylation at the 36th lysine residue of the histone H3 protein and is often associated with gene bodies.
H3K79me2 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the di-methylation at the 79th lysine residue of the histone H3 protein. H3K79me2 is detected in the transcribed regions of active genes.
H4K20me is an epigenetic modification to the DNA packaging protein Histone H4. It is a mark that indicates the mono-methylation at the 20th lysine residue of the histone H4 protein. This mark can be di- and tri-methylated. It is critical for genome integrity including DNA damage repair, DNA replication and chromatin compaction.
H3K36me2 is an epigenetic modification to the DNA packaging protein Histone H3. It is a mark that indicates the di-methylation at the 36th lysine residue of the histone H3 protein.
H3K36me is an epigenetic modification to the DNA packaging protein Histone H3, specifically, the mono-methylation at the 36th lysine residue of the histone H3 protein.
H3R42me is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the mono-methylation at the 42nd arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
H3R17me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 17th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
H3R26me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 26th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
H3R8me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 8th arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
H3R2me2 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the di-methylation at the 2nd arginine residue of the histone H3 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
H4R3me2 is an epigenetic modification to the DNA packaging protein histone H4. It is a mark that indicates the di-methylation at the 3rd arginine residue of the histone H4 protein. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial.
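The mark names used above follow a systematic nomenclature: histone, residue type and position, then the modification and, for methylation, its level. The small parser below (an illustrative helper, not an established library API) makes the convention explicit:

```python
import re

# "H3K27ac" encodes: histone (H3), residue (K = lysine, R = arginine),
# position (27), and modification (ac = acetylation; me = methylation,
# with an optional 1/2/3 suffix for mono-/di-/tri-methylation).
MARK = re.compile(r"^(H[A-Z0-9.]+?)([KR])(\d+)(ac|me)([123])?$")

def describe(mark):
    match = MARK.match(mark)
    if match is None:
        raise ValueError(f"unrecognized mark name: {mark}")
    histone, residue, position, mod, level = match.groups()
    residue_name = {"K": "lysine", "R": "arginine"}[residue]
    if mod == "ac":
        mod_name = "acetylation"
    else:
        prefix = {None: "mono", "1": "mono", "2": "di", "3": "tri"}[level]
        mod_name = f"{prefix}-methylation"
    return f"{mod_name} of {residue_name} {position} on histone {histone}"

for name in ["H3K27ac", "H3K9me3", "H3K4me1", "H4K20me", "H3R17me2"]:
    print(name, "->", describe(name))
```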
The ENCODE Project aims to delineate precisely and comprehensively the segments of the human and mouse genomes that encode functional elements.
Importantly, although very large numbers of noncoding elements have been defined, the functional annotation of ENCODE-identified elements is still in its infancy.