AlphaFold is an artificial intelligence (AI) program developed by DeepMind, a subsidiary of Alphabet, which performs predictions of protein structure. [1] The program is designed as a deep learning system. [2]
AlphaFold software has had three major versions. A team of researchers that used AlphaFold 1 (2018) placed first in the overall rankings of the 13th Critical Assessment of Structure Prediction (CASP) in December 2018. The program was particularly successful at predicting the most accurate structure for targets rated as the most difficult by the competition organisers, where no existing template structures were available from proteins with a partially similar sequence. A team that used AlphaFold 2 (2020) repeated the placement in the CASP14 competition in November 2020. [3] The team achieved a level of accuracy much higher than any other group. [2] [4] It scored above 90 for around two-thirds of the proteins in CASP's global distance test (GDT), a test that measures how similar a computationally predicted structure is to the experimentally determined structure, with 100 being a complete match within the distance cutoff used for calculating GDT. [2] [5]
AlphaFold 2's results at CASP14 were described as "astounding" [6] and "transformational". [7] Some researchers noted that the accuracy is not high enough for a third of its predictions, and that it does not reveal the mechanism or rules of protein folding for the protein folding problem to be considered solved. [8] [9] Nevertheless, there has been widespread respect for the technical achievement. On 15 July 2021 the AlphaFold 2 paper was published in Nature as an advance access publication alongside open source software and a searchable database of species proteomes. [10] [11] [12] The paper has since been cited more than 27 thousand times.
AlphaFold 3 was announced on 8 May 2024. It can predict the structure of complexes created by proteins with DNA, RNA, various ligands, and ions. [13] The new prediction method shows an improvement of at least 50% in accuracy for protein interactions with other molecules compared to existing methods, and for certain key categories of interactions the prediction accuracy has effectively doubled. [14] Demis Hassabis and John Jumper from the team that developed AlphaFold won the Nobel Prize in Chemistry in 2024 for their work on "protein structure prediction". The two had earlier won the Breakthrough Prize in Life Sciences and the Albert Lasker Award for Basic Medical Research in 2023. [15] [16]
Proteins consist of chains of amino acids which spontaneously fold to form the three-dimensional (3D) structures of the proteins. The 3D structure is crucial to understanding the biological function of the protein.
Protein structures can be determined experimentally through techniques such as X-ray crystallography, cryo-electron microscopy and nuclear magnetic resonance, which are all expensive and time-consuming. [17] Such experimental efforts have identified the structures of about 170,000 proteins over the last 60 years, while there are over 200 million known proteins across all life forms. [5]
Over the years, researchers have applied numerous computational methods to predict the 3D structures of proteins from their amino acid sequences. In the most favourable cases, where homology modeling based on molecular evolution can be applied, the accuracy of such methods approaches that of experimental techniques such as NMR. CASP, which was launched in 1994 to challenge the scientific community to produce their best protein structure predictions, found that by 2016 GDT scores of only about 40 out of 100 could be achieved for the most difficult proteins. [5] AlphaFold started competing in the 2018 CASP using an artificial intelligence (AI) deep learning technique. [17]
DeepMind is known to have trained the program on over 170,000 proteins from the Protein Data Bank, a public repository of protein sequences and structures. The program uses a form of attention network, a deep learning technique that focuses on having the AI identify parts of a larger problem, then piece them together to obtain the overall solution. [2] The overall training was conducted on between 100 and 200 GPUs. [2]
AlphaFold 1 (2018) was built on work developed by various teams in the 2010s, work that looked at the large databanks of related DNA sequences now available from many different organisms (most without known 3D structures), to try to find changes at different residues that appeared to be correlated, even though the residues were not consecutive in the main chain. Such correlations suggest that the residues may be close to each other physically, even though not close in the sequence, allowing a contact map to be estimated. Building on recent work prior to 2018, AlphaFold 1 extended this to estimate a probability distribution for just how close the residues might be likely to be—turning the contact map into a likely distance map. It also used more advanced learning methods than previously to develop the inference. [18] [19]
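The step from a predicted distance distribution to a contact or distance map can be illustrated with a short sketch. This is a minimal illustration in the spirit of AlphaFold 1's "distogram" output, not DeepMind's code; the number of bins, the bin range and the use of an 8 Å contact cutoff are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical predicted "distogram": for each residue pair (i, j), a
# probability distribution over discrete distance bins (here 64 bins
# spanning 2-22 Angstroms, an illustrative choice).
n_res, n_bins = 100, 64
bin_edges = np.linspace(2.0, 22.0, n_bins + 1)
bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
rng = np.random.default_rng(0)
logits = rng.normal(size=(n_res, n_res, n_bins))
distogram = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax

# Expected inter-residue distance: probability-weighted mean over bins.
expected_distance = (distogram * bin_centers).sum(axis=-1)

# Contact map: total probability mass assigned to bins below 8 Angstroms,
# a cutoff commonly used to define a residue-residue "contact".
contact_prob = distogram[..., bin_centers < 8.0].sum(axis=-1)

print(expected_distance.shape, contact_prob.shape)  # (100, 100) (100, 100)
```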
The 2020 version of the program, AlphaFold 2, is significantly different from the original version that won CASP13 in 2018, according to the team at DeepMind. [21] [22]
The software design used in AlphaFold 1 contained a number of modules, each trained separately, that were used to produce the guide potential that was then combined with the physics-based energy potential. AlphaFold 2 replaced this with a system of sub-networks coupled together into a single differentiable end-to-end model, based entirely on pattern recognition, which was trained as a single integrated structure. [22] [23] Local physics, in the form of energy refinement based on the AMBER model, is applied only as a final refinement step once the neural network prediction has converged, and only slightly adjusts the predicted structure. [24]
A key part of the 2020 system are two modules, believed to be based on a transformer design, which are used to progressively refine a vector of information for each relationship (or "edge" in graph-theory terminology) between an amino acid residue of the protein and another amino acid residue, and between each amino acid position and each of the different sequences in the input sequence alignment. [23] Internally these refinement transformations contain layers that have the effect of bringing relevant data together and filtering out irrelevant data (the "attention mechanism") for these relationships, in a context-dependent way, learnt from training data. These transformations are iterated, the updated information output by one step becoming the input of the next, with the sharpened residue/residue information feeding into the update of the residue/sequence information, and then the improved residue/sequence information feeding into the update of the residue/residue information. [23] As the iteration progresses, according to one report, the "attention algorithm ... mimics the way a person might assemble a jigsaw puzzle: first connecting pieces in small clumps—in this case clusters of amino acids—and then searching for ways to join the clumps in a larger whole." [5]
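The alternation between the residue/residue ("pair") array and the residue/sequence (MSA) array described above can be sketched in a deliberately simplified form. The tensor shapes, the single attention head and the outer-product feedback below are illustrative assumptions only; AlphaFold 2's actual Evoformer uses many attention heads, gating, triangle updates and other components not shown here.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def refine(msa, pair, n_iter=4):
    """Toy alternating update of an MSA array (n_seq, n_res, c) and a
    pair array (n_res, n_res, c), loosely in the spirit of AlphaFold 2's
    iterated refinement. Single-head attention, no gating or triangle updates."""
    n_seq, n_res, c = msa.shape
    for _ in range(n_iter):
        # Row-wise attention over residues, biased by the pair array:
        # residue-pair information steers which residues attend to which
        # within each aligned sequence.
        bias = pair.mean(axis=-1)                            # (n_res, n_res)
        scores = msa @ msa.transpose(0, 2, 1) / np.sqrt(c)   # (n_seq, n_res, n_res)
        attn = softmax(scores + bias, axis=-1)
        msa = msa + attn @ msa                               # residual update

        # Outer-product feedback: refined MSA statistics update the pair
        # array, sharpening the residue-residue relationships in turn.
        mean_profile = msa.mean(axis=0)                      # (n_res, c)
        pair = pair + np.einsum('ic,jc->ijc', mean_profile, mean_profile) / n_res
    return msa, pair

rng = np.random.default_rng(0)
msa, pair = refine(rng.normal(size=(8, 50, 16)), rng.normal(size=(50, 50, 16)))
print(msa.shape, pair.shape)  # (8, 50, 16) (50, 50, 16)
```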
The output of these iterations then informs the final structure prediction module, [23] which also uses transformers, [25] and is itself then iterated. In an example presented by DeepMind, the structure prediction module achieved a correct topology for the target protein on its first iteration, scored as having a GDT_TS of 78, but with a large number (90%) of stereochemical violations – i.e. unphysical bond angles or lengths. With subsequent iterations the number of stereochemical violations fell. By the third iteration the GDT_TS of the prediction was approaching 90, and by the eighth iteration the number of stereochemical violations was approaching zero. [26]
The training data was originally restricted to single peptide chains. However, the October 2021 update, named AlphaFold-Multimer, included protein complexes in its training data. DeepMind stated this update succeeded about 70% of the time at accurately predicting protein-protein interactions. [27]
Announced on 8 May 2024, AlphaFold 3 was co-developed by Google DeepMind and Isomorphic Labs, both subsidiaries of Alphabet. AlphaFold 3 is not limited to single-chain proteins, as it can also predict the structures of protein complexes with DNA, RNA, post-translational modifications and selected ligands and ions. [28] [13]
AlphaFold 3 introduces the "Pairformer", a deep learning architecture inspired by the transformer, considered similar to but simpler than the Evoformer introduced with AlphaFold 2. [29] [30] The raw predictions from the Pairformer module are passed to a diffusion model, which starts with a cloud of atoms and uses these predictions to iteratively progress towards a 3D depiction of the molecular structure. [13]
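The role of the diffusion module can be illustrated with a schematic denoising loop: begin with random atom positions and repeatedly apply a learned denoiser conditioned on the Pairformer's output. The noise schedule, step count and the `denoiser` callable below are placeholders, not AlphaFold 3's actual parameters or training setup.

```python
import numpy as np

def generate_structure(denoiser, conditioning, n_atoms, n_steps=50, seed=0):
    """Schematic reverse-diffusion loop: start from a random cloud of atoms
    and iteratively move it towards a plausible structure. `denoiser(coords,
    noise_level, conditioning)` stands in for a trained network conditioned
    on the Pairformer's pair/single representations."""
    rng = np.random.default_rng(seed)
    coords = rng.normal(size=(n_atoms, 3)) * 10.0       # initial atom cloud
    noise_levels = np.linspace(1.0, 0.01, n_steps)      # illustrative schedule
    for t, sigma in enumerate(noise_levels):
        predicted_clean = denoiser(coords, sigma, conditioning)
        # Step part of the way from the noisy coordinates towards the
        # denoiser's current estimate of the clean structure.
        step = (t + 1) / n_steps
        coords = coords + step * (predicted_clean - coords)
    return coords

# Toy stand-in denoiser that simply pulls atoms towards the origin.
coords = generate_structure(lambda x, s, c: x * 0.9, conditioning=None, n_atoms=100)
print(coords.shape)  # (100, 3)
```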
The AlphaFold server was created to provide free access to AlphaFold 3 for non-commercial research. [31]
In December 2018, DeepMind's AlphaFold placed first in the overall rankings of the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP). [32] [33]
The program was particularly successful at predicting the most accurate structure for targets rated as the most difficult by the competition organisers, where no existing template structures were available from proteins with a partially similar sequence. AlphaFold gave the best prediction for 25 out of 43 protein targets in this class, [33] [34] [35] achieving a median score of 58.9 on the CASP's global distance test (GDT) score, ahead of 52.5 and 52.4 by the two next best-placed teams, [36] who were also using deep learning to estimate contact distances. [37] [38] Overall, across all targets, the program achieved a GDT score of 68.5. [39]
In January 2020, implementations and illustrative code of AlphaFold 1 were released open-source on GitHub. [40] [17] However, as stated in the "Read Me" file on that website: "This code can't be used to predict structure of an arbitrary protein sequence. It can be used to predict structure only on the CASP13 dataset (links below). The feature generation code is tightly coupled to our internal infrastructure as well as external tools, hence we are unable to open-source it." Therefore, in essence, the code deposited is not suitable for general use but only for the CASP13 proteins. As of 5 March 2021, the company had not announced plans to make their code publicly available.
In November 2020, DeepMind's new version, AlphaFold 2, won CASP14. [41] [42] Overall, AlphaFold 2 made the best prediction for 88 out of the 97 targets. [6]
On the competition's preferred global distance test (GDT) measure of accuracy, the program achieved a median score of 92.4 (out of 100), meaning that more than half of its predictions were scored at better than 92.4% for having their atoms in more-or-less the right place, [43] [44] a level of accuracy reported to be comparable to experimental techniques like X-ray crystallography. [21] [7] [39] In 2018 AlphaFold 1 had only reached this level of accuracy in two of all of its predictions. [6] 88% of predictions in the 2020 competition had a GDT_TS score of more than 80. On the group of targets classed as the most difficult, AlphaFold 2 achieved a median score of 87.
Measured by the root-mean-square deviation (RMSD) of the placement of the alpha-carbon atoms of the protein backbone chain, which tends to be dominated by the performance of the worst-fitted outliers, 88% of AlphaFold 2's predictions had an RMSD of less than 4 Å for the set of overlapped C-alpha atoms. [6] 76% of predictions achieved better than 3 Å, and 46% had a C-alpha atom RMS accuracy better than 2 Å, [6] with a median RMSD in its predictions of 2.1 Å for a set of overlapped C-alpha atoms. [6] AlphaFold 2 also achieved an accuracy in modelling surface side chains described as "really really extraordinary".
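The C-alpha RMSD figures quoted above are computed after optimally superposing the predicted and experimental backbones. A minimal numpy sketch of that calculation (Kabsch superposition followed by RMSD) is shown below; it assumes the two coordinate arrays already list corresponding C-alpha atoms in the same order.

```python
import numpy as np

def ca_rmsd(pred, ref):
    """Root-mean-square deviation of C-alpha atoms after optimal
    superposition (Kabsch algorithm). pred, ref: (n, 3) arrays of
    corresponding C-alpha coordinates."""
    p = pred - pred.mean(axis=0)
    r = ref - ref.mean(axis=0)
    # Optimal rotation minimising the RMSD between the centred sets.
    u, _, vt = np.linalg.svd(p.T @ r)
    d = np.sign(np.linalg.det(u @ vt))          # guard against improper rotation
    rot = u @ np.diag([1.0, 1.0, d]) @ vt
    diff = p @ rot - r
    return np.sqrt((diff ** 2).sum() / len(pred))

rng = np.random.default_rng(0)
ref = rng.normal(size=(120, 3))
print(round(ca_rmsd(ref + rng.normal(scale=0.5, size=ref.shape), ref), 2))
```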
To further verify AlphaFold 2, the conference organisers approached four leading experimental groups for structures they were finding particularly challenging and had been unable to determine. In all four cases the three-dimensional models produced by AlphaFold 2 were sufficiently accurate to determine the structures of these proteins by molecular replacement. These included target T1100 (Af1503), a small membrane protein that had been studied by experimentalists for ten years. [5]
Of the three structures that AlphaFold 2 had the least success in predicting, two had been obtained by protein NMR methods, which define protein structure directly in aqueous solution, whereas AlphaFold was mostly trained on protein structures in crystals. The third exists in nature as a multidomain complex consisting of 52 identical copies of the same domain, a situation AlphaFold was not programmed to consider. For all targets with a single domain, excluding only one very large protein and the two structures determined by NMR, AlphaFold 2 achieved a GDT_TS score of over 80.
In 2022, DeepMind did not enter CASP15, but most of the entrants used AlphaFold or tools incorporating AlphaFold. [45]
AlphaFold 2 scoring more than 90 in CASP's global distance test (GDT) is considered a significant achievement in computational biology [5] and great progress towards a decades-old grand challenge of biology. [7] Nobel Prize winner and structural biologist Venki Ramakrishnan called the result "a stunning advance on the protein folding problem", [5] adding that "It has occurred decades before many people in the field would have predicted. It will be exciting to see the many ways in which it will fundamentally change biological research." [41]
Propelled by press releases from CASP and DeepMind, [46] [41] AlphaFold 2's success received wide media attention. [47] As well as news pieces in the specialist science press, such as Nature, [7] Science, [5] MIT Technology Review, [2] and New Scientist, [48] [49] the story was widely covered by major national newspapers. [50] [51] [52] [53] A frequent theme was that the ability to predict protein structures accurately from their amino acid sequences is expected to bring a wide variety of benefits to the life sciences, including accelerating drug discovery and enabling a better understanding of diseases. [7] [54] Some have noted that even a perfect answer to the protein structure prediction problem would still leave questions about the protein folding problem—understanding in detail how the folding process actually occurs in nature (and how proteins can sometimes misfold). [55]
In 2023, Demis Hassabis and John Jumper won the Breakthrough Prize in Life Sciences [16] as well as the Albert Lasker Award for Basic Medical Research for their management of the AlphaFold project. [56] Hassabis and Jumper proceeded to win the Nobel Prize in Chemistry in 2024 for their work on “protein structure prediction” with David Baker of the University of Washington. [15] [57]
DeepMind has provided open access to the source code of several AlphaFold versions (excluding AlphaFold 3) following requests from the scientific community. [58] [59] [60] The full source code of AlphaFold 3 is expected to be made openly available by the end of 2024. [61] [62]
| Section | Field | Value |
| --- | --- | --- |
| Content | Data types captured | protein structure prediction |
| Content | Organisms | all UniProt proteomes |
| Contact | Research center | EMBL-EBI |
| Contact | Primary citation | [10] |
| Access | Website | https://www.alphafold.ebi.ac.uk/ |
| Access | Download URL | yes |
| Tools | Web | yes |
| Miscellaneous | License | CC-BY 4.0 |
| Miscellaneous | Curation policy | automatic |
The AlphaFold Protein Structure Database was launched on July 22, 2021, as a joint effort between AlphaFold and EMBL-EBI. At launch the database contained AlphaFold-predicted models of protein structures for nearly the full UniProt proteome of humans and 20 model organisms, amounting to over 365,000 proteins. The database does not include proteins with fewer than 16 or more than 2,700 amino acid residues, [63] but for humans such proteins are available in the whole-proteome batch file. [64] AlphaFold planned to add more sequences to the collection, the initial goal (as of the beginning of 2022) being to cover most of the UniRef90 set of more than 100 million proteins. As of May 15, 2022, 992,316 predictions were available. [65]
In July 2021, UniProt-KB and InterPro [66] were updated to show AlphaFold predictions when available. [67]
On July 28, 2022, the team uploaded to the database the structures of around 200 million proteins from 1 million species, covering nearly every known protein on the planet. [68]
AlphaFold has various limitations:
AlphaFold has been used to predict structures of proteins of SARS-CoV-2, the causative agent of COVID-19. The structures of these proteins had not yet been determined experimentally in early 2020. [77] [7] Results were examined by scientists at the Francis Crick Institute in the United Kingdom before release into the larger research community. The team also confirmed accurate prediction against the experimentally determined SARS-CoV-2 spike protein that was shared in the Protein Data Bank, an international open-access database, before releasing the computationally determined structures of the under-studied protein molecules. [78] The team acknowledged that although these protein structures might not be the subject of ongoing therapeutic research efforts, they would add to the community's understanding of the SARS-CoV-2 virus. [78] Specifically, AlphaFold 2's prediction of the structure of the ORF3a protein was very similar to the structure determined by researchers at the University of California, Berkeley using cryo-electron microscopy. This protein is believed to assist the virus in breaking out of the host cell once it replicates, and is also believed to play a role in triggering the inflammatory response to the infection. [79]
Protein secondary structure is the local spatial conformation of the polypeptide backbone excluding the side chains. The two most common secondary structural elements are alpha helices and beta sheets, though beta turns and omega loops occur as well. Secondary structure elements typically spontaneously form as an intermediate before the protein folds into its three dimensional tertiary structure.
Protein folding is the physical process by which a protein, after synthesis by a ribosome as a linear chain of amino acids, changes from an unstable random coil into a more ordered three-dimensional structure. This structure permits the protein to become biologically functional.
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its secondary and tertiary structure from primary structure. Structure prediction is different from the inverse problem of protein design.
Critical Assessment of Structure Prediction (CASP), sometimes called Critical Assessment of Protein Structure Prediction, is a community-wide, worldwide experiment for protein structure prediction taking place every two years since 1994. CASP provides research groups with an opportunity to objectively test their structure prediction methods and delivers an independent assessment of the state of the art in protein structure modeling to the research community and software users. Even though the primary goal of CASP is to help advance the methods of identifying protein three-dimensional structure from its amino acid sequence, many view the experiment more as a "world championship" in this field of science. More than 100 research groups from all over the world participate in CASP on a regular basis, and it is not uncommon for entire groups to suspend their other research for months while they focus on getting their servers ready for the experiment and on performing the detailed predictions.
Predictor@home was a volunteer computing project that used BOINC software to predict protein structure from protein sequence in the context of the 6th biennial CASP, or Critical Assessment of Techniques for Protein Structure Prediction. A major goal of the project was the testing and evaluation of new algorithms to predict both known and unknown protein structures.
Rosetta@home is a volunteer computing project researching protein structure prediction on the Berkeley Open Infrastructure for Network Computing (BOINC) platform, run by the Baker lab. Rosetta@home aims to predict protein–protein docking and design new proteins with the help of about fifty-five thousand active volunteer computers processing at over 487,946 gigaFLOPS on average as of September 19, 2020. Foldit, a Rosetta@home videogame, aims to reach these goals with a crowdsourcing approach. Though much of the project is oriented toward basic research to improve the accuracy and robustness of proteomics methods, Rosetta@home also does applied research on malaria, Alzheimer's disease, and other pathologies.
Sir Demis Hassabis is a British artificial intelligence (AI) researcher and entrepreneur. He is the chief executive officer and co-founder of Google DeepMind and Isomorphic Labs, and a UK Government AI Adviser. In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions to protein structure prediction.
Homology modeling, also known as comparative modeling of protein, refers to constructing an atomic-resolution model of the "target" protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein. Homology modeling relies on the identification of one or more known protein structures likely to resemble the structure of the query sequence, and on the production of a sequence alignment that maps residues in the query sequence to residues in the template sequence. Protein structures are more conserved than protein sequences amongst homologues, but sequences falling below 20% sequence identity can have very different structures.
The global distance test (GDT), also written as GDT_TS to represent "total score", is a measure of similarity between two protein structures with known amino acid correspondences but different tertiary structures. It is most commonly used to compare the results of protein structure prediction to the experimentally determined structure as measured by X-ray crystallography, protein NMR, or, increasingly, cryoelectron microscopy.
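In its simplest form, GDT_TS averages the fractions of C-alpha atoms that fall within 1, 2, 4 and 8 Å of their positions in the reference structure. The sketch below assumes a single, already-computed superposition; the official CASP calculation searches over many superpositions to maximise each fraction, which is omitted here for brevity.

```python
import numpy as np

def gdt_ts(pred_ca, ref_ca, cutoffs=(1.0, 2.0, 4.0, 8.0)):
    """Simplified GDT_TS: mean, over the distance cutoffs, of the
    percentage of corresponding C-alpha atoms within that cutoff.
    Assumes pred_ca and ref_ca are already optimally superposed."""
    dists = np.linalg.norm(pred_ca - ref_ca, axis=1)
    fractions = [(dists <= c).mean() for c in cutoffs]
    return 100.0 * np.mean(fractions)

rng = np.random.default_rng(0)
ref = rng.normal(size=(150, 3)) * 10
pred = ref + rng.normal(scale=1.5, size=ref.shape)
print(round(gdt_ts(pred, ref), 1))   # values near 100 indicate near-experimental accuracy
```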
David Baker is an American biochemist and computational biologist who has pioneered methods to design proteins and predict their three-dimensional structures. He is the Henrietta and Aubrey Davis Endowed Professor in Biochemistry, an investigator with the Howard Hughes Medical Institute, and an adjunct professor of genome sciences, bioengineering, chemical engineering, computer science, and physics at the University of Washington. He was awarded the shared 2024 Nobel Prize in Chemistry for his work on computational protein design.
In computational biology, de novo protein structure prediction refers to an algorithmic process by which protein tertiary structure is predicted from its amino acid primary sequence. The problem itself has occupied leading scientists for decades while still remaining unsolved. According to Science, the problem remains one of the top 125 outstanding issues in modern science. At present, some of the most successful methods have a reasonable probability of predicting the folds of small, single-domain proteins within 1.5 angstroms over the entire structure.
RaptorX is a software and web server for protein structure and function prediction that is free for non-commercial use. RaptorX is among the most popular methods for protein structure prediction. Like other remote homology recognition and protein threading techniques, RaptorX is able to regularly generate reliable protein models when the widely used PSI-BLAST cannot. However, RaptorX is also significantly different from profile-based methods in that RaptorX excels at modeling of protein sequences without a large number of sequence homologs by exploiting structure information. RaptorX Server has been designed to ensure a user-friendly interface for users inexpert in protein structure prediction methods.
I-TASSER is a bioinformatics method for predicting three-dimensional structure model of protein molecules from amino acid sequences. It detects structure templates from the Protein Data Bank by a technique called fold recognition. The full-length structure models are constructed by reassembling structural fragments from threading templates using replica exchange Monte Carlo simulations. I-TASSER is one of the most successful protein structure prediction methods in the community-wide CASP experiments.
Jianlin (Jack) Cheng is the William and Nancy Thompson Missouri Distinguished Professor in the Electrical Engineering and Computer Science (EECS) Department at the University of Missouri, Columbia. He earned his PhD from the University of California-Irvine in 2006, his MS degree from Utah State University in 2001, and his BS degree from Huazhong University of Science and Technology in 1994.
DeepMind Technologies Limited, trading as DeepMind, is a British-American artificial intelligence research laboratory which serves as a subsidiary of Alphabet Inc. Founded in the UK in 2010, it was acquired by Google in 2014 and merged with Google AI's Google Brain division to become Google DeepMind in April 2023. The company is based in London, with research centres in Canada, France, Germany, and the United States.
Pushmeet Kohli is a computer and machine learning scientist at Google DeepMind, where he holds the position of Vice President of research for "Secure and Reliable AI" and "AI for Science and Sustainability". Before joining DeepMind, he was a partner scientist and director of research at Microsoft Research and a post-doctoral fellow at the University of Cambridge. Kohli's research investigates applications of machine learning and computer vision. He has also made contributions in game theory, discrete algorithms and psychometrics.
Molecular Operating Environment (MOE) is a drug discovery software platform that integrates visualization, modeling and simulations, as well as methodology development, in one package. MOE scientific applications are used by biologists, medicinal chemists and computational chemists in pharmaceutical, biotechnology and academic research. MOE runs on Windows, Linux, Unix, and macOS. Main application areas in MOE include structure-based design, fragment-based design, ligand-based design, pharmacophore discovery, medicinal chemistry applications, biologics applications, structural biology and bioinformatics, protein and antibody modeling, molecular modeling and simulations, virtual screening, cheminformatics & QSAR. The Scientific Vector Language (SVL) is the built-in command, scripting and application development language of MOE.
Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who also serves as the CEO. The company was incorporated on February 24, 2021 and announced on November 4, 2021. It was established under Alphabet Inc. as a spin-off from its AI research lab DeepMind, of which Hassabis is also founder and CEO.
John Michael Jumper is an American chemist and computer scientist. He currently serves as director at Google DeepMind. Jumper and his colleagues created AlphaFold, an artificial intelligence (AI) model to predict protein structures from their amino acid sequence with high accuracy. Jumper stated that the AlphaFold team plans to release 100 million protein structures.
The Predicted Aligned Error (PAE) is a quantitative output produced by AlphaFold, a protein structure prediction system developed by DeepMind. PAE estimates the expected positional error for each residue in a predicted protein structure if it were aligned to a corresponding residue in the true protein structure. This measurement helps scientists assess the confidence in the relative positions and orientations of different parts of the predicted protein model.
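A common use of the PAE matrix is to judge how confidently two regions (for example, two domains) are placed relative to each other: a low average PAE between residues of region A and region B indicates a well-defined relative orientation. The short sketch below assumes a PAE matrix has already been loaded as a square numpy array; the residue ranges and values are illustrative only.

```python
import numpy as np

def inter_region_pae(pae, region_a, region_b):
    """Mean predicted aligned error (in Angstroms) between two residue
    ranges, averaged over both alignment directions. Low values suggest
    the relative placement of the two regions is confidently predicted."""
    a = slice(*region_a)
    b = slice(*region_b)
    return 0.5 * (pae[a, b].mean() + pae[b, a].mean())

# Toy PAE matrix for a 300-residue prediction (values in Angstroms).
rng = np.random.default_rng(0)
pae = rng.uniform(0.2, 30.0, size=(300, 300))
print(round(inter_region_pae(pae, (0, 150), (150, 300)), 1))
```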