GROMACS

Developer(s): University of Groningen, Royal Institute of Technology, Uppsala University [1]
Initial release: 1991
Stable release: 2024.2 / 10 May 2024 [2]
Written in: C++, C, CUDA, OpenCL, SYCL
Operating system: Linux, macOS, Windows, other Unix varieties
Platform: Many
Available in: English
Type: Molecular dynamics simulation
License: LGPL (version 4.6 and later), [3] GPL (before version 4.6) [4]
Website: www.gromacs.org

GROMACS is a molecular dynamics package mainly designed for simulations of proteins, lipids, and nucleic acids. It was originally developed in the Biophysical Chemistry department of the University of Groningen, and is now maintained by contributors in universities and research centers worldwide. [5] [6] [7] GROMACS is one of the fastest and most popular molecular dynamics packages available, [8] [9] and can run on central processing units (CPUs) and graphics processing units (GPUs). [10] It is free, open-source software released under the GNU Lesser General Public License (LGPL); [3] versions prior to 4.6 were released under the GNU General Public License (GPL). [4]

History

The GROMACS project began in 1991 at the Department of Biophysical Chemistry, University of Groningen, Netherlands (1991–2000). The name derives from this period (GROningen MAchine for Chemical Simulations), although currently GROMACS is not an abbreviation for anything, as little active development has taken place in Groningen in recent decades. The original goal was to construct a dedicated parallel computer system for molecular simulations, based on a ring architecture (since superseded by modern hardware designs). The molecular dynamics-specific routines were rewritten in the programming language C from the Fortran 77-based program GROMOS, which had been developed in the same group.[citation needed]

Since 2001, GROMACS has been developed by the GROMACS development teams at the Royal Institute of Technology and Uppsala University, Sweden.

Features

GROMACS is operated via a command-line interface, and uses files for input and output. It provides calculation progress and estimated completion time feedback, a trajectory viewer, and an extensive library for trajectory analysis. [3] In addition, support for different force fields makes GROMACS very flexible. It can be executed in parallel, using the Message Passing Interface (MPI) or threads. It contains a tool to convert molecular coordinates from Protein Data Bank (PDB) files into the formats it uses internally. Once a configuration file for the simulation of several molecules (possibly including solvent) has been created, the simulation run (which can be time-consuming) produces a trajectory file describing the movements of the atoms over time. That file can then be analyzed or visualized with several supplied tools, as sketched below. [11]
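
A minimal sketch of this workflow for a single protein in water, using the standard gmx command-line tools (the file names protein.pdb and md.mdp, and the specific options shown, are illustrative placeholders):

    # Convert a PDB file to GROMACS coordinates and generate a topology
    gmx pdb2gmx -f protein.pdb -o processed.gro -p topol.top -water spce

    # Define a simulation box and fill it with solvent
    gmx editconf -f processed.gro -o boxed.gro -c -d 1.0 -bt cubic
    gmx solvate -cp boxed.gro -cs spc216.gro -o solvated.gro -p topol.top

    # Combine coordinates, topology, and run parameters into a run input file
    gmx grompp -f md.mdp -c solvated.gro -p topol.top -o md.tpr

    # Run the simulation; this writes a trajectory file (md.xtc) and a log
    gmx mdrun -deffnm md

    # Analyze the trajectory, e.g. root-mean-square deviation over time
    gmx rms -s md.tpr -f md.xtc -o rmsd.xvg

Each step reads and writes ordinary files, so the workflow can be scripted and resumed at any stage.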

GROMACS has had GPU offload support since version 4.5, originally limited to Nvidia GPUs. GPU support has been expanded and improved over the years, [12] and, since version 2023, GROMACS has CUDA, OpenCL, and SYCL backends for running on GPUs from AMD, Apple, Intel, and Nvidia, often with substantial acceleration compared to CPU-only runs. [13]
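
GPU offload and parallelism are requested at run time. A sketch, assuming a build with a working GPU backend and a machine with at least one supported GPU (the flags shown are standard mdrun options; the rank and thread counts are illustrative):

    # Offload the short-range nonbonded work to a GPU, with 4 thread-MPI
    # ranks and 8 OpenMP threads per rank on the CPU side
    gmx mdrun -deffnm md -nb gpu -ntmpi 4 -ntomp 8

Long-range PME electrostatics can similarly be offloaded with -pme gpu, subject to backend support; omitting the offload flags runs the same input entirely on the CPU.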

Easter eggs

As of January 2010, GROMACS's source code contains approximately 400 alternative backronyms of GROMACS, written as jokes among the developers and biochemistry researchers. These include "Gromacs Runs On Most of All Computer Systems", "Gromacs Runs One Microsecond At Cannonball Speeds", "Good ROcking Metal Altar for Chronical Sinner", "Working on GRowing Old MAkes el Chrono Sweat", and "Great Red Owns Many ACres of Sand". One is randomly selected and may appear in GROMACS's output stream. In one instance, such a backronym, "Giving Russians Opium May Alter Current Situation", caused offense. [14]

Applications

Under a non-GPL license, GROMACS is widely used in the Folding@home distributed computing project for simulations of protein folding, where it is the base code for the project's largest and most regularly used series of calculation cores. [15] [16] EvoGrid, a distributed computing project to evolve artificial life, also employs GROMACS. [17]

References

  1. "The GROMACS development team". Archived from the original on 2020-02-26. Retrieved 2012-06-27.
  2. "Downloads — GROMACS 2024.2 documentation". gromacs.org. Retrieved 2024-05-24.
  3. 1 2 3 "About GROMACS". gromacs.org. 17 May 2021. Retrieved 2024-05-24.
  4. "About Gromacs". gromacs.org. 16 August 2010. Archived from the original on 2020-11-27. Retrieved 2012-06-26.
  5. "People — Gromacs". gromacs.org. 14 March 2012. Archived from the original on 26 February 2020. Retrieved 26 June 2012.
  6. Van Der Spoel D, Lindahl E, Hess B, Groenhof G, Mark AE, Berendsen HJ (2005). "GROMACS: fast, flexible, and free". J Comput Chem. 26 (16): 1701–18. doi:10.1002/jcc.20291. PMID   16211538. S2CID   1231998.
  7. Hess B, Kutzner C, Van Der Spoel D, Lindahl E (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". J Chem Theory Comput. 4 (2): 435–447. doi:10.1021/ct700301q. hdl: 11858/00-001M-0000-0012-DDBF-0 . PMID   26620784. S2CID   1142192.
  8. Carsten Kutzner; David Van Der Spoel; Martin Fechner; Erik Lindahl; Udo W. Schmitt; Bert L. De Groot; Helmut Grubmüller (2007). "Speeding up parallel GROMACS on high-latency networks". Journal of Computational Chemistry. 28 (12): 2075–2084. doi:10.1002/jcc.20703. hdl: 11858/00-001M-0000-0012-E29A-0 . PMID   17405124. S2CID   519769.
  9. Berk Hess; Carsten Kutzner; David van der Spoel; Erik Lindahl (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". Journal of Chemical Theory and Computation. 4 (3): 435–447. doi:10.1021/ct700301q. hdl: 11858/00-001M-0000-0012-DDBF-0 . PMID   26620784. S2CID   1142192.
  10. "Installation guide". gromacs.org. 10 May 2024. Retrieved 24 May 2024.
  11. "Flow Chart — GROMACS 2024.2 documentation". gromacs.org. 10 May 2024. Retrieved 24 May 2024.
  12. Páll S, Zhmurov A, Bauer P, Abraham M, Lundborg M, Gray A, Hess B, Lindahl E (2020). "Heterogeneous parallelization and acceleration of molecular dynamics simulations in GROMACS". J Chem Phys. 153 (13): 134110. doi: 10.1063/5.0018516 . PMID   33032406.
  13. "Heterogeneous parallelization and GPU acceleration — GROMACS webpage". gromacs.org. 10 May 2024. Retrieved 24 May 2024.
  14. "Re: Working on Giving Russians Opium May Alter Current Situation". Folding@home. 17 January 2010. Retrieved 2012-06-26.
  15. Pande lab (11 June 2012). "Folding@home Open Source FAQ". Folding@home. Stanford University. Archived from the original (FAQ) on 17 July 2012. Retrieved 26 June 2012.
  16. Adam Beberg; Daniel Ensign; Guha Jayachandran; Siraj Khaliq; Vijay Pande (2009). "Folding@home: Lessons from eight years of volunteer distributed computing". 2009 IEEE International Symposium on Parallel & Distributed Processing (PDF). pp. 1–8. doi:10.1109/IPDPS.2009.5160922. ISBN   978-1-4244-3751-1. ISSN   1530-2075. S2CID   15677970.
  17. Markoff, John (29 September 2009). "Wanted: Home Computers to Join in Research on Artificial Life". The New York Times. Retrieved 26 June 2012.