List of Folding@home cores

The distributed-computing project Folding@home uses scientific computer programs, referred to as "cores" or "fahcores", to perform calculations.[1][2] Folding@home's cores are based on modified and optimized versions of molecular simulation programs, including TINKER, GROMACS, AMBER, CPMD, SHARPEN, ProtoMol, and Desmond.[1][3][4] Each variant is given an arbitrary identifier (Core xx). While the same core can be used by various versions of the client, separating the core from the client enables the scientific methods to be updated automatically as needed without a client update.[1]

Active cores

The cores listed below are currently used by the project.[1]

GROMACS

GPU

GPU cores use the graphics processing unit of modern video cards to perform molecular dynamics. The GPU Gromacs core is not a true port of Gromacs; rather, key elements of Gromacs were extracted and enhanced for GPU capabilities.[8]

GPU3

These are the third-generation GPU cores, based on OpenMM, the Pande Group's own open library for molecular simulation. Although based on the GPU2 code, they add stability and new capabilities.[9]

  • core 22 (last core to use the old-style numbering convention)
    • v0.0.16 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 7.5.1.
    • v0.0.17 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 7.5.1.
    • v0.0.18 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 7.6.0.[10]
    • v0.0.20 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 7.7.0, which provides performance improvements and many new science features.[11]
  • core 23
    • v8.0.3 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 8.0.0, which provides performance improvements, particularly to CUDA, and many new science features.[12]
  • core 24
    • v8.1.3 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 8.1.1, which includes some major bug fixes.
  • core 25
    • Not publicly released
  • core 26
    • v8.2 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 8.2, which includes some major bug fixes, including fixes related to OpenCL and GLIBC.
  • core 27
    • v8.2.1 Available for Windows and Linux on AMD and NVIDIA GPUs, using OpenCL and, where available, CUDA. It uses OpenMM 8.2, which includes some major bug fixes, including fixes related to OpenCL and GLIBC.

Inactive cores

These cores are not currently used by the project, either because they were retired as obsolete or because they are not yet ready for general release.[1]

TINKER

TINKER is a complete and general software package for molecular mechanics and molecular dynamics, with some special features for biopolymers.[13]

GROMACS

CPMD

Short for Car–Parrinello Molecular Dynamics, this core performs ab initio quantum-mechanical molecular dynamics. Unlike classical molecular dynamics calculations, which use a force-field approach, CPMD includes the motion of electrons in the calculation of energies, forces, and motion.[41][42] Quantum chemical calculations can yield a very reliable potential energy surface and naturally incorporate multi-body interactions.[42]
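
To make the contrast concrete, the classical force-field approach that CPMD goes beyond can be sketched in a few lines of plain Python: forces come from a fixed analytic potential (here a Lennard-Jones pair with illustrative parameters), whereas CPMD recomputes forces quantum-mechanically at every step. This is an illustrative sketch, not Folding@home's actual code.

```python
# Minimal classical MD step: a Lennard-Jones pair potential integrated with
# velocity Verlet, for two particles on a line. All units and parameters are
# illustrative (reduced units), not taken from any real force field.

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Force magnitude along the separation, F = -dV/dr, for the
    Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
    Positive values are repulsive."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r

def velocity_verlet_step(x, v, dt, mass=1.0):
    """One velocity Verlet step for two particles at positions x = [x0, x1]."""
    r = x[1] - x[0]
    f = lj_force(r)                       # force on particle 1 (particle 0 gets -f)
    a = [-f / mass, f / mass]
    x = [x[i] + v[i] * dt + 0.5 * a[i] * dt * dt for i in range(2)]
    f_new = lj_force(x[1] - x[0])
    a_new = [-f_new / mass, f_new / mass]
    v = [v[i] + 0.5 * (a[i] + a_new[i]) * dt for i in range(2)]
    return x, v
```

In a quantum (CPMD-style) calculation, the `lj_force` call would be replaced by an electronic-structure computation at each step, which is why ab initio MD is far more expensive per timestep.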

SHARPEN

Desmond

The software for this core was developed at D. E. Shaw Research. Desmond performs high-speed molecular dynamics simulations of biological systems on conventional computer clusters.[48][49][50][51] The code uses novel parallel algorithms[52] and numerical techniques[53] to achieve high performance on platforms containing a large number of processors,[54] but may also be executed on a single computer. Desmond and its source code are available without cost for non-commercial use by universities and other not-for-profit research institutions.

AMBER

Short for Assisted Model Building with Energy Refinement, AMBER is a family of force fields for molecular dynamics, as well as the name of the software package that simulates these force fields.[56] AMBER was originally developed by Peter Kollman at the University of California, San Francisco, and is currently maintained by professors at various universities.[57] The double-precision AMBER core is not currently optimized with SSE or SSE2,[58][59] but the AMBER core is significantly faster than the TINKER cores and adds some functionality that cannot be achieved with the Gromacs cores.[59]
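
A force field of this kind is essentially a sum of simple analytic energy terms. The bonded part of an AMBER-style energy function can be sketched as follows; the functional forms (harmonic bond stretch and angle bend) match the general AMBER form, but the parameters below are purely illustrative, not from any real AMBER parameter set.

```python
import math

# Sketch of the bonded terms in an AMBER-style force field:
#   E = sum_bonds K_r*(r - r_eq)^2 + sum_angles K_theta*(theta - theta_eq)^2 + ...
# (AMBER also includes torsion and non-bonded terms, omitted here.)

def bond_energy(r, k_r, r_eq):
    """Harmonic bond-stretch energy, K_r*(r - r_eq)^2."""
    return k_r * (r - r_eq) ** 2

def angle_energy(theta, k_theta, theta_eq):
    """Harmonic angle-bend energy, K_theta*(theta - theta_eq)^2 (radians)."""
    return k_theta * (theta - theta_eq) ** 2
```

Evaluating these terms over every bond and angle in a protein, at every timestep, is the core workload such a simulation core performs.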

ProtoMol

ProtoMol is an object-oriented, component-based framework for molecular dynamics (MD) simulations. It offers high flexibility, easy extensibility and maintenance, and high performance, including parallelization.[60] In 2009, the Pande Group was working on a complementary new technique called Normal Mode Langevin Dynamics, which had the potential to greatly speed up simulations while maintaining the same accuracy.[9][61]
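
The stochastic update underlying such methods can be illustrated with an ordinary overdamped Langevin step (Euler–Maruyama). Note this sketch shows only plain Langevin dynamics; the Normal Mode variant mentioned above additionally projects the motion onto a reduced set of normal modes, which is not shown here. Parameters and units are illustrative.

```python
import math
import random

def langevin_step(x, force, dt, gamma=1.0, kT=1.0, rng=random.Random(0)):
    """One overdamped Langevin (Euler-Maruyama) update:
        x' = x + (F(x)/gamma)*dt + sqrt(2*kT*dt/gamma) * N(0, 1)
    where gamma is the friction coefficient and kT the thermal energy."""
    noise = math.sqrt(2.0 * kT * dt / gamma) * rng.gauss(0.0, 1.0)
    return x + force(x) / gamma * dt + noise
```

At kT = 0 the noise vanishes and the update reduces to deterministic steepest descent, which is a convenient sanity check.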

GPU

GPU2

These are the second-generation GPU cores. Unlike the retired GPU1 cores, these variants run on ATI CAL-enabled 2xxx/3xxx or later series and CUDA-enabled NVIDIA 8xxx or later series GPUs.[63]

  • GPU2 (Core 11)
    • Available for x86 Windows clients only.[63] Supported until approximately September 1, 2011, after AMD/ATI dropped support for the Brook programming language it used and moved to OpenCL. This forced Folding@home to rewrite its ATI GPU core code in OpenCL, resulting in Core 16.[64]
  • GPU2 (Core 12)
    • Available for x86 Windows clients only. [63]
  • GPU2 (Core 13)
    • Available for x86 Windows clients only. [63]
  • GPU2 (Core 14)
    • Available for x86 Windows clients only,[63] this core was officially released on March 2, 2009.[65]

GPU3

These are the third-generation GPU cores, based on OpenMM, the Pande Group's own open library for molecular simulation. Although based on the GPU2 code, they add stability and new capabilities.[9]

  • GPU3 (core 15)
    • Available for x86 Windows only.[66]
  • GPU3 (core 16)
    • Available for x86 Windows only.[66] Released alongside the new v7 client, this is a rewrite of Core 11 in OpenCL.[64]
  • GPU3 (core 17)
    • Available for Windows and Linux on AMD and NVIDIA GPUs using OpenCL. Offers much better performance thanks to OpenMM 5.1.[67]
  • GPU3 (core 18)
    • Available for Windows on AMD and NVIDIA GPUs using OpenCL. This core was developed to address some critical scientific issues in Core 17[68] and uses the latest technology from OpenMM[69] 6.0.1. There are currently stability and performance issues with this core on some AMD and NVIDIA Maxwell GPUs, so assignment of work units to this core has been temporarily stopped for those GPUs.[70]
  • GPU3 (core 21)
    • Available for Windows and Linux on AMD and NVIDIA GPUs using OpenCL. It uses OpenMM 6.2 and fixes the Core 18 AMD/NVIDIA performance issues.[71]

References

  1. "Folding@home Project Summary". Retrieved 2019-09-15.
  2. Zagen30 (2011). "Re: Lucid Virtu and Foldig At Home". Retrieved 2011-08-30.
  3. Vijay Pande (2005-10-16). "Folding@home with QMD core FAQ" (FAQ). Stanford University. Retrieved 2006-12-03. The site indicates that Folding@home uses a modification of CPMD allowing it to run on the supercluster environment.
  4. Vijay Pande (2009-06-17). "Folding@home: How does FAH code development and sysadmin get done?". Retrieved 2009-06-25.
  5. "CPU FAH core with AVX support? Mentioned a while back?". 2016-11-07. Retrieved 2017-02-18.
  6. "New Client with ARM Support". 24 November 2020. Archived from the original on 2024-06-19.
  7. "Project 18803 on FAH (CPU, 0xa9)". 2024-05-20. Retrieved 2025-04-08.
  8. Vijay Pande (2011). "ATI FAQ: Are these WUs compatible with other fahcores?" (FAQ). Archived from the original on 2012-10-28. Retrieved 2011-08-23.
  9. Vijay Pande (2009). "Update on new FAH cores and clients". Retrieved 2011-08-23.
  10. "GPU CORE22 0.0.2 coming to ADVANCED". Retrieved 2020-02-14.
  11. "core22 0.0.20 limited testing with project 17110". Retrieved 2021-01-14.
  12. "New OpenMM core Core23 available for public use".
  13. "TINKER Home Page". Retrieved 2012-08-24.
  14. "Tinker Core". 2011. Archived from the original on December 18, 2005. Retrieved 2012-08-24.
  15. "Folding@home on ATI's GPUs: a major step forward". 2011. Archived from the original on 2012-10-28. Retrieved 2011-08-28.
  16. "GPU core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-28.
  17. "Gromacs FAQ" (FAQ). 2007. Archived from the original on 2012-07-17. Retrieved 2011-09-03.
  18. "SMP FAQ". 2011. Archived from the original (FAQ) on 2012-09-22. Retrieved 2011-08-22.
  19. "Gromacs SMP core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-28.
  20. "Gromacs CVS SMP core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-28.
  21. "New release: extra-large work units". 2011. Retrieved 2011-08-28.
  22. "PS3 Screenshot". 2007. Retrieved 2011-08-24.
  23. "PS3 Client". 2008. Archived from the original on May 5, 2007. Retrieved 2011-08-28.
  24. "PS3 FAQ". 2009. Archived from the original on 2008-09-12. Retrieved 2011-08-28.
  25. "Gromacs Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-21.
  26. "Gromacs 33 Core". 2011. Archived from the original on January 8, 2013. Retrieved 2011-08-21.
  27. "Gromacs SREM Core". 2011. Archived from the original on January 8, 2013. Retrieved 2011-08-24.
  28. Sugita, Yuji; Okamoto, Yuko (1999). "Replica-exchange molecular dynamics method for protein folding". Chemical Physics Letters. 314 (1–2): 141–151. Bibcode:1999CPL...314..141S. doi:10.1016/S0009-2614(99)01123-9.
  29. "Gromacs Simulated Tempering core". 2011. Archived from the original on January 8, 2013. Retrieved 2011-08-24.
  30. "Double Gromacs Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-22.
  31. "Double Gromacs B Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-22.
  32. "Double Gromacs C Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-22.
  33. "GB Gromacs". 2011. Archived from the original on January 8, 2013. Retrieved 2011-08-22.
  34. "Folding Forum • View topic - Public Release of New A4 Cores".
  35. "Folding Forum • View topic - Project 7600 Adv -> Full FAH".
  36. "Project 10412 now on advanced". 2010. Retrieved 2011-09-03.
  37. "Gromacs CVS SMP2 Core". 2011. Archived from the original on January 8, 2013. Retrieved 2011-08-22.
  38. kasson (2011-10-11). "Re: Project:6099 run:3 clone:4 gen:0 - Core needs updating". Retrieved 2011-10-11.
  39. "Gromacs CVS SMP2 bigadv Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-22.
  40. "Introduction of a new SMP core, changes to bigadv". 2011. Retrieved 2011-08-24.
  41. R. Car & M. Parrinello (1985). "Unified Approach for Molecular Dynamics and Density-Functional Theory". Phys. Rev. Lett. 55 (22): 2471–2474. Bibcode:1985PhRvL..55.2471C. doi:10.1103/PhysRevLett.55.2471. PMID 10032153.
  42. "QMD FAQ" (FAQ). 2007. Retrieved 2011-08-28.
  43. "QMD Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-24.
  44. "FAH & QMD & AMD64 & SSE2". Archived from the original on January 12, 2006.
  45. "SHARPEN". Archived from the original on December 2, 2008.
  46. "SHARPEN: Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network (deadlink)". Archived from the original (About) on December 1, 2008.
  47. "Re: SHARPEN". 2010. Retrieved 2011-08-29.
  48. Kevin J. Bowers; Edmond Chow; Huafeng Xu; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry; Mark A. Moraes; Federico D. Sacerdoti; John K. Salmon; Yibing Shan & David E. Shaw (2006). "Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters" (PDF). ACM/IEEE SC 2006 Conference (SC'06). ACM. p. 43. doi:10.1109/SC.2006.54. ISBN 0-7695-2700-0.
  49. Morten Ø. Jensen; David W. Borhani; Kresten Lindorff-Larsen; Paul Maragakis; Vishwanath Jogini; Michael P. Eastwood; Ron O. Dror & David E. Shaw (2010). "Principles of Conduction and Hydrophobic Gating in K+ Channels". Proceedings of the National Academy of Sciences of the United States of America. 107 (13). PNAS: 5833–5838. Bibcode:2010PNAS..107.5833J. doi:10.1073/pnas.0911691107. PMC 2851896. PMID 20231479.
  50. Ron O. Dror; Daniel H. Arlow; David W. Borhani; Morten Ø. Jensen; Stefano Piana & David E. Shaw (2009). "Identification of Two Distinct Inactive Conformations of the ß2-Adrenergic Receptor Reconciles Structural and Biochemical Observations". Proceedings of the National Academy of Sciences of the United States of America. 106 (12). PNAS: 4689–4694. Bibcode:2009PNAS..106.4689D. doi:10.1073/pnas.0811065106. PMC 2650503. PMID 19258456.
  51. Yibing Shan; Markus A. Seeliger; Michael P. Eastwood; Filipp Frank; Huafeng Xu; Morten Ø. Jensen; Ron O. Dror; John Kuriyan & David E. Shaw (2009). "A Conserved Protonation-Dependent Switch Controls Drug Binding in the Abl Kinase". Proceedings of the National Academy of Sciences of the United States of America. 106 (1). PNAS: 139–144. Bibcode:2009PNAS..106..139S. doi:10.1073/pnas.0811223106. PMC 2610013. PMID 19109437.
  52. Kevin J. Bowers; Ron O. Dror & David E. Shaw (2006). "The Midpoint Method for Parallelization of Particle Simulations". Journal of Chemical Physics. 124 (18). J. Chem. Phys.: 184109:1–11. Bibcode:2006JChPh.124r4109B. doi:10.1063/1.2191489. PMID 16709099.
  53. Ross A. Lippert; Kevin J. Bowers; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry & David E. Shaw (2007). "A Common, Avoidable Source of Error in Molecular Dynamics Integrators". Journal of Chemical Physics. 126 (4). J. Chem. Phys.: 046101:1–2. Bibcode:2007JChPh.126d6101L. doi:10.1063/1.2431176. PMID 17286520.
  54. Edmond Chow; Charles A. Rendleman; Kevin J. Bowers; Ron O. Dror; Douglas H. Hughes; Justin Gullingsrud; Federico D. Sacerdoti & David E. Shaw (2008). "Desmond Performance on a Cluster of Multicore Processors". D. E. Shaw Research Technical Report DESRES/TR--2008-01, July 2008.
  55. "Desmond core". Archived from the original on December 18, 2005. Retrieved 2011-08-24.
  56. "Amber". 2011. Retrieved 2011-08-23.
  57. "Amber Developers". 2011. Retrieved 2011-08-23.
  58. "AMBER Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-23.
  59. "Folding@Home with AMBER FAQ" (FAQ). 2004. Retrieved 2011-08-23.
  60. "ProtoMol". Retrieved 2011-08-24.
  61. "Folding@home - About" (FAQ). 2010-07-26.
  62. "ProtoMol core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-24.
  63. "GPU2 Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-23.
  64. "FAH Support for ATI GPUs". 2011. Retrieved 2011-08-31.
  65. ihaque (Pande Group member) (2009). "Folding Forum: Announcing project 5900 and Core_14 on advmethods". Retrieved 2011-08-23.
  66. "GPU3 Core". 2011. Archived from the original on December 18, 2005. Retrieved 2011-08-23.
  67. "GPU Core 17". 2014. Retrieved 2014-07-12.
  68. "Core 18 and Maxwell" . Retrieved 19 February 2015.
  69. "Core18 Projects 10470-10473 to FAH" . Retrieved 19 February 2015.
  70. "New Core18 (login required)" . Retrieved 19 February 2015.
  71. "Core 21 v0.0.11 moving to FAH with p9704, p9712" . Retrieved 2019-09-18.