List of Folding@home cores

The distributed-computing project Folding@home uses scientific computer programs, referred to as "cores" or "fahcores", to perform calculations.[1][2] Folding@home's cores are based on modified and optimized versions of molecular simulation programs, including TINKER, GROMACS, AMBER, CPMD, SHARPEN, ProtoMol and Desmond.[1][3][4] Each variant is given an arbitrary identifier (Core xx). While the same core can be used by various versions of the client, separating the core from the client enables the scientific methods to be updated automatically as needed, without a client update.[1]

Active cores

The cores listed below are currently used by the project.[1]

GROMACS

GROMACS, short for GROningen MAchine for Chemical Simulations, is a GPL-licensed open-source molecular dynamics simulation package originally developed at the University of Groningen and currently maintained by several universities and institutions worldwide.[5][6] Gromacs is highly optimized,[7] with built-in consistency checking and continually updated ETA estimates, and is primarily designed for biochemical molecules with complicated bonding interactions, such as proteins, lipids and nucleic acids.[8] Standard Gromacs calculations use only single precision, but the package is nevertheless used extensively throughout the project.[9] Folding@home has been granted a non-commercial, non-GPL license for Gromacs and is thus not required to release its source code.[7][10] All variants use SIMD optimizations, including SSE on Pentium processors, 3DNow+ on AMD chips and AltiVec on Macs.[9] This allows a very significant speed increase over the TINKER-based cores.[7]

Double Precision

These are variants of Gromacs that use double-precision rather than single-precision arithmetic.[15]
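The practical difference between the two precisions can be illustrated with a short sketch (the `to_single` helper and the accumulation loop are illustrative only, not Folding@home code): repeatedly adding a small energy increment, as a long simulation loop might, accumulates noticeably more rounding error in single precision.

```python
import struct

def to_single(x: float) -> float:
    """Round a Python float (double precision) to IEEE 754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

# Accumulate a small increment many times, as an MD loop might.
increment = 1e-8
double_total = 0.0
single_total = 0.0
for _ in range(100_000):
    double_total += increment
    single_total = to_single(single_total + to_single(increment))

# double_total stays very close to 1e-3; single_total drifts further
# because each addition is rounded to ~7 significant decimal digits.
```
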

GB

This form of Gromacs uses a Generalized Born implicit solvent model. These cores support SSE optimizations.[1]
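The pairwise form commonly used in Generalized Born models (due to Still et al.) can be sketched as follows. The function, units and parameter names here are illustrative assumptions, not the core's actual implementation:

```python
import math

def gb_energy(charges, born_radii, positions, eps_solute=1.0, eps_solvent=78.5):
    """Generalized Born solvation energy in the Still et al. pairwise form,
    in arbitrary units (Coulomb constant taken as 1). A real core uses
    optimized kernels and physical constants. Self terms (i == j) are
    included, where f_GB reduces to the atom's Born radius."""
    prefactor = -0.5 * (1.0 / eps_solute - 1.0 / eps_solvent)
    energy = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            rirj = born_radii[i] * born_radii[j]
            # f_GB interpolates between Coulomb behavior at long range
            # and the Born self-energy at short range.
            f_gb = math.sqrt(r2 + rirj * math.exp(-r2 / (4.0 * rirj)))
            energy += prefactor * charges[i] * charges[j] / f_gb
    return energy
```

For a single unit charge with Born radius 2.0, only the (negative) Born self-energy term contributes.
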

SMP

These cores use symmetric multiprocessing (SMP) on multiprocessor or multicore systems for faster calculations.[22][23]
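The basic decomposition idea can be sketched in a few lines: split the pairwise work into chunks and give each worker one chunk, then sum the partial results. This is a toy illustration only; Python threads do not give true parallel speedup for CPU-bound work because of the GIL, and a real SMP core divides far more elaborate force loops.

```python
from concurrent.futures import ThreadPoolExecutor

def pair_energies(pairs, positions):
    """Toy inverse-square 'energy' summed over the given index pairs."""
    total = 0.0
    for i, j in pairs:
        r2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
        total += 1.0 / r2
    return total

def smp_energy(positions, workers=4):
    """Split the pair list into one chunk per worker and sum the parts."""
    n = len(positions)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    chunks = [pairs[k::workers] for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda chunk: pair_energies(chunk, positions), chunks))
```

Because addition over the chunks covers every pair exactly once, the parallel result matches a serial sweep over the full pair list.
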

GPU

Cores for the graphics processing unit (GPU) use the graphics chip of modern video cards to perform molecular dynamics calculations. The GPU Gromacs core is not a true port of Gromacs; rather, key elements of Gromacs were taken and enhanced for GPU capabilities.[28]

GPU2

These are the second-generation GPU cores. Unlike the retired GPU1 cores, these variants require either an ATI CAL-enabled 2xxx/3xxx series or later GPU, or a CUDA-enabled Nvidia 8xxx series or later GPU.[29]

GPU3

These are the third-generation GPU cores, based on OpenMM, the Pande Group's own open-source library for molecular simulation. Although derived from the GPU2 code, GPU3 adds stability and new capabilities.[32]

AMBER

Short for Assisted Model Building with Energy Refinement, AMBER is a family of force fields for molecular dynamics, as well as the name of the software package that simulates those force fields.[38] AMBER was originally developed by Peter Kollman at the University of California, San Francisco, and is currently maintained by professors at various universities.[39] The double-precision AMBER core is not currently optimized with either SSE or SSE2,[40][41] but it is significantly faster than the TINKER core and adds some functionality that cannot be provided with the Gromacs cores.[41]
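An AMBER-style force field evaluates a sum of simple bonded terms (bonds, angles, torsions) plus nonbonded interactions. Two representative bonded terms can be sketched as follows; the parameter values and units are illustrative, not taken from an actual AMBER parameter set:

```python
import math

def amber_bond_energy(k, r0, r):
    """Harmonic bond-stretch term, E = k * (r - r0)**2, as in the
    AMBER functional form (k and r0 are force-field parameters)."""
    return k * (r - r0) ** 2

def amber_dihedral_energy(v_n, n, gamma, phi):
    """Periodic torsion term, E = (V_n / 2) * (1 + cos(n*phi - gamma)),
    evaluated for one Fourier component of a dihedral angle phi."""
    return 0.5 * v_n * (1.0 + math.cos(n * phi - gamma))
```

A bond stretched 0.1 units past its equilibrium length with k = 100 contributes an energy of 1.0; a torsion at its phase maximum contributes its full barrier height V_n.
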

ProtoMol

ProtoMol is an object-oriented, component-based framework for molecular dynamics (MD) simulations. It offers high flexibility, easy extensibility and maintainability, and high performance, including support for parallelization.[42] In 2009, the Pande Group was working on a complementary new technique called Normal Mode Langevin Dynamics, which promised to greatly speed up simulations while maintaining the same accuracy.[32][43]

Inactive cores

The cores below are not currently used by the project, either because they have been retired as obsolete or because they are not yet ready for general release.[1]

TINKER

TINKER is a software package for molecular mechanics and molecular dynamics simulation, offering a complete and general set of tools with some special features for biopolymers.[45]

GROMACS

CPMD

Short for Car–Parrinello Molecular Dynamics, this core performs ab initio quantum-mechanical molecular dynamics. Unlike classical molecular dynamics calculations, which use a force-field approach, CPMD includes the motion of electrons in the calculation of energies, forces and motion.[55][56] Quantum-chemical calculations can yield a very reliable potential energy surface and naturally incorporate multi-body interactions.[56]
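For contrast, the force-field approach that CPMD avoids evaluates empirical potentials with fixed parameters, such as the Lennard-Jones pair term below; forces then follow from the analytic derivative rather than from the electronic structure. The parameters here are illustrative, in reduced units:

```python
def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential, 4*eps*((sigma/r)**12 - (sigma/r)**6).
    A classical force field sums many such empirical terms with fitted
    parameters; CPMD instead recomputes energies and forces from the
    electronic degrees of freedom at each step."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 * s6 - s6)
```

The potential crosses zero at r = sigma and reaches its minimum of -epsilon at r = 2^(1/6) * sigma.
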

SHARPEN

Desmond

The software for this core was developed at D. E. Shaw Research. Desmond performs high-speed molecular dynamics simulations of biological systems on conventional computer clusters.[62][63][64][65] The code uses novel parallel algorithms[66] and numerical techniques[67] to achieve high performance on platforms containing a large number of processors,[68] but may also be executed on a single computer. Desmond and its source code are available without cost for non-commercial use by universities and other not-for-profit research institutions.

References

  1. "Folding@home Cores". Retrieved 2007-11-06.
  2. Zagen30 (2011). "Re: Lucid Virtu and Foldig At Home". Retrieved 2011-08-30.
  3. Vijay Pande (2005-10-16). "Folding@home with QMD core FAQ" (FAQ). Stanford University. Retrieved 2006-12-03. The site indicates that Folding@home uses a modification of CPMD allowing it to run on the supercluster environment.
  4. Vijay Pande (2009-06-17). "Folding@home: How does FAH code development and sysadmin get done?". Retrieved 2009-06-25.
  5. Van Der Spoel D, Lindahl E, Hess B, Groenhof G, Mark AE, Berendsen HJ (2005). "GROMACS: fast, flexible, and free". J Comput Chem. 26 (16): 1701–18. doi:10.1002/jcc.20291. PMID 16211538.
  6. Hess B, Kutzner C, Van Der Spoel D, Lindahl E (2008). "GROMACS 4: Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation". J Chem Theory Comput. 4 (2): 435. doi:10.1021/ct700301q.
  7. "Gromacs FAQ" (FAQ). 2007. Retrieved 2011-09-03.
  8. "About Gromacs". Retrieved 2011-08-24.
  9. "Gromacs Core". 2011. Retrieved 2011-08-21.
  10. "Folding@home Open Source FAQ". 2010. Retrieved 2011-08-21.
  11. "Gromacs 33 Core". 2011. Retrieved 2011-08-21.
  12. "Gromacs SREM Core". 2011. Retrieved 2011-08-24.
  13. "Replica-exchange molecular dynamics method for protein folding". 1999. Retrieved 2011-08-24.
  14. "Gromacs Simulated Tempering core". 2011. Retrieved 2011-08-24.
  15. "Double Gromacs Core". 2011. Retrieved 2011-08-22.
  16. "Double Gromacs B Core". 2011. Retrieved 2011-08-22.
  17. "Double Gromacs C Core". 2011. Retrieved 2011-08-22.
  18. "GB Gromacs". 2011. Retrieved 2011-08-22.
  19. http://foldingforum.org/viewtopic.php?f=24&t=17528
  20. http://foldingforum.org/viewtopic.php?f=24&t=18887#p189345
  21. "Project 10412 now on advanced". 2010. Retrieved 2011-09-03.
  22. "SMP FAQ" (FAQ). 2011. Retrieved 2011-08-22.
  23. "Gromacs SMP Core". 2011. Retrieved 2011-08-22.
  24. "Gromacs CVS SMP2 Core". 2011. Retrieved 2011-08-22.
  25. kasson (2011-10-11). "Re: Project:6099 run:3 clone:4 gen:0 - Core needs updating". Retrieved 2011-10-11.
  26. "Gromacs CVS SMP2 bigadv Core". 2011. Retrieved 2011-08-22.
  27. "Introduction of a new SMP core, changes to bigadv". 2011. Retrieved 2011-08-24.
  28. Vijay Pande (2011). "ATI FAQ: Are these WUs compatible with other fahcores?" (FAQ). Retrieved 2011-08-23.
  29. "GPU2 Core". 2011. Retrieved 2011-08-23.
  30. "FAH Support for ATI GPUs". 2011. Retrieved 2011-08-31.
  31. ihaque (Pande Group member) (2009). "Folding Forum: Announcing project 5900 and Core_14 on advmethods". Retrieved 2011-08-23.
  32. Vijay Pande (2009). "Update on new FAH cores and clients". Retrieved 2011-08-23.
  33. "GPU3 Core". 2011. Retrieved 2011-08-23.
  34. "GPU Core 17". 2014. Retrieved 2014-07-12.
  35. "Core 18 and Maxwell". Retrieved 19 February 2015.
  36. "Core18 Projects 10470-10473 to FAH". Retrieved 19 February 2015.
  37. "New Core18 (login required)". Retrieved 19 February 2015.
  38. "Amber". 2011. Retrieved 2011-08-23.
  39. "Amber Developers". 2011. Retrieved 2011-08-23.
  40. "AMBER Core". 2011. Retrieved 2011-08-23.
  41. "Folding@Home with AMBER FAQ" (FAQ). 2004. Retrieved 2011-08-23.
  42. "ProtoMol". Retrieved 2011-08-24.
  43. "Folding@home - About" (FAQ).
  44. "ProtoMol core". 2011. Retrieved 2011-08-24.
  45. "TINKER Home Page". Retrieved 2012-08-24.
  46. "Tinker Core". 2011. Retrieved 2012-08-24.
  47. "Folding@home on ATI's GPUs: a major step forward". 2011. Retrieved 2011-08-28.
  48. "GPU core". 2011. Retrieved 2011-08-28.
  49. "Gromacs SMP core". 2011. Retrieved 2011-08-28.
  50. "Gromacs CVS SMP core". 2011. Retrieved 2011-08-28.
  51. "New release: extra-large work units". 2011. Retrieved 2011-08-28.
  52. "PS3 Screenshot". 2007. Retrieved 2011-08-24.
  53. "PS3 Client". 2008. Retrieved 2011-08-28.
  54. "PS3 FAQ". 2009. Retrieved 2011-08-28.
  55. R. Car & M. Parrinello (1985). "Unified Approach for Molecular Dynamics and Density-Functional Theory". Phys. Rev. Lett. 55 (22): 2471–2474. Bibcode:1985PhRvL..55.2471C. doi:10.1103/PhysRevLett.55.2471. PMID 10032153.
  56. "QMD FAQ" (FAQ). 2007. Retrieved 2011-08-28.
  57. "QMD Core". 2011. Retrieved 2011-08-24.
  58. "FAH & QMD & AMD64 & SSE2" (FAQ).
  59. "SHARPEN". Archived from the original on December 2, 2008.
  60. "SHARPEN: Systematic Hierarchical Algorithms for Rotamers and Proteins on an Extended Network (deadlink)". Archived from the original (About) on December 1, 2008.
  61. "Re: SHARPEN". 2010. Retrieved 2011-08-29.
  62. Kevin J. Bowers; Edmond Chow; Huafeng Xu; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry; Mark A. Moraes; Federico D. Sacerdoti; John K. Salmon; Yibing Shan & David E. Shaw (2006). "Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters" (PDF). Proceedings of the ACM/IEEE Conference on Supercomputing (SC06), Tampa, Florida, November 11–17, 2006. ACM. ISBN 0-7695-2700-0.
  63. Morten Ø. Jensen; David W. Borhani; Kresten Lindorff-Larsen; Paul Maragakis; Vishwanath Jogini; Michael P. Eastwood; Ron O. Dror & David E. Shaw (2010). "Principles of Conduction and Hydrophobic Gating in K+ Channels". Proceedings of the National Academy of Sciences of the United States of America. PNAS. 107 (13): 5833–5838. Bibcode:2010PNAS..107.5833J. doi:10.1073/pnas.0911691107. PMC 2851896. PMID 20231479.
  64. Ron O. Dror; Daniel H. Arlow; David W. Borhani; Morten Ø. Jensen; Stefano Piana & David E. Shaw (2009). "Identification of Two Distinct Inactive Conformations of the ß2-Adrenergic Receptor Reconciles Structural and Biochemical Observations". Proceedings of the National Academy of Sciences of the United States of America. PNAS. 106 (12): 4689–4694. Bibcode:2009PNAS..106.4689D. doi:10.1073/pnas.0811065106. PMC 2650503. PMID 19258456.
  65. Yibing Shan; Markus A. Seeliger; Michael P. Eastwood; Filipp Frank; Huafeng Xu; Morten Ø. Jensen; Ron O. Dror; John Kuriyan & David E. Shaw (2009). "A Conserved Protonation-Dependent Switch Controls Drug Binding in the Abl Kinase". Proceedings of the National Academy of Sciences of the United States of America. PNAS. 106 (1): 139–144. Bibcode:2009PNAS..106..139S. doi:10.1073/pnas.0811223106. PMC 2610013. PMID 19109437.
  66. Kevin J. Bowers; Ron O. Dror & David E. Shaw (2006). "The Midpoint Method for Parallelization of Particle Simulations". Journal of Chemical Physics. J. Chem. Phys. 124 (18): 184109:1–11. Bibcode:2006JChPh.124r4109B. doi:10.1063/1.2191489. PMID 16709099.
  67. Ross A. Lippert; Kevin J. Bowers; Ron O. Dror; Michael P. Eastwood; Brent A. Gregersen; John L. Klepeis; István Kolossváry & David E. Shaw (2007). "A Common, Avoidable Source of Error in Molecular Dynamics Integrators". Journal of Chemical Physics. J. Chem. Phys. 126 (4): 046101:1–2. Bibcode:2007JChPh.126d6101L. doi:10.1063/1.2431176. PMID 17286520.
  68. Edmond Chow; Charles A. Rendleman; Kevin J. Bowers; Ron O. Dror; Douglas H. Hughes; Justin Gullingsrud; Federico D. Sacerdoti & David E. Shaw (2008). "Desmond Performance on a Cluster of Multicore Processors". D. E. Shaw Research Technical Report DESRES/TR--2008-01, July 2008.
  69. "Desmond core". Retrieved 2011-08-24.

This article is issued from Wikipedia (version of 2016-09-19). The text is available under the Creative Commons Attribution/Share-Alike License; additional terms may apply for the media files.