Tinker9 GPU molecular dynamics

Tinker GPU simulations should use Tinker9. As of 2021, we switched to Tinker9 for GPU simulations; it is consistent with Tinker CPU in terms of usage and setup, is faster than OpenMM for AMOEBA and some fixed-charge force fields, and has all our new development for AMOEBA+.

For a single molecule or a cluster without PBC, please use the CPU code, with the openmp-threads keyword to speed it up. The setup and key files used by Tinker CPU (on this wiki page) can be directly applied to Tinker9; setting "openmp-threads 1" in the key file is no longer needed.
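A minimal sketch of a key file for such a gas-phase (no-PBC) CPU run; the parameter file name and thread count are placeholders to adapt to your own system:

# force field parameter file (example name; point this at your own copy)
parameters /path/to/amoebabio18.prm
# number of CPU threads for the OpenMP run
openmp-threads 8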

Source code

https://github.com/TinkerTools/tinker

Free for academic and nonprofit organizations.

Compiling

For the compilation of Tinker9, please refer to the successful builds documented on the GitHub site (https://github.com/TinkerTools/tinker9/discussions/121)

!!! Note: as of Feb 22, 2021, Tinker9 has to be compiled on an older machine (CPU) in order to run generally on newer machines. An alternative is to build an executable on each machine.
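As a rough outline (not a verified recipe), the build follows the usual out-of-source CMake pattern; the exact configure options (CUDA toolkit path, GPU compute capability) are machine-specific and are exactly what the discussion thread above documents:

git clone --recursive https://github.com/TinkerTools/tinker9
cd tinker9
mkdir build && cd build
cmake ..   # add machine-specific options here; see the linked discussion
make -j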

Example

Example setup, xyz and key files for protein simulations: https://github.com/TinkerTools/tinker9/blob/master/example/

We recommend the RESPA integrator with a 2 fs time step, writing coordinates out infrequently (e.g., every 2 ps). On an RTX 3070, you should be able to achieve ~40 ns/day for DHFR. Use the Monte Carlo barostat or Langevin piston for pressure control in NPT. For more details, see Tutorials.
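A sketch of the corresponding key-file fragment for an NPT run; the Bussi thermostat here is one reasonable choice, not a prescription, and the thermostat/barostat options available may depend on your Tinker9 version:

# multiple-time-step RESPA integrator (pair with a 2 fs outer time step)
integrator respa
# Monte Carlo pressure control for NPT
barostat montecarlo
# example thermostat choice
thermostat bussi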

Manual

https://tinkerdoc.readthedocs.io/en/latest/

Run

Running Tinker9 is almost the same as running canonical Tinker. After compilation, an executable called tinker9 exists in the build directory. Here is the help information printed if you execute ./tinker9 directly:
 SYNOPSIS
       tinker9 PROGRAM [ args... ]

 PROGRAMS AVAILABLE
       analyze
       bar
       dynamic
       info
       minimize
       testgrad
       help

So it is just a matter of adding tinker9 in front of the canonical Tinker run commands.
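For example, with hypothetical input files water.xyz and water.key, the CPU and GPU invocations differ only in the leading tinker9:

# canonical Tinker on the CPU
dynamic water.xyz -k water.key 1000000 2.0 2.0 2 298.15
# Tinker9 on the GPU: same arguments, with tinker9 prepended
tinker9 dynamic water.xyz -k water.key 1000000 2.0 2.0 2 298.15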

Here are the compiled executables ready to use on our clusters

Update: in bash, source /home/liuchw/.bashrc.tinker9 (this will set up the right build on each node).
/home/liuchw/Softwares/tinkers/Tinker9-latest/build_cuda11.2/tinker9  # this build runs on 3070/3080 cards
/home/liuchw/Softwares/tinkers/Tinker9-latest/build_cuda10.2/tinker9  # this build runs on the rest of the cards (not all cards have been tested)

 

An example MD command line:

/home/liuchw/Documents/Github.leucinw/Tinker9/build/tinker9 dynamic box.xyz 500000 2 2 2 298 >& mdlog &
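The positional arguments correspond to the prompts of the canonical dynamic program:

# box.xyz - coordinate file
# 500000  - number of MD steps
# 2       - time step (fs)
# 2       - interval between coordinate dumps (ps)
# 2       - ensemble (2 = NVT; 4 = NPT, which also takes a pressure)
# 298     - temperature (K)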

Tinker9 Run scripts:

Option 1: manually select the build according to the GPU card (RTX 30xx vs others)

#!/bin/bash
export TINKER9=/home/liuchw/Softwares/tinkers/Tinker9-latest/
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=0 # device number; can use 1 or 2 if there are multiple GPU cards
# for 3070/3080/3090 nodes (check with `nvidia-smi`)
$TINKER9/build_cuda11.2/tinker9 dynamic your.xyz -k your.key 1000 2.0 2.0 4 298.15 1.0
# for other GPU nodes (we tested on 1070/1080/1080Ti/2070/2080)
#$TINKER9/build_cuda10.2/tinker9 dynamic your.xyz -k your.key 1000 2.0 2.0 4 298.15 1.0
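If you would rather not edit the script per node, the card model can be queried and matched by name; a sketch using nvidia-smi's query mode, where the name patterns are assumptions based on the cards listed above:

GPU_NAME=$(nvidia-smi --query-gpu=name --format=csv,noheader | head -n1)
case "$GPU_NAME" in
  *3070*|*3080*|*3090*) BUILD=build_cuda11.2 ;;  # RTX 30xx cards need the CUDA 11.2 build
  *)                    BUILD=build_cuda10.2 ;;  # tested on 1070/1080/1080Ti/2070/2080
esac
$TINKER9/$BUILD/tinker9 dynamic your.xyz -k your.key 1000 2.0 2.0 4 298.15 1.0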

Option 2: automatically select the compatible build from multiple builds. Either sourcing the following lines or pasting them into your run script will work. Aliases such as dynamic_gpu, analyze_gpu, and bar_gpu are defined for convenience.

#!/usr/bin/bash
# check that the CUDA utility exists and runs on this node
if ! nvidia-smi &> /dev/null; then
  echo -e "\e[101mCUDA utility not installed on $(hostname)\e[0m"
else
  export TINKER9=/home/liuchw/Documents/Github.leucinw/Tinker9
  # parse the CUDA version from the nvidia-smi header
  # (pattern match is less fragile than cutting fixed columns)
  VERSION=$(nvidia-smi | grep -o "CUDA Version: [0-9.]*" | awk '{print $3}')
  # pick the build that matches the CUDA version
  if [ "$VERSION" == "10.2" ]; then
    export TINKER9EXE=$TINKER9/build_1
    # make aliases
    alias dynamic_gpu=$TINKER9EXE/dynamic9.sh
    alias analyze_gpu=$TINKER9EXE/analyze9.sh
    alias bar_gpu=$TINKER9EXE/bar9.sh
  elif [ "$VERSION" == "11.2" ]; then
    export TINKER9EXE=$TINKER9/build
    # make aliases
    alias dynamic_gpu=$TINKER9EXE/dynamic9.sh
    alias analyze_gpu=$TINKER9EXE/analyze9.sh
    alias bar_gpu=$TINKER9EXE/bar9.sh
  else
    echo -e "\e[101mTinker9 not supported for CUDA $VERSION\e[0m"
  fi
fi
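A usage sketch, assuming the lines above are saved as a hypothetical tinker9-setup.sh; note that bash only expands aliases in non-interactive scripts when expand_aliases is set:

#!/usr/bin/bash
shopt -s expand_aliases            # aliases are off by default in scripts
source /path/to/tinker9-setup.sh   # hypothetical path to the snippet above
dynamic_gpu your.xyz -k your.key 1000000 2.0 2.0 4 298.15 1.0 >& mdlog &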