Tinker9 GPU Molecular Dynamics
Tinker9 is the next generation of Tinker software with GPU acceleration. As of 2021, the Ren lab has switched to Tinker9 for GPU simulations. It is consistent with Tinker CPU in terms of usage and setup, is faster than OpenMM for AMOEBA and some fixed-charge force fields, and includes all of our new development for AMOEBA+.
For a single molecule or a cluster without PBC, please use the CPU code. Use OpenMP threads to speed it up if necessary.
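For example, a minimal sketch of such a CPU run (the file names and thread count are placeholders, and the Tinker CPU dynamic executable is assumed to be on your PATH):
#!/bin/bash
# CPU run for a non-periodic system; limit OpenMP threads via the environment
# (or equivalently with the OPENMP-THREADS keyword in the key file)
export OMP_NUM_THREADS=8     # placeholder thread count
dynamic cluster.xyz -k cluster.key 100000 1.0 1.0 2 298.15 > cluster.log &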
The setup and key files used by Tinker CPU (on this wiki page) can be directly applied to Tinker9.
Source code
https://github.com/TinkerTools/tinker9
Free for academic and nonprofit organizations
Compile Tinker9
Please read the Tinker9 GitHub site for how to set up the environment variables and compile the Tinker9 code (https://github.com/TinkerTools/tinker9). Successful builds on various hardware and CUDA versions can also be found at https://github.com/TinkerTools/tinker9/discussions/121.
Note that Tinker9 requires a "matching" canonical Tinker 8, which should now be handled automatically by the CMake script: https://github.com/TinkerTools/tinker9/blob/master/README.md
As of May 23, 2022, the GPU-equipped Ren lab clusters all have CUDA version 11 with compatible CUDA drivers. Here is a script file to compile Tinker9:
#!/bin/bash
# build on bme-sugar
# Chengwen Liu
# remove leftovers from any previous configure/build in this directory
rm -fr *.sh src* *Make* cmake-* tinker9
# compilers and CUDA toolkit
export CUDAHOME=/usr/local/cuda-11.2
export CUDACXX=$CUDAHOME/bin/nvcc
export FC=/usr/bin/gfortran
export CXX=/usr/bin/g++
export ACC=/home/liuchw/shared/nvidia/hpc_sdk/Linux_x86_64/21.1/compilers/bin/nvc++
# Tinker9 build options (see the Tinker9 README): build type, platform, precision, GPU architectures
export opt=release
export host=0
export prec=m
export compute_capability=60,70,75,80
export cuda_dir=$CUDAHOME
export CMAKEHOME=/home/liuchw/shared/cmake3.21/bin/
# configure and build (run from the build directory)
$CMAKEHOME/cmake ..
make -j
In other words, after downloading Tinker9, make a build directory in the Tinker9 home directory, cd into it, and run the script above; it should do everything.
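For example (a sketch; the script above is assumed to be saved as build.sh, and the name and location are placeholders):
#!/bin/bash
# clone the source, create a build directory, and run the build script from inside it
git clone https://github.com/TinkerTools/tinker9
cd tinker9 && mkdir build && cd build
cp /path/to/build.sh .     # placeholder; wherever you saved the script above
./build.sh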
2024 Tinker9 compiling
To build Tinker9 on Rocky Linux 9.3 with CUDA 11.8, GCC 11.4, and CMake 3.28:
git clone https://github.com/TinkerTools/tinker9 tinker9_R93
cd tinker9_R93 && mkdir build
cd build
cp /home/pren/tinker9/run.sh.r9 .
./run.sh.r9
where run.sh.r9 contains:
#!/bin/bash
# build on bme-sugar
# Chengwen Liu
rm -fr *.sh src* *Make* cmake-* tinker9
export CUDAHOME=/usr/local/cuda-11.8
export CUDACXX=$CUDAHOME/bin/nvcc
export FC=gfortran
export compute_capability=70
export gpu_lang=cuda
export cuda_dir=$CUDAHOME
export CMAKEHOME=/home/pren/Software/cmake-3.28.0/bin
$CMAKEHOME/cmake ..
make -j 6
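After the build finishes, a quick sanity check (a sketch, assuming the build produced the tinker9 executable in the build directory; the info subcommand should report platform and GPU details):
#!/bin/bash
# confirm the GPU build runs and can see the card
export CUDA_VISIBLE_DEVICES=0
./tinker9 info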
Example
Example setup, xyz and key files for protein simulations: https://github.com/TinkerTools/tinker9/blob/master/example/
We recommend the RESPA integrator with a 2 fs time step, writing out frames less frequently (e.g., every 2 ps). On an RTX 3070 you should be able to achieve ~40 ns/day for DHFR. Use the Monte Carlo barostat or Langevin piston for pressure control in NPT runs. For more details, see Tutorials.
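A minimal key-file sketch along these lines (the parameter file path is a placeholder and the keyword choices are illustrative; adapt from the example key files linked above):
# placeholder parameter file; point to your force field
parameters        /path/to/amoebabio18.prm
# RESPA integrator (the 2 fs time step is given on the dynamic command line)
integrator        respa
# temperature and pressure control for NPT
thermostat        bussi
barostat          montecarlo
# PME electrostatics and neighbor lists for a periodic box
ewald
neighbor-list
# write snapshots to a single .arc archive
archive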
Manual
https://tinkerdoc.readthedocs.io/en/latest/
Run Script
Here is a simple script to run molecular dynamics
#!/bin/bash
# set the GPU device
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=0   # set to 1, 2, etc. to specify the one you want
# add tinker9 executables to PATH
export TINKER9=/home/liuchw/Softwares/tinkers/Tinker9/2205/build_cuda11
export PATH=$PATH:$TINKER9
# option 1 to run dynamic
nohup dynamic9 your.xyz -k your.key 1000 2.0 2.0 4 298.15 1.0 > your.log 2>&1 &
# option 2 to run dynamic
nohup tinker9 dynamic your.xyz -k your.key 1000 2.0 2.0 4 298.15 1.0 > your.log 2>&1 &
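In both options the positional arguments after the coordinate file mirror the interactive prompts of the dynamic program: number of MD steps, time step in fs, trajectory write interval in ps, ensemble (2 = NVT, 4 = NPT), temperature in K, and pressure in atm. The example above therefore runs 1000 NPT steps at 298.15 K and 1 atm.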
To use Tinker9 on RTX 4090 or A6000 (node15x & node16x), or on 30x0/20x0 cards with older Xeon processors (e.g., node10x, node4x):
#!/bin/bash
export TINKER9=/home/pren/tinker9/tinker9_xeon/build/
export CUDA_DEVICE_ORDER=PCI_BUS_ID
export CUDA_VISIBLE_DEVICES=0   # device number if there are multiple GPU cards
$TINKER9/tinker9 dynamic bench7.xyz -k bench7.key 5000 2.0 400.0 4 298.15 1.0 N > out &
The above was also tested on RTX 30x0 cards with older Xeon processors (TACC node; see "lscpu").
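For example, to check which processor a node has:
lscpu | grep "Model name"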