This example demonstrates a real-life case of simulating the Lysozyme protein in water. It uses the HCLS blueprint to run a multi-step, GPU-enabled GROMACS simulation. This example was featured in this YouTube video.
This example has been adapted with changes from tutorials by:
- Justin Lemkul (http://www.mdtutorials.com) - licensed under CC-BY-4.0
- Alessandra Villa (https://tutorials.gromacs.org/) - licensed under CC-BY-4.0
Note: This example has not been optimized for performance; it is meant to demonstrate the feasibility of a real-world use case.
The Lysozyme Example only deploys one GPU VM from the blueprint, so you will
only need quota for:

- GPU: 12x `A2 CPUs` and 1x `NVIDIA A100 GPUs`

Note that these quotas are in addition to the quota requirements for the Slurm
login node (2x `N2 CPUs`) and the Slurm controller VM (4x `C2 CPUs`). The
`spack-builder` VM should have completed and stopped, freeing its CPU quota
usage, before the computational VMs are deployed.
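Before deploying, you can sanity-check regional quota with the gcloud CLI. A
minimal sketch, assuming the blueprint deploys to `us-central1` (substitute
your region):

```bash
# Print the quota entries relevant to this example from the region's
# quota list. Requires an authenticated Google Cloud CLI.
gcloud compute regions describe us-central1 --format="yaml(quotas)" \
  | grep -B1 -A1 -E "A2_CPUS|C2_CPUS|N2_CPUS|NVIDIA_A100_GPUS"
```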
- Deploy the HCLS blueprint

  Full instructions are found here.
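  For orientation only, the deploy flow generally looks like the sketch
  below; the blueprint path, deployment name, and project ID are
  placeholders, and the full instructions linked above are authoritative.

  ```bash
  # Condensed sketch of the Cluster Toolkit deploy flow (see the full
  # instructions for prerequisites and the actual blueprint path).
  git clone https://github.com/GoogleCloudPlatform/cluster-toolkit.git
  cd cluster-toolkit && make   # builds the gcluster binary
  ./gcluster create <path-to-hcls-blueprint>.yaml --vars project_id=<your-project>
  ./gcluster deploy <deployment-name>
  ```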
- SSH into the Slurm login node

  Go to the VM instances page and you should see a VM with `login` in the
  name. SSH into this VM by clicking the `SSH` button or by any other means.
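  As one alternative to the console button, a minimal sketch using the
  gcloud CLI (the instance name and zone depend on your deployment, so look
  them up first):

  ```bash
  # Find the login node, then SSH to it. The name filter is an assumption
  # based on the naming shown above; adjust the zone to your deployment.
  gcloud compute instances list --filter="name~login"
  gcloud compute ssh <login-node-name> --zone=<zone>
  ```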
- Create a submission directory

  ```bash
  mkdir lysozyme_run01 && cd lysozyme_run01
  ```
- Copy the contents of this directory into the submission directory

  ```bash
  git clone https://github.com/GoogleCloudPlatform/cluster-toolkit.git
  cp -r cluster-toolkit/docs/videos/healthcare-and-life-sciences/lysozyme-example/* .
  ```
- Copy the Lysozyme protein into the submission directory

  ```bash
  cp /data_input/protein_data_bank/1AKI.pdb .
  ```
- Submit the job

  Your current directory should now contain:

  - the `1AKI.pdb` protein file
  - a `submit.sh` Slurm sbatch script
  - a `config/` directory containing configs used by the run

  The `submit.sh` script contains several steps that are annotated with
  comments; a sketch of its overall shape follows this step. To submit the
  job, call the following command:

  ```bash
  sbatch submit.sh
  ```
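  The real `submit.sh` ships with the example and is the one to use. Purely
  for orientation, a minimal sketch of what a GPU-enabled GROMACS sbatch
  script of this shape can look like (the partition name, spack package, and
  exact gmx stages here are assumptions, not the script's actual contents):

  ```bash
  #!/bin/bash
  #SBATCH --partition=gpu       # assumed partition name
  #SBATCH --gpus=1              # one NVIDIA A100, matching the quota above
  #SBATCH --cpus-per-task=12    # the a2-highgpu-1g machine shape

  # Assumption: GROMACS was built by the spack-builder VM.
  spack load gromacs

  # Representative early stages of the Lysozyme-in-water pipeline
  # (from the Lemkul tutorial this example adapts):
  gmx pdb2gmx -f 1AKI.pdb -o 1AKI_processed.gro -water spce -ff oplsaa
  gmx editconf -f 1AKI_processed.gro -o 1AKI_newbox.gro -c -d 1.0 -bt cubic
  gmx solvate -cp 1AKI_newbox.gro -cs spc216.gro -o 1AKI_solv.gro -p topol.top
  # ...energy minimization, NVT/NPT equilibration, and production MD follow.
  ```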
- Monitor the job

  Use the following command to see the status of the job:

  ```bash
  squeue
  ```

  The job state (`ST`) will show `CF` while the job is being configured.
  Once the state switches to `R`, the job is running. If you refresh the VM
  instances page you will see an `a2-highgpu-1g` machine that has been
  auto-scaled up to run this job. It will have a name like
  `hcls01-gpu-ghpc-0`.

  Once the job is in the running state you can track progress with the
  following command:

  ```bash
  tail -f slurm-*.out
  ```
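  If you prefer not to re-run `squeue` by hand, two standard Slurm
  alternatives (the job ID comes from the `sbatch` output):

  ```bash
  # Re-run squeue every 10 seconds until the job leaves the queue.
  watch -n 10 squeue
  # Inspect a running or finished job's accounting record.
  sacct -j <jobid> --format=JobID,State,Elapsed
  ```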
- Visualize the results

  - Access the remote desktop using the Chrome Remote Desktop page under
    Remote devices. If you have not yet set up the remote desktop, follow
    these instructions.
  - Open a terminal in the remote desktop window.
  - Navigate to the attached outputs bucket:

    ```bash
    cd /data_output/
    ```
  - Launch VMD:

    ```bash
    vmd 1AKI_newbox.gro 1AKI_md.xtc
    ```
  - Update the graphics options:

    - From the VMD main menu, select `Graphics` > `Representations...` and
      configure the following options in the `Graphical Representations`
      menu that opens.
    - Set `Coloring Method` to `Secondary Structure`.
    - Set `Drawing Method` to `NewCartoon`.
    - Select the `Trajectory` tab.
    - Set `Trajectory Smoothing Window Size` to `1`.
    - Close the `Graphical Representations` menu.

    These settings can also be applied from a script; see the sketch at the
    end of this section.
  - Hit the play button in the lower right-hand corner of the VMD main menu
    to animate the trajectory.
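  As a scripted alternative to clicking through the menus above, VMD can
  apply the same representation settings from a Tcl script passed with the
  `-e` flag. A minimal sketch (the file name `view.tcl` is arbitrary):

  ```bash
  # Write a small VMD Tcl script that mirrors the menu settings above,
  # then launch VMD with it. "Structure" is VMD's Tcl name for the
  # Secondary Structure coloring method.
  printf '%s\n' \
    'mol modcolor 0 top Structure' \
    'mol modstyle 0 top NewCartoon' > view.tcl
  vmd 1AKI_newbox.gro 1AKI_md.xtc -e view.tcl
  ```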