
Lysozyme Example

This example demonstrates a real-world case of simulating the Lysozyme protein in water. It uses the HCLS blueprint to run a multi-step, GPU-enabled GROMACS simulation. This example was featured in this YouTube Video.

This example has been adapted, with changes, from existing GROMACS tutorials.

Note: This example has not been optimized for performance; it is meant to demonstrate the feasibility of a real-world use case.

Quota Requirements

The Lysozyme example deploys only one GPU VM from the blueprint, so you will only need quota for:

  • GPU: 12 A2 CPUs and 1 NVIDIA A100 GPU

Note that these quotas are in addition to the quota requirements for the Slurm login node (2x N2 CPUs) and Slurm controller VM (4x C2 CPUs). The spack-builder VM should have completed and stopped, freeing its CPU quota usage, before the computational VMs are deployed.
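If you want to confirm available quota before deploying, gcloud can list a region's quota metrics. The region below is an assumption; substitute the region your deployment targets:

    gcloud compute regions describe us-central1 --format=yaml \
      | grep -B1 -A1 -E 'A2_CPUS|NVIDIA_A100_GPUS'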

Instructions

  1. Deploy the HCLS blueprint

    Full instructions are found here.
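    For orientation, a Cluster Toolkit deployment generally looks like the sketch below; the binary name (gcluster, called ghpc in older releases), the blueprint path, and the deployment name hcls01 are assumptions based on this guide, so defer to the full instructions:

      git clone https://github.com/GoogleCloudPlatform/cluster-toolkit.git
      cd cluster-toolkit && make
      # Generate a deployment folder from the HCLS blueprint (path assumed)
      ./gcluster create docs/videos/healthcare-and-life-sciences/hcls-blueprint.yaml \
        --vars project_id=<your-project-id>
      # Provision the cluster from the generated deployment folder
      ./gcluster deploy hcls01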

  2. SSH into the Slurm login node

    Go to the VM instances page and you should see a VM with login in its name. SSH into this VM by clicking the SSH button or by any other means you prefer.
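    If you prefer the command line, gcloud can open the same session; the instance name and zone below are placeholders, so copy the real values from the VM instances page:

      gcloud compute ssh <login-node-name> --zone=<zone>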

  3. Create a submission directory

    mkdir lysozyme_run01 && cd lysozyme_run01
  4. Copy the contents of this directory into the submission directory

    git clone https://github.com/GoogleCloudPlatform/cluster-toolkit.git
    cp -r cluster-toolkit/docs/videos/healthcare-and-life-sciences/lysozyme-example/* .
  5. Copy the Lysozyme protein into the submission directory

    cp /data_input/protein_data_bank/1AKI.pdb .
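    As a quick sanity check, the leading records of the PDB file identify the structure:

      head -n 3 1AKI.pdb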
  6. Submit the job

    Your current directory should now contain:

    • the 1AKI.pdb protein file
    • a submit.sh Slurm sbatch script
    • a config/ directory containing configs used by the run

    The submit.sh script contains several steps that are annotated with comments. To submit the job, run the following command:

    sbatch submit.sh
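    For orientation, the sketch below shows the general shape of such a script; the partition name, Spack invocation, file names, and GROMACS stage are illustrative assumptions, not the actual contents of submit.sh:

      #!/bin/bash
      #SBATCH --job-name=lysozyme
      #SBATCH --partition=gpu   # partition name is an assumption
      #SBATCH --gpus=1          # one A100, matching the quota section above

      # Make GROMACS available (the blueprint installs it with Spack)
      spack load gromacs

      # Each stage preprocesses inputs with grompp and then runs mdrun;
      # the real script chains several stages (minimization, equilibration,
      # production) using the .mdp files under config/
      gmx grompp -f config/minim.mdp -c 1AKI_solv.gro -p topol.top -o em.tpr
      gmx mdrun -deffnm em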
  7. Monitor the job

    Use the following command to see the status of the job:

    squeue

    The job state (ST) will show CF while the job is being configured. Once the state switches to R, the job is running.
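    Rather than re-running squeue by hand, you can have it refresh automatically; the 10-second interval here is arbitrary:

      squeue -u $USER --iterate=10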

    If you refresh the VM instances page, you will see an a2-highgpu-1g machine that has been auto-scaled up to run this job. It will have a name like hcls01-gpu-ghpc-0.

    Once the job is in the running state you can track progress with the following command:

    tail -f slurm-*.out
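    After the job completes (or at any point during the run, if job accounting is enabled in the cluster), Slurm's accounting view summarizes job state and elapsed time; replace <jobid> with the ID printed by sbatch:

      sacct -j <jobid> --format=JobID,JobName,State,Elapsed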
  8. Visualize the results

    1. Access the remote desktop using the Chrome Remote Desktop page under Remote devices. If you have not yet set up the remote desktop, follow these instructions.

    2. Open a terminal in the remote desktop window.

    3. Navigate to the attached outputs bucket.

      cd /data_output/
    4. Launch VMD.

      vmd 1AKI_newbox.gro 1AKI_md.xtc
    5. Update the graphics options:

      1. From the VMD main menu, select Graphics > Representations... and configure the following options in the Graphical Representations window that opens.
      2. Set Coloring Method to Secondary Structure.
      3. Set Drawing Method to NewCartoon.
      4. Select the Trajectory tab.
      5. Set Trajectory Smoothing Window Size to 1.
      6. Close the Graphical Representations menu.
    6. Press the play button in the lower right-hand corner of the VMD Main window. A scripted alternative to these graphics steps is sketched below.
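    If you prefer to script the graphics setup instead of clicking through the menus, VMD can execute a Tcl file at startup via its -e flag. The sketch below writes a small script and relaunches VMD with it; the representation index 0 assumes the default single representation VMD creates on load:

      # Write a small VMD startup script (Tcl) applying the same settings:
      # color representation 0 of the top molecule by secondary structure,
      # draw it as NewCartoon, then start playing the trajectory
      printf '%s\n' \
        'mol modcolor 0 top Structure' \
        'mol modstyle 0 top NewCartoon' \
        'animate forward' > view.tcl
      # Launch VMD and execute the script on startup
      vmd 1AKI_newbox.gro 1AKI_md.xtc -e view.tcl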