Merge pull request #7 from Computational-Plant-Science/v0.0.3
Release 0.0.3
wpbonelli authored Dec 31, 2022
2 parents 7e5a500 + a9f4d69 commit 8cbd62a
Showing 23 changed files with 762 additions and 480 deletions.
15 changes: 15 additions & 0 deletions HISTORY.md
@@ -1,3 +1,18 @@
### Version 0.0.3

#### New features

* [feat(pre)](https://github.com/Computational-Plant-Science/slappt/commit/413405d278b726dd3a76db9a1f58f1cff57ad0de): Support pre-commands (e.g., for loading modules). Committed by w-bonelli on 2022-12-31.

#### Bug fixes

* [fix(parallelism)](https://github.com/Computational-Plant-Science/slappt/commit/5ef343c73cbfe75f46b224459bf16593ebc9c932): Fix iteration/parameter sweep support. Committed by w-bonelli on 2022-12-31.
* [fix(options)](https://github.com/Computational-Plant-Science/slappt/commit/95b79b0d1b4170f4eac6aa2f5a6c546b32c82cef): Fix --singularity flag. Committed by w-bonelli on 2022-12-31.

#### Refactoring

* [refactor(submit)](https://github.com/Computational-Plant-Science/slappt/commit/dc4b67b4bf30e09c41271a50b51641368b81d518): Introduce --submit & associated options. Committed by w-bonelli on 2022-12-31.

### Version 0.0.2

#### New features
46 changes: 34 additions & 12 deletions README.md
@@ -5,8 +5,8 @@
[Slurm](https://slurm.schedmd.com/overview.html) scripts for [Apptainer](http://apptainer.org) jobs

[![PyPI Version](https://img.shields.io/pypi/v/slappt.png)](https://pypi.python.org/pypi/slappt)
[![PyPI Versions](https://img.shields.io/pypi/pyversions/slappt.png)](https://pypi.python.org/pypi/slappt)
[![PyPI Status](https://img.shields.io/pypi/status/slappt.png)](https://pypi.python.org/pypi/slappt)
[![PyPI Versions](https://img.shields.io/pypi/pyversions/slappt.png)](https://pypi.python.org/pypi/slappt)

[![CI](https://github.com/Computational-Plant-Science/slappt/actions/workflows/ci.yml/badge.svg)](https://github.com/Computational-Plant-Science/slappt/actions/workflows/ci.yml)
[![Documentation Status](https://readthedocs.org/projects/slappt/badge/?version=latest)](https://slappt.readthedocs.io/en/latest/?badge=latest)
@@ -22,20 +22,25 @@
- [Requirements](#requirements)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [Caveats](#caveats)
- [Shell selection](#shell-selection)
- [Singularity support](#singularity-support)
- [Pre-commands](#pre-commands)
- [Documentation](#documentation)
- [Disclaimer](#disclaimer)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Overview

`slappt` composes Slurm job scripts for [Apptainer](https://apptainer.org/docs/user/main/) workflows from config files or CLI args.
`slappt` generates and submits Slurm job scripts for [Apptainer](https://apptainer.org/docs/user/main/) workflows. Jobs can be configured in YAML or via CLI.


## Requirements

`slappt` requires only Python 3.8+.
`slappt` requires Python 3.8+ and a handful of dependencies, including `click`, `pyaml`, `paramiko`, and `requests`.

`slappt` can also submit scripts to remote clusters. To submit a job script, the host machine must be able to connect via key- or password-authenticated SSH to the target cluster.
To submit a job script, the host machine must either run `slurmctld` itself, with the standard Slurm commands available, or be able to connect to the target cluster via key- or password-authenticated SSH.

## Installation

@@ -47,7 +52,7 @@ pip install slappt

## Quickstart

Say you have a Slurm cluster with `apptainer` installed, and you have permission to submit to the `batch` partition.
Say you have access to a Slurm cluster with `apptainer` installed, and you have permission to submit to the `batch` partition.

Copy the `hello.yaml` file from the `examples` directory to your current working directory, then run:

@@ -64,8 +69,6 @@ slappt --image docker://alpine \
--entrypoint "echo 'hello world'" > hello.sh
```
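
For reference, a `hello.yaml` equivalent to the flags above would presumably look something like this sketch (inferred from the CLI options and the YAML spec, not copied from the shipped example file):

```yaml
# illustrative config; keys mirror the CLI flags
image: docker://alpine
shell: sh
partition: batch
entrypoint: echo 'hello world'
```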

**Note:** for most image definitions, specifying the `shell` is likely not necessary, as the default is `bash`. However, for images that don't have `bash` installed (e.g., `alpine` only has `sh`), a different shell must be selected.

Your `hello.sh` script should now contain:

```shell
@@ -82,16 +85,35 @@
apptainer exec docker://alpine sh -c "echo 'hello world'"
```

If already on the cluster, just submit it as usual:
If you're already on the cluster, use the `--submit` flag to submit the job directly. (The standard Slurm commands must be available for this to work.) If submission succeeds, the job ID is printed.

You can provide authentication information to submit the script to remote clusters over SSH. For instance, assuming you have key authentication set up and your key is `~/.ssh/id_rsa`:

```shell
sbatch hello.sh
slappt ... --host <cluster IP or FQDN> --username <username>
```
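
For example, a complete remote submission might look like the following sketch. The host, username, and key path are placeholders, and the `--pkey` flag is assumed to mirror the `pkey` key in the YAML spec:

```shell
# hypothetical values; adjust the host, user, and key path for your cluster
slappt hello.yaml --host cluster.example.edu \
    --username alice \
    --pkey ~/.ssh/id_rsa
```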

If you're on a different machine, you can use the extra `sshlurm` command to submit the script over SSH. For instance, assuming you have key authentication set up for the cluster, and your key is `~/.ssh/id_rsa`:

```shell
sshlurm hello.sh --host <cluster IP or FQDN> --username <username>
```

### Caveats

There are a few things to note about the example above.

#### Shell selection

For most image definitions, specifying the `shell` is likely not necessary, as the default is `bash`. However, for images that don't have `bash` installed (e.g., `alpine` only has `sh`), a different shell must be selected.

#### Singularity support

If your cluster still uses `singularity`, pass the `--singularity` flag (or set the `singularity` key in the configuration file to `true`) to substitute `singularity` for `apptainer` in the command wrapping your workflow entrypoint.

#### Pre-commands

If `apptainer` or `singularity` is not available by default on your cluster's compute nodes, you may need to add `--pre` commands (or a `pre` section in the configuration file), for instance `--pre "module load apptainer"`, or:

```yaml
...
pre:
- module load apptainer
...
```
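
On the CLI, the same pre-command can be combined with the flags from the quickstart, for instance (a sketch using only options documented in this README):

```shell
slappt --image docker://alpine \
    --shell sh \
    --partition batch \
    --pre "module load apptainer" \
    --entrypoint "echo 'hello world'" > hello.sh
```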

## Documentation
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -8,7 +8,7 @@

project = 'slappt'
author = 'Computational Plant Science Lab'
release = '0.0.2'
release = '0.0.3'

# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
5 changes: 2 additions & 3 deletions docs/index.rst
@@ -3,7 +3,7 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
slappt: slurm scripts for apptainer jobs
slurm scripts for apptainer jobs
========================================

.. toctree::
@@ -19,8 +19,7 @@ slappt: slurm scripts for apptainer jobs
:caption: Usage

md/usage.md
md/commands.md
md/sshlurm.md
md/spec.md


Indices and tables
21 changes: 0 additions & 21 deletions docs/md/commands.md

This file was deleted.

10 changes: 10 additions & 0 deletions docs/md/install.md
@@ -1,7 +1,17 @@
# Installation

## Official release

`slappt` is [available on the Python Package Index](https://pypi.org/project/slappt/) and can be installed with `pip`:

```shell
pip install slappt
```

## Development version

The latest development version of `slappt` can be installed from GitHub:

```shell
pip install git+https://github.com/Computational-Plant-Science/slappt.git
```
55 changes: 50 additions & 5 deletions docs/md/quickstart.md
@@ -1,19 +1,64 @@
# Quickstart

Say you're on a Slurm cluster with an active Python3.8+ environment and permission to submit to the `batch` partition. First make sure `slappt` is installed:
Say you have access to a Slurm cluster with `apptainer` installed, and you have permission to submit to the `batch` partition.

Copy the `hello.yaml` file from the `examples` directory to your current working directory, then run:

```shell
pip install slappt
slappt hello.yaml > hello.sh
```

Then we're off to the races:
Alternatively, without the configuration file:

```shell
slappt --image docker://alpine \
--shell sh \
--partition batch \
--entrypoint "echo 'hello world'"
--entrypoint "echo 'hello world'" > hello.sh
```

Your `hello.sh` script should now contain:

```shell
#!/bin/bash
#SBATCH --job-name=0477f4b9-e119-4354-8384-f50d7a96adad
#SBATCH --output=slappt.0477f4b9-e119-4354-8384-f50d7a96adad.%j.out
#SBATCH --error=slappt.0477f4b9-e119-4354-8384-f50d7a96adad.%j.err
#SBATCH --partition=batch
#SBATCH -c 1
#SBATCH -N 1
#SBATCH --ntasks=1
#SBATCH --time=01:00:00
#SBATCH --mem=1GB
module load apptainer # only if needed; some clusters require this
apptainer exec docker://alpine sh -c "echo 'hello world'"
```

**Note:** for most image definitions, specifying the `shell` is likely not necessary &mdash; the default is `bash`. However, for images that don't have `bash` installed (`alpine` only has `sh`) you'll need to specify a different shell.
If you're already on the cluster, use the `--submit` flag to submit the job directly. (The standard Slurm commands must be available for this to work.) If submission succeeds, the job ID is printed.

You can provide authentication information to submit the script to remote clusters over SSH. For instance, assuming you have key authentication set up and your key is `~/.ssh/id_rsa`:

```shell
slappt ... --host <cluster IP or FQDN> --username <username>
```
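
For example, a complete remote submission might look like the following sketch. The host, username, and key path are placeholders, and the `--pkey` flag is assumed to mirror the `pkey` key in the YAML spec:

```shell
# hypothetical values; adjust the host, user, and key path for your cluster
slappt hello.yaml --host cluster.example.edu \
    --username alice \
    --pkey ~/.ssh/id_rsa
```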

## Caveats

There are a few things to note about the example above.

### Shell

For most image definitions, specifying the `shell` is likely not necessary, as the default is `bash`. However, for images that don't have `bash` installed (e.g., `alpine` only has `sh`), a different shell must be selected.

### Singularity support

If your cluster still uses `singularity`, pass the `--singularity` flag (or set the `singularity` key in the configuration file to `true`) to substitute `singularity` for `apptainer` in the command wrapping your workflow entrypoint.

### Pre-commands

If `apptainer` or `singularity` is not available by default on your cluster's compute nodes, you may need to add `--pre` commands (or a `pre` section in the configuration file), for instance `--pre "module load apptainer"`, or:

```yaml
pre:
- module load apptainer
```
36 changes: 36 additions & 0 deletions docs/md/spec.md
@@ -0,0 +1,36 @@
# YAML Specification

`slappt` supports declarative YAML configuration to make container workflows reusable.

```yaml
# standard attributes
image: # the container image definition to use, e.g. docker://alpine (registry prefix is required)
shell: # the shell to use (default: bash)
partition: # the cluster partition to submit to
entrypoint: # the command to run inside the container
workdir: # the working directory to use
email: # the email address to send notifications to
name: # the name of the job (default: slappt.<guid>)
pre: # a list of commands to run before invoking the container (e.g. loading modules)
inputs: # a text file containing a newline-separated list of input files
environment: # a dictionary of environment variables to set
bind_mounts: # a list of bind mounts to use, in format <host path>:<container path>
no_cache: # don't use the apptainer/singularity cache, force a rebuild of the image (default: false)
gpus: # the number of GPUs to request
time: # the job's walltime
account: # the account name to associate the job with
mem: # the amount of memory to request (default: 1GB)
nodes: # the number of nodes to request (default: 1)
cores: # the number of cores to request (default: 1)
tasks: # the number of tasks to request (default: 1)
header_skip: # a list of #SBATCH headers to omit from the generated script (can be useful e.g. for clusters which have virtual memory and reject --mem headers)
singularity: # whether to invoke singularity instead of apptainer (default: false)
# submission attributes
host: # the hostname, IP or FQDN of the remote cluster to submit to
port: # the port to use for the SSH connection (default: 22)
username: # the username to use for the SSH connection
password: # the password to use for SSH authentication
pkey: # the path to the private key to use for SSH authentication
allow_stderr: # don't raise an error if the remote command produces stderr output (default: false)
timeout: # the timeout for the SSH connection (default: 10)
```
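
For instance, a minimal configuration for the quickstart job might look like the following sketch (values are illustrative, mirroring the quickstart defaults):

```yaml
image: docker://alpine
shell: sh
partition: batch
entrypoint: echo 'hello world'
time: "01:00:00"
mem: 1GB
nodes: 1
cores: 1
tasks: 1
pre:
  - module load apptainer
```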
26 changes: 0 additions & 26 deletions docs/md/sshlurm.md

This file was deleted.
