Merge pull request #460 from mlcommons/small_documentation_clarifications

documentation updates
priyakasimbeg authored Jul 27, 2023
2 parents 9520e18 + 5651abb commit 0db8dbb
Showing 3 changed files with 12 additions and 11 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -172,7 +172,7 @@ docker run -t -d \
--gpus all \
--ipc=host \
<docker_image_name> \
-b <debug_mode>
-b true
```
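For reference, a complete invocation of this development container might look like the sketch below. The volume mount and image name are placeholders and assumptions; only the flags shown in the hunk above come from CONTRIBUTING.md.

```bash
# Sketch of a full development run; the mount path and image name are placeholders.
docker run -t -d \
  -v $HOME/algorithmic-efficiency:/algorithmic-efficiency \
  --gpus all \
  --ipc=host \
  <docker_image_name> \
  -b true
```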

# Submitting PRs
13 changes: 7 additions & 6 deletions README.md
@@ -23,9 +23,8 @@
[MLCommons Algorithmic Efficiency](https://mlcommons.org/en/groups/research-algorithms/) is a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models. This repository holds the [competition rules](RULES.md) and the benchmark code to run it. For a detailed description of the benchmark design, see our [paper](https://arxiv.org/abs/2306.07179).

# Table of Contents
- [Table of Contents](#table-of-contents)
- [AlgoPerf Benchmark Workloads](#algoperf-benchmark-workloads)
- [Installation](#installation)
- [Python Virtual Environment](#python-virtual-environment)
- [Docker](#docker)
- [Getting Started](#getting-started)
- [Rules](#rules)
@@ -51,7 +50,7 @@ You can install this package and dependencies in a [python virtual environment](#
pip3 install -e '.[pytorch_gpu]' -f 'https://download.pytorch.org/whl/torch_stable.html'
pip3 install -e '.[full]'
```
## Virtual environment
## Python virtual environment
Note: Python minimum requirement >= 3.8

To set up a virtual environment and install this repository
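(The concrete commands are collapsed in this hunk; a minimal sketch of what they amount to, assuming a system Python >= 3.8, is shown below.)

```bash
# Minimal sketch: create and activate a virtual environment, then install
# this repository with the extras from the snippet above.
python3 -m venv env
source env/bin/activate
pip3 install -e '.[full]'
```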
@@ -74,7 +73,7 @@ To set up a virtual environment and install this repository

<details>
<summary>
Additional Details
Per workload installations
</summary>
You can also install the requirements for individual workloads, e.g. via
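(The example command is collapsed in this hunk; a hedged sketch follows, with `<workload_name>` standing in for whichever extras the package actually defines.)

```bash
# Illustrative only; replace <workload_name> with an extra defined by the package setup.
pip3 install -e '.[<workload_name>]'
```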

@@ -113,6 +112,7 @@ See instructions [here](https://github.com/NVIDIA/nvidia-docker).


### Running Docker Container (Interactive)
To use the Docker container as an interactive virtual environment, you can run a container mounted to your local data and code directories and execute the `bash` program. This may be useful if you are in the process of developing a submission.
1. Run detached Docker Container
```bash
docker run -t -d \
@@ -123,6 +123,7 @@ See instructions [here](https://github.com/NVIDIA/nvidia-docker).
--gpus all \
--ipc=host \
<docker_image_name>
-b true
```
This will print out a container id.
2. Open a bash terminal
@@ -131,14 +132,14 @@ See instructions [here](https://github.com/NVIDIA/nvidia-docker).
```
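(The command itself is collapsed in this hunk; the usual way to attach a shell, using the container id printed in step 1, is sketched below.)

```bash
# Attach an interactive bash shell to the running container.
docker exec -it <container_id> /bin/bash
```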

### Running Docker Container (End-to-end)
To run a submission end-to-end in a container see [Getting Started Document](./getting_started.md#run-your-submission-in-a-docker-container).
To run a submission end-to-end in a containerized environment see [Getting Started Document](./getting_started.md#run-your-submission-in-a-docker-container).

# Getting Started
For instructions on developing and scoring your own algorithm in the benchmark see [Getting Started Document](./getting_started.md).
## Running a workload
To run a submission directly by running a Docker container, see [Getting Started Document](./getting_started.md#run-your-submission-in-a-docker-container).

Alternatively from a your virtual environment or interactively running Docker container `submission_runner.py` run:
From your virtual environment or interactively running Docker container run:

**JAX**
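(The JAX command is collapsed in this diff; a rough sketch is given below. The flag names and paths are assumptions for illustration, not taken from this hunk.)

```bash
# Hypothetical example: run a JAX submission on the MNIST workload.
# Flag names and paths are assumptions.
python3 submission_runner.py \
    --framework=jax \
    --workload=mnist \
    --experiment_dir=$HOME/experiments \
    --experiment_name=baseline \
    --submission_path=reference_algorithms/development_algorithms/mnist/mnist_jax/submission.py \
    --tuning_search_space=reference_algorithms/development_algorithms/mnist/tuning_search_space.json
```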

8 changes: 4 additions & 4 deletions getting_started.md
@@ -1,18 +1,18 @@
# Getting Started

Table of Contents:
- [Set up and installation](#workspace-set-up-and-installation)
- [Set up and installation](#set-up-and-installation)
- [Download the data](#download-the-data)
- [Develop your submission](#develop-your-submission)
- [Run your submission](#run-your-submission)
- [Docker](#run-your-submission-in-a-docker-container)
- [Score your submission](#score-your-submission)

## Workspace set up and installation
## Set up and installation
To get started you will have to make a few decisions and install the repository along with its dependencies. Specifically:
1. Decide if you would like to develop your submission in either Pytorch or Jax.
2. Set up your workstation or VM. We recommend to use a setup similar to the [benchmarking hardware](https://github.com/mlcommons/algorithmic-efficiency/blob/main/RULES.md#benchmarking-hardware).
The specs on the benchmarking machines are:
2. Set up your workstation or VM. We recommend to use a setup similar to the [benchmarking hardware](https://github.com/mlcommons/algorithmic-efficiency/blob/main/RULES.md#benchmarking-hardware).
The specs on the benchmarking machines are:
- 8 V100 GPUs
- 240 GB in RAM
- 2 TB in storage (for datasets).
