Feature: Benchmarks (#67)
* Set up benchmark infrastructure.
* Add first benchmark results to documentation.
Samuel Burbulla authored Mar 19, 2024
1 parent cf575a4 commit ff2f682
Showing 36 changed files with 666 additions and 38 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -22,6 +22,7 @@
- Change `dadaptation` dependency from "==3.1" to ">=3.1,<4.0".
- Change `optuna` dependency from "3.5.0" to ">=3.5.0,<4.0.0".
- Add `FourierLayer` and `FourierNeuralOperator` with example.
- Add `benchmarks` infrastructure.

## 0.0.0 (2024-02-22)

22 changes: 22 additions & 0 deletions benchmarks/README.md
@@ -0,0 +1,22 @@
# Benchmarks

To run all benchmarks, execute the `run_all.py` script:

```bash
python run_all.py
```

If you only want to evaluate a single benchmark, adapt the `run_single.py` script (see the sketch below) and run

```bash
python run_single.py
```
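
For illustration, a minimal adaptation might look like the following sketch. It only swaps the benchmark/operator pair and leaves the remaining `RunConfig` fields at their defaults (as `run_all.py` does); the `SineUniform`/`FNO` combination is just an example, not a recommendation.

```python
from continuity.benchmarks.runner import RunConfig
from continuity.benchmarks import SineUniform
from continuity.operators import FourierNeuralOperator

# Sketch: benchmark FNO on SineUniform instead of DeepONet on SineRegular.
run = RunConfig(
    benchmark_name="SineUniform",
    benchmark_factory=lambda: SineUniform(),
    operator_name="FNO",
    operator_factory=lambda s: FourierNeuralOperator(s),
    seed=0,
)
```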

## Visualize

To visualize the benchmark runs in your database, run

```bash
python process.py
```
Empty file added benchmarks/html/img/.gitkeep
Binary file added benchmarks/html/img/SineRegular_BelNet.png
Binary file added benchmarks/html/img/SineRegular_DNO.png
Binary file added benchmarks/html/img/SineRegular_DeepONet.png
Binary file added benchmarks/html/img/SineRegular_FNO.png
Binary file added benchmarks/html/img/SineUniform_BelNet.png
Binary file added benchmarks/html/img/SineUniform_DNO.png
Binary file added benchmarks/html/img/SineUniform_DeepONet.png
Binary file added benchmarks/html/img/SineUniform_FNO.png
48 changes: 48 additions & 0 deletions benchmarks/html/style.css
@@ -0,0 +1,48 @@
.container {
  width: 100%;
  min-height: 100vh;
  background: #ffffff;
  display: flex;
  align-items: center;
  justify-content: center;
  flex-wrap: wrap;
  padding: 33px 30px;
}

.benchmark-table {
  border-collapse: collapse;
  background: #fff;
  color: #000;
  overflow: hidden;
  width: 100%;
  margin: 0 auto;
  position: relative;
  box-shadow: 0px 5px 15px #000;
}

.benchmark-table td, .benchmark-table th {
  padding-left: 12px;
  border: 0;
}

.benchmark-table thead tr {
  height: 40px;
  background: #232323;
  color: #fff;
}

.benchmark-table tbody tr {
  height: 50px;
  font-size: 15px;
  color: #000;
  line-height: 1.2;
}

.benchmark-table td, .benchmark-table th {
  text-align: left;
  vertical-align: middle;
}

.benchmark-table td:nth-of-type(2n-1) {
  background-color: #dbdbdb;
}
25 changes: 25 additions & 0 deletions benchmarks/html/table.html
@@ -0,0 +1,25 @@
<link rel="stylesheet" href="style.css">
<h2><a href="../api/continuity/benchmarks/#continuity.benchmarks.SineRegular">SineRegular</a></h2>
<table class="benchmark-table">
<thead>
<tr><th>Operator</th><th>Params</th><th>Learning Curve</th><th>loss/train</th><th>loss/test</th></tr>
</thead>
<tbody>
<tr><th><a href="../api/continuity/operators/#continuity.operators.FourierNeuralOperator">FNO</a></th><td>505</td><td width="150px"><img height="60px" src="img/SineRegular_FNO.png"></td><td>1.22e-07</td><td><b>1.4e-07</b></td></tr>
<tr><th><a href="../api/continuity/operators/#continuity.operators.DeepNeuralOperator">DNO</a></th><td>4257</td><td width="150px"><img height="60px" src="img/SineRegular_DNO.png"></td><td>8.66e-07</td><td>1.17e-06</td></tr>
<tr><th><a href="../api/continuity/operators/#continuity.operators.DeepONet">DeepONet</a></th><td>5872</td><td width="150px"><img height="60px" src="img/SineRegular_DeepONet.png"></td><td>3.21e-06</td><td>3.14e-06</td></tr>
<tr><th><a href="../api/continuity/operators/#continuity.operators.BelNet">BelNet</a></th><td>14056</td><td width="150px"><img height="60px" src="img/SineRegular_BelNet.png"></td><td>1.22e-05</td><td>9.91e-06</td></tr>
</tbody>
</table>
<h2><a href="../api/continuity/benchmarks/#continuity.benchmarks.SineUniform">SineUniform</a></h2>
<table class="benchmark-table">
<thead>
<tr><th>Operator</th><th>Params</th><th>Learning Curve</th><th>loss/train</th><th>loss/test</th></tr>
</thead>
<tbody>
<tr><th><a href="../api/continuity/operators/#continuity.operators.DeepNeuralOperator">DNO</a></th><td>4257</td><td width="150px"><img height="60px" src="img/SineUniform_DNO.png"></td><td>0.000169</td><td><b>0.000333</b></td></tr>
<tr><th><a href="../api/continuity/operators/#continuity.operators.BelNet">BelNet</a></th><td>14056</td><td width="150px"><img height="60px" src="img/SineUniform_BelNet.png"></td><td>2.78e-05</td><td>0.000425</td></tr>
<tr><th><a href="../api/continuity/operators/#continuity.operators.DeepONet">DeepONet</a></th><td>5872</td><td width="150px"><img height="60px" src="img/SineUniform_DeepONet.png"></td><td>0.00214</td><td>0.00642</td></tr>
<tr><th><a href="../api/continuity/operators/#continuity.operators.FourierNeuralOperator">FNO</a></th><td>505</td><td width="150px"><img height="60px" src="img/SineUniform_FNO.png"></td><td>0.191</td><td>0.238</td></tr>
</tbody>
</table>
12 changes: 12 additions & 0 deletions benchmarks/process.py
@@ -0,0 +1,12 @@
from continuity.benchmarks.table import BenchmarkTable
from continuity.benchmarks.database import BenchmarkDatabase


if __name__ == "__main__":
    db = BenchmarkDatabase()

    if len(db) == 0:
        raise SystemExit  # nothing to visualize yet

    table = BenchmarkTable(db)
    table.write_html()
56 changes: 56 additions & 0 deletions benchmarks/run_all.py
@@ -0,0 +1,56 @@
from typing import List
from continuity.benchmarks.runner import BenchmarkRunner, RunConfig
from continuity.benchmarks.database import BenchmarkDatabase
from continuity.benchmarks import SineRegular, SineUniform
from continuity.operators import (
    DeepONet,
    BelNet,
    FourierNeuralOperator,
    DeepNeuralOperator,
)


def all_runs():
    runs: List[RunConfig] = []

    # Benchmarks
    benchmarks = {
        "SineRegular": lambda: SineRegular(),
        "SineUniform": lambda: SineUniform(),
    }

    # Operators
    operators = {
        "DeepONet": lambda s: DeepONet(s),
        "FNO": lambda s: FourierNeuralOperator(s),
        "BelNet": lambda s: BelNet(s),
        "DNO": lambda s: DeepNeuralOperator(s),
    }

    # Seeds
    num_seeds = 1

    # Generate all combinations
    for benchmark_name, benchmark_factory in benchmarks.items():
        for operator_name, operator_factory in operators.items():
            for seed in range(num_seeds):
                run = RunConfig(
                    benchmark_name,
                    benchmark_factory,
                    operator_name,
                    operator_factory,
                    seed,
                )
                runs.append(run)

    return runs


if __name__ == "__main__":
    db = BenchmarkDatabase()
    runner = BenchmarkRunner()

    runs = all_runs()
    for i, run in enumerate(runs):
        print(f"Running {i+1}/{len(runs)}")
        stats = runner.run(run)
        db.add_run(stats)
23 changes: 23 additions & 0 deletions benchmarks/run_single.py
@@ -0,0 +1,23 @@
from continuity.benchmarks.runner import BenchmarkRunner, RunConfig
from continuity.benchmarks.database import BenchmarkDatabase
from continuity.benchmarks import SineRegular
from continuity.operators import DeepONet

run = RunConfig(
benchmark_name="Sine",
benchmark_factory=lambda: SineRegular(),
operator_name="DeepONet",
operator_factory=lambda s: DeepONet(s),
seed=0,
lr=1e-4,
tol=1e-3,
max_epochs=1000,
device="cpu",
)

if __name__ == "__main__":
    db = BenchmarkDatabase()
    runner = BenchmarkRunner()

    stats = runner.run(run)
    db.add_run(stats)
46 changes: 46 additions & 0 deletions build_scripts/copy_benchmarks.py
@@ -0,0 +1,46 @@
import logging
import os
from pathlib import Path

import mkdocs.plugins

logger = logging.getLogger(__name__)

root_dir = Path(__file__).parent.parent
docs_benchmarks_dir = root_dir / "docs" / "benchmarks"
benchmarks_dir = root_dir / "benchmarks" / "html"


@mkdocs.plugins.event_priority(100)
def on_pre_build(config):
logger.info("Temporarily copying benchmark results to docs directory")
docs_benchmarks_dir.mkdir(parents=True, exist_ok=True)
filepaths = list(benchmarks_dir.glob("*"))

for file in filepaths:
target_filepath = docs_benchmarks_dir / file.name

try:
if os.path.getmtime(file) <= os.path.getmtime(target_filepath):
logger.info(f"File '{os.fspath(file)}' hasn't been updated, skipping.")
continue
except FileNotFoundError:
pass
logger.info(
f"Creating symbolic link for '{os.fspath(file)}' "
f"at '{os.fspath(target_filepath)}'"
)
target_filepath.symlink_to(file)

logger.info("Finished copying notebooks to examples directory")


@mkdocs.plugins.event_priority(-100)
def on_shutdown():
logger.info("Removing temporary examples directory")
for file in docs_benchmarks_dir.glob("*.html"):
file.unlink()
for file in docs_benchmarks_dir.glob("*.css"):
file.unlink()
for file in docs_benchmarks_dir.glob("*.png"):
file.unlink()
3 changes: 3 additions & 0 deletions docs/benchmarks/.gitignore
@@ -0,0 +1,3 @@
table.html
style.css
img/*.png
1 change: 1 addition & 0 deletions docs/benchmarks/img
7 changes: 7 additions & 0 deletions docs/benchmarks/index.md
@@ -0,0 +1,7 @@
This is an overview of benchmark results comparing the performance of
different operator architectures on various problems.

The benchmarks are implemented in the `benchmarks` directory; see that
directory for more information on how the benchmarks are run.

{% include 'benchmarks/table.html' %}
2 changes: 2 additions & 0 deletions mkdocs.yml
@@ -14,6 +14,7 @@ hooks:
- build_scripts/copy_notebooks.py
- build_scripts/copy_changelog.py
- build_scripts/copy_contributing.py
- build_scripts/copy_benchmarks.py

plugins:
- autorefs
@@ -185,6 +186,7 @@ nav:
- FNO: examples/fno.ipynb
- Meshes: examples/meshes.ipynb
- Self-supervised: examples/selfsupervised.ipynb
- Benchmarks: benchmarks/index.md
- Code:
- API: api/continuity/
- Changelog: CHANGELOG.md
3 changes: 1 addition & 2 deletions pyproject.toml
@@ -32,7 +32,7 @@ dependencies = [
# neptune-client
# mlflow
# comet-ml
"tensorboard",
# tensorboard

# --------- linters --------- #
"pre-commit", # hooks for applying linters on commit
@@ -47,7 +47,6 @@ dependencies = [

# --------- dependencies --------- #
"torch>=2.1.0,<3.0.0",
"dadaptation>=3.1,<4.0",
"matplotlib",
"pandas",
"optuna>=3.5.0,<4.0.0",
3 changes: 2 additions & 1 deletion src/continuity/benchmarks/__init__.py
@@ -5,5 +5,6 @@
"""

from .benchmark import Benchmark
from .sine import SineRegular, SineUniform

__all__ = ["Benchmark"]
__all__ = ["Benchmark", "SineRegular", "SineUniform"]
31 changes: 31 additions & 0 deletions src/continuity/benchmarks/database.py
@@ -0,0 +1,31 @@
import time
import pickle

# TODO: Use sqlite


class BenchmarkDatabase:
    def __init__(self, file: str = "benchmarks.db"):
        self.file = file
        self.all_runs = []
        self.load()

    def add_run(self, stats: dict):
        stats["timestamp"] = time.time()
        self.all_runs.append(stats)
        self.save()

    def __len__(self):
        return len(self.all_runs)

    def load(self):
        try:
            with open(self.file, "rb") as f:
                self.all_runs = pickle.load(f)
            print(f"Loaded database with {len(self.all_runs)} entries.")
        except FileNotFoundError:
            pass

    def save(self):
        with open(self.file, "wb") as f:
            pickle.dump(self.all_runs, f)
