
Sleep Awake Benchmark

This code is part of the paper "A Large Scale Benchmark to Validate Sleep-Wake Scoring Algorithms", currently under review.

How To Use It

  1. Download the MESA dataset from sleepdata.org (https://sleepdata.org/datasets/mesa).
  2. Process the dataset with the notebook generate_mesa_dataset_task.ipynb.
  3. In general, all scripts need to preprocess the data (i.e., get the mean activity value in a window of size X). To save time, we implemented a preprocessing script (sleep_processdataset.py) whose only parameter is the task number. It outputs a file called hdf_task. Run it with:
$ python sleep_processdataset.py <taskid>
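The windowed preprocessing mentioned in step 3 can be sketched as follows. This is a minimal stand-in for the idea (mean activity over a window of X epochs), not the actual logic of sleep_processdataset.py; the function name and edge handling are illustrative assumptions:

```python
import numpy as np

def window_mean_activity(activity, win_size):
    """Mean activity value in a centered window of `win_size` epochs.

    Edges are padded with the edge values so the output has the same
    length as the input. Assumes an odd `win_size` for a symmetric window.
    """
    activity = np.asarray(activity, dtype=float)
    half = win_size // 2
    padded = np.pad(activity, half, mode="edge")
    kernel = np.ones(win_size) / win_size
    # A 'valid' convolution over the padded series yields len(activity) values
    return np.convolve(padded, kernel, mode="valid")

# Example: activity counts per epoch, smoothed over a 3-epoch window
counts = [0, 10, 20, 30, 40]
print(window_mean_activity(counts, 3))
```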
  4. Run the formulas (the script automatically reads the dataset from hdf_task) and process all of them:
$ python sleep_formulas.py <taskid>
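Traditional sleep-wake scoring formulas are typically linear functions of the activity counts in a window around each epoch. As an illustration, here is one widely cited Cole-Kripke variant for one-minute epochs; the exact coefficients, variants, and rescoring rules implemented in sleep_formulas.py may differ:

```python
import numpy as np

def cole_kripke(activity):
    """Cole-Kripke-style sleep/wake scoring (one commonly cited
    one-minute-epoch variant; shown for illustration only).

    Returns 1 for epochs scored as sleep, 0 for wake.
    """
    activity = np.asarray(activity, dtype=float)
    # Weights for epochs t-4 .. t+2 relative to the current epoch t
    weights = np.array([106, 54, 58, 76, 230, 74, 67])
    # Zero-pad so every epoch has a full window
    padded = np.pad(activity, (4, 2), mode="constant")
    d = 0.001 * np.correlate(padded, weights, mode="valid")
    return (d < 1.0).astype(int)  # D < 1 => sleep
```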
  5. Run the machine learning models:
$ python sleep_ml.py <taskid>
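The machine learning approach treats each epoch as a feature vector (e.g., raw and windowed activity statistics) with a sleep/wake label. The sketch below shows the general recipe on toy data; the feature names, classifier choice, and labels are illustrative assumptions, not what sleep_ml.py actually uses:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy stand-in for the preprocessed task data: one row per epoch with
# windowed activity statistics (features are illustrative, not the
# columns produced by sleep_processdataset.py).
n = 400
activity = rng.gamma(shape=1.0, scale=50.0, size=n)
labels = (activity < 30).astype(int)  # 1 = sleep, 0 = wake (toy rule)
features = np.column_stack([
    activity,
    np.convolve(activity, np.ones(5) / 5, mode="same"),  # 5-epoch mean
])

# Train on the first half of the recording, evaluate on the rest
split = n // 2
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[:split], labels[:split])
print("accuracy:", accuracy_score(labels[split:], clf.predict(features[split:])))
```

In a real evaluation the split should be made per subject, not within a recording, to avoid leaking a subject's data between train and test sets.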
  6. Run the LSTM/CNN models:
$ python sleep_nn.py <taskid> <seq_len:20,50,100> <kind:LSTM,CNN>
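The seq_len parameter controls how many consecutive epochs the network sees at once. One common way to build such inputs is to slide a window over the per-epoch series and label each window with its last epoch, as sketched below; sleep_nn.py may align labels or shape its tensors differently:

```python
import numpy as np

def make_sequences(activity, labels, seq_len):
    """Turn a per-epoch activity series into overlapping sequences of
    length `seq_len`, each labeled with its last epoch's sleep/wake
    label (one common convention, assumed here for illustration).
    """
    activity = np.asarray(activity, dtype=float)
    X = np.stack([activity[i:i + seq_len]
                  for i in range(len(activity) - seq_len + 1)])
    y = np.asarray(labels)[seq_len - 1:]
    return X[..., None], y  # trailing axis = 1 input feature/channel

X, y = make_sequences(range(100), [0] * 100, seq_len=20)
print(X.shape, y.shape)  # (81, 20, 1) (81,)
```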
  7. Get the results:
     7.1. After running all the scripts, a number of intermediary result files will have been created (e.g., task1_formulas.csv). The script ship.sh collects them all and moves them to their expected directory (e.g., result or summary).
     7.2. sleep_summary.py generates the summary files, which are the input for the ipython/jupyter notebooks (result_analysis.ipynb, sleep_auc_plots.ipynb, and sleep_plot345.ipynb). The notebooks provide an easy way to visualize and analyze the results, as well as to generate the plots used in the manuscript.
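The collection step performed by ship.sh amounts to gathering the intermediary CSVs into the directory the summary step reads from. A minimal Python equivalent, assuming the task*_*.csv naming pattern and a result/ destination (both illustrative), might look like:

```python
import pathlib
import shutil

def ship(src=".", dest="result"):
    """Move intermediary result files (e.g., task1_formulas.csv) into
    the destination directory. A stand-in for ship.sh; the file
    pattern and destination are assumptions."""
    dest_dir = pathlib.Path(dest)
    dest_dir.mkdir(exist_ok=True)
    moved = []
    for f in pathlib.Path(src).glob("task*_*.csv"):
        shutil.move(str(f), dest_dir / f.name)
        moved.append(f.name)
    return sorted(moved)
```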
