RL-SHEMS

This repository accompanies a publication recently accepted by Applied Energy; a link will be added here once the article is online.

The publication is closely linked to another publication/repository of mine: https://github.com/lilanger/SHEMS

Langer, Lissy, and Thomas Volling. "An optimal home energy management system for modulating heat pumps and photovoltaic systems." Applied Energy 278 (2020): 115661. https://doi.org/10.1016/j.apenergy.2020.115661

Preprint available here: https://arxiv.org/abs/2009.02349

This repository takes the model predictive control (MPC) implementation of the above Smart Home Energy Management System (SHEMS) and translates it into a reinforcement learning (RL) environment. The different environments tested are implemented using the lightweight Julia package Reinforce.jl.

The RL environment is solved using the deep deterministic policy gradient (DDPG) algorithm, implemented with the Julia package Flux.jl. You will find some other algorithms in the repository, but most of them will not work in their current state.

HOW IT WORKS (some hints)

Loading the right environment

The repository contains Manifest and Project files so that the same Julia package versions can be installed. Julia version 1.6.1 is used.
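The pinned versions can be reproduced from the Julia REPL; a minimal sketch, assuming you start Julia 1.6.1 from the repository root:

```julia
# Install the exact package versions recorded in Project.toml / Manifest.toml.
using Pkg
Pkg.activate(".")    # use the repository's own environment
Pkg.instantiate()    # resolve and install the pinned versions
```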

On a cluster

When running the model on a cluster, the provided job files can be used; the default targets a GPU (jobfile_ddpg_v12), but there is also a CPU version (jobfile_ddpg_v12_cpu). For example, 40 parallel model runs can be started with the bash command: qsub -t 1-40:1 jobfile_ddpg_v12.job
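As a sketch of how an array job can map tasks to individual runs: on Grid Engine schedulers, `qsub -t 1-40:1` sets the `SGE_TASK_ID` environment variable for each task. The snippet below is only an illustration of that mechanism (the variable handling is an assumption, not taken from the actual job files):

```julia
# Hypothetical sketch: derive a per-task run id from the scheduler's
# array index. Grid Engine sets SGE_TASK_ID for each task of `qsub -t 1-40:1`;
# the fallback of "1" covers local runs outside the cluster.
run_id = parse(Int, get(ENV, "SGE_TASK_ID", "1"))
println("starting model run $run_id")
```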

Work flow

  • In general, all input data is fed from the input.jl file; templates for the different algorithms are available. The input files of previous runs are saved in out/input.
  • The general workflow is defined in DDPG_reinforce_v12_nf.
  • The folder algorithms contains the code for the DDPG implementation.
  • The folder Reinforce.jl... contains the RL environments and the file to embed them in the Reinforce package. The environments used in the paper are Case A: H10, Case B: H9, and Case C: U8.
  • The Analysis-cases folder contains the result analyses of the individual cases and the results of the main runs illustrated in the paper.
  • The data folder contains the input data of the RL environment.
  • The out folder contains the results of the model runs; however, I have only uploaded the most recent results, to save on space.
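For orientation, a Reinforce.jl environment subtypes `AbstractEnvironment` and implements `reset!`, `actions`, `step!`, and `finished`. The toy environment below is only an illustrative sketch of that interface, not the actual H10/H9/U8 code:

```julia
using Reinforce

# Toy continuous-action environment sketching the Reinforce.jl interface.
mutable struct ToyEnv <: AbstractEnvironment
    state::Vector{Float64}   # read by the default Reinforce.state(env)
    reward::Float64          # read by the default Reinforce.reward(env)
end
ToyEnv() = ToyEnv(zeros(2), 0.0)

Reinforce.reset!(env::ToyEnv) = (env.state .= 0.0; env)

# Continuous action space, as DDPG requires
Reinforce.actions(env::ToyEnv, s) = IntervalSet(-1.0, 1.0)

function Reinforce.step!(env::ToyEnv, s, a)
    env.state .+= a                      # toy dynamics
    env.reward = -abs(sum(env.state))    # toy cost-style reward
    env.reward, env.state                # step! returns (reward, next state)
end

Reinforce.finished(env::ToyEnv, s′) = abs(sum(s′)) > 10.0
```

The actual SHEMS environments follow this same pattern, with the household state (storage levels, temperatures, prices, etc.) in place of the toy state vector.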

I tried to add some comments in the code so that others can understand what is going on. I hope I was somewhat successful. If you have questions, just raise an issue and I will try to help.
