diff --git a/_talks/tutorial-1.md b/_talks/tutorial-1.md
index 5743a1d..ad626a0 100644
--- a/_talks/tutorial-1.md
+++ b/_talks/tutorial-1.md
@@ -7,5 +7,108 @@ categories:
 permalink: /:collection/:categories/Tutorial 1
 ---
-# Abstract
-TBD
\ No newline at end of file

# Description


The CityLearn tutorial at RLEM'23 will help participants get acquainted with CityLearn, an OpenAI Gym environment developed for easy implementation and benchmarking of control algorithms, e.g., rule-based control, model predictive control, or deep reinforcement learning control, in the demand-response, building-energy, and grid-interactive-community domains. By the end of the tutorial, participants will know how to design their own simple or advanced control algorithms to provide energy flexibility, and will be familiar enough with the CityLearn environment to use it in personal projects.
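Control in a Gym environment follows the standard reset/step loop. The sketch below illustrates that pattern with a deliberately tiny, hypothetical single-battery building environment; the class, reward, and load model are illustrative assumptions, not the actual CityLearn API.

```python
import random

class ToyBatteryEnv:
    """Illustrative single-building battery environment (NOT the CityLearn API)."""

    def __init__(self, hours=24, capacity_kwh=6.0):
        self.hours = hours
        self.capacity = capacity_kwh
        self.reset()

    def reset(self):
        self.hour = 0
        self.soc = 0.5 * self.capacity  # battery state of charge, kWh
        return (self.hour, self.soc)

    def step(self, action):
        # action in [-1, 1]: fraction of capacity to (dis)charge this hour,
        # clipped so the state of charge stays within [0, capacity]
        delta = max(-self.soc, min(action * self.capacity, self.capacity - self.soc))
        self.soc += delta
        load = 1.0 + 0.5 * random.random()  # assumed hourly building load, kWh
        net = load + delta                  # net grid draw after battery action
        reward = -max(net, 0.0)             # penalize grid consumption
        self.hour += 1
        done = self.hour >= self.hours
        return (self.hour, self.soc), reward, done, {}

# Standard Gym-style control loop with a placeholder random policy
env = ToyBatteryEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    action = random.uniform(-1.0, 1.0)
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

The same reset/step loop structure carries over to CityLearn itself, with the environment and observation/action spaces swapped for the real ones.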


# Learning Outcomes


The primary learning outcome is for participants to gain familiarity with the CityLearn environment, its application programming interface (API), and its dataset offerings for extended use in academic research or personal projects. Secondary outcomes are to:

1. Understand how electrification, distributed energy resources (e.g., batteries and photovoltaic (PV) systems), and smart controls provide a promising pathway to decarbonization and energy flexibility.
2. Learn how to design and optimize their own rule-based control (RBC) agent for battery management using readily available knowledge of a building's energy use.
3. Identify the challenges surrounding the generalizability of an RBC agent and how reinforcement learning (RL) can mitigate them.
4. Train their own RL tabular Q-learning algorithm.
5. Evaluate the performance of a standard model-free deep RL algorithm in optimizing key performance indicators (KPIs) that quantify energy flexibility, environmental, and economic costs.
6. Learn the effect of different control algorithms and their parameters on these KPIs.
7. Make a submission to The CityLearn Challenge 2023.
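The rule-based control in outcome 2 can be as simple as a time-of-use schedule: charge the battery when electricity is typically cheap or PV generation is high, and discharge during the evening peak. A minimal sketch, where the hour bands and action magnitudes are illustrative assumptions rather than the tutorial's actual rules:

```python
def rbc_action(hour):
    """Map hour of day (0-23) to a battery action in [-1, 1]:
    positive charges, negative discharges. Bands are assumed, not prescribed."""
    if 9 <= hour <= 15:
        return 0.5   # midday: charge from assumed PV surplus / cheap electricity
    elif 17 <= hour <= 21:
        return -0.5  # evening peak: discharge to offset grid draw
    else:
        return 0.0   # otherwise idle

# A full day's schedule derived from the rules
daily_schedule = [rbc_action(h) for h in range(24)]
```

Optimizing such an agent then amounts to tuning the hour bands and magnitudes against the building's observed load profile.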

# Target Audience

+

The target audience for this tutorial includes the following:


# Prerequisites


The CityLearn tutorial has a low barrier to entry: participants do not need prior experience with reinforcement learning (RL) or with Gym environments. However, participants should have at least beginner-level knowledge of Python or a similar high-level scripting language, and a computer able to run a Google Colab notebook in the browser or a Jupyter notebook locally.


# Program


The workshop is divided into the following parts (note that durations are flexible):

| Duration | Description |
| --- | --- |
| 5m | Overview, Learning Outcomes, Climate Impact, Target Audience and Prerequisites |
| 20m | Background on Grid-Interactive Efficient Buildings, Energy Flexibility and CityLearn |
| 15m | Overview of Hands-On Experiments, Setting up Development Environment, Dataset Description and Key Performance Indicators for Evaluation |
| 5m | Coffee Break |
| 10m | Experiment 1: Build your Custom Rule-Based Controller |
| 10m | Experiment 2: An Introduction to Tabular Q-Learning Algorithm as an Adaptive Controller |
| 10m | Experiment 3.1: Optimize a Soft Actor-Critic Reinforcement Learning Controller |
| 15m | Experiment 3.2: Tune your Soft Actor-Critic Agent |
| 5m | Coffee Break |
| 20m | Make a Submission to The CityLearn Challenge 2023 Control and Forecast Tracks |
| 5m | Next Steps and Concluding Remarks |
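The experiments in the program are scored with KPIs that typically compare a controlled scenario against a baseline. The sketch below shows two common indicators, cost and carbon emissions, each normalized by the baseline so that values below 1 mean improvement; these formulas are common definitions assumed for illustration, not necessarily the tutorial's exact metrics.

```python
def kpis(net_kwh, baseline_kwh, price, carbon_intensity):
    """Normalized cost and emission KPIs: controlled / baseline (lower is better).
    Only positive net consumption (grid imports) is counted."""
    cost = sum(max(e, 0.0) * p for e, p in zip(net_kwh, price))
    base_cost = sum(max(e, 0.0) * p for e, p in zip(baseline_kwh, price))
    emissions = sum(max(e, 0.0) * c for e, c in zip(net_kwh, carbon_intensity))
    base_emissions = sum(max(e, 0.0) * c for e, c in zip(baseline_kwh, carbon_intensity))
    return {'cost': cost / base_cost, 'carbon_emissions': emissions / base_emissions}

# Example with made-up hourly series: controlled consumption vs. a no-battery baseline
result = kpis(
    net_kwh=[0.8, 1.2, 0.5],        # kWh with control
    baseline_kwh=[1.0, 1.5, 1.0],   # kWh without control
    price=[0.2, 0.3, 0.2],          # $/kWh
    carbon_intensity=[0.4, 0.5, 0.4],  # kgCO2/kWh
)
```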

# References

\ No newline at end of file