diff --git a/_talks/tutorial-1.md b/_talks/tutorial-1.md
index 5743a1d..ad626a0 100644
--- a/_talks/tutorial-1.md
+++ b/_talks/tutorial-1.md
@@ -7,5 +7,108 @@ categories:
 permalink: /:collection/:categories/Tutorial 1
 ---
-# Abstract
-TBD
\ No newline at end of file
+The CityLearn tutorial at RLEM'23 will help participants get acquainted with the CityLearn OpenAI Gym environment, developed for the easy implementation and benchmarking of control algorithms, e.g., rule-based control, model predictive control, or deep reinforcement learning control, in the demand response, building energy, and grid-interactive community domains. By the end of the tutorial, participants will know how to design their own simple or advanced control algorithms to provide energy flexibility, and will have acquired familiarity with the CityLearn environment for extended use in personal projects.
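+
+CityLearn follows the standard Gym interaction loop. As a minimal sketch (the dataset name, the `env.done` convenience property, and `env.evaluate()` are assumptions based on recent CityLearn releases and may differ in other versions), a random controller can be run against a benchmark dataset as follows:
+
+```python
+from citylearn.citylearn import CityLearnEnv
+
+# Load one of the benchmark datasets that ship with the package.
+env = CityLearnEnv(schema='citylearn_challenge_2022_phase_1')
+observations = env.reset()
+
+while not env.done:
+    # One action list per building, sampled uniformly at random here.
+    actions = [space.sample().tolist() for space in env.action_space]
+    observations, reward, done, info = env.step(actions)
+
+# Summarize control performance as key performance indicators (KPIs).
+kpis = env.evaluate()
+```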
+
+The primary learning outcome for participants is to gain familiarity with the CityLearn environment, its application programming interface (API), and its dataset offerings for extended use in academic research or personal projects. Other secondary outcomes are to:
+
+The target audience for this tutorial includes the following:
+
+The CityLearn tutorial has a fairly low barrier to entry, and participants do not need prior experience with reinforcement learning (RL) or with the use of a Gym environment. However, participants need at least beginner-level knowledge of Python or a similar high-level scripting language. Participants should also have a computer that can launch a Google Colab notebook in the browser or a Jupyter notebook locally.
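+
+A quick way to verify that the development environment is ready is to run a cell like the one below in Colab or Jupyter. It assumes the tutorial uses the CityLearn package distributed on PyPI, installed beforehand with `pip install CityLearn`:
+
+```python
+# Environment check for a first Colab or Jupyter notebook cell.
+import sys
+from importlib.metadata import version
+
+print(sys.version)           # any recent Python 3 interpreter should do
+print(version('CityLearn'))  # raises PackageNotFoundError if not installed
+```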
+
+The workshop is divided into the following parts (note that durations are flexible):
+
+| Duration | Description |
+|---|---|
+| 5m | Overview, Learning Outcomes, Climate Impact, Target Audience and Prerequisites |
+| 20m | Background on Grid-Interactive Efficient Buildings, Energy Flexibility and CityLearn |
+| 15m | Overview of Hands-On Experiments, Setting up Development Environment, Dataset Description and Key Performance Indicators for Evaluation |
+| 5m | Coffee Break |
+| 10m | Experiment 1: Build your Custom Rule-Based Controller (see the first sketch after this table) |
+| 10m | Experiment 2: An Introduction to the Tabular Q-Learning Algorithm as an Adaptive Controller (see the second sketch) |
+| 10m | Experiment 3.1: Optimize a Soft Actor-Critic Reinforcement Learning Controller (see the third sketch) |
+| 15m | Experiment 3.2: Tune your Soft Actor-Critic Agent |
+| 5m | Coffee Break |
+| 20m | Make a Submission to The CityLearn Challenge 2023 Control and Forecast Tracks |
+| 5m | Next Steps and Concluding Remarks |
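+
+As a taste of Experiment 1, a custom rule-based controller can be as simple as a mapping from the hour of day to a battery action. The sketch below is illustrative only; the action convention (a fraction of battery capacity to charge or discharge each hour) is an assumption, and CityLearn's actual agent interface may differ across versions:
+
+```python
+def rule_based_action(hour: int) -> float:
+    """Return a battery action for the given hour of day (1-24):
+    positive values charge, negative values discharge."""
+    if 1 <= hour <= 6:       # off-peak overnight: charge at a modest rate
+        return 0.2
+    elif 16 <= hour <= 21:   # evening peak: discharge to relieve the grid
+        return -0.2
+    return 0.0               # otherwise, leave the battery idle
+
+# Example: the action schedule for one day.
+daily_actions = [rule_based_action(h) for h in range(1, 25)]
+```
+
+Experiment 2 replaces such fixed rules with a tabular Q-learning agent that adapts from experience. A minimal sketch of the core update, with an assumed discretization (hour of day as the state, three battery actions) and placeholder hyperparameters:
+
+```python
+import numpy as np
+
+n_states, n_actions = 24, 3               # hour of day x {charge, idle, discharge}
+q_table = np.zeros((n_states, n_actions))
+alpha, gamma, epsilon = 0.1, 0.99, 0.1    # learning rate, discount, exploration
+rng = np.random.default_rng(0)
+
+def act(state: int) -> int:
+    """Epsilon-greedy action selection from the Q-table."""
+    if rng.random() < epsilon:
+        return int(rng.integers(n_actions))
+    return int(q_table[state].argmax())
+
+def update(state: int, action: int, reward: float, next_state: int) -> None:
+    """One temporal-difference update of the action-value table."""
+    td_target = reward + gamma * q_table[next_state].max()
+    q_table[state, action] += alpha * (td_target - q_table[state, action])
+```
+
+Experiments 3.1 and 3.2 move to a Soft Actor-Critic agent. A sketch of how such an agent might be trained with stable-baselines3 is shown below; the `NormalizedObservationWrapper` and `StableBaselines3Wrapper` imports and the `central_agent` argument are assumptions about CityLearn's wrapper module that may differ across versions, and the hyperparameter values are placeholders of the kind tuned in Experiment 3.2:
+
+```python
+from stable_baselines3 import SAC
+from citylearn.citylearn import CityLearnEnv
+from citylearn.wrappers import NormalizedObservationWrapper, StableBaselines3Wrapper
+
+# A single (central) agent controls all buildings so that the
+# single-agent stable-baselines3 interface applies.
+env = CityLearnEnv(schema='citylearn_challenge_2022_phase_1', central_agent=True)
+env = NormalizedObservationWrapper(env)  # scale observations for the neural networks
+env = StableBaselines3Wrapper(env)       # expose a standard Gym interface
+
+model = SAC('MlpPolicy', env, learning_rate=3e-4, batch_size=256, gamma=0.99)
+model.learn(total_timesteps=10_000)
+```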