
Distributed Adaptive Control Tutorial

Current Methods in Neurotechnology

Course details
Non-credit or Credit course: Non-credit course
University
Teacher: Ismael T. Freire and Adrián F. Amil
Category: Computational neuroscience
Dimension: Empirical neuroscience & Clinical neuroscience
Level: Introductory
Includes
  • Video lectures
  • Tutorials
  • Exercises

Can you solve the Cliff Walking problem?

Let's explore how we can use memories of past experiences to make decisions by building AI models!

This tutorial introduces the Sequential Episodic Control (SEC) algorithm to solve a classical Reinforcement Learning (RL) challenge: the Cliff Walking problem.
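To get a feel for the task, the Cliff Walking environment can be loaded and stepped through directly. The sketch below is an assumption-laden illustration, not part of the tutorial itself: it assumes the Gymnasium library and its CliffWalking-v0 environment, whereas the course may use its own implementation of the grid world. A random agent wanders a 4x12 grid where every step costs -1 and stepping into the cliff costs -100 and sends the agent back to the start, which is exactly what makes the task a useful benchmark for adaptive behavior.

```python
# Minimal sketch of the Cliff Walking problem, assuming Gymnasium's
# CliffWalking-v0 environment (not necessarily the tutorial's own setup).
import gymnasium as gym

env = gym.make("CliffWalking-v0")   # 4x12 grid; cliff transitions give -100
obs, info = env.reset(seed=0)

total_reward = 0
for _ in range(100):                # cap the episode so a random walk always ends
    action = env.action_space.sample()            # purely random policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:                   # reached the goal (or time limit)
        break

print("Return of a random policy:", total_reward)
env.close()
```

Running this a few times shows how poorly an agent without memory does here, which motivates the episodic-control approach taken by SEC.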


Course Features
Study the main components of SEC and how they are implemented computationally in an RL context (see the sketch after this list).
Understand how the classical "Cliff World" RL problem provides a simple benchmark for building an adaptive agent.
Explore the roles of SEC's memory and reward function in an agent's behavior and performance.
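
As a rough illustration of how an episodic memory can drive action selection in such a grid world, here is a minimal sketch of an episodic-control-style agent. It is not the authors' SEC implementation; the names (SequenceMemory, suggest, run_episode) are hypothetical, and the sketch only captures the general idea SEC builds on: storing sequences of state-action couplets from past episodes and reusing the ones that led to the best outcomes.

```python
# Minimal episodic-control-style sketch, NOT the authors' SEC implementation.
# All names here are hypothetical and chosen for illustration only.
import math
import random

class SequenceMemory:
    """Stores whole episodes as sequences of (state, action) couplets."""

    def __init__(self, max_sequences=50):
        self.max_sequences = max_sequences
        self.sequences = []  # list of (trace, episode_return) pairs

    def store(self, trace, episode_return):
        # Keep the trace together with the return it achieved.
        self.sequences.append((trace, episode_return))
        if len(self.sequences) > self.max_sequences:
            # Forget the least rewarding trace when the buffer is full.
            self.sequences.sort(key=lambda item: item[1])
            self.sequences.pop(0)

    def suggest(self, state):
        # Propose the action that followed `state` in the best-returning episode.
        best_action, best_return = None, -math.inf
        for trace, episode_return in self.sequences:
            for s, a in trace:
                if s == state and episode_return > best_return:
                    best_action, best_return = a, episode_return
        return best_action


def run_episode(env, memory, epsilon=0.1, max_steps=500):
    """One episode: reuse stored sequences when possible, explore otherwise."""
    obs, _ = env.reset()
    trace, total_reward = [], 0.0
    for _ in range(max_steps):
        suggested = memory.suggest(obs)
        if suggested is None or random.random() < epsilon:
            action = env.action_space.sample()  # explore
        else:
            action = suggested                  # exploit a rewarded sequence
        trace.append((obs, action))
        obs, reward, terminated, truncated, _ = env.step(action)
        total_reward += reward
        if terminated or truncated:
            break
    memory.store(trace, total_reward)
    return total_reward
```

With a Gymnasium-style environment such as CliffWalking-v0, repeatedly calling run_episode(env, SequenceMemory()) should gradually shift the agent from random exploration toward the safer, higher-return paths stored in memory, which is the behavioral pattern the tutorial examines in SEC.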
View the course on the Learn Gala Platform