Deep Reinforcement Learning: Key Papers

Reinforcement Learning (RL) combined with deep learning is known as Deep Reinforcement Learning (DRL). Deep learning provides function approximators that can handle large, complex state and/or action spaces, making it possible to tackle problems that were infeasible with traditional RL techniques. Techniques from this line of research, most notably PPO, are now also used to fine-tune large language models (LLMs) through reinforcement learning from human feedback (RLHF). Here’s a brief timeline of key insights and breakthroughs in Deep Reinforcement Learning over the past decade:

1. 2013: Playing Atari with Deep Reinforcement Learning
  • Organization: DeepMind
  • Breakthrough: This was perhaps the first major work that combined deep learning with Q-learning, resulting in a Deep Q-Network (DQN). The DQN was able to play several Atari 2600 games at or above human-level performance.
  • Key Insights: Experience replay and fixed Q-targets were used to stabilize learning. Experience replay breaks the temporal correlations between consecutive samples, and fixed Q-targets reduce the moving-target problem in Q-learning (a minimal sketch appears at the end of this post).
2. 2015: Human-level control through deep reinforcement learning
  • Organization: DeepMind
  • Breakthrough: An extension of the 2013 DQN work, this presented a more robust DQN that achieved human-level performance across a broad range of Atari games.
  • Key Insights: A more robust training setup, notably a separate target network updated only periodically, further stabilized DQN training and scaled it to a broad suite of Atari games.
3. 2015: Continuous control with deep reinforcement learning (DDPG)
  • Organization: DeepMind
  • Breakthrough: Introduced the Deep Deterministic Policy Gradient (DDPG) algorithm for continuous action spaces.
  • Key Insights: It used an actor-critic architecture in which the actor outputs a deterministic action and the critic estimates its value. Ornstein-Uhlenbeck noise was added to the actions for exploration (see the noise sketch at the end of this post).
4. 2016: Asynchronous Methods for Deep Reinforcement Learning (A3C)
  • Organization: DeepMind
  • Breakthrough: Introduced the Asynchronous Advantage Actor-Critic (A3C) algorithm which combined the actor-critic approach with asynchronous updates.
  • Key Insights: Multiple worker agents, each with its own copy of the model, explore separate instances of the environment in parallel and asynchronously push updates to shared parameters, leading to faster and more robust policy learning. The diversity of parallel experience also helps stabilize learning without an experience replay buffer (see the advantage sketch at the end of this post).
5. 2017: Proximal Policy Optimization (PPO)
  • Organization: OpenAI
  • Breakthrough: Introduced a simpler and more robust method for policy gradient optimization, making training more stable.
  • Key Insights: PPO constrains policy updates so that the new policy does not stray too far from the old one, avoiding the extreme updates that can destabilize training. It balances the simplicity of vanilla policy-gradient methods with the stability of Trust Region Policy Optimization (TRPO) by using a clipped surrogate objective. In the context of PPO, a “surrogate objective” is an approximation that is optimized in place of the true objective because it is easier to work with and yields more stable, reliable policy updates. The clip function keeps the probability ratio r(θ) = π_θ(a|s) / π_θ_old(a|s) within the range [1−ϵ, 1+ϵ], preventing excessively large policy updates (see the clipped-objective sketch at the end of this post).
6. 2018: Soft Actor-Critic (SAC)
  • Organization: UC Berkeley
  • Breakthrough: SAC is an off-policy actor-critic deep RL algorithm based on the maximum entropy RL framework.
  • Key Insights: SAC seeks policies that maximize both expected return and policy entropy, leading to more exploration, smoother policy updates, and generally better performance on continuous control tasks (see the entropy-regularized target sketch at the end of this post).
7. 2019 and beyond:

Subsequent years have seen these methods evolve and new algorithms emerge, with improvements in sample efficiency, stability, and scalability. There has also been a growing focus on:

  • Transfer Learning: Using pre-trained models to improve sample efficiency in RL.
  • Meta-learning: Training agents that can quickly adapt to new tasks.
  • Model-based RL: Incorporating learned models of the environment dynamics to improve sample efficiency and policy learning.
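
To make the ideas in the timeline concrete, here are a few minimal NumPy sketches. Class and variable names, hyperparameters, and the toy linear models are illustrative assumptions, not code from the papers. First, experience replay and a periodically synced target network, as used in DQN:

```python
import numpy as np

class ReplayBuffer:
    """Fixed-size buffer; sampling uniformly at random breaks temporal correlations."""
    def __init__(self, capacity=10_000):
        self.capacity, self.data = capacity, []

    def add(self, transition):                    # (state, action, reward, next_state, done)
        self.data.append(transition)
        if len(self.data) > self.capacity:
            self.data.pop(0)

    def sample(self, batch_size):
        idx = np.random.randint(len(self.data), size=batch_size)
        return [self.data[i] for i in idx]

def q_values(weights, state):
    """Toy linear Q-function: one row of weights per action."""
    return weights @ state

n_actions, state_dim, gamma, lr, sync_every = 4, 8, 0.99, 1e-3, 100
online_w = np.random.randn(n_actions, state_dim) * 0.01
target_w = online_w.copy()                        # target network starts as a copy
buffer = ReplayBuffer()

# Fill the buffer with random transitions; a real agent would collect these
# by acting (epsilon-greedily) in the environment.
for _ in range(500):
    s, s_next = np.random.randn(state_dim), np.random.randn(state_dim)
    buffer.add((s, np.random.randint(n_actions), np.random.randn(), s_next, False))

for step in range(1, 1001):
    for s, a, r, s_next, done in buffer.sample(32):
        # Bootstrap from the *target* network so the regression target stays
        # fixed between syncs (the "fixed Q-targets" idea).
        td_target = r + (0.0 if done else gamma * q_values(target_w, s_next).max())
        td_error = td_target - q_values(online_w, s)[a]
        online_w[a] += lr * td_error * s          # gradient step for the linear Q
    if step % sync_every == 0:
        target_w = online_w.copy()                # periodic target-network sync
```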
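
Next, the Ornstein-Uhlenbeck exploration noise added to DDPG's deterministic actions; the parameter values below are commonly used defaults, not necessarily the paper's:

```python
import numpy as np

class OUNoise:
    """Temporally correlated noise: dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)."""
    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = np.full(action_dim, mu, dtype=float)

    def sample(self):
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape))
        self.x += dx
        return self.x

noise = OUNoise(action_dim=2)
deterministic_action = np.array([0.3, -0.7])      # what the actor network outputs
exploratory_action = np.clip(deterministic_action + noise.sample(), -1.0, 1.0)
```

Because consecutive noise samples are correlated, the perturbed actions drift smoothly rather than jittering, which suits physical control tasks.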
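
For A3C, the "advantage" is an n-step return minus the critic's value estimate; the asynchronous part is that many worker copies run this computation in parallel on their own environments and push gradients to shared global parameters. A simplified single-worker view:

```python
import numpy as np

def n_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """n-step returns R_t and advantages A_t = R_t - V(s_t) for one rollout."""
    returns = np.zeros(len(rewards))
    running = bootstrap_value                     # critic's estimate V(s_{t+n})
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns, returns - values

# One worker's short rollout; in A3C, many such workers run simultaneously and
# asynchronously apply gradients of -log pi(a_t|s_t) * A_t (actor) and
# (R_t - V(s_t))^2 (critic) to a shared set of parameters.
rewards = np.array([0.0, 0.0, 1.0, 0.0, 1.0])
values = np.array([0.4, 0.5, 0.6, 0.5, 0.7])      # critic estimates V(s_t)
returns, advantages = n_step_advantages(rewards, values, bootstrap_value=0.5)
print(returns, advantages)
```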
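
The PPO clipped surrogate objective itself fits in a few lines; this sketch assumes the per-sample probability ratios and advantage estimates have already been computed:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Clipped surrogate: mean of min(r * A, clip(r, 1 - eps, 1 + eps) * A)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantage
    return np.minimum(unclipped, clipped).mean()  # maximized w.r.t. the policy parameters

# ratio r = pi_new(a|s) / pi_old(a|s); values above 1 mean the action became more likely.
ratio = np.array([0.8, 1.0, 1.5, 2.5])
advantage = np.array([1.0, -0.5, 2.0, 1.0])
print(ppo_clip_objective(ratio, advantage))
# The last sample's ratio (2.5) is clipped to 1.2, so making that action even
# more likely yields no extra objective value, which caps the size of the update.
```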
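
Finally, SAC's entropy bonus shows up in the soft Bellman target; the sketch below assumes a diagonal Gaussian policy and a fixed temperature `alpha`, and omits details such as the tanh action squashing:

```python
import numpy as np

def gaussian_log_prob(action, mean, std):
    """Log-density of a diagonal Gaussian policy at `action`."""
    return np.sum(-0.5 * ((action - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2 * np.pi))

# Soft Bellman target: r + gamma * (Q(s', a') - alpha * log pi(a'|s')).
# The -alpha * log pi term is the entropy bonus: a larger alpha rewards more
# stochastic (exploratory) policies alongside higher expected return.
alpha, gamma = 0.2, 0.99
reward, q_next = 1.0, 5.0
mean, std = np.zeros(2), np.ones(2) * 0.5         # policy pi(.|s') for the next state
next_action = mean + std * np.random.randn(2)     # sample a' ~ pi(.|s')
soft_target = reward + gamma * (q_next - alpha * gaussian_log_prob(next_action, mean, std))
print(soft_target)
```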
