Model-free RL algorithms learn optimal policies directly from interaction data, without an explicit model of the environment. Five popular model-free RL algorithms are highlighted below, with their key ideas, mathematical objectives, and engineering applications.
Overview: Q-Learning learns the optimal action-value function `Q^*(s,a)` directly from experience. It is an off-policy, value-based method that can learn from exploratory actions.
Update Rule: $$ Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha \left[r_{t+1} + \gamma \max_{a'} Q(s_{t+1},a') - Q(s_t,a_t)\right] $$
Applications: Equipment scheduling, discrete process control (e.g., valves, simple robotics).
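Below is a minimal sketch of the tabular Q-learning update with epsilon-greedy exploration. The environment sizes, `alpha`, `gamma`, and `epsilon` are illustrative assumptions, not values from the text.

```python
import numpy as np

n_states, n_actions = 16, 4           # assumed sizes for a small discrete task
alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = np.zeros((n_states, n_actions))   # tabular Q(s, a)

def epsilon_greedy(s):
    # Explore with probability epsilon, otherwise act greedily w.r.t. Q.
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[s]))

def q_update(s, a, r, s_next, done):
    # TD target: r + gamma * max_a' Q(s', a'), with no bootstrap at terminal states.
    target = r + (0.0 if done else gamma * np.max(Q[s_next]))
    Q[s, a] += alpha * (target - Q[s, a])
```

Because the max over next actions is taken regardless of which action the behavior policy chose, the update learns about the greedy policy while exploring, which is what makes Q-learning off-policy.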
Overview: DQN extends Q-learning by using a neural network to approximate Q-values, making it suitable for large or high-dimensional state spaces.
Key Innovations: an experience replay buffer that breaks correlations between consecutive samples, and a separate target network `Q_{\theta^-}` (a periodically updated copy of `Q_\theta`) that stabilizes the bootstrapped target.
Loss Function: $$ L(\theta) = \left(Q_\theta(s,a) - [r + \gamma \max_{a'}Q_{\theta^-}(s',a')]\right)^2 $$
Applications: Autonomous driving, discrete robotic control, power systems management.
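A minimal sketch of the DQN TD loss with a target network, assuming PyTorch; the network architecture, `obs_dim`, and `n_actions` are illustrative, and the batch tensors `s`, `a` (long dtype), `r`, `s_next`, `done` are assumed to come from a replay buffer.

```python
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 8, 4, 0.99
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())   # periodically re-synced copy (theta^-)

def dqn_loss(s, a, r, s_next, done):
    # Q_theta(s, a) for the actions actually taken in the replay minibatch.
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped target uses the frozen target-network parameters theta^-.
        max_q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1 - done) * max_q_next
    return nn.functional.mse_loss(q_sa, target)
```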
Overview: Policy gradient methods directly optimize the policy parameters `\theta` by gradient ascent on the expected return.
REINFORCE Update: $$ \theta \leftarrow \theta + \alpha\, G_t \nabla_{\theta}\log \pi_\theta(a_t|s_t) $$ where `G_t` is the return from time step `t`.
Variance Reduction: Use baseline `b(s)` or advantage `A(s,a)` to improve stability.
Applications: Continuous control in robotics, process parameter tuning, network optimization.
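A minimal sketch of a REINFORCE update with a simple mean-return baseline for variance reduction, assuming PyTorch and a categorical policy; the network shapes and learning rate are illustrative.

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_step(states, actions, returns):
    # log pi_theta(a_t | s_t) for the sampled trajectory.
    logits = policy(states)
    log_probs = torch.distributions.Categorical(logits=logits).log_prob(actions)
    # Subtracting a baseline (here the batch-mean return) reduces gradient
    # variance without biasing the estimator.
    advantages = returns - returns.mean()
    loss = -(log_probs * advantages).mean()   # minimize -J to ascend J
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```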
Overview: PPO improves policy gradient stability by preventing overly aggressive updates.
Clipped Objective: $$ J^{CLIP}(\theta) = \mathbb{E}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \text{clip}\left(r_t(\theta),\,1-\epsilon,\,1+\epsilon\right)\hat{A}_t\right)\right] $$ where `r_t(\theta) = \pi_\theta(a_t|s_t) / \pi_{\theta_{old}}(a_t|s_t)` is the probability ratio between the new and old policies.
Applications: Robotic locomotion, drone control, continuous industrial parameter optimization.
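A minimal sketch of the clipped surrogate loss, assuming PyTorch and that log-probabilities under the current and old policies and the advantage estimates are already computed; tensor names are illustrative.

```python
import torch

def ppo_clip_loss(log_probs, old_log_probs, advantages, eps=0.2):
    # Probability ratio r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t).
    ratio = torch.exp(log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # Take the pessimistic (min) term; negate because optimizers minimize.
    return -torch.min(unclipped, clipped).mean()
```

Clipping removes the incentive to push the ratio outside `[1-\epsilon, 1+\epsilon]`, which is what keeps each policy update from being overly aggressive.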
Overview: DDPG combines policy gradients and Q-learning for continuous action spaces using actor-critic methods.
Updates (critic loss and deterministic policy gradient):
$$ L(\theta_Q) = \left(Q_{\theta_Q}(s,a) - [r + \gamma Q_{\theta_Q^-}(s',\mu_{\theta_\mu^-}(s'))]\right)^2 $$
$$ \nabla_{\theta_\mu} J \approx \mathbb{E}_{s}\left[\nabla_a Q_{\theta_Q}(s,a)\big|_{a=\mu_{\theta_\mu}(s)}\,\nabla_{\theta_\mu}\mu_{\theta_\mu}(s)\right] $$
Applications: Robotics (pendulum balancing), chemical process control, portfolio management, voltage regulation in power systems.
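A minimal sketch of one DDPG training step (critic regression, deterministic actor update, and Polyak-averaged target networks), assuming PyTorch; the network sizes, learning rates, and soft-update coefficient `tau` are illustrative, and `s`, `a`, `r`, `s_next`, `done` are assumed replay-buffer batches.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 8, 2, 0.99, 0.005
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_target = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
critic_target = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s_next, done):
    # Critic: regress Q(s,a) toward r + gamma * Q'(s', mu'(s')) using the target nets.
    with torch.no_grad():
        next_q = critic_target(torch.cat([s_next, actor_target(s_next)], dim=1)).squeeze(1)
        target_q = r + gamma * (1 - done) * next_q
    q = critic(torch.cat([s, a], dim=1)).squeeze(1)
    critic_loss = nn.functional.mse_loss(q, target_q)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: ascend Q(s, mu(s)), i.e. the deterministic policy gradient.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Polyak (soft) update of the target networks.
    with torch.no_grad():
        for p, p_t in zip(critic.parameters(), critic_target.parameters()):
            p_t.mul_(1 - tau).add_(tau * p)
        for p, p_t in zip(actor.parameters(), actor_target.parameters()):
            p_t.mul_(1 - tau).add_(tau * p)
```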