This column follows the order of https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html.
Table of Contents
- Principle Analysis
  - Overview
  - Principle Details
- Algorithm Implementation
  - Overall Flow
  - Code Implementation
A2C: [ paper | code ]
Principle Analysis
Overview
A2C is the synchronous version of A3C; that is, the first A (for "asynchronous") in A3C is removed. In A3C, each agent communicates with the global parameters independently, so at times some agents may be running different versions of the policy, and the aggregated update may therefore not be optimal. To resolve this inconsistency, the coordinator in A2C waits for all parallel actors to finish their work before updating the global parameters; in the next iteration, all parallel actors then start from the same policy. The synchronized gradient update makes training more cohesive and can speed up convergence.
A2C has been shown to use GPUs more efficiently and to work better with large batch sizes, while achieving performance equal to or better than A3C.
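As a toy illustration of the difference (this sketch is not part of the referenced implementation; the parameter vector and per-worker gradients below are just random placeholders), compare applying each worker's gradient as it arrives with applying one aggregated update after all workers have finished:

```python
import numpy as np

np.random.seed(0)
theta = np.zeros(4)                                    # "global" policy parameters
worker_grads = [np.random.randn(4) for _ in range(8)]  # one gradient per parallel worker
lr = 0.1

# A3C-style (asynchronous): each worker pushes its gradient as soon as it is
# ready, so later gradients were computed against parameters that are already stale.
theta_a3c = theta.copy()
for g in worker_grads:
    theta_a3c = theta_a3c + lr * g

# A2C-style (synchronous): the coordinator waits for all workers, applies a
# single aggregated update, and every worker starts the next rollout from the
# same parameters.
theta_a2c = theta + lr * np.mean(worker_grads, axis=0)

print(theta_a3c, theta_a2c)
```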
Principle Details
Review of Actor-Critic
In the policy-gradient setting, if we replace the return R with the Q function and build a Critic network to estimate that Q value, we obtain the Actor-Critic method. The gradient with respect to the Actor's parameters becomes:

$$\nabla_\theta J(\theta) = \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, Q_w(s_t, a_t)\big]$$
The Critic is now updated from the squared error between its estimated Q value and the target ("actual") Q value, so its loss is:

$$L(w) = \big(Q_{\text{target}}(s_t, a_t) - Q_w(s_t, a_t)\big)^2,$$

where the target is typically the TD target $r_t + \gamma\, Q_w(s_{t+1}, a_{t+1})$.
A2C
We often subtract a baseline from the Q value so that the feedback can be both positive and negative. The baseline is usually the state value function, so the gradient becomes:

$$\nabla_\theta J(\theta) = \mathbb{E}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\,\big(Q_w(s_t, a_t) - V(s_t)\big)\big]$$
However, this would require two networks, one to estimate the state-action value Q and one to estimate the state value V, so we apply the substitution:

$$Q(s_t, a_t) \approx r_t + \gamma V(s_{t+1}),$$

which turns $Q(s_t, a_t) - V(s_t)$ into the advantage $A(s_t, a_t) = r_t + \gamma V(s_{t+1}) - V(s_t)$.
This adds a little variance, but it is negligible, and we obtain the Advantage Actor-Critic method, in which the Critic becomes a network that estimates the state value V. The Critic's loss is therefore the squared error between the target state value and the estimated state value:

$$L(w) = \big(r_t + \gamma V_w(s_{t+1}) - V_w(s_t)\big)^2$$
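As a minimal PyTorch sketch of these two losses (dummy tensors stand in for the rollout statistics; the variable names follow the full script at the end of this post, which additionally adds a small entropy bonus):

```python
import torch

# Dummy rollout statistics for 5 transitions, standing in for the quantities
# collected by the full script below.
log_probs = torch.randn(5, 1)   # log pi(a_t | s_t) from the Actor
values    = torch.randn(5, 1)   # V(s_t) predicted by the Critic
returns   = torch.randn(5, 1)   # bootstrapped targets, e.g. r_t + gamma * V(s_{t+1})

advantage = returns - values    # estimate of A(s_t, a_t)

# Actor: maximize log pi * A; the advantage is detached so the Actor term does
# not backpropagate into the Critic's value estimate.
actor_loss  = -(log_probs * advantage.detach()).mean()
# Critic: squared error between target and estimated state value.
critic_loss = advantage.pow(2).mean()

loss = actor_loss + 0.5 * critic_loss
```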
Algorithm Implementation
In both A2C's centralized learning and A3C's per-Worker learning, the policy used to sample data is consistent with the policy parameters currently being learned, i.e. the learning is on-policy.
Overall Flow
- Launch several threads (Workers), each syncing the latest network parameters from the Global Network;
- Each Worker samples independently;
- Once the total amount of collected data reaches the mini-batch size, all Workers stop sampling;
- The Global Network performs a single training update on the pooled mini-batch (the update rule is written out below);
- Each Worker updates its parameters from the Global Network;
- Repeat steps 2-5.
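With N parallel Workers and T steps per rollout (num_envs and num_steps in the code that follows), the synchronous update in step 4 can be written as one gradient step on the pooled mini-batch:

$$
\theta \leftarrow \theta + \alpha \,\frac{1}{N T}\sum_{i=1}^{N}\sum_{t=1}^{T}\nabla_\theta \log \pi_\theta(a_{i,t}\mid s_{i,t})\,\hat{A}(s_{i,t}, a_{i,t})
$$

where $\hat{A}$ is the advantage estimate from the previous section; in the code below it is formed from n-step bootstrapped returns (compute_returns) minus the Critic's value prediction.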
Code Implementation
For the full version, see GitHub:
https://github.com/sweetice/Deep-reinforcement-learning-with-pytorch/blob/master/Char04%20A2C/A2C.py
```python
import math
import random

import gym
import numpy as np

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical

import matplotlib.pyplot as plt

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

# SubprocVecEnv runs several gym environments in parallel worker processes.
from multiprocessing_env import SubprocVecEnv

num_envs = 8
env_name = "CartPole-v0"


def make_env():
    def _thunk():
        env = gym.make(env_name)
        return env
    return _thunk


plt.ion()
envs = [make_env() for i in range(num_envs)]
envs = SubprocVecEnv(envs)  # 8 parallel envs for training

env = gym.make(env_name)  # a single env for evaluation


class ActorCritic(nn.Module):
    def __init__(self, num_inputs, num_outputs, hidden_size, std=0.0):
        super(ActorCritic, self).__init__()
        self.critic = nn.Sequential(
            nn.Linear(num_inputs, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )
        self.actor = nn.Sequential(
            nn.Linear(num_inputs, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_outputs),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        value = self.critic(x)
        probs = self.actor(x)
        dist = Categorical(probs)
        return dist, value


def test_env(vis=False):
    """Run one episode on the single evaluation env with the current policy."""
    state = env.reset()
    if vis:
        env.render()
    done = False
    total_reward = 0
    while not done:
        state = torch.FloatTensor(state).unsqueeze(0).to(device)
        dist, _ = model(state)
        next_state, reward, done, _ = env.step(dist.sample().cpu().numpy()[0])
        state = next_state
        if vis:
            env.render()
        total_reward += reward
    return total_reward


def compute_returns(next_value, rewards, masks, gamma=0.99):
    """n-step bootstrapped returns R_t = r_t + gamma * R_{t+1}, cut off at episode ends."""
    R = next_value
    returns = []
    for step in reversed(range(len(rewards))):
        R = rewards[step] + gamma * R * masks[step]
        returns.insert(0, R)
    return returns


def plot(frame_idx, rewards):
    plt.plot(rewards, 'b-')
    plt.title('frame %s. reward: %s' % (frame_idx, rewards[-1]))
    plt.pause(0.0001)


num_inputs = envs.observation_space.shape[0]
num_outputs = envs.action_space.n

# Hyper params:
hidden_size = 256
lr = 1e-3
num_steps = 5

model = ActorCritic(num_inputs, num_outputs, hidden_size).to(device)
optimizer = optim.Adam(model.parameters())  # Adam's default lr (1e-3) matches lr above

max_frames = 20000
frame_idx = 0
test_rewards = []

state = envs.reset()

while frame_idx < max_frames:
    log_probs = []
    values = []
    rewards = []
    masks = []
    entropy = 0

    # rollout trajectory
    for _ in range(num_steps):
        state = torch.FloatTensor(state).to(device)
        dist, value = model(state)

        action = dist.sample()
        next_state, reward, done, _ = envs.step(action.cpu().numpy())

        log_prob = dist.log_prob(action)
        entropy += dist.entropy().mean()

        # keep shape (num_envs, 1) so log_probs lines up with values/advantage
        log_probs.append(log_prob.unsqueeze(1))
        values.append(value)
        rewards.append(torch.FloatTensor(reward).unsqueeze(1).to(device))
        masks.append(torch.FloatTensor(1 - done).unsqueeze(1).to(device))

        state = next_state
        frame_idx += 1

        if frame_idx % 100 == 0:
            test_rewards.append(np.mean([test_env() for _ in range(10)]))
            plot(frame_idx, test_rewards)

    # bootstrap from the value of the last state, then compute n-step returns
    next_state = torch.FloatTensor(next_state).to(device)
    _, next_value = model(next_state)
    returns = compute_returns(next_value, rewards, masks)

    log_probs = torch.cat(log_probs)
    returns = torch.cat(returns).detach()
    values = torch.cat(values)

    advantage = returns - values

    actor_loss = -(log_probs * advantage.detach()).mean()
    critic_loss = advantage.pow(2).mean()

    loss = actor_loss + 0.5 * critic_loss - 0.001 * entropy

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# test_env(True)
```
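A few notes on running the script: it targets the classic gym API (env.reset() returning only the observation and env.step() returning four values), and it imports multiprocessing_env, a small SubprocVecEnv helper that is not part of gym or PyTorch and is expected to sit next to A2C.py in the linked repository. The script trains on eight CartPole-v0 environments in parallel, evaluates the current policy on a single environment (10 episodes every 100 frames), and plots the average test reward with matplotlib.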