
TensorFlow2 强化学习秘籍(二)

原文:annas-archive.org/md5/ae4f6c3ed954fce75003dcfcae0c4977

译者:飞龙

协议:CC BY-NC-SA 4.0

第三章:实现高级 RL 算法

本章提供了简短而清晰的食谱,帮助你使用 TensorFlow 2.x 从零开始实现先进的强化学习(RL)算法和代理,包括构建深度 Q 网络(DQN)、双重和决斗深度 Q 网络(DDQN、DDDQN)、深度递归 Q 网络(DRQN)、异步优势演员-评论家(A3C)、近端策略优化(PPO)和深度确定性策略梯度(DDPG)的食谱。

本章将讨论以下食谱:

  • 实现深度 Q 学习算法、DQN 和 Double-DQN 代理

  • 实现决斗 DQN 代理

  • 实现决斗双重 DQN 算法和 DDDQN 代理

  • 实现深度递归 Q 学习算法和 DRQN 代理

  • 实现异步优势演员-评论家算法和 A3C 代理

  • 实现近端策略优化算法(Proximal Policy Optimization)和 PPO 代理

  • 实现深度确定性策略梯度算法和 DDPG 代理

技术要求

书中的代码在 Ubuntu 18.04 和 Ubuntu 20.04 上经过广泛测试,只要安装了 Python 3.6+,应该也能在更高版本的 Ubuntu 上运行。安装 Python 3.6+ 以及之前章节所列的必要 Python 包后,代码同样应该能够在 Windows 和 Mac OS X 上正常运行。建议创建并使用名为 tf2rl-cookbook 的 Python 虚拟环境来安装包并运行本书中的代码,推荐使用 Miniconda 或 Anaconda 来创建和管理该虚拟环境。

每个章节中每个食谱的完整代码可以在此处获取:github.com/PacktPublishing/Tensorflow-2-Reinforcement-Learning-Cookbook

实现深度 Q 学习算法、DQN 和 Double-DQN 代理

DQN 代理使用深度神经网络学习 Q 值函数。DQN 已被证明是处理离散动作空间环境与问题的一种强大算法;DQN 掌握 Atari 游戏被公认为深度强化学习历史上的一个标志性里程碑。

Double-DQN 代理使用两个相同的深度神经网络,它们的更新方式不同,因此权重也不同。第二个神经网络是从过去某一时刻(通常是上一轮)复制的主神经网络。

在本章节结束时,你将从零开始使用 TensorFlow 2.x 实现一个完整的 DQN 和 Double-DQN 代理,能够在任何离散动作空间的强化学习环境中进行训练。

让我们开始吧。

准备工作

要完成这个食谱,你首先需要激活 tf2rl-cookbook Conda Python 虚拟环境,并运行 pip install -r requirements.txt。如果以下导入语句没有问题,那么你就可以开始了!

import argparse
from datetime import datetime
import os
import random
from collections import deque
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.optimizers import Adam

现在我们可以开始了。

如何实现……

DQN 智能体包含几个组件,分别是 DQN 类、Agent 类和 train 方法。执行以下步骤,从零开始实现这些组件,构建一个完整的 DQN 智能体,使用 TensorFlow 2.x:

  1. 首先,让我们创建一个参数解析器来处理脚本的配置输入:

            parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-DQN")
    parser.add_argument("--env , default="CartPole-v0")
    parser.add_argument("--lr", type=float, default=0.005)
    parser.add_argument("--batch_size", type=int, default=256)
    parser.add_argument("--gamma", type=float, default=0.95)
    parser.add_argument("--eps", type=float, default=1.0)
    parser.add_argument("--eps_decay", type=float, default=0.995)
    parser.add_argument("--eps_min", type=float, default=0.01)
    parser.add_argument("--logdir", default="logs")
    args = parser.parse_args()
    
  2. 现在,让我们创建一个 Tensorboard 日志记录器,用于在智能体训练过程中记录有用的统计数据:

    logdir = os.path.join(
        args.logdir, parser.prog, args.env, 
        datetime.now().strftime("%Y%m%d-%H%M%S")
    )
    print(f"Saving training logs to:{logdir}")
    writer = tf.summary.create_file_writer(logdir)
    
  3. 接下来,让我们实现一个ReplayBuffer类:

    class ReplayBuffer:
        def __init__(self, capacity=10000):
            self.buffer = deque(maxlen=capacity)
        def store(self, state, action, reward, next_state,
        done):
            self.buffer.append([state, action, reward, 
            next_state, done])
        def sample(self):
            sample = random.sample(self.buffer, 
                                   args.batch_size)
            states, actions, rewards, next_states, done = \
                                map(np.asarray, zip(*sample))
            states = np.array(states).reshape(
                                        args.batch_size, -1)
            next_states = np.array(next_states).\
                                reshape(args.batch_size, -1)
            return states, actions, rewards, next_states, done
        def size(self):
            return len(self.buffer)
    
  4. 现在是时候实现 DQN 类了,该类定义了 TensorFlow 2.x 中的深度神经网络:

    class DQN:
        def __init__(self, state_dim, action_dim):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.epsilon = args.eps
            self.model = self.nn_model()
        def nn_model(self):
            model = tf.keras.Sequential(
                [
                    Input((self.state_dim,)),
                    Dense(32, activation="relu"),
                    Dense(16, activation="relu"),
                    Dense(self.action_dim),
                ]
            )
            model.compile(loss="mse", 
                          optimizer=Adam(args.lr))
            return model
    
  5. 为了从 DQN 获取预测和动作,让我们实现 predict 和 get_action 方法:

        def predict(self, state):
            return self.model.predict(state)
        def get_action(self, state):
            state = np.reshape(state, [1, self.state_dim])
            self.epsilon *= args.eps_decay
            self.epsilon = max(self.epsilon, args.eps_min)
            q_value = self.predict(state)[0]
            if np.random.random() < self.epsilon:
                return random.randint(0, self.action_dim - 1)
            return np.argmax(q_value)
        def train(self, states, targets):
            self.model.fit(states, targets, epochs=1)
    
  6. 实现了其他组件后,我们可以开始实现我们的Agent类:

    class Agent:
        def __init__(self, env):
            self.env = env
            self.state_dim = \
                self.env.observation_space.shape[0]
            self.action_dim = self.env.action_space.n
            self.model = DQN(self.state_dim, self.action_dim)
            self.target_model = DQN(self.state_dim, 
                                    self.action_dim)
            self.update_target()
            self.buffer = ReplayBuffer()
        def update_target(self):
            weights = self.model.model.get_weights()
            self.target_model.model.set_weights(weights)
    
  7. 深度 Q 学习算法的核心是 q 学习更新和经验回放。让我们接下来实现它:

        def replay_experience(self):
            for _ in range(10):
                states, actions, rewards, next_states, done=\
                    self.buffer.sample()
                targets = self.target_model.predict(states)
                next_q_values = self.target_model.\
                             predict(next_states).max(axis=1)
                targets[range(args.batch_size), actions] = (
                    rewards + (1 - done) * next_q_values * \
                    args.gamma
                )
                self.model.train(states, targets)
    
  8. 下一步至关重要的是实现train函数来训练智能体:

    def train(self, max_episodes=1000):
            with writer.as_default():  # Tensorboard logging
                for ep in range(max_episodes):
                    done, episode_reward = False, 0
                    observation = self.env.reset()
                    while not done:
                        action = \
                           self.model.get_action(observation)
                        next_observation, reward, done, _ = \
                           self.env.step(action)
                        self.buffer.store(
                            observation, action, reward * \
                            0.01, next_observation, done
                        )
                        episode_reward += reward
                        observation = next_observation
                    if self.buffer.size() >= args.batch_size:
                        self.replay_experience()
                    self.update_target()
                    print(f"Episode#{ep} Reward:{
                                            episode_reward}")
                    tf.summary.scalar("episode_reward",
                                     episode_reward, step=ep)
                    writer.flush()
    
  9. 最后,让我们创建主函数以开始训练智能体:

    if __name__ == "__main__":
        env = gym.make("CartPole-v0")
        agent = Agent(env)
        agent.train(max_episodes=20000)
    
  10. 要在默认环境(CartPole-v0)中训练 DQN 智能体,请执行以下命令:

    python ch3-deep-rl-agents/1_dqn.py
    
  11. 你还可以使用命令行参数在任何 OpenAI Gym 兼容的离散动作空间环境中训练 DQN 智能体:

    python ch3-deep-rl-agents/1_dqn.py --env "MountainCar-v0"
    
  12. 现在,为了实现 Double DQN 智能体,我们必须修改replay_experience方法,以使用 Double Q 学习的更新步骤,如下所示:

        def replay_experience(self):
            for _ in range(10):
                states, actions, rewards, next_states, done=\
                    self.buffer.sample()
                targets = self.target_model.predict(states)
                next_q_values = \
                    self.target_model.predict(next_states)[
                    range(args.batch_size),
                    np.argmax(self.model.predict(
                                       next_states), axis=1),
                ]
                targets[range(args.batch_size), actions] = (
                    rewards + (1 - done) * next_q_values * \
                        args.gamma
                )
                self.model.train(states, targets)
    
  13. 最后,为了训练 Double DQN 智能体,请保存并运行包含更新后的 replay_experience 方法的脚本,或者直接使用本书源代码中提供的脚本:

    python ch3-deep-rl-agents/1_double_dqn.py
    

让我们看看它是如何工作的。

它是如何工作的...

DQN 中的权重更新按以下 Q 学习方程进行:

$$\Delta w = \alpha \Big[ \big( r + \gamma \max_{a'} Q(s', a'; w) \big) - Q(s, a; w) \Big] \nabla_w Q(s, a; w)$$

这里,$\Delta w$ 是 DQN 参数(权重)的变化,s 是当前状态,a 是当前动作,s' 是下一个状态,r 是获得的奖励,w 代表 DQN 的权重,$\gamma$ 是折扣因子,$\alpha$ 是学习率,$Q(s, a; w)$ 表示权重为 w 的 DQN 对给定状态(s)和动作(a)预测的 Q 值。

为了理解 DQN 智能体与 Double-DQN 智能体的区别,请对比第 7 步(DQN)和第 12 步(Double DQN)中的 replay_experience 方法。你会注意到,关键区别在于 next_q_values 的计算方式:DQN 智能体直接取目标网络预测的 Q 值中的最大值(这可能被高估),而 Double DQN 智能体先用主网络选出下一状态的动作,再用目标网络评估该动作的 Q 值,即结合两个不同神经网络的预测。这种做法是为了缓解 DQN 智能体高估 Q 值的问题。
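
用公式对照两者的目标值计算方式(按上面代码中主网络 model 与目标网络 target_model 的角色写出,仅作示意):

$$y^{\text{DQN}} = r + \gamma \max_{a'} Q_{\text{target}}(s', a')$$

$$y^{\text{Double DQN}} = r + \gamma\, Q_{\text{target}}\big(s',\ \arg\max_{a'} Q_{\text{main}}(s', a')\big)$$

即 Double DQN 用主网络选动作、用目标网络估值,从而削弱最大化操作带来的高估偏差。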

实现 Dueling DQN 智能体

Dueling DQN 智能体通过修改后的网络架构显式地估计两个量:

  • 状态值,V(s)

  • 优势值,A(s, a)

状态值估计了处于状态 s 时的价值,优势值表示在状态 s 中采取行动 a 的优势。通过显式和独立地估计这两个数量,Dueling DQN 相较于 DQN 表现得更好。这个配方将带你逐步实现一个从零开始的 Dueling DQN 智能体,使用 TensorFlow 2.x。
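
作为补充,可以把 Dueling 架构的聚合方式写成公式(仅作示意;本食谱后面的实现直接用 Add() 把两路输出相加,即 $Q(s,a)=V(s)+A(s,a)$,而 Dueling DQN 原论文为保证 V 与 A 可辨识,通常还会减去优势的均值):

$$Q(s, a) = V(s) + \Big( A(s, a) - \frac{1}{|\mathcal{A}|} \sum_{a'} A(s, a') \Big)$$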

准备工作

要完成这个配方,首先需要激活 tf2rl-cookbook Conda Python 虚拟环境,并运行 pip install -r requirements.txt。如果以下导入语句没有问题,则说明可以开始了!

import argparse
import os
import random
from collections import deque
from datetime import datetime
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Add, Dense, Input
from tensorflow.keras.optimizers import Adam

现在我们可以开始了。

如何实现…

Dueling DQN 智能体由几个组件组成,即 DuelingDQN 类、Agent 类和 train 方法。按照以下步骤,从零开始实现这些组件,利用 TensorFlow 2.x 构建一个完整的 Dueling DQN 智能体:

  1. 作为第一步,让我们创建一个参数解析器,用于处理脚本的命令行配置输入:

    parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-DuelingDQN")
    parser.add_argument("--env", default="CartPole-v0")
    parser.add_argument("--lr", type=float, default=0.005)
    parser.add_argument("--batch_size", type=int, default=64)
    parser.add_argument("--gamma", type=float, default=0.95)
    parser.add_argument("--eps", type=float, default=1.0)
    parser.add_argument("--eps_decay", type=float, default=0.995)
    parser.add_argument("--eps_min", type=float, default=0.01)
    parser.add_argument("--logdir", default="logs")
    args = parser.parse_args()
    
  2. 为了在智能体训练过程中记录有用的统计信息,让我们创建一个 TensorBoard 日志记录器:

    logdir = os.path.join(
        args.logdir, parser.prog, args.env, 
        datetime.now().strftime("%Y%m%d-%H%M%S")
    )
    print(f"Saving training logs to:{logdir}")
    writer = tf.summary.create_file_writer(logdir)
    
  3. 接下来,让我们实现一个 ReplayBuffer 类:

    class ReplayBuffer:
        def __init__(self, capacity=10000):
            self.buffer = deque(maxlen=capacity)
        def store(self, state, action, reward, next_state,
        done):
            self.buffer.append([state, action, reward, 
                                next_state, done])
        def sample(self):
            sample = random.sample(self.buffer, 
                                   args.batch_size)
            states, actions, rewards, next_states, done = \
                                map(np.asarray, zip(*sample))
            states = np.array(states).reshape(
                                         args.batch_size, -1)
            next_states = np.array(next_states).reshape(
                                         args.batch_size, -1)
            return states, actions, rewards, next_states, done
        def size(self):
            return len(self.buffer)
    
  4. 现在是时候实现 DuelingDQN 类,该类在 TensorFlow 2.x 中定义深度神经网络了:

    class DuelingDQN:
        def __init__(self, state_dim, action_dim):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.epsilon = args.eps
            self.model = self.nn_model()
        def nn_model(self):
            state_input = Input((self.state_dim,))
            backbone_1 = Dense(32, activation="relu")\
                              (state_input)
            backbone_2 = Dense(16, activation="relu")\
                              (backbone_1)
            value_output = Dense(1)(backbone_2)
            advantage_output = Dense(self.action_dim)\
                                    (backbone_2)
            output = Add()([value_output, advantage_output])
            model = tf.keras.Model(state_input, output)
            model.compile(loss="mse", 
                          optimizer=Adam(args.lr))
            return model
    
  5. 为了从 Dueling DQN 获取预测和动作,让我们实现 predict、get_action 和 train 方法:

        def predict(self, state):
            return self.model.predict(state)
        def get_action(self, state):
            state = np.reshape(state, [1, self.state_dim])
            self.epsilon *= args.eps_decay
            self.epsilon = max(self.epsilon, args.eps_min)
            q_value = self.predict(state)[0]
            if np.random.random() < self.epsilon:
                return random.randint(0, self.action_dim - 1)
            return np.argmax(q_value)
        def train(self, states, targets):
            self.model.fit(states, targets, epochs=1)
    
  6. 现在我们可以开始实现 Agent 类:

    class Agent:
        def __init__(self, env):
            self.env = env
            self.state_dim = \
                self.env.observation_space.shape[0]
            self.action_dim = self.env.action_space.n
            self.model = DuelingDQN(self.state_dim, 
                                    self.action_dim)
            self.target_model = DuelingDQN(self.state_dim,
                                           self.action_dim)
            self.update_target()
            self.buffer = ReplayBuffer()
        def update_target(self):
            weights = self.model.model.get_weights()
            self.target_model.model.set_weights(weights)
    
  7. Dueling Deep Q-learning 算法的关键在于 q-learning 更新和经验回放。接下来,让我们实现这些:

        def replay_experience(self):
            for _ in range(10):
                states, actions, rewards, next_states, done=\
                    self.buffer.sample()
                targets = self.target_model.predict(states)
                next_q_values = self.target_model.\
                             predict(next_states).max(axis=1)
                targets[range(args.batch_size), actions] = (
                    rewards + (1 - done) * next_q_values * \
                    args.gamma
                )
                self.model.train(states, targets)
    
  8. 下一个关键步骤是实现 train 函数来训练智能体:

    def train(self, max_episodes=1000):
            with writer.as_default():
                for ep in range(max_episodes):
                    done, episode_reward = False, 0
                    state = self.env.reset()
                    while not done:
                        action = self.model.get_action(state)
                        next_state, reward, done, _ = \
                                        self.env.step(action)
                        self.buffer.store(state, action, \
                                          reward * 0.01, \
                                          next_state, done)
                        episode_reward += reward
                        state = next_state
                    if self.buffer.size() >= args.batch_size:
                        self.replay_experience()
                    self.update_target()
                    print(f"Episode#{ep} \
                          Reward:{episode_reward}")
                    tf.summary.scalar("episode_reward",\
                                     episode_reward, step=ep)
    
  9. 最后,让我们创建主函数来启动智能体的训练:

    if __name__ == "__main__":
        env = gym.make("CartPole-v0")
        agent = Agent(env)
        agent.train(max_episodes=20000)
    
  10. 要在默认环境(CartPole-v0)中训练 Dueling DQN 智能体,请执行以下命令:

    python ch3-deep-rl-agents/2_dueling_dqn.py
    
  11. 你也可以使用命令行参数,在任何与 OpenAI Gym 兼容的离散动作空间环境中训练 Dueling DQN 智能体:

    python ch3-deep-rl-agents/2_dueling_dqn.py --env "MountainCar-v0"
    

让我们来看它是如何工作的。

它是如何工作的…

Dueling-DQN 智能体在神经网络架构上与 DQN 智能体有所不同。

这些差异在下图中进行了总结:

图 3.1 – DQN 和 Dueling-DQN 的比较

DQN(图的上半部分)具有线性架构,预测一个单一的数量(Q(s, a)),而 Dueling-DQN 在最后一层有一个分叉,预测多个数量。

实现 Dueling Double DQN 算法和 DDDQN 智能体

Dueling Double DQN(DDDQN)结合了 Double Q-learning 和 Dueling 架构的优势。Double Q-learning 修正了 DQN 过高估计动作值的问题。Dueling 架构使用修改后的架构,分别学习状态值函数(V)和优势函数(A)。这种显式分离使算法能够更快学习,特别是在有许多动作可选且动作之间非常相似的情况下。Dueling 架构使智能体即使在一个状态下只采取了一个动作时也能进行学习,因为它可以更新和估计状态值函数,这与 DQN 智能体不同,后者无法从尚未采取的动作中学习。在完成这个食谱后,你将拥有一个完整的 DDDQN 智能体实现。

准备好了吗?

要完成这个食谱,你首先需要激活 tf2rl-cookbook Conda Python 虚拟环境并运行 pip install -r requirements.txt。如果以下导入语句没有问题,那么你就可以开始了!

import argparse
from datetime import datetime
import os
import random
from collections import deque
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Add, Dense, Input
from tensorflow.keras.optimizers import Adam

我们准备好了,开始吧!

如何做……

DDDQN 智能体结合了 DQN、Double DQN 和 Dueling DQN 中的思想。执行以下步骤,从头开始实现这些组件,以便使用 TensorFlow 2.x 构建一个完整的 Dueling Double DQN 智能体:

  1. 首先,让我们创建一个参数解析器来处理脚本的配置输入:

    parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-DuelingDoubleDQN")
    parser.add_argument("--env", default="CartPole-v0")
    parser.add_argument("--lr", type=float, default=0.005)
    parser.add_argument("--batch_size", type=int, default=256)
    parser.add_argument("--gamma", type=float, default=0.95)
    parser.add_argument("--eps", type=float, default=1.0)
    parser.add_argument("--eps_decay", type=float, default=0.995)
    parser.add_argument("--eps_min", type=float, default=0.01)
    parser.add_argument("--logdir", default="logs")
    args = parser.parse_args()
    
  2. 接下来,让我们创建一个 Tensorboard 日志记录器,用于记录智能体训练过程中的有用统计数据:

    logdir = os.path.join(
        args.logdir, parser.prog, args.env, \
        datetime.now().strftime("%Y%m%d-%H%M%S")
    )
    print(f"Saving training logs to:{logdir}")
    writer = tf.summary.create_file_writer(logdir)
    
  3. 现在,让我们实现一个 ReplayBuffer

    class ReplayBuffer:
        def __init__(self, capacity=10000):
            self.buffer = deque(maxlen=capacity)
        def store(self, state, action, reward, next_state, done):
            self.buffer.append([state, action, reward, \
            next_state, done])
        def sample(self):
            sample = random.sample(self.buffer, \
                                   args.batch_size)
            states, actions, rewards, next_states, done = \
                                map(np.asarray, zip(*sample))
            states = np.array(states).reshape(
                                         args.batch_size, -1)
            next_states = np.array(next_states).\
                                 reshape(args.batch_size, -1)
            return states, actions, rewards, next_states, \
            done
        def size(self):
            return len(self.buffer)
    
  4. 现在是时候实现 Dueling DQN 类了,它将按照 Dueling 架构定义神经网络,后续我们会在此基础上添加 Double DQN 更新:

    class DuelingDQN:
        def __init__(self, state_dim, action_dim):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.epsilon = args.eps
            self.model = self.nn_model()
        def nn_model(self):
            state_input = Input((self.state_dim,))
            fc1 = Dense(32, activation="relu")(state_input)
            fc2 = Dense(16, activation="relu")(fc1)
            value_output = Dense(1)(fc2)
            advantage_output = Dense(self.action_dim)(fc2)
            output = Add()([value_output, advantage_output])
            model = tf.keras.Model(state_input, output)
            model.compile(loss="mse", \
                          optimizer=Adam(args.lr))
            return model
    
  5. 为了从 Dueling DQN 获取预测和动作,让我们实现 predict 和 get_action 方法:

        def predict(self, state):
            return self.model.predict(state)
        def get_action(self, state):
            state = np.reshape(state, [1, self.state_dim])
            self.epsilon *= args.eps_decay
            self.epsilon = max(self.epsilon, args.eps_min)
            q_value = self.predict(state)[0]
            if np.random.random() < self.epsilon:
                return random.randint(0, self.action_dim - 1)
            return np.argmax(q_value)
        def train(self, states, targets):
            self.model.fit(states, targets, epochs=1)
    
  6. 其他组件实现完成后,我们可以开始实现 Agent 类:

    class Agent:
        def __init__(self, env):
            self.env = env
            self.state_dim = \
                self.env.observation_space.shape[0]
            self.action_dim = self.env.action_space.n
            self.model = DuelingDQN(self.state_dim, 
                                    self.action_dim)
            self.target_model = DuelingDQN(self.state_dim,
                                           self.action_dim)
            self.update_target()
            self.buffer = ReplayBuffer()
        def update_target(self):
            weights = self.model.model.get_weights()
            self.target_model.model.set_weights(weights)
    
  7. Dueling Double Deep Q-learning 算法的主要元素是 Q-learning 更新和经验回放。接下来我们将实现这些:

        def replay_experience(self):
            for _ in range(10):
                states, actions, rewards, next_states, done=\
                                         self.buffer.sample()
                targets = self.target_model.predict(states)
                next_q_values = \
                    self.target_model.predict(next_states)[
                    range(args.batch_size),
                    np.argmax(self.model.predict(
                                       next_states), axis=1),
                ]
                targets[range(args.batch_size), actions] = (
                    rewards + (1 - done) * next_q_values * \
                    args.gamma
                )
                self.model.train(states, targets)
    
  8. 下一个关键步骤是实现 train 函数来训练智能体:

    def train(self, max_episodes=1000):
            with writer.as_default():
                for ep in range(max_episodes):
                    done, episode_reward = False, 0
                    observation = self.env.reset()
                    while not done:
                        action = \
                           self.model.get_action(observation)
                        next_observation, reward, done, _ = \
                            self.env.step(action)
                        self.buffer.store(
                            observation, action, reward * \
                            0.01, next_observation, done
                        )
                        episode_reward += reward
                        observation = next_observation
                    if self.buffer.size() >= args.batch_size:
                        self.replay_experience()
                    self.update_target()
                    print(f"Episode#{ep} \
                          Reward:{episode_reward}")
                    tf.summary.scalar("episode_reward", 
                                       episode_reward, 
                                       step=ep)
    
  9. 最后,让我们创建主函数来开始训练智能体:

    if __name__ == "__main__":
        env = gym.make("CartPole-v0")
        agent = Agent(env)
        agent.train(max_episodes=20000)
    
  10. 要在默认环境(CartPole-v0)中训练 Dueling Double DQN 智能体,请执行以下命令:

    python ch3-deep-rl-agents/3_dueling_double_dqn.py
    
  11. 你还可以在任何兼容 OpenAI Gym 的离散动作空间环境中使用命令行参数训练 Dueling Double DQN 智能体:

    python ch3-deep-rl-agents/3_dueling_double_dqn.py --env "MountainCar-v0"
    

它是如何工作的……

Dueling Double DQN 架构结合了 Double DQN 和 Dueling 架构引入的进展。
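
结合上面第 4 步与第 7 步的代码,可以把 DDDQN 的两个关键部分概括为如下公式(仅作示意):

$$Q(s, a) = V(s) + A(s, a)$$

$$y = r + \gamma\, Q_{\text{target}}\big(s',\ \arg\max_{a'} Q_{\text{main}}(s', a')\big)$$

其中第一行对应 Dueling 网络结构(价值流与优势流相加),第二行对应 Double Q-learning 的目标值计算。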

实现深度递归 Q-learning 算法和 DRQN 智能体

DRQN 使用递归神经网络来学习 Q 值函数。DRQN 更适合在部分可观察环境中进行强化学习。DRQN 中的递归网络层允许智能体通过整合时间序列的观察信息来进行学习。例如,DRQN 智能体可以推测环境中移动物体的速度,而无需任何输入的变化(例如,不需要帧堆叠)。在完成这个配方后,您将拥有一个完整的 DRQN 智能体,准备在您选择的强化学习环境中进行训练。

准备开始

要完成这个配方,您首先需要激活tf2rl-cookbook Conda Python 虚拟环境,并运行pip install -r requirements.txt。如果以下导入语句没有问题,那么您就准备好开始了!

import tensorflow as tf
from datetime import datetime
import os
from tensorflow.keras.layers import Input, Dense, LSTM
from tensorflow.keras.optimizers import Adam
import gym
import argparse
import numpy as np
from collections import deque
import random

让我们开始吧!

怎么做…

DRQN 智能体由几个组件组成,即 DRQN 类、Agent 类和 train 方法。执行以下步骤,从零开始实现这些组件,以使用 TensorFlow 2.x 构建完整的 DRQN 智能体:

  1. 首先,创建一个参数解析器来处理脚本的配置输入:

    parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-DRQN")
    parser.add_argument("--env", default="CartPole-v0")
    parser.add_argument("--lr", type=float, default=0.005)
    parser.add_argument("--batch_size", type=int, default=64)
    parser.add_argument("--time_steps", type=int, default=4)
    parser.add_argument("--gamma", type=float, default=0.95)
    parser.add_argument("--eps", type=float, default=1.0)
    parser.add_argument("--eps_decay", type=float, default=0.995)
    parser.add_argument("--eps_min", type=float, default=0.01)
    parser.add_argument("--logdir", default="logs")
    args = parser.parse_args()
    
  2. 让我们在智能体的训练过程中使用 Tensorboard 记录有用的统计信息:

    logdir = os.path.join(
        args.logdir, parser.prog, args.env, \
        datetime.now().strftime("%Y%m%d-%H%M%S")
    )
    print(f"Saving training logs to:{logdir}")
    writer = tf.summary.create_file_writer(logdir)
    
  3. 接下来,让我们实现一个ReplayBuffer

    class ReplayBuffer:
        def __init__(self, capacity=10000):
            self.buffer = deque(maxlen=capacity)
        def store(self, state, action, reward, next_state,\
        done):
            self.buffer.append([state, action, reward, \
                                next_state, done])
        def sample(self):
            sample = random.sample(self.buffer, 
                                   args.batch_size)
            states, actions, rewards, next_states, done = \
                map(np.asarray, zip(*sample))
            states = np.array(states).reshape(
                                         args.batch_size, -1)
            next_states = np.array(next_states).reshape(
                                         args.batch_size, -1)
            return states, actions, rewards, next_states, \
            done
        def size(self):
            return len(self.buffer)
    
  4. 现在是时候实现定义深度神经网络的 DRQN 类了,使用的是 TensorFlow 2.x:

    class DRQN:
        def __init__(self, state_dim, action_dim):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.epsilon = args.eps
            self.opt = Adam(args.lr)
            self.compute_loss = \
                tf.keras.losses.MeanSquaredError()
            self.model = self.nn_model()
        def nn_model(self):
            return tf.keras.Sequential(
                [
                    Input((args.time_steps, self.state_dim)),
                    LSTM(32, activation="tanh"),
                    Dense(16, activation="relu"),
                    Dense(self.action_dim),
                ]
            )
    
  5. 为了从 DRQN 获取预测和动作,让我们实现 predict 和 get_action 方法:

        def predict(self, state):
            return self.model.predict(state)
        def get_action(self, state):
            state = np.reshape(state, [1, args.time_steps,
                                       self.state_dim])
            self.epsilon *= args.eps_decay
            self.epsilon = max(self.epsilon, args.eps_min)
            q_value = self.predict(state)[0]
            if np.random.random() < self.epsilon:
                return random.randint(0, self.action_dim - 1)
            return np.argmax(q_value)
        def train(self, states, targets):
            targets = tf.stop_gradient(targets)
            with tf.GradientTape() as tape:
                logits = self.model(states, training=True)
                assert targets.shape == logits.shape
                loss = self.compute_loss(targets, logits)
            grads = tape.gradient(loss, 
                              self.model.trainable_variables)
            self.opt.apply_gradients(zip(grads, 
                             self.model.trainable_variables))
    
  6. 实现了其他组件后,我们可以开始实现我们的Agent类:

    class Agent:
        def __init__(self, env):
            self.env = env
            self.state_dim = \
                self.env.observation_space.shape[0]
            self.action_dim = self.env.action_space.n
            self.states = np.zeros([args.time_steps, 
                                    self.state_dim])
            self.model = DRQN(self.state_dim, 
                              self.action_dim)
            self.target_model = DRQN(self.state_dim, 
                                     self.action_dim)
            self.update_target()
            self.buffer = ReplayBuffer()
        def update_target(self):
            weights = self.model.model.get_weights()
            self.target_model.model.set_weights(weights)
    
  7. 除了我们在第 5 步中为 DRQN 类实现的 train 方法之外,深度递归 Q 学习算法的核心是 Q 学习更新和经验回放。接下来,让我们实现这一部分:

        def replay_experience(self):
            for _ in range(10):
                states, actions, rewards, next_states, done=\
                    self.buffer.sample()
                targets = self.target_model.predict(states)
                next_q_values = self.target_model.\
                             predict(next_states).max(axis=1)
                targets[range(args.batch_size), actions] = (
                    rewards + (1 - done) * next_q_values * \
                    args.gamma
                )
                self.model.train(states, targets)
    
  8. 由于 DRQN 智能体使用递归状态,让我们实现update_states方法来更新智能体的递归状态:

        def update_states(self, next_state):
            self.states = np.roll(self.states, -1, axis=0)
            self.states[-1] = next_state
    
  9. 下一个关键步骤是实现train函数来训练智能体:

    def train(self, max_episodes=1000):
            with writer.as_default():
                for ep in range(max_episodes):
                    done, episode_reward = False, 0
                    self.states = np.zeros([args.time_steps, 
                                            self.state_dim])
                    self.update_states(self.env.reset())
                    while not done:
                        action = self.model.get_action(
                                                 self.states)
                        next_state, reward, done, _ = \
                                        self.env.step(action)
                        prev_states = self.states
                        self.update_states(next_state)
                        self.buffer.store(
                            prev_states, action, reward * \
                            0.01, self.states, done
                        )
                        episode_reward += reward
                    if self.buffer.size() >= args.batch_size:
                        self.replay_experience()
                    self.update_target()
                    print(f"Episode#{ep} \
                          Reward:{episode_reward}")
                    tf.summary.scalar("episode_reward", episode_reward, step=ep)
    
  10. 最后,让我们为智能体创建主要的训练循环:

    if __name__ == "__main__":
        env = gym.make("Pong-v0")
        agent = Agent(env)
        agent.train(max_episodes=20000)
    
  11. 要在默认环境(CartPole-v0)中训练 DRQN 智能体,请执行以下命令:

    python ch3-deep-rl-agents/4_drqn.py
    
  12. 您还可以使用命令行参数在任何 OpenAI Gym 兼容的离散动作空间环境中训练 DRQN 智能体:

    python ch3-deep-rl-agents/4_drqn.py --env "MountainCar-v0"
    

它是如何工作的…

DRQN 智能体使用 LSTM 层,这为智能体增加了递归学习能力。LSTM 层是在本配方第 4 步定义的网络模型中加入的。配方中的其他步骤与 DQN 智能体类似。
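
下面是一个独立的小示例,演示第 8 步中 update_states 所维护的滑动时间窗口,以及送入 LSTM 的输入形状(假设 time_steps=4、state_dim=4,与脚本默认的 CartPole-v0 配置一致;仅作示意):

import numpy as np

time_steps, state_dim = 4, 4  # 与 --time_steps 的默认值及 CartPole-v0 的观测维度一致
states = np.zeros([time_steps, state_dim])  # 初始为全零的观测窗口

def update_states(states, next_state):
    # 向前滚动一行,丢弃最旧的观测,并把最新观测写入最后一行
    states = np.roll(states, -1, axis=0)
    states[-1] = next_state
    return states

for t in range(6):
    obs = np.full(state_dim, t, dtype=np.float32)  # 用常数向量模拟第 t 步的观测
    states = update_states(states, obs)

print(states)  # 只保留最近 4 步观测(第 2、3、4、5 步)
print(states[np.newaxis].shape)  # (1, 4, 4),即 LSTM 期望的 (batch, time_steps, state_dim)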

实现异步优势演员评论家算法和 A3C 智能体

A3C 算法在 Actor-Critic 类算法的基础上构建,通过使用神经网络来逼近演员(actor)和评论家(critic)。演员使用深度神经网络学习策略函数,而评论家则估计价值函数。算法的异步性质使得智能体能够从状态空间的不同部分进行学习,从而实现并行学习和更快的收敛。与使用经验回放记忆的 DQN 智能体不同,A3C 智能体使用多个工作线程来收集更多样本进行学习。在本配方的结尾,你将拥有一个完整的脚本,可以用来训练一个适用于任何连续动作值环境的 A3C 智能体!
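
在连续动作环境中,本食谱的演员网络会输出高斯策略的均值和标准差,动作按 $a \sim \mathcal{N}\big(\mu_\theta(s), \sigma_\theta(s)\big)$ 采样;其对数概率密度(对应后面 log_pdf 方法的实现)为(仅作示意):

$$\log \pi_\theta(a \mid s) = -\frac{(a - \mu_\theta(s))^2}{2\sigma_\theta(s)^2} - \frac{1}{2}\log\big(2\pi \sigma_\theta(s)^2\big)$$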

准备开始

要完成这个配方,你首先需要激活 tf2rl-cookbook Conda Python 虚拟环境并运行 pip install -r requirements.txt。如果以下的导入语句没有问题,那就说明你已经准备好开始了!

import argparse
import os
from datetime import datetime
from multiprocessing import cpu_count
from threading import Thread
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda

现在我们可以开始了。

如何实现…

我们将通过利用 Python 的多进程和多线程功能实现一个 异步优势演员评论家(A3C) 算法。以下步骤将帮助你从零开始使用 TensorFlow 2.x 实现一个完整的 A3C 智能体:

  1. 首先,让我们创建一个参数解析器,用来处理脚本的配置输入:

    parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-A3C")
    parser.add_argument("--env", default="MountainCarContinuous-v0")
    parser.add_argument("--actor-lr", type=float, default=0.001)
    parser.add_argument("--critic-lr", type=float, default=0.002)
    parser.add_argument("--update-interval", type=int, default=5)
    parser.add_argument("--gamma", type=float, default=0.99)
    parser.add_argument("--logdir", default="logs")
    args = parser.parse_args()
    
  2. 现在让我们创建一个 Tensorboard 日志记录器,以便在智能体训练过程中记录有用的统计信息:

    logdir = os.path.join(
        args.logdir, parser.prog, args.env, \
           datetime.now().strftime("%Y%m%d-%H%M%S")
    )
    print(f"Saving training logs to:{logdir}")
    writer = tf.summary.create_file_writer(logdir)
    
  3. 为了统计全局回合数,让我们定义一个全局变量:

    GLOBAL_EPISODE_NUM = 0
    
  4. 现在我们可以集中精力实现 Actor 类,它将包含一个基于神经网络的策略来在环境中执行动作:

    class Actor:
        def __init__(self, state_dim, action_dim, 
        action_bound, std_bound):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.action_bound = action_bound
            self.std_bound = std_bound
            self.model = self.nn_model()
            self.opt = tf.keras.optimizers.Adam(
                                               args.actor_lr)
            self.entropy_beta = 0.01
        def nn_model(self):
            state_input = Input((self.state_dim,))
            dense_1 = Dense(32, activation="relu")\
                            (state_input)
            dense_2 = Dense(32, activation="relu")(dense_1)
            out_mu = Dense(self.action_dim, \
                           activation="tanh")(dense_2)
            mu_output = Lambda(lambda x: x * \
                               self.action_bound)(out_mu)
            std_output = Dense(self.action_dim, 
                              activation="softplus")(dense_2)
            return tf.keras.models.Model(state_input, 
                                     [mu_output, std_output])
    
  5. 为了在给定状态下从演员获取动作,让我们定义 get_action 方法:

        def get_action(self, state):
            state = np.reshape(state, [1, self.state_dim])
            mu, std = self.model.predict(state)
            mu, std = mu[0], std[0]
            return np.random.normal(mu, std, 
                                    size=self.action_dim)
    
  6. 接下来,为了计算损失,我们需要计算策略(概率)密度函数的对数:

        def log_pdf(self, mu, std, action):
            std = tf.clip_by_value(std, self.std_bound[0],
                                   self.std_bound[1])
            var = std ** 2
            log_policy_pdf = -0.5 * (action - mu) ** 2 / var\
                              - 0.5 * tf.math.log(
                var * 2 * np.pi
            )
            return tf.reduce_sum(log_policy_pdf, 1,
                                 keepdims=True)
    
  7. 现在让我们使用 log_pdf 方法来计算演员损失:

        def compute_loss(self, mu, std, actions, advantages):
            log_policy_pdf = self.log_pdf(mu, std, actions)
            loss_policy = log_policy_pdf * advantages
            return tf.reduce_sum(-loss_policy)
    
  8. 作为 Actor 类实现的最后一步,让我们定义 train 方法:

        def train(self, states, actions, advantages):
            with tf.GradientTape() as tape:
                mu, std = self.model(states, training=True)
                loss = self.compute_loss(mu, std, actions,
                                         advantages)
            grads = tape.gradient(loss, 
                             self.model.trainable_variables)
            self.opt.apply_gradients(zip(grads, 
                             self.model.trainable_variables))
            return loss
    
  9. 定义好 Actor 类后,我们可以继续定义 Critic 类:

    class Critic:
        def __init__(self, state_dim):
            self.state_dim = state_dim
            self.model = self.nn_model()
            self.opt = tf.keras.optimizers.Adam\
                                (args.critic_lr)
        def nn_model(self):
            return tf.keras.Sequential(
                [
                    Input((self.state_dim,)),
                    Dense(32, activation="relu"),
                    Dense(32, activation="relu"),
                    Dense(16, activation="relu"),
                    Dense(1, activation="linear"),
                ]
            )
    
  10. 接下来,让我们定义 train 方法和一个 compute_loss 方法来训练评论家:

        def compute_loss(self, v_pred, td_targets):
            mse = tf.keras.losses.MeanSquaredError()
            return mse(td_targets, v_pred)
        def train(self, states, td_targets):
            with tf.GradientTape() as tape:
                v_pred = self.model(states, training=True)
                assert v_pred.shape == td_targets.shape
                loss = self.compute_loss(v_pred, \
                                tf.stop_gradient(td_targets))
            grads = tape.gradient(loss, \
                             self.model.trainable_variables)
            self.opt.apply_gradients(zip(grads, 
                             self.model.trainable_variables))
            return loss
    
  11. 是时候基于 Python 的线程接口实现 A3CWorker 类了:

    class A3CWorker(Thread):
        def __init__(self, env, global_actor, global_critic,
        max_episodes):
            Thread.__init__(self)
            self.env = env
            self.state_dim = \
                self.env.observation_space.shape[0]
            self.action_dim = self.env.action_space.shape[0]
            self.action_bound = self.env.action_space.high[0]
            self.std_bound = [1e-2, 1.0]
            self.max_episodes = max_episodes
            self.global_actor = global_actor
            self.global_critic = global_critic
            self.actor = Actor(
                self.state_dim, self.action_dim, 
                self.action_bound, self.std_bound
            )
            self.critic = Critic(self.state_dim)
            self.actor.model.set_weights(
                self.global_actor.model.get_weights())
            self.critic.model.set_weights(
                self.global_critic.model.get_weights())
    
  12. 我们将使用 n 步时间差分 (TD) 学习更新。因此,让我们定义一个方法来计算 n 步 TD 目标:

        def n_step_td_target(self, rewards, next_v_value,
        done):
            td_targets = np.zeros_like(rewards)
            cumulative = 0
            if not done:
                cumulative = next_v_value
            for k in reversed(range(0, len(rewards))):
                cumulative = args.gamma * cumulative + \
                             rewards[k]
                td_targets[k] = cumulative
            return td_targets
    
  13. 我们还需要计算优势值。优势值的最简单形式很容易实现:

        def advantage(self, td_targets, baselines):
            return td_targets - baselines
    
  14. 我们将把 train 方法的实现分为以下两个步骤。首先,让我们实现外部循环:

        def train(self):
            global GLOBAL_EPISODE_NUM
            while self.max_episodes >= GLOBAL_EPISODE_NUM:
                state_batch = []
                action_batch = []
                reward_batch = []
                episode_reward, done = 0, False
                state = self.env.reset()
                while not done:
                    # self.env.render()
                    action = self.actor.get_action(state)
                    action = np.clip(action, 
                                     -self.action_bound,
                                     self.action_bound)
                    next_state, reward, done, _ = \
                        self.env.step(action)
                    state = np.reshape(state, [1, 
                                            self.state_dim])
                    action = np.reshape(action, [1, 1])
                    next_state = np.reshape(next_state, 
                                         [1, self.state_dim])
                    reward = np.reshape(reward, [1, 1])
                    state_batch.append(state)
                    action_batch.append(action)
                    reward_batch.append(reward)
    
  15. 在这一步,我们将完成 train 方法的实现:

                    if len(state_batch) >= args.update_interval or done:
                        states = np.array([state.squeeze() \
                                   for state in state_batch])
                        actions = np.array([action.squeeze()\
                                 for action in action_batch])
                        rewards = np.array([reward.squeeze()\
                                 for reward in reward_batch])
                        next_v_value = self.critic.model.\
                                          predict(next_state)
                        td_targets = self.n_step_td_target(
                            (rewards + 8) / 8, next_v_value,
                             done
                        )
                        advantages = td_targets - \
                            self.critic.model.predict(states)
                        actor_loss = self.global_actor.train(
                                states, actions, advantages)
                        critic_loss = self.global_critic.\
                                train(states, td_targets)
                        self.actor.model.set_weights(self.\
                            global_actor.model.get_weights())
                        self.critic.model.set_weights(
                            self.global_critic.model.\
                            get_weights()
                        )
                        state_batch = []
                        action_batch = []
                        reward_batch = []
                    episode_reward += reward[0][0]
                    state = next_state[0]
                print(f"Episode#{GLOBAL_EPISODE_NUM}\
                      Reward:{episode_reward}")
                tf.summary.scalar("episode_reward", 
                                   episode_reward,
                                   step=GLOBAL_EPISODE_NUM)
                GLOBAL_EPISODE_NUM += 1
    
  16. A3CWorker 线程的 run 方法将是以下内容:

        def run(self):
            self.train()
    
  17. 接下来,让我们实现 Agent 类:

    class Agent:
        def __init__(self, env_name, 
                     num_workers=cpu_count()):
            env = gym.make(env_name)
            self.env_name = env_name
            self.state_dim = env.observation_space.shape[0]
            self.action_dim = env.action_space.shape[0]
            self.action_bound = env.action_space.high[0]
            self.std_bound = [1e-2, 1.0]
            self.global_actor = Actor(
                self.state_dim, self.action_dim, 
                self.action_bound, self.std_bound
            )
            self.global_critic = Critic(self.state_dim)
            self.num_workers = num_workers
    
  18. A3C 智能体利用多个并发的工作线程。为了启动这些工作线程并用它们来更新 A3C 智能体,需要以下代码:

    def train(self, max_episodes=20000):
            workers = []
            for i in range(self.num_workers):
                env = gym.make(self.env_name)
                workers.append(
                    A3CWorker(env, self.global_actor, 
                            self.global_critic, max_episodes)
                )
            for worker in workers:
                worker.start()
            for worker in workers:
                worker.join()
    
  19. 这样,我们的 A3C 智能体实现就完成了,接下来我们准备定义我们的主函数:

    if __name__ == "__main__":
        env_name = "MountainCarContinuous-v0"
        agent = Agent(env_name, args.num_workers)
        agent.train(max_episodes=20000)
    

它是如何工作的…

简单来说,A3C 算法的核心可以通过以下步骤总结,每个迭代中都会执行这些步骤:

图 3.2 – A3C 智能体学习迭代中的更新步骤

步骤会从上到下重复进行,直到收敛。
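
结合上面第 12 步到第 15 步的代码,每个工作线程在一次更新中计算的量大致如下(仅作示意):n 步 TD 目标为

$$R_t = r_t + \gamma r_{t+1} + \cdots + \gamma^{n-1} r_{t+n-1} + \gamma^{n} V(s_{t+n})$$

优势为 $A_t = R_t - V(s_t)$;评论家最小化 $\big(R_t - V(s_t)\big)^2$,演员最小化 $-\log \pi_\theta(a_t \mid s_t)\, A_t$,更新全局网络后再把全局权重同步回本地网络。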

实现近端策略优化算法和 PPO 智能体

近端策略优化(PPO)算法是在信任域策略优化(TRPO)的基础上发展而来的,其做法是将新策略限制在旧策略附近的信任区域内。PPO 通过使用一个剪切的替代目标函数简化了这一核心思想的实现,这个目标函数更容易实现,但仍然非常强大和高效。它是最广泛使用的强化学习算法之一,尤其适用于连续控制问题。在完成本教程后,你将构建一个 PPO 智能体,并能在你选择的强化学习环境中进行训练。
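
PPO 所用的剪切替代目标函数可以写成如下形式(对应后面 Actor.compute_loss 的实现,仅作示意),其中 $r_t(\theta) = \pi_\theta(a_t \mid s_t) / \pi_{\theta_{\text{old}}}(a_t \mid s_t)$ 是新旧策略的概率比,$\hat{A}_t$ 是(GAE)优势估计,$\epsilon$ 对应脚本参数 --clip-ratio:

$$L^{\text{CLIP}}(\theta) = \mathbb{E}_t\Big[\min\big(r_t(\theta)\,\hat{A}_t,\ \operatorname{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\,\hat{A}_t\big)\Big]$$

实现中最大化该目标等价于最小化它的相反数,这正是 compute_loss 中 surrogate 取负号的原因。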

准备开始

为了完成本教程,你需要先激活tf2rl-cookbook Conda Python 虚拟环境并运行pip install -r requirements.txt。如果以下导入语句没有问题,你就准备好开始了!

import argparse
import os
from datetime import datetime
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, Lambda 

我们已经准备好开始了。

如何做...

以下步骤将帮助你从头开始使用 TensorFlow 2.x 实现一个完整的 PPO 智能体:

  1. 首先,创建一个参数解析器来处理脚本的配置输入:

    parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-PPO")
    parser.add_argument("--env", default="Pendulum-v0")
    parser.add_argument("--update-freq", type=int, default=5)
    parser.add_argument("--epochs", type=int, default=3)
    parser.add_argument("--actor-lr", type=float, default=0.0005)
    parser.add_argument("--critic-lr", type=float, default=0.001)
    parser.add_argument("--clip-ratio", type=float, default=0.1)
    parser.add_argument("--gae-lambda", type=float, default=0.95)
    parser.add_argument("--gamma", type=float, default=0.99)
    parser.add_argument("--logdir", default="logs")
    args = parser.parse_args()
    
  2. 接下来,我们创建一个 Tensorboard 日志记录器,以便在智能体训练过程中记录有用的统计信息:

    logdir = os.path.join(
        args.logdir, parser.prog, args.env, 
        datetime.now().strftime("%Y%m%d-%H%M%S")
    )
    print(f"Saving training logs to:{logdir}")
    writer = tf.summary.create_file_writer(logdir)
    
  3. 我们现在可以集中精力实现Actor类,它将包含一个基于神经网络的策略来执行动作:

    class Actor:
        def __init__(self, state_dim, action_dim, 
        action_bound, std_bound):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.action_bound = action_bound
            self.std_bound = std_bound
            self.model = self.nn_model()
            self.opt = \
                tf.keras.optimizers.Adam(args.actor_lr)
    
        def nn_model(self):
            state_input = Input((self.state_dim,))
            dense_1 = Dense(32, activation="relu")\
                           (state_input)
            dense_2 = Dense(32, activation="relu")\
                           (dense_1)
            out_mu = Dense(self.action_dim,
                           activation="tanh")(dense_2)
            mu_output = Lambda(lambda x: x * \
                               self.action_bound)(out_mu)
            std_output = Dense(self.action_dim, 
                              activation="softplus")(dense_2)
            return tf.keras.models.Model(state_input, 
                                     [mu_output, std_output])
    
  4. 为了从演员那里获得一个动作给定一个状态,先定义get_action方法:

        def get_action(self, state):
            state = np.reshape(state, [1, self.state_dim])
            mu, std = self.model.predict(state)
            action = np.random.normal(mu[0], std[0], 
                                      size=self.action_dim)
            action = np.clip(action, -self.action_bound, 
                             self.action_bound)
            log_policy = self.log_pdf(mu, std, action)
            return log_policy, action
    
  5. 接下来,为了计算损失,我们需要计算策略(概率)密度函数的对数:

        def log_pdf(self, mu, std, action):
            std = tf.clip_by_value(std, self.std_bound[0], 
                                   self.std_bound[1])
            var = std ** 2
            log_policy_pdf = -0.5 * (action - mu) ** 2 / var\
                              - 0.5 * tf.math.log(
                var * 2 * np.pi
            )
            return tf.reduce_sum(log_policy_pdf, 1,
                                 keepdims=True)
    
  6. 现在我们使用log_pdf方法来计算演员损失:

        def compute_loss(self, log_old_policy, 
                         log_new_policy, actions, gaes):
            ratio = tf.exp(log_new_policy - \
                           tf.stop_gradient(log_old_policy))
            gaes = tf.stop_gradient(gaes)
            clipped_ratio = tf.clip_by_value(
                ratio, 1.0 - args.clip_ratio, 1.0 + \
                args.clip_ratio
            )
            surrogate = -tf.minimum(ratio * gaes, \
                                    clipped_ratio * gaes)
            return tf.reduce_mean(surrogate)
    
  7. 作为Actor类实现的最后一步,让我们定义train方法:

        def train(self, log_old_policy, states, actions,
        gaes):
            with tf.GradientTape() as tape:
                mu, std = self.model(states, training=True)
                log_new_policy = self.log_pdf(mu, std,
                                              actions)
                loss = self.compute_loss(log_old_policy, 
                              log_new_policy, actions, gaes)
            grads = tape.gradient(loss, 
                              self.model.trainable_variables)
            self.opt.apply_gradients(zip(grads, 
                             self.model.trainable_variables))
            return loss
    
  8. 定义好Actor类后,我们可以继续定义Critic类:

    class Critic:
        def __init__(self, state_dim):
            self.state_dim = state_dim
            self.model = self.nn_model()
            self.opt = tf.keras.optimizers.Adam(
                                             args.critic_lr)
        def nn_model(self):
            return tf.keras.Sequential(
                [
                    Input((self.state_dim,)),
                    Dense(32, activation="relu"),
                    Dense(32, activation="relu"),
                    Dense(16, activation="relu"),
                    Dense(1, activation="linear"),
                ]
            )
    
  9. 接下来,定义train方法和compute_loss方法来训练评论员:

        def compute_loss(self, v_pred, td_targets):
            mse = tf.keras.losses.MeanSquaredError()
            return mse(td_targets, v_pred)
        def train(self, states, td_targets):
            with tf.GradientTape() as tape:
                v_pred = self.model(states, training=True)
                assert v_pred.shape == td_targets.shape
                loss = self.compute_loss(v_pred, 
                                tf.stop_gradient(td_targets))
            grads = tape.gradient(loss, 
                             self.model.trainable_variables)
            self.opt.apply_gradients(zip(grads, 
                             self.model.trainable_variables))
            return loss
    
  10. 现在是时候实现 PPO Agent类了:

    class Agent:
        def __init__(self, env):
            self.env = env
            self.state_dim = \
                self.env.observation_space.shape[0]
            self.action_dim = self.env.action_space.shape[0]
            self.action_bound = self.env.action_space.high[0]
            self.std_bound = [1e-2, 1.0]
            self.actor_opt = \
                tf.keras.optimizers.Adam(args.actor_lr)
            self.critic_opt = \
                tf.keras.optimizers.Adam(args.critic_lr)
            self.actor = Actor(
                self.state_dim, self.action_dim, 
                self.action_bound, self.std_bound
            )
            self.critic = Critic(self.state_dim)    
    
  11. 我们将使用广义优势估计(GAE)。让我们实现一个方法来计算 GAE 目标值:

        def gae_target(self, rewards, v_values, next_v_value,
        done):
            n_step_targets = np.zeros_like(rewards)
            gae = np.zeros_like(rewards)
            gae_cumulative = 0
            forward_val = 0
            if not done:
                forward_val = next_v_value
            for k in reversed(range(0, len(rewards))):
                delta = rewards[k] + args.gamma * \
                        forward_val - v_values[k]
                gae_cumulative = args.gamma * \
                    args.gae_lambda * gae_cumulative + delta
                gae[k] = gae_cumulative
                forward_val = v_values[k]
                n_step_targets[k] = gae[k] + v_values[k]
            return gae, n_step_targets
    
  12. 现在我们将拆分train方法的实现。首先,让我们实现外部循环:

        def train(self, max_episodes=1000):
            with writer.as_default():
                for ep in range(max_episodes):
                    state_batch = []
                    action_batch = []
                    reward_batch = []
                    old_policy_batch = []
                    episode_reward, done = 0, False
                    state = self.env.reset()
    
  13. 在这一步,我们将开始内循环(每个回合)实现,并在接下来的几个步骤中完成它:

                    while not done:
                        # self.env.render()
                        log_old_policy, action = \
                            self.actor.get_action(state)
                        next_state, reward, done, _ = \
                                       self.env.step(action)
                        state = np.reshape(state, [1, 
                                             self.state_dim])
                        action = np.reshape(action, [1, 1])
                        next_state = np.reshape(next_state, 
                                         [1, self.state_dim])
                        reward = np.reshape(reward, [1, 1])
                        log_old_policy = \
                           np.reshape(log_old_policy, [1, 1])
                        state_batch.append(state)
                        action_batch.append(action)
                        reward_batch.append((reward + 8) / 8)
                        old_policy_batch.append(log_old_policy)
    
  14. 在这一步,我们将使用 PPO 算法所做的价值预测来为策略更新过程做准备:

                        if len(state_batch) >= args.update_freq or done:
                            states = np.array([state.\
                                       squeeze() for state \
                                       in state_batch])
                            actions = np.array(
                                [action.squeeze() for action\
                                 in action_batch]
                            )
                            rewards = np.array(
                                [reward.squeeze() for reward\
                                 in reward_batch]
                            )
                            old_policies = np.array(
                                [old_pi.squeeze() for old_pi\
                                 in old_policy_batch]
                            )
                            v_values = self.critic.model.\
                                        predict(states)
                            next_v_value =self.critic.model.\
                                          predict(next_state)
                            gaes, td_targets = \
                                 self.gae_target(
                                     rewards, v_values, \
                                     next_v_value, done
                            )
    
  15. 在这一步,我们将实现 PPO 算法的策略更新步骤。这些步骤发生在内循环中,每当足够的智能体轨迹信息以采样经验批次的形式可用时:

                            actor_losses, critic_losses=[],[]
                            for epoch in range(args.epochs):
                                actor_loss =self.actor.train(
                                    old_policies, states,\
                                    actions, gaes
                                )
                                actor_losses.append(
                                                 actor_loss)
                                critic_loss = self.critic.\
                                    train(states, td_targets)
                                critic_losses.append(
                                                 critic_loss)
                            # Plot mean actor & critic losses 
                            # on every update
                            tf.summary.scalar("actor_loss", 
                              np.mean(actor_losses), step=ep)
                            tf.summary.scalar(
                                "critic_loss", 
                                 np.mean(critic_losses), 
                                 step=ep
                            )
    
  16. 作为 train 方法的最后一步,我们将重置中间变量,并打印出智能体获得的每个回合奖励的总结:

                            state_batch = []
                            action_batch = []
                            reward_batch = []
                            old_policy_batch = []
                        episode_reward += reward[0][0]
                        state = next_state[0]
                    print(f"Episode#{ep} \
                            Reward:{episode_reward}")
                    tf.summary.scalar("episode_reward", \
                                       episode_reward, \
                                       step=ep)
    
  17. 有了这些,我们的 PPO 智能体实现就完成了,接下来我们可以定义主函数来开始训练!

    if __name__ == "__main__":
        env_name = "Pendulum-v0"
        env = gym.make(env_name)
        agent = Agent(env)
        agent.train(max_episodes=20000)
    

它是如何工作的……

PPO 算法使用剪切(clipping)来构造替代损失函数,并在每次策略更新后使用多个周期的随机梯度下降/上升(SGD)进行优化。PPO 引入的剪切限制了每次更新对策略的有效改变,从而提高了策略在学习过程中的稳定性。

PPO 智能体使用演员(Actor)根据最新的策略参数从环境中收集样本。第 15 步中定义的循环会从经验中采样一个小批量,并使用剪切的替代目标函数训练网络 n 个周期(周期数通过 --epochs 参数传递给脚本)。然后使用新的经验样本重复此过程。
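
第 11 步中 gae_target 计算的广义优势估计与 TD 目标可以概括为如下公式(其中 $\lambda$ 对应 --gae-lambda 参数,仅作示意):

$$\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad \hat{A}_t = \sum_{l \ge 0} (\gamma\lambda)^{l}\, \delta_{t+l}, \qquad \hat{R}_t = \hat{A}_t + V(s_t)$$

评论家以 $\hat{R}_t$ 作为回归目标,演员则把 $\hat{A}_t$ 作为剪切替代目标中的优势估计。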

实现深度确定性策略梯度算法和 DDPG 智能体

确定性策略梯度(DPG) 是一种演员-评论家强化学习算法,使用两个神经网络:一个用于估计动作价值函数,另一个用于估计最优目标策略。深度确定性策略梯度DDPG)智能体建立在 DPG 的基础上,并且由于使用了确定性动作策略,相较于普通的演员-评论家智能体,它在效率上更高。通过完成这个食谱,你将获得一个强大的智能体,可以在多种强化学习环境中高效训练。

准备工作

要完成这个食谱,首先需要激活 tf2rl-cookbook Conda Python 虚拟环境,并运行 pip install -r requirements.txt。如果以下导入语句没有问题,那么你已经准备好开始了!

import argparse
import os
import random
from collections import deque
from datetime import datetime
import gym
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input, Lambda, concatenate

现在我们可以开始了。

如何做到这一点……

以下步骤将帮助你从零开始使用 TensorFlow 2.x 实现一个完整的 DDPG 智能体:

  1. 首先创建一个参数解析器来处理脚本的命令行配置输入:

    parser = argparse.ArgumentParser(prog="TFRL-Cookbook-Ch3-DDPG")
    parser.add_argument("--env", default="Pendulum-v0")
    parser.add_argument("--actor_lr", type=float, default=0.0005)
    parser.add_argument("--critic_lr", type=float, default=0.001)
    parser.add_argument("--batch_size", type=int, default=64)
    parser.add_argument("--tau", type=float, default=0.05)
    parser.add_argument("--gamma", type=float, default=0.99)
    parser.add_argument("--train_start", type=int, default=2000)
    parser.add_argument("--logdir", default="logs")
    args = parser.parse_args()
    
  2. 让我们创建一个 Tensorboard 日志记录器,用来记录在智能体训练过程中的有用统计信息:

    logdir = os.path.join(
        args.logdir, parser.prog, args.env, \
        datetime.now().strftime("%Y%m%d-%H%M%S")
    )
    print(f"Saving training logs to:{logdir}")
    writer = tf.summary.create_file_writer(logdir)
    
  3. 现在让我们实现一个经验回放内存:

    class ReplayBuffer:
        def __init__(self, capacity=10000):
            self.buffer = deque(maxlen=capacity)
        def store(self, state, action, reward, next_state,
                  done):
            self.buffer.append([state, action, reward, 
                                next_state, done])
        def sample(self):
            sample = random.sample(self.buffer, 
                                   args.batch_size)
            states, actions, rewards, next_states, done = \
                                map(np.asarray, zip(*sample))
            states = np.array(states).reshape(
                                         args.batch_size, -1)
            next_states = np.array(next_states).\
                              reshape(args.batch_size, -1)
            return states, actions, rewards, next_states, \
            done
        def size(self):
            return len(self.buffer)
    
  4. 我们现在可以集中精力实现 Actor 类,它将包含一个基于神经网络的策略进行操作:

    class Actor:
        def __init__(self, state_dim, action_dim, 
        action_bound):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.action_bound = action_bound
            self.model = self.nn_model()
            self.opt = tf.keras.optimizers.Adam(args.actor_lr)
        def nn_model(self):
            return tf.keras.Sequential(
                [
                    Input((self.state_dim,)),
                    Dense(32, activation="relu"),
                    Dense(32, activation="relu"),
                    Dense(self.action_dim, 
                          activation="tanh"),
                    Lambda(lambda x: x * self.action_bound),
                ]
            )
    
  5. 为了根据状态从演员获取动作,让我们定义 get_action 方法:

        def get_action(self, state):
            state = np.reshape(state, [1, self.state_dim])
            return self.model.predict(state)[0]
    
  6. 接下来,我们将实现一个预测函数来返回演员网络的预测结果:

        def predict(self, state):
            return self.model.predict(state)
    
  7. 作为 Actor 类实现的最后一步,让我们定义 train 方法:

        def train(self, states, q_grads):
            with tf.GradientTape() as tape:
                grads = tape.gradient(
                    self.model(states), 
                    self.model.trainable_variables, -q_grads
                )
            self.opt.apply_gradients(zip(grads, 
                             self.model.trainable_variables))
    
  8. 定义了 Actor 类后,我们可以继续定义 Critic 类:

    class Critic:
        def __init__(self, state_dim, action_dim):
            self.state_dim = state_dim
            self.action_dim = action_dim
            self.model = self.nn_model()
            self.opt = \
                tf.keras.optimizers.Adam(args.critic_lr)
        def nn_model(self):
            state_input = Input((self.state_dim,))
            s1 = Dense(64, activation="relu")(state_input)
            s2 = Dense(32, activation="relu")(s1)
            action_input = Input((self.action_dim,))
            a1 = Dense(32, activation="relu")(action_input)
            c1 = concatenate([s2, a1], axis=-1)
            c2 = Dense(16, activation="relu")(c1)
            output = Dense(1, activation="linear")(c2)
            return tf.keras.Model([state_input, 
                                   action_input], output)
    
  9. 在此步骤中,我们将实现一个方法来计算 Q 函数的梯度:

        def q_gradients(self, states, actions):
            actions = tf.convert_to_tensor(actions)
            with tf.GradientTape() as tape:
                tape.watch(actions)
                q_values = self.model([states, actions])
                q_values = tf.squeeze(q_values)
            return tape.gradient(q_values, actions)
    
  10. 作为一个便捷方法,我们还可以定义一个 predict 函数来返回评论家网络的预测结果:

        def predict(self, inputs):
            return self.model.predict(inputs)
    
  11. 接下来,让我们定义 train 方法和 compute_loss 方法来训练评论家:

        def train(self, states, actions, td_targets):
            with tf.GradientTape() as tape:
                v_pred = self.model([states, actions],
                                      training=True)
                assert v_pred.shape == td_targets.shape
                loss = self.compute_loss(v_pred, 
                                tf.stop_gradient(td_targets))
            grads = tape.gradient(loss, 
                              self.model.trainable_variables)
            self.opt.apply_gradients(zip(grads, 
                             self.model.trainable_variables))
            return loss
    
  12. 现在是时候实现 DDPG 的Agent类了:

    class Agent:
        def __init__(self, env):
            self.env = env
            self.state_dim = \
                self.env.observation_space.shape[0]
            self.action_dim = self.env.action_space.shape[0]
            self.action_bound = self.env.action_space.high[0]
            self.buffer = ReplayBuffer()
            self.actor = Actor(self.state_dim, \
                               self.action_dim, 
                               self.action_bound)
            self.critic = Critic(self.state_dim, 
                                 self.action_dim)
            self.target_actor = Actor(self.state_dim, 
                                      self.action_dim, 
                                      self.action_bound)
            self.target_critic = Critic(self.state_dim, 
                                       self.action_dim)
            actor_weights = self.actor.model.get_weights()
            critic_weights = self.critic.model.get_weights()
            self.target_actor.model.set_weights(
                                               actor_weights)
            self.target_critic.model.set_weights(
                                              critic_weights)
    
  13. 现在让我们实现update_target方法,使用演员和评论家网络的当前权重,以软更新的方式更新对应目标网络的权重:

        def update_target(self):
            actor_weights = self.actor.model.get_weights()
            t_actor_weights = \
                self.target_actor.model.get_weights()
            critic_weights = self.critic.model.get_weights()
            t_critic_weights = \
                self.target_critic.model.get_weights()
            for i in range(len(actor_weights)):
                t_actor_weights[i] = (
                    args.tau * actor_weights[i] + \
                    (1 - args.tau) * t_actor_weights[i]
                )
            for i in range(len(critic_weights)):
                t_critic_weights[i] = (
                    args.tau * critic_weights[i] + \
                    (1 - args.tau) * t_critic_weights[i]
                )
            self.target_actor.model.set_weights(
                                             t_actor_weights)
            self.target_critic.model.set_weights(
                                            t_critic_weights)
    
  14. 接下来,让我们实现一个辅助方法来计算 TD 目标(代码块之后附有一个简单的数值核对示例):

        def get_td_target(self, rewards, q_values, dones):
            targets = np.asarray(q_values)
            for i in range(q_values.shape[0]):
                if dones[i]:
                    targets[i] = rewards[i]
                else:
                    targets[i] = rewards[i] + \
                                 args.gamma * q_values[i]
            return targets
    
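    下面用一个简单的数值例子核对上述 TD 目标的计算:非终止状态下目标为 reward + gamma * Q_target,回合结束时目标为 reward(此处 gamma 取 0.99 仅作示例):

    import numpy as np

    gamma = 0.99
    rewards = np.array([[1.0], [0.5]])
    target_q_values = np.array([[2.0], [3.0]])
    dones = np.array([False, True])
    targets = np.where(dones[:, None], rewards,
                       rewards + gamma * target_q_values)
    print(targets)  # 输出对应 [[2.98], [0.5]]
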
  15. 为了让智能体进行探索,确定性策略梯度算法需要向确定性策略生成的动作中添加噪声。我们将使用奥恩斯坦-乌伦贝克(OU)过程来生成噪声:

        def add_ou_noise(self, x, rho=0.15, mu=0, dt=1e-1,
         sigma=0.2, dim=1):
            return (
                x + rho * (mu - x) * dt + sigma * \
                np.sqrt(dt) * np.random.normal(size=dim)
            )
    
  16. 在这一步中,我们将使用经验回放来更新演员网络和评论员网络:

        def replay_experience(self):
            for _ in range(10):
                states, actions, rewards, next_states, \
                    dones = self.buffer.sample()
                target_q_values = self.target_critic.predict(
                    [next_states, self.target_actor.\
                     predict(next_states)]
                )
                td_targets = self.get_td_target(rewards,
                                      target_q_values, dones)
                self.critic.train(states, actions, 
                                  td_targets)
                s_actions = self.actor.predict(states)
                s_grads = self.critic.q_gradients(states, 
                                                  s_actions)
                grads = np.array(s_grads).reshape((-1, 
                                            self.action_dim))
                self.actor.train(states, grads)
                self.update_target()
    
  17. 利用我们实现的所有组件,我们现在准备将它们组合在一起,放入train方法中:

        def train(self, max_episodes=1000):
            with writer.as_default():
                for ep in range(max_episodes):
                    episode_reward, done = 0, False
                    state = self.env.reset()
                    bg_noise = np.zeros(self.action_dim)
                    while not done:
                        # self.env.render()
                        action = self.actor.get_action(state)
                        noise = self.add_ou_noise(bg_noise, \
                                         dim=self.action_dim)
                        action = np.clip(
                            action + noise, -self.action_\
                              bound, self.action_bound
                        )
                        next_state, reward, done, _ = \
                                       self.env.step(action)
                        self.buffer.store(state, action, \
                          (reward + 8) / 8, next_state, done)
                        bg_noise = noise
                        episode_reward += reward
                        state = next_state
                    if (
                        self.buffer.size() >= args.batch_size
                        and self.buffer.size() >= \
                            args.train_start
                    ):
                        self.replay_experience()
                    print(f"Episode#{ep} \
                            Reward:{episode_reward}")
                    tf.summary.scalar("episode_reward", 
                                     episode_reward, step=ep)
    
  18. 至此,我们的 DDPG 代理实现已经完成,我们准备定义主函数以开始训练!

    if __name__ == "__main__":
        env_name = "Pendulum-v0"
        env = gym.make(env_name)
        agent = Agent(env)
        agent.train(max_episodes=20000)
    

它是如何工作的…

DDPG 代理同时估计两个量:Q 值函数和最优确定性策略。DDPG 结合了 DQN 与 DPG 中提出的思想:除了沿用 DQN 的经验回放和目标网络机制外,它还使用确定性策略梯度更新规则来训练演员网络,对应第 7 步中 Actor 类的 train 方法以及第 16 步中的经验回放更新。
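
下面给出一个最小的示意片段(并非本配方的原始代码;actor_model、critic_model、actor_optimizer、states 均为假设的输入,假定它们与上文 Actor/Critic 类中的 Keras 模型结构一致),展示确定性策略梯度的演员更新思路:最大化 Q(s, μ(s)) 等价于最小化 -Q(s, μ(s))。这在数学上与本配方中先用 q_gradients 计算 Q 值对动作的梯度、再传给 Actor 的 train 方法的两步做法是等价的:

    import tensorflow as tf

    def ddpg_actor_update_sketch(actor_model, critic_model,
                                 actor_optimizer, states):
        with tf.GradientTape() as tape:
            actions = actor_model(states, training=True)  # a = mu(s)
            q_values = critic_model([states, actions])    # Q(s, mu(s))
            actor_loss = -tf.reduce_mean(q_values)
        grads = tape.gradient(actor_loss,
                              actor_model.trainable_variables)
        actor_optimizer.apply_gradients(
            zip(grads, actor_model.trainable_variables))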

第四章:第四章:现实世界中的强化学习——构建加密货币交易智能体

深度强化学习深度 RL)智能体在解决现实世界中的挑战性问题时具有很大的潜力,并且存在许多机会。然而,现实世界中成功应用深度 RL 智能体的故事较少,除了游戏领域,主要是由于与 RL 智能体在实际部署中相关的各种挑战。本章包含了一些食谱,帮助你成功开发用于一个有趣且具有回报的现实世界问题的 RL 智能体:加密货币交易。本章的食谱包含了如何为加密货币交易实现自定义的、兼容 OpenAI Gym 的学习环境,这些环境支持离散和连续值的动作空间。此外,你还将学习如何为加密货币交易构建和训练 RL 智能体。交易学习环境也将提供。

具体来说,本章将涉及以下食谱:

  • 使用真实市场数据构建比特币交易强化学习平台

  • 使用价格图表构建以太坊交易强化学习平台

  • 为强化学习智能体构建一个先进的加密货币交易平台

  • 使用 RL 训练加密货币交易机器人

让我们开始吧!

技术要求

书中的代码在 Ubuntu 18.04 和 Ubuntu 20.04 上经过了广泛测试,并且如果安装了 Python 3.6+,它应该也能在后续版本的 Ubuntu 上运行。安装了 Python 3.6+ 并且根据每个食谱开头列出的必要 Python 包后,这些代码应该在 Windows 和 macOS X 上也能正常运行。你应该创建并使用一个名为 tf2rl-cookbook 的 Python 虚拟环境来安装这些包,并运行本书中的代码。建议安装 Miniconda 或 Anaconda 来管理 Python 虚拟环境。

每个章节中每个食谱的完整代码可以在这里找到:github.com/PacktPublishing/Tensorflow-2-Reinforcement-Learning-Cookbook

使用真实市场数据构建比特币交易强化学习平台

这个食谱将帮助你为智能体构建一个加密货币交易强化学习环境。这个环境模拟了基于来自 Gemini 加密货币交易所的真实数据的比特币交易所。在这个环境中,强化学习智能体可以进行买入/卖出/持有交易,并根据它的利润/亏损获得奖励,初始时智能体的交易账户中会有一笔现金余额。

准备工作

为完成这个食谱,请确保你使用的是最新版本。你需要激活 tf2rl-cookbook Python/conda 虚拟环境。确保更新该环境,使其与本书代码库中的最新 conda 环境规范文件(tfrl-cookbook.yml)匹配。如果以下 import 语句没有问题,那么你就可以开始了:

import os
import random
from typing import Dict
import gym
import numpy as np
import pandas as pd
from gym import spaces

现在,让我们开始吧!

如何做……

按照以下步骤学习如何实现 CryptoTradingEnv

  1. 让我们从导入所需的 Python 模块开始(即前面"准备工作"部分列出的导入语句):

  2. 我们还将使用在 trading_utils.py 中实现的 TradeVisualizer 类。我们将在实际使用时更详细地讨论这个类:

    from trading_utils import TradeVisualizer
    
  3. 为了方便配置加密货币交易环境,我们将设置一个环境配置字典。请注意,我们的加密货币交易环境已被配置好,能够基于来自 Gemini 加密货币交易所的真实数据进行比特币交易:

    env_config = {
        "exchange": "Gemini", # Cryptocurrency exchange
        # (Gemini, coinbase, kraken, etc.)
        "ticker": "BTCUSD", # CryptoFiat
        "frequency": "daily", # daily/hourly/minutes
        "opening_account_balance": 100000,
        # Number of steps (days) of data provided to the 
        # agent in one observation.
        "observation_horizon_sequence_length": 30,
        "order_size": 1, # Number of coins to buy per 
        # buy/sell order
    }
    
  4. 让我们开始定义我们的 CryptoTradingEnv 类:

    class CryptoTradingEnv(gym.Env):
        def __init__(self, env_config: Dict = env_config):
            super(CryptoTradingEnv, self).__init__()
            self.ticker = env_config.get("ticker", "BTCUSD")
            data_dir = os.path.join(os.path.dirname(os.path.\
                                 realpath(__file__)), "data")
            self.exchange = env_config["exchange"]
            freq = env_config["frequency"]
            if freq == "daily":
                self.freq_suffix = "d"
            elif freq == "hourly":
                self.freq_suffix = "1hr"
            elif freq == "minutes":
                self.freq_suffix = "1min"
    
  5. 我们将使用一个文件对象作为加密货币交易所的数据源。我们必须确保在加载/流式传输数据到内存之前,数据源是存在的:

            self.ticker_file_stream = os.path.join(
                f"{data_dir}",
                f"{'_'.join([self.exchange, self.ticker,
                             self.freq_suffix])}.csv",
            )
            assert os.path.isfile(
                self.ticker_file_stream
            ), f"Cryptocurrency data file stream not found \
              at: data/{self.ticker_file_stream}.csv"
            # Cryptocurrency exchange data stream. An offline 
            # file stream is used. Alternatively, a web
            # API can be used to pull live data.
            self.ohlcv_df = pd.read_csv(self.ticker_file_\
                stream, skiprows=1).sort_values(by="Date"
            )
    
  6. 代理账户中的初始余额通过 env_config 配置。让我们根据配置的值初始化初始账户余额:

            self.opening_account_balance = \
                env_config["opening_account_balance"]
    
  7. 接下来,让我们使用 OpenAI Gym 库提供的标准空间类型定义来定义该加密货币交易环境的动作空间和观察空间:

            # Action: 0-> Hold; 1-> Buy; 2 ->Sell;
            self.action_space = spaces.Discrete(3)
            self.observation_features = [
                "Open",
                "High",
                "Low",
                "Close",
                "Volume BTC",
                "Volume USD",
            ]
            self.horizon = env_config.get(
                       "observation_horizon_sequence_length")
            self.observation_space = spaces.Box(
                low=0,
                high=1,
                shape=(len(self.observation_features), 
                           self.horizon + 1),
                dtype=np.float,
            )
    
  8. 让我们定义代理在进行交易时将执行的交易订单大小:

            self.order_size = env_config.get("order_size")
            self.viz = None  # Visualizer(在 reset() 中按需创建)
    
  9. 至此,我们已成功初始化环境!接下来,让我们定义 step(…) 方法。你会注意到,为了简化理解,我们使用了两个辅助成员方法:self.execute_trade_actionself.get_observation,简化了 step(…) 方法的实现。我们将在稍后定义这些辅助方法,等到我们完成基本的 RL Gym 环境方法(stepresetrender)的实现。现在,让我们看看 step 方法的实现:

    def step(self, action):
            # Execute one step within the trading environment
            self.execute_trade_action(action)
            self.current_step += 1
            reward = self.account_value - \
                     self.opening_account_balance  
                    # Profit (loss)
            done = self.account_value <= 0 or \
                   self.current_step >= len(
                self.ohlcv_df.loc[:, "Open"].values
            )
            obs = self.get_observation()
            return obs, reward, done, {}
    
  10. 现在,让我们定义 reset() 方法,它将在每个 episode 开始时执行:

    def reset(self):
            # Reset the state of the environment to an 
            # initial state
            self.cash_balance = self.opening_account_balance
            self.account_value = self.opening_account_balance
            self.num_coins_held = 0
            self.cost_basis = 0
            self.current_step = 0
            self.trades = []
            if self.viz is None:
                self.viz = TradeVisualizer(
                    self.ticker,
                    self.ticker_file_stream,
                    "TFRL-Cookbook Ch4-CryptoTradingEnv",
                    skiprows=1,  # Skip the first line with 
                    # the data download source URL
                )
            return self.get_observation()
    
  11. 下一步,我们将定义 render() 方法,它将为我们提供加密货币交易环境的视图,帮助我们理解发生了什么!在这里,我们将使用来自 trading_utils.py 文件中的 TradeVisualizer 类。TradeVisualizer 帮助我们可视化代理在环境中学习时的实时账户余额。该可视化工具还通过显示代理在环境中执行的买卖交易,直观地呈现代理的操作。以下是 render() 方法输出的示例截图,供您参考:

    图 4.1 – CryptoTradingEnv 环境的示例渲染

        def render(self, **kwargs):
            # Render the environment to the screen
            if self.current_step > self.horizon:
                self.viz.render(
                    self.current_step,
                    self.account_value,
                    self.trades,
                    window_size=self.horizon,
                )
    
  12. 接下来,我们将实现一个方法,在训练完成后关闭所有可视化窗口:

        def close(self):
            if self.viz is not None:
                self.viz.close()
                self.viz = None
    
  13. 现在,我们可以实现 execute_trade_action 方法,它在之前第 9 步的 step(…) 方法中有所使用。我们将把实现过程分为三个步骤,每个步骤对应一个订单类型:Hold、Buy 和 Sell。我们先从 Hold 订单类型开始,因为它是最简单的。稍后你会明白为什么!

        def execute_trade_action(self, action):
            if action == 0:  # Hold position
                return
    
    
  14. 实际上,在我们继续实现买入和卖出订单执行逻辑之前,我们需要实现另一个中间步骤。在这里,我们必须确定订单类型(买入或卖出),然后获取当前模拟时间下比特币的价格:

            order_type = "buy" if action == 1 else "sell"
            # Stochastically determine the current price 
            # based on Market Open & Close
            current_price = random.uniform(
                self.ohlcv_df.loc[self.current_step, "Open"],
                self.ohlcv_df.loc[self.current_step, 
                                  "Close"],
            )
    
  15. 现在,我们准备好实现执行买入交易订单的逻辑,代码如下:

            if order_type == "buy":
                allowable_coins = \
                    int(self.cash_balance / current_price)
                if allowable_coins < self.order_size:
                    # Not enough cash to execute a buy order
                    return
                # Simulate a BUY order and execute it at 
                # current_price
                num_coins_bought = self.order_size
                current_cost = self.cost_basis * \
                               self.num_coins_held
                additional_cost = num_coins_bought * \
                                  current_price
                self.cash_balance -= additional_cost
                self.cost_basis = (current_cost + \
                    additional_cost) / (
                    self.num_coins_held + num_coins_bought
                )
                self.num_coins_held += num_coins_bought
    
  16. 让我们使用最新的买入交易更新trades列表:

                self.trades.append(
                    {
                        "type": "buy",
                        "step": self.current_step,
                        "shares": num_coins_bought,
                        "proceeds": additional_cost,
                    }
                )
    
  17. 下一步是实现执行卖出交易订单的逻辑:

            elif order_type == "sell":
                # Simulate a SELL order and execute it at 
                # current_price
                if self.num_coins_held < self.order_size:
                    # Not enough coins to execute a sell 
                    # order
                    return
                num_coins_sold = self.order_size
                self.cash_balance += num_coins_sold * \
                                     current_price
                self.num_coins_held -= num_coins_sold
                sale_proceeds = num_coins_sold * \
                                current_price
                self.trades.append(
                    {
                        "type": "sell",
                        "step": self.current_step,
                        "shares": num_coins_sold,
                        "proceeds": sale_proceeds,
                    }
                )
    
  18. 为了完成我们的交易执行函数,我们需要添加几行代码来更新账户价值,一旦交易订单执行完毕:

            if self.num_coins_held == 0:
                self.cost_basis = 0
            # Update account value
            self.account_value = self.cash_balance + \
                                 self.num_coins_held * \
                                 current_price
    
  19. 到此为止,我们已经完成了一个由 Gemini 加密货币交易所提供的真实 BTCUSD 数据驱动的比特币交易强化学习环境的实现!让我们看看创建该环境的实例并在其中运行一个示例随机代理有多么简单,这一切只需要六行代码:

    if __name__ == "__main__":
        env = CryptoTradingEnv()
        obs = env.reset()
        for _ in range(600):
            action = env.action_space.sample()
            next_obs, reward, done, _ = env.step(action)
            env.render()
    

    你应该能看到在CryptoTradingEnv环境中随机代理的示例。env.render()函数应该产生类似以下的渲染:

图 4.2 – 展示 CryptoTradingEnv 环境的渲染,显示代理的当前账户余额以及买卖交易的执行情况

现在,让我们看看这一切是如何运作的。

它是如何工作的……

在这个配方中,我们实现了CryptoTradingEnv环境类,它提供形状为(6, horizon + 1)的表格型观察数据,其中 horizon 可以通过env_config字典进行配置。horizon 参数指定了时间窗口的长度(例如 3 天),即 Agent 在每次交易之前可以观察到的加密货币市场数据的时长。一旦 Agent 执行了允许的三个离散动作之一(0:持有,1:买入,2:卖出),相应的交易就会按当前的加密货币(比特币)价格执行,交易账户余额也随之更新。Agent 还会根据从回合开始以来交易产生的利润(或亏损)获得奖励。
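
下面是一个最小的用法示意(假设上面的环境实现保存为 crypto_trading_env.py,且 data/ 目录中已放置对应的 CSV 数据文件;模块名仅为示例),展示如何通过覆盖 env_config 来修改观察窗口并检查观察空间的形状:

    from crypto_trading_env import CryptoTradingEnv, env_config

    custom_config = dict(env_config)
    custom_config["observation_horizon_sequence_length"] = 7
    env = CryptoTradingEnv(custom_config)
    print(env.observation_space.shape)  # 预期为 (6, 8),即(特征数, horizon + 1)
    print(env.action_space)  # Discrete(3):0-持有,1-买入,2-卖出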

使用价格图表构建以太坊交易强化学习平台

这个配方将教你如何为 RL 代理实现一个以太坊加密货币交易环境,提供视觉观察数据。Agent 将观察指定时间段内的价格图表,图表包含开盘价、最高价、最低价、收盘价和交易量信息,以便做出决策(保持、买入或卖出)。Agent 的目标是最大化其奖励,即如果你将 Agent 部署到你的账户进行交易时所能获得的利润!

准备就绪

为了完成这个食谱,请确保你使用的是最新版本。你需要激活tf2rl-cookbook Python/conda 虚拟环境,并确保将该环境更新为与最新的 conda 环境规范文件(tfrl-cookbook.yml)一致,该文件可以在本书的代码库中找到。如果以下import语句没有问题,你就可以开始了:

import os
import random
from typing import Dict
import cv2
import gym
import numpy as np
import pandas as pd
from gym import spaces
from trading_utils import TradeVisualizer

如何做到…

让我们遵循 OpenAI Gym 框架来实现我们的学习环境接口。我们将添加一些逻辑,模拟加密货币交易执行并适当地奖励智能体,因为这将有助于你的学习。

按照以下步骤完成你的实现:

  1. 让我们通过使用字典来配置环境:

    env_config = {
        "exchange": "Gemini",  # Cryptocurrency exchange 
        # (Gemini, coinbase, kraken, etc.)
        "ticker": "ETHUSD",  # CryptoFiat
        "frequency": "daily",  # daily/hourly/minutes
        "opening_account_balance": 100000,
        # Number of steps (days) of data provided to the 
        # agent in one observation
        "observation_horizon_sequence_length": 30,
        "order_size": 1,  # Number of coins to buy per 
        # buy/sell order
    }
    
  2. 让我们定义CryptoTradingVisualEnv类并从env_config加载设置:

    class CryptoTradingVisualEnv(gym.Env):
        def __init__(self, env_config: Dict = env_config):
            """Cryptocurrency trading environment for RL 
            agents
            The observations are cryptocurrency price info 
            (OHLCV) over a horizon as specified in 
            env_config. Action space is discrete to perform 
            buy/sell/hold trades.
            Args:
                ticker(str, optional): Ticker symbol for the\
                crypto-fiat currency pair.
                Defaults to "ETHUSD".
                env_config (Dict): Env configuration values
            """
            super(CryptoTradingVisualEnv, self).__init__()
            self.ticker = env_config.get("ticker", "ETHUSD")
            data_dir = os.path.join(os.path.dirname(os.path.\
                                 realpath(__file__)), "data")
            self.exchange = env_config["exchange"]
            freq = env_config["frequency"]
    
  3. 下一步,根据市场数据源的频率配置,加载来自输入流的加密货币交易所数据:

        if freq == "daily":
                self.freq_suffix = "d"
            elif freq == "hourly":
                self.freq_suffix = "1hr"
            elif freq == "minutes":
                self.freq_suffix = "1min"
            self.ticker_file_stream = os.path.join(
                f"{data_dir}",
                f"{'_'.join([self.exchange, self.ticker, \
                             self.freq_suffix])}.csv",
            )
            assert os.path.isfile(
                self.ticker_file_stream
            ), f"Cryptocurrency exchange data file stream \
            not found at: data/{self.ticker_file_stream}.csv"
            # Cryptocurrency exchange data stream. An offline 
            # file stream is used. Alternatively, a web
            # API can be used to pull live data.
            self.ohlcv_df = pd.read_csv(self.ticker_file_\
                stream, skiprows=1).sort_values(
                by="Date"
            )
    
  4. 让我们初始化其他环境类变量,并定义状态和动作空间:

            self.opening_account_balance = \
                env_config["opening_account_balance"]
            # Action: 0-> Hold; 1-> Buy; 2 ->Sell;
            self.action_space = spaces.Discrete(3)
            self.observation_features = [
                "Open",
                "High",
                "Low",
                "Close",
                "Volume ETH",
                "Volume USD",
            ]
            self.obs_width, self.obs_height = 128, 128
            self.horizon = env_config.get("
                observation_horizon_sequence_length")
            self.observation_space = spaces.Box(
                low=0, high=255, shape=(128, 128, 3),
                dtype=np.uint8,
            )
            self.order_size = env_config.get("order_size")
            self.viz = None  # Visualizer
    
  5. 让我们定义reset方法,以便(重新)初始化环境类变量:

        def reset(self):
            # Reset the state of the environment to an 
            # initial state
            self.cash_balance = self.opening_account_balance
            self.account_value = self.opening_account_balance
            self.num_coins_held = 0
            self.cost_basis = 0
            self.current_step = 0
            self.trades = []
            if self.viz is None:
                self.viz = TradeVisualizer(
                    self.ticker,
                    self.ticker_file_stream,
                    "TFRL-Cookbook\
                       Ch4-CryptoTradingVisualEnv",
                    skiprows=1,
                )
            return self.get_observation()
    
  6. 这个环境的关键特性是,智能体的观察是价格图表的图像,类似于你在人工交易员的计算机屏幕上看到的图表。这个图表包含闪烁的图形、红绿条和蜡烛!让我们定义get_observation方法,以返回图表屏幕的图像:

        def get_observation(self):
            """Return a view of the Ticker price chart as 
               image observation
            Returns:
                img_observation(np.ndarray): Image of ticker
                candle stick plot with volume bars as 
                observation
            """
            img_observation = \
                self.viz.render_image_observation(
                self.current_step, self.horizon
            )
            img_observation = cv2.resize(
                img_observation, dsize=(128, 128), 
                interpolation=cv2.INTER_CUBIC
            )
            return img_observation
    
  7. 现在,我们将实现交易环境的交易执行逻辑。必须从市场数据流(在本例中是一个文件)中获取以太坊(以美元计价)的当前价格:

        def execute_trade_action(self, action):
            if action == 0:  # Hold position
                return
            order_type = "buy" if action == 1 else "sell"
            # Stochastically determine the current price
            # based on Market Open & Close
            current_price = random.uniform(
                self.ohlcv_df.loc[self.current_step, "Open"],
                self.ohlcv_df.loc[self.current_step, 
                                  "Close"],
            )
    
  8. 如果智能体决定执行买入订单,我们必须计算智能体在单步中可以购买的以太坊代币/币的数量,并在模拟交易所执行“买入”订单:

            if order_type == "buy":
                # Buy Order
                allowable_coins = \
                    int(self.cash_balance / current_price)
                if allowable_coins < self.order_size:
                    # Not enough cash to execute a buy order
                    return
                # Simulate a BUY order and execute it at 
                # current_price
                num_coins_bought = self.order_size
                current_cost = self.cost_basis * \
                               self.num_coins_held
                additional_cost = num_coins_bought * \
                                  current_price
                self.cash_balance -= additional_cost
                self.cost_basis = \
                    (current_cost + additional_cost) / (
                    self.num_coins_held + num_coins_bought
                )
                self.num_coins_held += num_coins_bought
                self.trades.append(
                    {
                        "type": "buy",
                        "step": self.current_step,
                        "shares": num_coins_bought,
                        "proceeds": additional_cost,
                    }
                )
    
  9. 相反,如果智能体决定卖出,以下逻辑将执行卖出订单:

            elif order_type == "sell":
                # Simulate a SELL order and execute it at
                # current_price
                if self.num_coins_held < self.order_size:
                    # Not enough coins to execute a sell
                    # order
                    return
                num_coins_sold = self.order_size
                self.cash_balance += num_coins_sold * \
                                     current_price
                self.num_coins_held -= num_coins_sold
                sale_proceeds = num_coins_sold * \
                                current_price
                self.trades.append(
                    {
                        "type": "sell",
                        "step": self.current_step,
                        "shares": num_coins_sold,
                        "proceeds": sale_proceeds,
                    }
                )
    
  10. 让我们更新账户余额,以反映买卖交易的影响:

            if self.num_coins_held == 0:
                self.cost_basis = 0
            # Update account value
            self.account_value = self.cash_balance + \
                                 self.num_coins_held * \
                                 current_price
    
  11. 我们现在准备实现step方法:

        def step(self, action):
            # Execute one step within the trading environment
            self.execute_trade_action(action)
            self.current_step += 1
            reward = self.account_value - \
                self.opening_account_balance  # Profit (loss)
            done = self.account_value <= 0 or \
                     self.current_step >= len(
                self.ohlcv_df.loc[:, "Open"].values
            )
            obs = self.get_observation()
            return obs, reward, done, {}
    
  12. 让我们实现一个方法,将当前状态渲染为图像并显示到屏幕上。这将帮助我们理解智能体在学习交易时环境中发生了什么:

        def render(self, **kwargs):
            # Render the environment to the screen
            if self.current_step > self.horizon:
                self.viz.render(
                    self.current_step,
                    self.account_value,
                    self.trades,
                    window_size=self.horizon,
                )
    
  13. 这就完成了我们的实现!让我们快速查看一下使用随机智能体的环境:

    if __name__ == "__main__":
        env = CryptoTradingVisualEnv()
        obs = env.reset()
        for _ in range(600):
            action = env.action_space.sample()
            next_obs, reward, done, _ = env.step(action)
            env.render()
    

    你应该看到示例随机智能体在CryptoTradingVisualEnv中执行的情况,其中智能体接收与此处所示相似的视觉/图像观察:

图 4.3 – 发送给学习智能体的示例观察

就这样,这个食谱完成了!

它是如何工作的……

在这个食谱中,我们实现了一个可视化的以太坊加密货币交易环境,提供图像作为代理的输入。图像包含了图表信息,如开盘、最高、最低、收盘和成交量数据。这个图表看起来就像一个人类交易员的屏幕,向代理提供当前市场的信号。
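
下面是一个最小的检查示意(假设上面的环境实现保存为 crypto_trading_visual_env.py 且数据文件就绪;模块名仅为示例),用来确认智能体收到的观察确实是 128×128 的三通道图像:

    from crypto_trading_visual_env import CryptoTradingVisualEnv

    env = CryptoTradingVisualEnv()
    obs = env.reset()
    print(obs.shape, obs.dtype)  # 预期形状为 (128, 128, 3)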

构建一个高级的加密货币交易平台为 RL 代理

如果我们不让代理只采取离散的动作,比如购买/卖出/持有预设数量的比特币或以太坊代币,而是让代理决定它想买或卖多少加密货币/代币呢?这正是这个食谱所要让你实现的功能,创建一个CryptoTradingVisualContinuousEnv的 RL 环境。

准备工作

为了完成这个方案,你需要确保你拥有最新版本的内容。你需要激活tf2rl-cookbook Python/conda 虚拟环境。确保你更新环境,以便它符合最新的 conda 环境规范文件(tfrl-cookbook.yml),该文件可以在这个食谱的代码库中找到。如果以下的import语句没有任何问题地运行,那么你就可以开始了:

import os
import random
from typing import Dict
import cv2
import gym
import numpy as np
import pandas as pd
from gym import spaces
from trading_utils import TradeVisualizer

怎么做…

这是一个复杂的环境,因为它使用高维图像作为观察输入,并允许执行连续的真实值动作。不过,由于你在本章中已经实现了前面几个食谱的经验,你很可能已经熟悉这个食谱的各个组成部分。

让我们开始吧:

  1. 首先,我们需要定义该环境允许的配置参数:

    env_config = {
        "exchange": "Gemini",  # Cryptocurrency exchange 
         # (Gemini, coinbase, kraken, etc.)
        "ticker": "BTCUSD",  # CryptoFiat
        "frequency": "daily",  # daily/hourly/minutes
        "opening_account_balance": 100000,
        # Number of steps (days) of data provided to the 
        # agent in one observation
        "observation_horizon_sequence_length": 30,
    }
    
  2. 让我们直接进入学习环境类的定义:

    class CryptoTradingVisualContinuousEnv(gym.Env):
        def __init__(self, env_config: Dict = env_config):
            """Cryptocurrency trading environment for RL 
            agents with continuous action space
            Args:
                ticker (str, optional): Ticker symbol for the 
                crypto-fiat currency pair.
                Defaults to "BTCUSD".
                env_config (Dict): Env configuration values
            """
            super(CryptoTradingVisualContinuousEnv, 
                  self).__init__()
            self.ticker = env_config.get("ticker", "BTCUSD")
            data_dir = os.path.join(os.path.dirname(os.path.\
                                 realpath(__file__)), "data")
            self.exchange = env_config["exchange"]
            freq = env_config["frequency"]
            if freq == "daily":
                self.freq_suffix = "d"
            elif freq == "hourly":
                self.freq_suffix = "1hr"
            elif freq == "minutes":
                self.freq_suffix = "1min"
    
  3. 这一步很直接,因为我们只需要将市场数据从输入源加载到内存中:

            self.ticker_file_stream = os.path.join(
                f"{data_dir}",
                f"{'_'.join([self.exchange, self.ticker, \
                             self.freq_suffix])}.csv",
            )
            assert os.path.isfile(
                self.ticker_file_stream
            ), f"Cryptocurrency exchange data file stream \
            not found at: data/{self.ticker_file_stream}.csv"
            # Cryptocurrency exchange data stream. An offline 
            # file stream is used. Alternatively, a web
            # API can be used to pull live data.
            self.ohlcv_df = pd.read_csv(
                self.ticker_file_stream, 
                skiprows=1).sort_values(by="Date"
            )
            self.opening_account_balance = \
                env_config["opening_account_balance"]
    
  4. 现在,让我们定义环境的连续动作空间和观察空间:

            self.action_space = spaces.Box(
                low=np.array([-1]), high=np.array([1]), \
                             dtype=np.float
            )
            self.observation_features = [
                "Open",
                "High",
                "Low",
                "Close",
                "Volume BTC",
                "Volume USD",
            ]
            self.obs_width, self.obs_height = 128, 128
            self.horizon = env_config.get(
                       "observation_horizon_sequence_length")
            self.observation_space = spaces.Box(
                low=0, high=255, shape=(128, 128, 3), 
                dtype=np.uint8,
            )
    
  5. 让我们定义环境中step方法的大致框架。接下来的步骤中我们将完成帮助方法的实现:

        def step(self, action):
            # Execute one step within the environment
            self.execute_trade_action(action)
            self.current_step += 1
            reward = self.account_value - \
                self.opening_account_balance  # Profit (loss)
            done = self.account_value <= 0 or \
                    self.current_step >= len(
                self.ohlcv_df.loc[:, "Open"].values
            )
            obs = self.get_observation()
            return obs, reward, done, {}
    
  6. 第一个帮助方法是execute_trade_action方法。接下来的几步实现应该很简单,因为前面几个食谱已经实现了在交易所按汇率买卖加密货币的逻辑:

        def execute_trade_action(self, action):
            if action == 0:  # Indicates "HODL" action
                # HODL position; No trade to be executed
                return
            order_type = "buy" if action > 0 else "sell"
            order_fraction_of_allowable_coins = abs(action)
            # Stochastically determine the current price 
            # based on Market Open & Close
            current_price = random.uniform(
                self.ohlcv_df.loc[self.current_step, "Open"],
                self.ohlcv_df.loc[self.current_step,
                                  "Close"],
            )
    
  7. 可以通过如下方式模拟交易所中的买入订单:

            if order_type == "buy":
                allowable_coins = \
                    int(self.cash_balance / current_price)
                # Simulate a BUY order and execute it at 
                # current_price
                num_coins_bought = int(allowable_coins * \
                order_fraction_of_allowable_coins)
                current_cost = self.cost_basis * \
                               self.num_coins_held
                additional_cost = num_coins_bought * \
                                  current_price
                self.cash_balance -= additional_cost
                self.cost_basis = (current_cost + \
                                   additional_cost) / (
                    self.num_coins_held + num_coins_bought
                )
                self.num_coins_held += num_coins_bought
                if num_coins_bought > 0:
                    self.trades.append(
                        {
                            "type": "buy",
                            "step": self.current_step,
                            "shares": num_coins_bought,
                            "proceeds": additional_cost,
                        }
                    )
    
  8. 同样地,卖出订单可以通过以下方式模拟:

            elif order_type == "sell":
                # Simulate a SELL order and execute it at 
                # current_price
                num_coins_sold = int(
                    self.num_coins_held * \
                    order_fraction_of_allowable_coins
                )
                self.cash_balance += num_coins_sold * \
                                     current_price
                self.num_coins_held -= num_coins_sold
                sale_proceeds = num_coins_sold * \
                                current_price
                if num_coins_sold > 0:
                    self.trades.append(
                        {
                            "type": "sell",
                            "step": self.current_step,
                            "shares": num_coins_sold,
                            "proceeds": sale_proceeds,
                        }
                    )
    
  9. 一旦买入/卖出订单执行完毕,账户余额需要更新:

            if self.num_coins_held == 0:
                self.cost_basis = 0
            # Update account value
            self.account_value = self.cash_balance + \
                                 self.num_coins_held * \
                                 current_price
    
  10. 为了测试CryptoTradingVisualContinuousEnv,你可以在__main__函数中使用以下代码进行测试:

    if __name__ == "__main__":
        env = CryptoTradingVisualContinuousEnv()
        obs = env.reset()
        for _ in range(600):
            action = env.action_space.sample()
            next_obs, reward, done, _ = env.step(action)
            env.render()
    

它是如何工作的…

CryptoTradingVisualContinuousEnv提供了一个强化学习环境,其观察值是类似交易员屏幕的图像,并为代理提供一个连续的实值动作空间。在这个环境中,动作是一维、连续且实值的,其大小表示要买入/卖出的加密货币比例。如果动作为正(0 到 1),则解释为买入指令;如果动作为负(-1 到 0),则解释为卖出指令。这个比例值会根据交易账户中的现金余额或当前持币数量,转换为实际可以买入/卖出的加密货币数量。
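
下面用一个最小的示意函数(并非环境的原始代码)说明连续动作如何映射为交易指令:符号决定买入还是卖出,绝对值决定交易比例:

    def interpret_action(action: float):
        if action == 0:
            return "hold", 0.0
        order_type = "buy" if action > 0 else "sell"
        # 比例:买入时相对于现金可买的币数,卖出时相对于当前持币数量
        return order_type, abs(action)

    print(interpret_action(0.25))  # ('buy', 0.25):用 25% 的可买额度买入
    print(interpret_action(-1.0))  # ('sell', 1.0):卖出全部持币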

使用强化学习训练加密货币交易机器人

Soft Actor-Critic(SAC)代理是目前最流行、最先进的强化学习代理之一,基于一个脱离策略的最大熵深度强化学习算法。这个配方提供了你从零开始构建 SAC 代理所需的所有组件,使用 TensorFlow 2.x,并使用来自 Gemini 加密货币交易所的真实数据来训练它进行加密货币(比特币、以太坊等)交易。

准备工作

为了完成这个配方,请确保你使用的是最新版本。你需要激活tf2rl-cookbook的 Python/conda 虚拟环境。确保更新环境,使其与最新的 conda 环境规格文件(tfrl-cookbook.yml)匹配,该文件可以在本配方的代码库中找到。如果以下import语句没有问题,说明你可以开始操作了:

import datetime
import functools
import os
import random
from collections import deque
from functools import reduce
import imageio
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras.layers import Concatenate, Dense, Input
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from crypto_trading_continuous_env import CryptoTradingContinuousEnv

如何操作…

这个配方将指导你逐步实现 SAC 代理的过程,并帮助你在加密货币交易环境中训练代理,从而实现自动化的盈利机器!

让我们准备好,开始实现:

  1. SAC 是一个演员-评论家代理,所以它有演员和评论家两个组件。让我们先定义使用 TensorFlow 2.x 的演员神经网络:

    def actor(state_shape, action_shape, units=(512, 256, 64)):
        state_shape_flattened = \
            functools.reduce(lambda x, y: x * y, state_shape)
        state = Input(shape=state_shape_flattened)
        x = Dense(units[0], name="L0", activation="relu")\
                  (state)
        for index in range(1, len(units)):
            x = Dense(units[index],name="L{}".format(index),\ 
                      activation="relu")(x)
        actions_mean = Dense(action_shape[0], \
                        name="Out_mean")(x)
        actions_std = Dense(action_shape[0], \
                       name="Out_std")(x)
        model = Model(inputs=state, 
                      outputs=[actions_mean, actions_std])
        return model
    
  2. 接下来,让我们定义评论家神经网络:

    def critic(state_shape, action_shape, units=(512, 256, 64)):
        state_shape_flattened = \
            functools.reduce(lambda x, y: x * y, state_shape)
        inputs = [Input(shape=state_shape_flattened),
                  Input(shape=action_shape)]
        concat = Concatenate(axis=-1)(inputs)
        x = Dense(units[0], name="Hidden0", 
                  activation="relu")(concat)
        for index in range(1, len(units)):
            x = Dense(units[index], 
                      name="Hidden{}".format(index),
                      activation="relu")(x)
        output = Dense(1, name="Out_QVal")(x)
        model = Model(inputs=inputs, outputs=output)
        return model
    
  3. 给定当前模型的权重和目标模型的权重,让我们实现一个快速的函数,利用tau作为平均因子,慢慢更新目标权重。这就像 Polyak 平均步骤:

    def update_target_weights(model, target_model, tau=0.005):
        weights = model.get_weights()
        target_weights = target_model.get_weights()
        for i in range(len(target_weights)):  # set tau% of
        # target model to be new weights
            target_weights[i] = weights[i] * tau + \
                                target_weights[i] * (1 - tau)
        target_model.set_weights(target_weights)
    
  4. 我们现在准备初始化我们的 SAC 代理类:

    class SAC(object):
        def __init__(
            self,
            env,
            lr_actor=3e-5,
            lr_critic=3e-4,
            actor_units=(64, 64),
            critic_units=(64, 64),
            auto_alpha=True,
            alpha=0.2,
            tau=0.005,
            gamma=0.99,
            batch_size=128,
            memory_cap=100000,
        ):
            self.env = env
            self.state_shape = env.observation_space.shape  
            # shape of observations
            self.action_shape = env.action_space.shape  
            # number of actions
            self.action_bound = (env.action_space.high - \
                                 env.action_space.low) / 2
            self.action_shift = (env.action_space.high + \
                                 env.action_space.low) / 2
            self.memory = deque(maxlen=int(memory_cap))
    
  5. 作为下一步,我们将初始化演员网络,并打印演员神经网络的摘要:

            # Define and initialize actor network
            self.actor = actor(self.state_shape, 
                              self.action_shape, actor_units)
            self.actor_optimizer = \
                Adam(learning_rate=lr_actor)
            self.log_std_min = -20
            self.log_std_max = 2
            print(self.actor.summary())
    
  6. 接下来,我们将定义两个评论家网络,并打印评论家神经网络的摘要:

            self.critic_1 = critic(self.state_shape, 
                             self.action_shape, critic_units)
            self.critic_target_1 = critic(self.state_shape,
                             self.action_shape, critic_units)
            self.critic_optimizer_1 = \
                 Adam(learning_rate=lr_critic)
            update_target_weights(self.critic_1, \
                             self.critic_target_1, tau=1.0)
            self.critic_2 = critic(self.state_shape, \
                             self.action_shape, critic_units)
            self.critic_target_2 = critic(self.state_shape,\
                             self.action_shape, critic_units)
            self.critic_optimizer_2 = \
                Adam(learning_rate=lr_critic)
            update_target_weights(self.critic_2, \
                self.critic_target_2, tau=1.0)
            print(self.critic_1.summary())
    
  7. 让我们初始化alpha温度参数和目标熵:

            self.auto_alpha = auto_alpha
            if auto_alpha:
                self.target_entropy = \
                    -np.prod(self.action_shape)
                self.log_alpha = \
                    tf.Variable(0.0, dtype=tf.float64)
                self.alpha = \
                    tf.Variable(0.0, dtype=tf.float64)
                self.alpha.assign(tf.exp(self.log_alpha))
                self.alpha_optimizer = \
                    Adam(learning_rate=lr_actor)
            else:
                self.alpha = tf.Variable(alpha, 
                                         dtype=tf.float64)
    
  8. 我们还将初始化 SAC 的其他超参数:

            self.gamma = gamma  # discount factor
            self.tau = tau  # target model update
            self.batch_size = batch_size
            self.summaries = {}  # 用于记录 TensorBoard 摘要的字典
    
  9. 这完成了 SAC 代理的__init__方法。接下来,我们将实现一个方法来(预)处理采取的动作:

        def process_actions(self, mean, log_std, test=False, 
        eps=1e-6):
            std = tf.math.exp(log_std)
            raw_actions = mean
            if not test:
                raw_actions += tf.random.normal(shape=mean.\
                               shape, dtype=tf.float64) * std
            # 未压缩动作 u 在高斯分布下的对数概率
            log_prob_u = tfp.distributions.Normal(loc=mean,
                            scale=std).log_prob(raw_actions)
            actions = tf.math.tanh(raw_actions)
            # tanh 压缩的变量替换修正:
            # log p(a) = log p(u) - sum(log(1 - tanh(u)^2)),eps 仅用于数值稳定
            log_prob = tf.reduce_sum(log_prob_u - \
                        tf.math.log(1 - actions ** 2 + eps))
            # 缩放并平移到环境的动作范围
            actions = actions * self.action_bound + \
                       self.action_shift
            return actions, log_prob
    
  10. 我们现在准备实现act方法,以便在给定状态下生成 SAC 代理的动作:

        def act(self, state, test=False, use_random=False):
            state = state.reshape(-1)  # Flatten state
            state = \
             np.expand_dims(state, axis=0).astype(np.float64)
            if use_random:
                a = tf.random.uniform(
                    shape=(1, self.action_shape[0]), \
                    minval=-1, maxval=1, dtype=tf.float64
                )
            else:
                means, log_stds = self.actor.predict(state)
                log_stds = tf.clip_by_value(log_stds, 
                                            self.log_std_min,
                                            self.log_std_max)
                a, log_prob = self.process_actions(means,
                                                   log_stds, 
                                                   test=test)
            q1 = self.critic_1.predict([state, a])[0][0]
            q2 = self.critic_2.predict([state, a])[0][0]
            self.summaries["q_min"] = tf.math.minimum(q1, q2)
            self.summaries["q_mean"] = np.mean([q1, q2])
            return a
    
  11. 为了将经验保存到回放记忆中,让我们实现remember函数:

        def remember(self, state, action, reward, next_state, 
        done):
            state = state.reshape(-1)  # Flatten state
            state = np.expand_dims(state, axis=0)
            next_state = next_state.reshape(-1)  
           # Flatten next-state
            next_state = np.expand_dims(next_state, axis=0)
            self.memory.append([state, action, reward, 
                                next_state, done])
    
  12. 现在,让我们开始实现经验回放过程。我们将从初始化回放方法开始。我们将在接下来的步骤中完成回放方法的实现:

        def replay(self):
            if len(self.memory) < self.batch_size:
                return
            samples = random.sample(self.memory, self.batch_size)
            s = np.array(samples).T
            states, actions, rewards, next_states, dones = [
                np.vstack(s[i, :]).astype(np.float) for i in\
                range(5)
            ]
    
  13. 让我们启动一个持久化的GradientTape函数,并开始累积梯度。我们通过处理动作并获取下一组动作和对数概率来实现这一点:

            with tf.GradientTape(persistent=True) as tape:
                # next state action log probs
                means, log_stds = self.actor(next_states)
                log_stds = tf.clip_by_value(log_stds, 
                                            self.log_std_min,
                                            self.log_std_max)
                next_actions, log_probs = \
                    self.process_actions(means, log_stds)
    
  14. 这样,我们现在可以计算两个评论者网络的损失:

                current_q_1 = self.critic_1([states, 
                                             actions])
                current_q_2 = self.critic_2([states,
                                             actions])
                next_q_1 = self.critic_target_1([next_states, 
                                               next_actions])
                next_q_2 = self.critic_target_2([next_states,
                                               next_actions])
                next_q_min = tf.math.minimum(next_q_1,
                                              next_q_2)
                state_values = next_q_min - self.alpha * \
                                            log_probs
                target_qs = tf.stop_gradient(
                    rewards + state_values * self.gamma * \
                    (1.0 - dones)
                )
                critic_loss_1 = tf.reduce_mean(
                    0.5 * tf.math.square(current_q_1 - \
                                         target_qs)
                )
                critic_loss_2 = tf.reduce_mean(
                    0.5 * tf.math.square(current_q_2 - \
                                         target_qs)
                )
    
  15. 当前的状态-动作对和由演员提供的对数概率可以通过以下方式计算:

                means, log_stds = self.actor(states)
                log_stds = tf.clip_by_value(log_stds, 
                                            self.log_std_min,
                                            self.log_std_max)
                actions, log_probs = \
                    self.process_actions(means, log_stds)
    
  16. 我们现在可以计算演员的损失并将梯度应用到评论者上:

                current_q_1 = self.critic_1([states, 
                                             actions])
                current_q_2 = self.critic_2([states,
                                             actions])
                current_q_min = tf.math.minimum(current_q_1,
                                                current_q_2)
                actor_loss = tf.reduce_mean(self.alpha * \
                                   log_probs - current_q_min)
                if self.auto_alpha:
                    alpha_loss = -tf.reduce_mean(
                        (self.log_alpha * \
                        tf.stop_gradient(log_probs + \
                                        self.target_entropy))
                    )
            critic_grad = tape.gradient(
                critic_loss_1, 
                self.critic_1.trainable_variables
            )  
            self.critic_optimizer_1.apply_gradients(
                zip(critic_grad, 
                self.critic_1.trainable_variables)
            )
    
  17. 类似地,我们可以计算并应用第二个评论者网络和演员网络的梯度:

            critic_grad = tape.gradient(
                critic_loss_2, 
            self.critic_2.trainable_variables
            )  # compute critic_2 gradient
            self.critic_optimizer_2.apply_gradients(
                zip(critic_grad, 
                self.critic_2.trainable_variables)
            )
            actor_grad = tape.gradient(
                actor_loss, self.actor.trainable_variables
            )  # compute actor gradient
            self.actor_optimizer.apply_gradients(
                zip(actor_grad, 
                    self.actor.trainable_variables)
            )
    
  18. 现在,让我们将摘要记录到 TensorBoard:

            # tensorboard info
            self.summaries["q1_loss"] = critic_loss_1
            self.summaries["q2_loss"] = critic_loss_2
            self.summaries["actor_loss"] = actor_loss
            if self.auto_alpha:
                # optimize temperature
                alpha_grad = tape.gradient(alpha_loss, 
                                           [self.log_alpha])
                self.alpha_optimizer.apply_gradients(
                           zip(alpha_grad, [self.log_alpha]))
                self.alpha.assign(tf.exp(self.log_alpha))
                # tensorboard info
                self.summaries["alpha_loss"] = alpha_loss
    
  19. 这完成了我们的经验回放方法。现在,我们可以继续train方法的实现。让我们从初始化train方法开始。我们将在接下来的步骤中完成此方法的实现:

        def train(self, max_epochs=8000, random_epochs=1000,
        max_steps=1000, save_freq=50):
            current_time = datetime.datetime.now().\
                               strftime("%Y%m%d-%H%M%S")
            train_log_dir = os.path.join("logs", 
                       "TFRL-Cookbook-Ch4-SAC", current_time)
            summary_writer = \
                tf.summary.create_file_writer(train_log_dir)
            done, use_random, episode, steps, epoch, \
            episode_reward = (
                False,
                True,
                0,
                0,
                0,
                0,
            )
            cur_state = self.env.reset()
    
  20. 现在,我们准备开始主训练循环。首先,让我们处理回合(episode)结束时的情况:

            while epoch < max_epochs:
                if steps > max_steps:
                    done = True
                if done:
                    episode += 1
                    print(
                        "episode {}: {} total reward, 
                         {} alpha, {} steps, 
                         {} epochs".format(
                            episode, episode_reward, 
                            self.alpha.numpy(), steps, epoch
                        )
                    )
                    with summary_writer.as_default():
                        tf.summary.scalar(
                            "Main/episode_reward", \
                             episode_reward, step=episode
                        )
                        tf.summary.scalar(
                            "Main/episode_steps", 
                             steps, step=episode)
                    summary_writer.flush()
                    done, cur_state, steps, episode_reward =\
                         False, self.env.reset(), 0, 0
                    if episode % save_freq == 0:
                        self.save_model(
                            "sac_actor_episode{}.h5".\
                                 format(episode),
                            "sac_critic_episode{}.h5".\
                                 format(episode),
                        )
    
  21. 在环境中的每一步,SAC 代理的学习过程都需要执行以下操作:

                if epoch > random_epochs and \
                    len(self.memory) > self.batch_size:
                    use_random = False
                action = self.act(cur_state, \
                   use_random=use_random)  # determine action
                next_state, reward, done, _ = \
                    self.env.step(action[0])  # act on env
                # self.env.render(mode='rgb_array')
                self.remember(cur_state, action, reward,
                              next_state, done)  # add to memory
                self.replay()  # train models via memory replay
                update_target_weights(
                    self.critic_1, self.critic_target_1,
                    tau=self.tau
                )  # soft-update the first target critic
                update_target_weights(
                    self.critic_2, self.critic_target_2,
                    tau=self.tau
                )  # soft-update the second target critic
                cur_state = next_state
                episode_reward += reward
                steps += 1
                epoch += 1
    
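The update_target_weights helper called above is used as a module-level function and is defined earlier in this recipe. For convenience, here is a minimal sketch of what such a Polyak (soft) update typically looks like; it matches the call signature used above but is an illustrative version, not necessarily the recipe's exact code:

    def update_target_weights(model, target_model, tau=0.005):
        # Polyak averaging: target <- tau * online + (1 - tau) * target
        weights = model.get_weights()
        target_weights = target_model.get_weights()
        for i in range(len(target_weights)):
            target_weights[i] = weights[i] * tau \
                                + target_weights[i] * (1 - tau)
        target_model.set_weights(target_weights)

Using a small tau makes the target critics change slowly, which stabilizes the Q-value targets used during replay.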
  22. With the agent updates handled, we can now log some useful information to TensorBoard:

                # Tensorboard update
                with summary_writer.as_default():
                    if len(self.memory) > self.batch_size:
                        tf.summary.scalar(
                            "Loss/actor_loss", 
                             self.summaries["actor_loss"], 
                             step=epoch
                        )
                        tf.summary.scalar(
                            "Loss/q1_loss", 
                             self.summaries["q1_loss"], 
                             step=epoch
                        )
                        tf.summary.scalar(
                            "Loss/q2_loss", 
                           self.summaries["q2_loss"], 
                           step=epoch
                        )
                        if self.auto_alpha:
                            tf.summary.scalar(
                                "Loss/alpha_loss", 
                                self.summaries["alpha_loss"],
                                step=epoch
                            )
                    tf.summary.scalar("Stats/alpha", 
                                      self.alpha, step=epoch)
                    if self.auto_alpha:
                        tf.summary.scalar("Stats/log_alpha",
                                  self.log_alpha, step=epoch)
                    tf.summary.scalar("Stats/q_min", 
                        self.summaries["q_min"], step=epoch)
                    tf.summary.scalar("Stats/q_mean", 
                        self.summaries["q_mean"], step=epoch)
                    tf.summary.scalar("Main/step_reward", 
                                       reward, step=epoch)
                summary_writer.flush()
    
  23. As the final step of our train method implementation, we save the actor and critic models so that training can be resumed, or reloaded from a checkpoint, whenever needed:

            self.save_model(
                "sac_actor_final_episode{}.h5".format(episode),
                "sac_critic_final_episode{}.h5".format(episode),
            )
    
  24. Now, let's actually implement the save_model method referenced earlier:

        def save_model(self, a_fn, c_fn):
            self.actor.save(a_fn)
            self.critic_1.save(c_fn)
    
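Note that save_model persists the actor and only critic_1; the load_critic method in the next step then initializes both critics and both target networks from that single file, which is enough to resume training but discards critic_2's learned weights. If you would rather checkpoint both critics, a hypothetical variation (not part of the recipe) could look like this:

        def save_model_all(self, a_fn, c1_fn, c2_fn):
            # Hypothetical helper: also persist the second critic
            self.actor.save(a_fn)
            self.critic_1.save(c1_fn)
            self.critic_2.save(c2_fn)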
  25. Let's quickly implement methods that load the actor and critic states from saved models, so that we can resume or continue from a previously saved checkpoint when needed:

        def load_actor(self, a_fn):
            self.actor.load_weights(a_fn)
            print(self.actor.summary())
        def load_critic(self, c_fn):
            self.critic_1.load_weights(c_fn)
            self.critic_target_1.load_weights(c_fn)
            self.critic_2.load_weights(c_fn)
            self.critic_target_2.load_weights(c_fn)
            print(self.critic_1.summary())
    
  26. To run the SAC agent in "test" mode, we can implement a helper method:

        def test(self, render=True, fps=30, 
        filename="test_render.mp4"):
            cur_state, done, rewards = self.env.reset(), \
                                        False, 0
            video = imageio.get_writer(filename, fps=fps)
            while not done:
                action = self.act(cur_state, test=True)
                next_state, reward, done, _ = \
                                     self.env.step(action[0])
                cur_state = next_state
                rewards += reward
                if render:
                    video.append_data(
                        self.env.render(mode="rgb_array"))
            video.close()
            return rewards
    
  27. This completes our SAC agent implementation. We are now ready to train the SAC agent in CryptoTradingContinuousEnv:

    if __name__ == "__main__":
        gym_env = CryptoTradingContinuousEnv()
        sac = SAC(gym_env)
        # Load Actor and Critic from previously saved 
        # checkpoints
        # sac.load_actor("sac_actor_episodexyz.h5")
        # sac.load_critic("sac_critic_episodexyz.h5")
        sac.train(max_epochs=100000, random_epochs=10000, 
                  save_freq=50)
        reward = sac.test()
        print(reward)
    
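If you only want to evaluate a previously trained agent, a hypothetical test-only variant of the main block could look like the following. The checkpoint file names are placeholders for whichever episode numbers you actually saved, and writing the MP4 in test() relies on imageio, which typically needs the imageio-ffmpeg backend installed:

    if __name__ == "__main__":
        gym_env = CryptoTradingContinuousEnv()
        sac = SAC(gym_env)
        # Placeholder checkpoint names; substitute your saved episode numbers
        sac.load_actor("sac_actor_episode50.h5")
        sac.load_critic("sac_critic_episode50.h5")
        reward = sac.test(render=True, filename="sac_crypto_test.mp4")
        print("Test episode reward:", reward)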

How it works…

SAC is a powerful RL algorithm that has proven effective across a wide variety of simulated RL environments. Rather than optimizing only the expected per-episode reward, SAC also maximizes the entropy of the agent's policy, which encourages exploration and tends to make training more robust.
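For reference only (this equation does not appear in the recipe's code), the entropy-regularized objective that SAC maximizes is commonly written as:

    J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]

Here, α is the temperature that trades off reward against entropy; it is the same alpha that this agent tunes automatically when auto_alpha is enabled, and that you can watch under the Stats/alpha tag in TensorBoard.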

Because this recipe includes code that logs the agent's progress, you can follow training in TensorBoard. Launch it with the following command:

tensorboard --logdir=logs

The preceding command starts TensorBoard, which you can then open in your browser at the default address, http://localhost:6006. A TensorBoard screenshot is provided here for reference:

Figure 4.4 – TensorBoard screenshot showing the SAC agent's training progress in CryptoTradingContinuousEnv

That wraps up this chapter. Happy training!