TensorFlow Data Visualization

1. Create summary ops. The first step is to annotate the graph nodes you want to record. The most common summary ops are tf.summary.scalar and tf.summary.histogram. For example, to record the cross-entropy:

tf.summary.scalar('cross_entropy', cross_entropy)
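
tf.summary.histogram works the same way for recording distributions, for example a layer's weights. A minimal sketch; `weights` here is a hypothetical tensor from your model, not from the MNIST snippet:

tf.summary.histogram('weights', weights)  # 'weights' is an illustrative variable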

2. Merge the summaries. Call tf.summary.merge_all() to combine all collected summaries into a single op:

merged = tf.summary.merge_all()
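
If you only want a subset of the summaries, tf.summary.merge takes an explicit list of summary ops instead of collecting everything. A sketch using the cross-entropy op from step 1:

loss_summary = tf.summary.scalar('cross_entropy', cross_entropy)
merged = tf.summary.merge([loss_summary])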

3. Create a writer object

Call tf.summary.FileWriter to write the summary data to disk.

The FileWriter constructor takes a logdir parameter. This logdir is important: every event is written to the directory it points to. FileWriter also accepts an optional graph parameter; if you pass one, TensorBoard will display your graph, making it easier to see how data flows through it.

train_writer = tf.summary.FileWriter(FLAGS.log_dir + '/train', sess.graph)
test_writer = tf.summary.FileWriter(FLAGS.log_dir + '/test')
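
The graph can also be attached after construction: FileWriter exposes an add_graph method, so the following is equivalent to passing sess.graph in the constructor:

test_writer.add_graph(sess.graph)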

4. Run. This works the same as a normal training loop; for graphs with placeholders, pass the feed_dict argument:

summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
train_writer.add_summary(summary, i)
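
For the evaluation side you run merged together with the metric and write to the test writer. A sketch, assuming the accuracy op and the feed_dict(False) helper from the same MNIST example:

summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
test_writer.add_summary(summary, i)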

5. Launch TensorBoard. On the command line, run tensorboard --logdir=E:/tmp/mnist/logs/mnist_with_summaries
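
TensorBoard then serves a local web UI, by default at http://localhost:6006; open it in a browser to see the SCALARS, HISTOGRAMS, and GRAPHS tabs.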

The complete code is as follows:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2019/4/23 11:17
# @Author  : Seven
# @Site    :
# @File    : demo.py
# @Software: PyCharm

import numpy as np
import tensorflow as tf

n_observations = 100
xs = np.linspace(-3, 3, n_observations)
ys = 0.8*xs + 0.1 + np.random.uniform(-0.5, 0.5, n_observations)

X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')

W = tf.Variable(tf.random_normal([1]), name='weight')
tf.summary.histogram('weight', W)
b = tf.Variable(tf.random_normal([1]), name='bias')
tf.summary.histogram('bias', b)


Y_pred = tf.add(tf.multiply(X, W), b)

loss = tf.square(Y - Y_pred, name='loss')  # shape [1], since W and b have shape [1]
tf.summary.scalar('loss', tf.reshape(loss, []))  # scalar summaries need a rank-0 tensor

learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

n_samples = xs.shape[0]
init = tf.global_variables_initializer()
with tf.Session() as sess:
    # Remember to initialize all variables
    sess.run(init)
    merged = tf.summary.merge_all()
    log_writer = tf.summary.FileWriter("./logs/linear_regression", sess.graph)

    # Train the model
    for i in range(50):
        total_loss = 0
        for x, y in zip(xs, ys):
            # Feed the data in via feed_dict
            _, loss_value, merged_summary = sess.run([optimizer, loss, merged], feed_dict={X: x, Y: y})
            total_loss += loss_value

        if i % 5 == 0:
            print('Epoch {0}: {1}'.format(i, total_loss / n_samples))
            log_writer.add_summary(merged_summary, i)

    # Close the writer
    log_writer.close()

    # Fetch the learned values of W and b (rebinding the Python names to numpy arrays)
    W, b = sess.run([W, b])

print(W, b)
print("W:"+str(W[0]))
print("b:"+str(b[0]))

Output:

Epoch 0: [1.3353578]
Epoch 5: [0.08880837]
Epoch 10: [0.08880833]
Epoch 15: [0.08880833]
Epoch 20: [0.08880833]
Epoch 25: [0.08880833]
Epoch 30: [0.08880833]
Epoch 35: [0.08880833]
Epoch 40: [0.08880833]
Epoch 45: [0.08880833]
[0.8415964] [0.09745394]
W:0.8415964
b:0.097453944

TensorBoard

Run the following in a terminal:

tensorboard --logdir log   (replace log with the directory where your event files were saved)
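
For the script above, that directory is the one passed to FileWriter:

tensorboard --logdir=./logs/linear_regression

TensorBoard also treats subdirectories of the logdir as separate runs, which is how the /train and /test writers from the MNIST snippet show up as two comparable curves.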