TensorFlow MNIST RNN

import tensorflow as tf
import numpy as np
/anaconda3/envs/py35/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.6 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
  return f(*args, **kwds)
/anaconda3/envs/py35/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
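With one_hot=True each label arrives as a length-10 indicator vector (the digit 3 becomes [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]), and the loader splits the data into 55,000 training, 5,000 validation, and 10,000 test images. A quick check:

print(mnist.train.labels.shape)  # (55000, 10)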
learning_rate = 0.001
batch_size = 128

n_input = 28    # features per timestep: one 28-pixel image row
n_steps = 28    # timesteps per sequence: 28 rows per image
n_hidden = 64   # GRU hidden-state size
n_classes = 10  # digits 0-9
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
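The model treats each 28×28 image as a length-28 sequence: one image row per timestep, 28 pixel values per step. A quick shape check of that framing (dummy data, just to illustrate):

# Each flattened 784-pixel image becomes n_steps timesteps of n_input features.
demo = np.zeros((batch_size, 784), dtype=np.float32)  # stand-in for a batch
print(demo.reshape(batch_size, n_steps, n_input).shape)  # (128, 28, 28)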
output, _ = tf.nn.dynamic_rnn(
    tf.contrib.rnn.GRUCell(n_hidden),
    x,
    dtype=tf.float32,
    # every example runs the full 28 timesteps; this should be n_steps,
    # not n_input (they only happen to both equal 28 here)
    sequence_length=batch_size * [n_steps],
)
output.get_shape()
TensorShape([Dimension(None), Dimension(28), Dimension(64)])
# Pull out each sequence's last-timestep output: flatten to
# [batch_size * n_steps, n_hidden], then gather row i*n_steps + (n_steps - 1).
index = tf.range(0, batch_size) * n_steps + (n_steps - 1)
flat = tf.reshape(output, [-1, int(output.get_shape()[2])])
last = tf.gather(flat, index)
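The flatten-and-gather trick selects the hidden state after the last image row for each example. An equivalent and arguably clearer way to get the same tensor (a sketch; it relies on TF1's strided slicing accepting negative indices):

# Same tensor as `last` above: the RNN output at the final timestep.
last_alt = output[:, -1, :]  # shape [batch_size, n_hidden]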

num_classes = int(y.get_shape()[1])
weight = tf.Variable(tf.truncated_normal([n_hidden, num_classes], stddev=0.01))
bias = tf.Variable(tf.constant(0.1, shape=[num_classes]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
# Cross-entropy summed over the whole batch (not averaged).
cross_entropy = -tf.reduce_sum(y * tf.log(prediction))
optimizer = tf.train.AdamOptimizer(learning_rate, beta1=0.5)
# Clip each gradient's L2 norm to 5 before applying the update.
grads = optimizer.compute_gradients(cross_entropy)
for i, (g, v) in enumerate(grads):
    if g is not None:
        grads[i] = (tf.clip_by_norm(g, 5), v)
train_op = optimizer.apply_gradients(grads)

/anaconda3/envs/py35/lib/python3.5/site-packages/tensorflow/python/ops/gradients_impl.py:97: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "


WARNING:tensorflow:From /anaconda3/envs/py35/lib/python3.5/site-packages/tensorflow/python/ops/clip_ops.py:110: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
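As an aside, -tf.reduce_sum(y * tf.log(prediction)) can blow up if the softmax ever outputs an exact 0. A numerically safer formulation (a sketch, not what this notebook actually runs) feeds the raw logits to TensorFlow's fused op:

# Hedged alternative to the hand-rolled loss above: the fused op applies
# softmax and cross-entropy together in a numerically stable way.
logits = tf.matmul(last, weight) + bias
stable_loss = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))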
# A prediction counts as correct when the argmax of the softmax row matches
# the argmax of the one-hot label, e.g. [0.1, 0.7, 0.2, ...] vs
# [0, 1, 0, ...] both give argmax 1.
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for step in range(1300):
    batch_x, batch_y = mnist.train.next_batch(batch_size)
    # Reshape flat 784-pixel images into 28-step sequences of 28 features.
    batch_x = batch_x.reshape(batch_size, n_steps, n_input)
    sess.run(train_op, feed_dict={x: batch_x, y: batch_y})
    if step % 50 == 0:
        acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
        loss = sess.run(cross_entropy, feed_dict={x: batch_x, y: batch_y})
        print("Iter " + str(step) + ", Minibatch Loss = " + "{:.6f}".format(loss) +
              " Training Accuracy=" + "{:.6f}".format(acc))
print("Optimization Finished!")
Iter 0, Minibatch Loss = 294.231812 Training Accuracy=0.164062
Iter 50, Minibatch Loss = 219.154602 Training Accuracy=0.437500
Iter 100, Minibatch Loss = 158.273178 Training Accuracy=0.570312
Iter 150, Minibatch Loss = 134.672287 Training Accuracy=0.617188
Iter 200, Minibatch Loss = 89.897095 Training Accuracy=0.765625
Iter 250, Minibatch Loss = 68.426300 Training Accuracy=0.828125
Iter 300, Minibatch Loss = 53.529186 Training Accuracy=0.875000
Iter 350, Minibatch Loss = 47.197731 Training Accuracy=0.867188
Iter 400, Minibatch Loss = 43.396774 Training Accuracy=0.906250
Iter 450, Minibatch Loss = 41.338951 Training Accuracy=0.929688
Iter 500, Minibatch Loss = 40.015926 Training Accuracy=0.921875
Iter 550, Minibatch Loss = 33.649063 Training Accuracy=0.914062
Iter 600, Minibatch Loss = 33.231575 Training Accuracy=0.945312
Iter 650, Minibatch Loss = 32.738220 Training Accuracy=0.929688
Iter 700, Minibatch Loss = 32.566116 Training Accuracy=0.937500
Iter 750, Minibatch Loss = 23.169317 Training Accuracy=0.929688
Iter 800, Minibatch Loss = 31.869144 Training Accuracy=0.945312
Iter 850, Minibatch Loss = 21.843014 Training Accuracy=0.929688
Iter 900, Minibatch Loss = 25.252846 Training Accuracy=0.968750
Iter 950, Minibatch Loss = 31.075363 Training Accuracy=0.929688
Iter 1000, Minibatch Loss = 28.718439 Training Accuracy=0.953125
Iter 1050, Minibatch Loss = 24.713192 Training Accuracy=0.960938
Iter 1100, Minibatch Loss = 24.387371 Training Accuracy=0.945312
Iter 1150, Minibatch Loss = 20.600651 Training Accuracy=0.976562
Iter 1200, Minibatch Loss = 12.169809 Training Accuracy=0.984375
Iter 1250, Minibatch Loss = 33.456257 Training Accuracy=0.929688
Optimization Finished!
test_x = mnist.test.images
test_x = test_x.reshape((-1, n_steps, n_input))
test_y = mnist.test.labels
# The graph is hard-wired to batches of exactly batch_size (see
# sequence_length and index above), so evaluate on a 128-image slice.
acc = sess.run(accuracy, feed_dict={x: test_x[:128], y: test_y[:128]})
print(acc)
0.984375
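The 0.984375 above is accuracy on only the first 128 test images. Since sequence_length and index are both built from batch_size, the graph only accepts batches of exactly 128; a sketch for scoring (almost) the whole 10,000-image test set under that constraint:

# Hedged sketch: average accuracy over all full 128-image slices of the
# test set (the final partial batch of 16 images is dropped, because the
# graph only accepts batches of exactly batch_size).
accs = []
for start in range(0, len(test_x) - batch_size + 1, batch_size):
    feed = {x: test_x[start:start + batch_size],
            y: test_y[start:start + batch_size]}
    accs.append(sess.run(accuracy, feed_dict=feed))
print(np.mean(accs))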