9.6. Neural network classifier demonstration¶
Last revised: 15-Oct-2019 by Christian Forssén [christian.forssen@chalmers.se]
%matplotlib inline
import numpy as np
import scipy as sp
from scipy.stats import multivariate_normal
import matplotlib.pyplot as plt
# Not really needed, but nicer plots
import seaborn as sns
sns.set_style("darkgrid")
sns.set_context("talk")
Developing a code for neural networks with back propagation¶
One can identify a set of key steps when using neural networks to solve supervised learning problems (a schematic sketch follows the list):
Collect and pre-process data
Define model and architecture
Choose cost function and optimizer
Train the model
Evaluate model performance on test data
Adjust hyperparameters (if necessary, network architecture)
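Below is a minimal, schematic sketch of these steps for a generic Keras workflow. All data, layer sizes, and hyperparameters in this sketch are placeholders (a random toy dataset), not part of the MNIST example that follows.
import numpy as np
import tensorflow as tf

# 1. Collect and pre-process data (random toy data as a stand-in)
x = np.random.rand(1000, 20).astype(np.float32)
y = np.random.randint(0, 2, size=1000)
x_tr, x_te, y_tr, y_te = x[:800], x[800:], y[:800], y[800:]

# 2. Define model and architecture
toy_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(2, activation='softmax')
])

# 3. Choose cost function and optimizer
toy_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

# 4. Train the model
toy_model.fit(x_tr, y_tr, epochs=2, verbose=0)

# 5. Evaluate model performance on test data
toy_model.evaluate(x_te, y_te, verbose=0)

# 6. Adjust hyperparameters (learning rate, layer sizes, ...) and repeat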
Introduction to TensorFlow¶
This short introduction uses Keras to:
Build a neural network that classifies images.
Train this neural network.
And, finally, evaluate the accuracy of the model.
See https://www.tensorflow.org/tutorials/quickstart/beginner for more details
See also the TensorFlow classification tutorial
# Install TensorFlow by updating the conda environment
import tensorflow as tf
print(tf.__version__)
2.7.0
Load the MNIST dataset; the samples will be converted from integers to floating-point numbers below:
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 2s 0us/step
The images are 28x28 NumPy arrays, with pixel values ranging from 0 to 255. The labels are an array of integers, ranging from 0 to 9.
Explore the data¶
# The shape of the training data
x_train.shape
(60000, 28, 28)
# Each training label is an integer
y_train
array([5, 0, 4, ..., 5, 6, 8], dtype=uint8)
plt.figure()
plt.imshow(x_train[0])
plt.colorbar()
plt.grid(False)
Scale these values to a range of 0 to 1 before feeding them to the neural network model. To do so, divide the values by 255. It’s important that the training set and the testing set be preprocessed in the same way:
x_train, x_test = x_train / 255.0, x_test / 255.0
To verify that the data is in the correct format and that you’re ready to build and train the network, let’s display the first 25 images from the training set with the class label below each image.
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(x_train[i], cmap=plt.cm.binary)
    plt.xlabel(str(y_train[i]))
Build the network¶
The basic building block of a neural network is the layer. Layers extract representations from the data fed into them. Hopefully, these representations are meaningful for the problem at hand.
Most of deep learning consists of chaining together simple layers. Most layers, such as tf.keras.layers.Dense, have parameters that are learned during training.
Build the tf.keras.Sequential model by stacking layers. Choose an optimizer and loss function for training:
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])
The first layer in this network, tf.keras.layers.Flatten, transforms the format of the images from a two-dimensional array (of 28 by 28 pixels) to a one-dimensional array (of 28 * 28 = 784 pixels). Think of this layer as unstacking rows of pixels in the image and lining them up. This layer has no parameters to learn; it only reformats the data.
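As a quick check (a sketch assuming x_train has been loaded as above), a Flatten layer can be applied directly to a batch of images:
# Sketch: Flatten maps each 28x28 image to a length-784 vector
tf.keras.layers.Flatten()(x_train[:1]).shape   # expected: (1, 784)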
After the pixels are flattened, the network consists of two tf.keras.layers.Dense layers. These are densely connected, or fully connected, neural layers. The first Dense layer has 128 nodes (or neurons). The second (and last) layer is a 10-node softmax layer that returns an array of 10 probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the 10 classes.
In between the Dense layers is a tf.keras.layers.Dropout layer. Dropout consists of randomly setting a fraction (the rate, here 0.2) of the input units to 0 at each update during training, which helps prevent overfitting.
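To see how many parameters each layer contributes, inspect the model summary (assuming the model defined above). Only the Dense layers carry trainable parameters: 784*128 + 128 = 100,480 and 128*10 + 10 = 1,290 weights and biases; Flatten and Dropout have none.
# Layer output shapes and trainable parameter counts
model.summary()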
Before the model is ready for training, it needs a few more settings. These are added during the model’s compile step:
Loss function — This measures how accurate the model is during training. You want to minimize this function to “steer” the model in the right direction (a small numerical illustration follows the compile step below).
Optimizer — This is how the model is updated based on the data it sees and its loss function.
Metrics — Used to monitor the training and testing steps. The following example uses accuracy, the fraction of the images that are correctly classified.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
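As an illustration of what the sparse categorical cross-entropy loss measures (a sketch with made-up probabilities, not actual model output): it is the negative logarithm of the probability the model assigns to the true class.
# Hypothetical predicted probabilities for a single 3-class example
probs = np.array([[0.05, 0.90, 0.05]])
label = np.array([1])   # true class index
# The loss is -log(p_true) = -log(0.90) ≈ 0.105
tf.keras.losses.sparse_categorical_crossentropy(label, probs)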
Train and evaluate the model:¶
model.fit(x_train, y_train, epochs=5)
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 5s 81us/sample - loss: 0.2908 - accuracy: 0.9158
Epoch 2/5
60000/60000 [==============================] - 4s 67us/sample - loss: 0.1435 - accuracy: 0.9584
Epoch 3/5
60000/60000 [==============================] - 4s 70us/sample - loss: 0.1077 - accuracy: 0.9678
Epoch 4/5
60000/60000 [==============================] - 4s 70us/sample - loss: 0.0871 - accuracy: 0.9731
Epoch 5/5
60000/60000 [==============================] - 4s 70us/sample - loss: 0.0731 - accuracy: 0.9770
<tensorflow.python.keras.callbacks.History at 0x14a79a0f0>
Evaluate accuracy¶
Next, compare how the model performs on the test dataset:
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print('\nTest accuracy:', test_acc)
10000/1 - 0s - loss: 0.0386 - accuracy: 0.9807
Test accuracy: 0.9807
Make predictions¶
With the model trained, you can use it to make predictions about some images.
predictions = model.predict(x_test)
# Let's look at the prediction for the first test image
predictions[0]
array([3.3185064e-08, 7.0753789e-09, 1.5241905e-06, 2.6125135e-05,
       2.3849409e-10, 2.8495455e-08, 2.2678712e-13, 9.9997008e-01,
       1.9005751e-07, 2.0423677e-06], dtype=float32)
# Check the normalization of the output probabilities
np.sum(predictions[0])
1.0
# Which prob is largest?
np.argmax(predictions[0])
7
# Examining the test label shows that this classification is correct:
y_test[0]
7
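As a cross-check (a sketch using the predictions computed above), the test accuracy can also be obtained directly from the predicted class labels; it should agree with the model.evaluate result above.
# Accuracy from the argmax of the predicted probabilities
predicted_labels = np.argmax(predictions, axis=1)
np.mean(predicted_labels == y_test)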
# Some helper functions for nice plotting
def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img, cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(str(predicted_label),
                                         100*np.max(predictions_array),
                                         true_label),
               color=color)
def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')
# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], y_test, x_test)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], y_test)
plt.tight_layout()
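It is also instructive to look at the images the network gets wrong. The following sketch uses the arrays and helper functions defined above (the number and indices of misclassified images depend on the training run).
# Find misclassified test images and inspect the first one
predicted_labels = np.argmax(predictions, axis=1)
wrong = np.where(predicted_labels != y_test)[0]
print(f"{len(wrong)} of {len(y_test)} test images misclassified")
if len(wrong) > 0:
    i = wrong[0]
    plt.figure(figsize=(8, 4))
    plt.subplot(1, 2, 1)
    plot_image(i, predictions[i], y_test, x_test)
    plt.subplot(1, 2, 2)
    plot_value_array(i, predictions[i], y_test)
    plt.tight_layout()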