Keras and the Last Number Problem#
Let’s see if we can do better than our simple single-hidden-layer network on the last number problem.
import numpy as np
import keras
from keras.utils import to_categorical
We’ll use the same data class as before:
class ModelDataCategorical:
    """this is the model data for our "last number" training set.  We
    produce input of length N, consisting of numbers 0-9 and store
    the result in a 10-element array as categorical data.
    """

    def __init__(self, N=10):
        self.N = N

        # our model input data
        self.x = np.random.randint(0, high=10, size=N)
        self.x_scaled = self.x / 10 + 0.05

        # our scaled model output data
        self.y = np.array([self.x[-1]])
        self.y_scaled = np.zeros(10) + 0.01
        self.y_scaled[self.x[-1]] = 0.99

    def interpret_result(self, out):
        """take the network output and return the number we predict"""
        return np.argmax(out)
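To see what one sample looks like, here’s a quick check of the scaling and encoding logic (a standalone sketch in plain NumPy, mirroring what the class does rather than using the class itself; the `default_rng` seed is just for reproducibility):

```python
import numpy as np

# Reproduce one sample the way ModelDataCategorical constructs it
rng = np.random.default_rng(0)
x = rng.integers(0, 10, size=10)
x_scaled = x / 10 + 0.05          # inputs land in [0.05, 0.95]

y_scaled = np.zeros(10) + 0.01
y_scaled[x[-1]] = 0.99            # the "hot" entry marks the last digit

# interpret_result() is just an argmax over the 10 categories,
# which recovers the last digit of the input
assert np.argmax(y_scaled) == x[-1]
```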
For Keras, we need to pack the scaled data (both input and output) into arrays. We’ll use the Keras to_categorical() function to convert the output into categorical (one-hot encoded) form.
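What to_categorical() does is simple one-hot encoding; the same effect can be sketched in plain NumPy (this is for illustration only, we’ll use the Keras version below):

```python
import numpy as np

# One-hot encode integer labels, as to_categorical(y, 10) would:
# row i of the result is all zeros except a 1 at position y[i]
y = np.array([1, 7, 3])
one_hot = np.eye(10)[y]
print(one_hot[0])   # index 1 is hot
```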
Let’s make both a training set and a test set
x_train = []
y_train = []
for _ in range(10000):
    m = ModelDataCategorical()
    x_train.append(m.x_scaled)
    y_train.append(m.y)

x_train = np.asarray(x_train)
y_train = to_categorical(y_train, 10)

x_test = []
y_test = []
for _ in range(1000):
    m = ModelDataCategorical()
    x_test.append(m.x_scaled)
    y_test.append(m.y)

x_test = np.asarray(x_test)
y_test = to_categorical(y_test, 10)
Check to make sure the data looks like we expect:
x_train[0]
array([0.55, 0.15, 0.65, 0.45, 0.35, 0.15, 0.75, 0.05, 0.35, 0.15])
y_train[0]
array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])
Creating the network#
Now let’s build our network. We’ll again use just a single hidden layer, but instead of the sigmoid used before, we’ll use ReLU for the hidden layer and softmax for the output layer.
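For reference, these two activations can be sketched in plain NumPy (Keras provides its own optimized implementations, so this is just to show what they compute):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: zero out negative inputs."""
    return np.maximum(0, x)

def softmax(x):
    """Normalize a vector into a probability distribution."""
    e = np.exp(x - np.max(x))   # shift by max for numerical stability
    return e / e.sum()

z = np.array([-1.0, 0.5, 2.0])
p = softmax(relu(z))
assert np.isclose(p.sum(), 1.0)   # softmax outputs sum to 1
```

The softmax output layer is what lets us read the 10 outputs as class probabilities, paired naturally with the categorical cross-entropy loss below.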
from keras.models import Sequential
from keras.layers import Input, Dense, Dropout, Activation
from keras.optimizers import RMSprop
model = Sequential()
model.add(Input((10,)))
model.add(Dense(100, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(10, activation="softmax"))
rms = RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=rms, metrics=['accuracy'])
model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 100)            │         1,100 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 100)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 10)             │         1,010 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 2,110 (8.24 KB)
Trainable params: 2,110 (8.24 KB)
Non-trainable params: 0 (0.00 B)
Now we have ~ 2k parameters to fit.
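The parameter counts in the summary are easy to verify by hand: each Dense layer has an inputs × outputs weight matrix plus one bias per output, and Dropout adds no parameters:

```python
# Parameter counts for our two Dense layers (weights + biases)
hidden = 10 * 100 + 100   # input -> hidden: 1,100 parameters
output = 100 * 10 + 10    # hidden -> output: 1,010 parameters
total = hidden + output
print(total)   # 2110, matching model.summary()
```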
Training#
Now we can train, validating against the test set after each epoch to see how we do:
epochs = 100
batch_size = 256
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(x_test, y_test), verbose=2)
Epoch 1/100
40/40 - 0s - 6ms/step - accuracy: 0.1517 - loss: 2.2674 - val_accuracy: 0.2230 - val_loss: 2.2156
Epoch 2/100
40/40 - 0s - 7ms/step - accuracy: 0.2446 - loss: 2.1625 - val_accuracy: 0.2420 - val_loss: 2.1060
Epoch 3/100
40/40 - 0s - 8ms/step - accuracy: 0.2623 - loss: 2.0577 - val_accuracy: 0.2690 - val_loss: 2.0033
Epoch 4/100
40/40 - 0s - 5ms/step - accuracy: 0.2847 - loss: 1.9567 - val_accuracy: 0.3070 - val_loss: 1.9022
Epoch 5/100
40/40 - 0s - 6ms/step - accuracy: 0.3092 - loss: 1.8608 - val_accuracy: 0.3350 - val_loss: 1.8089
Epoch 6/100
40/40 - 0s - 9ms/step - accuracy: 0.3392 - loss: 1.7737 - val_accuracy: 0.3890 - val_loss: 1.7205
Epoch 7/100
40/40 - 0s - 6ms/step - accuracy: 0.3702 - loss: 1.6964 - val_accuracy: 0.3770 - val_loss: 1.6483
Epoch 8/100
40/40 - 0s - 7ms/step - accuracy: 0.3970 - loss: 1.6254 - val_accuracy: 0.3960 - val_loss: 1.5868
Epoch 9/100
40/40 - 0s - 5ms/step - accuracy: 0.4247 - loss: 1.5634 - val_accuracy: 0.4680 - val_loss: 1.5245
Epoch 10/100
40/40 - 0s - 5ms/step - accuracy: 0.4441 - loss: 1.5073 - val_accuracy: 0.5020 - val_loss: 1.4690
Epoch 11/100
40/40 - 0s - 9ms/step - accuracy: 0.4610 - loss: 1.4536 - val_accuracy: 0.5210 - val_loss: 1.4100
Epoch 12/100
40/40 - 0s - 8ms/step - accuracy: 0.4840 - loss: 1.4034 - val_accuracy: 0.5810 - val_loss: 1.3628
Epoch 13/100
40/40 - 0s - 5ms/step - accuracy: 0.5115 - loss: 1.3530 - val_accuracy: 0.5950 - val_loss: 1.3139
Epoch 14/100
40/40 - 0s - 5ms/step - accuracy: 0.5285 - loss: 1.3100 - val_accuracy: 0.5830 - val_loss: 1.2770
Epoch 15/100
40/40 - 0s - 9ms/step - accuracy: 0.5622 - loss: 1.2654 - val_accuracy: 0.6220 - val_loss: 1.2390
Epoch 16/100
40/40 - 0s - 6ms/step - accuracy: 0.5744 - loss: 1.2306 - val_accuracy: 0.6140 - val_loss: 1.2048
Epoch 17/100
40/40 - 0s - 8ms/step - accuracy: 0.5961 - loss: 1.1936 - val_accuracy: 0.6980 - val_loss: 1.1626
Epoch 18/100
40/40 - 0s - 5ms/step - accuracy: 0.6108 - loss: 1.1608 - val_accuracy: 0.5880 - val_loss: 1.1349
Epoch 19/100
40/40 - 0s - 5ms/step - accuracy: 0.6430 - loss: 1.1245 - val_accuracy: 0.7100 - val_loss: 1.0963
Epoch 20/100
40/40 - 0s - 10ms/step - accuracy: 0.6537 - loss: 1.0932 - val_accuracy: 0.5590 - val_loss: 1.0762
Epoch 21/100
40/40 - 0s - 6ms/step - accuracy: 0.6591 - loss: 1.0691 - val_accuracy: 0.6710 - val_loss: 1.0487
Epoch 22/100
40/40 - 0s - 7ms/step - accuracy: 0.6918 - loss: 1.0383 - val_accuracy: 0.7000 - val_loss: 1.0185
Epoch 23/100
40/40 - 0s - 5ms/step - accuracy: 0.6960 - loss: 1.0133 - val_accuracy: 0.6940 - val_loss: 0.9908
Epoch 24/100
40/40 - 0s - 5ms/step - accuracy: 0.7064 - loss: 0.9870 - val_accuracy: 0.7980 - val_loss: 0.9598
Epoch 25/100
40/40 - 0s - 10ms/step - accuracy: 0.7344 - loss: 0.9623 - val_accuracy: 0.7100 - val_loss: 0.9410
Epoch 26/100
40/40 - 0s - 8ms/step - accuracy: 0.7437 - loss: 0.9381 - val_accuracy: 0.8170 - val_loss: 0.9124
Epoch 27/100
40/40 - 0s - 5ms/step - accuracy: 0.7630 - loss: 0.9161 - val_accuracy: 0.8460 - val_loss: 0.8843
Epoch 28/100
40/40 - 0s - 5ms/step - accuracy: 0.7806 - loss: 0.8938 - val_accuracy: 0.8300 - val_loss: 0.8632
Epoch 29/100
40/40 - 0s - 10ms/step - accuracy: 0.7947 - loss: 0.8681 - val_accuracy: 0.8170 - val_loss: 0.8485
Epoch 30/100
40/40 - 0s - 6ms/step - accuracy: 0.8035 - loss: 0.8458 - val_accuracy: 0.8730 - val_loss: 0.8169
Epoch 31/100
40/40 - 0s - 8ms/step - accuracy: 0.8202 - loss: 0.8228 - val_accuracy: 0.8840 - val_loss: 0.7936
Epoch 32/100
40/40 - 0s - 5ms/step - accuracy: 0.8367 - loss: 0.8040 - val_accuracy: 0.9100 - val_loss: 0.7710
Epoch 33/100
40/40 - 0s - 5ms/step - accuracy: 0.8455 - loss: 0.7795 - val_accuracy: 0.8830 - val_loss: 0.7591
Epoch 34/100
40/40 - 0s - 10ms/step - accuracy: 0.8600 - loss: 0.7586 - val_accuracy: 0.9130 - val_loss: 0.7387
Epoch 35/100
40/40 - 0s - 7ms/step - accuracy: 0.8686 - loss: 0.7407 - val_accuracy: 0.9020 - val_loss: 0.7134
Epoch 36/100
40/40 - 0s - 7ms/step - accuracy: 0.8793 - loss: 0.7189 - val_accuracy: 0.9620 - val_loss: 0.6935
Epoch 37/100
40/40 - 0s - 5ms/step - accuracy: 0.8914 - loss: 0.6985 - val_accuracy: 0.9280 - val_loss: 0.6715
Epoch 38/100
40/40 - 0s - 10ms/step - accuracy: 0.9011 - loss: 0.6799 - val_accuracy: 0.9500 - val_loss: 0.6597
Epoch 39/100
40/40 - 0s - 6ms/step - accuracy: 0.9069 - loss: 0.6623 - val_accuracy: 0.9660 - val_loss: 0.6328
Epoch 40/100
40/40 - 0s - 8ms/step - accuracy: 0.9191 - loss: 0.6452 - val_accuracy: 0.9710 - val_loss: 0.6143
Epoch 41/100
40/40 - 0s - 5ms/step - accuracy: 0.9231 - loss: 0.6274 - val_accuracy: 0.9310 - val_loss: 0.6075
Epoch 42/100
40/40 - 0s - 5ms/step - accuracy: 0.9309 - loss: 0.6073 - val_accuracy: 0.9890 - val_loss: 0.5817
Epoch 43/100
40/40 - 0s - 10ms/step - accuracy: 0.9412 - loss: 0.5915 - val_accuracy: 0.9730 - val_loss: 0.5727
Epoch 44/100
40/40 - 0s - 6ms/step - accuracy: 0.9489 - loss: 0.5731 - val_accuracy: 0.9900 - val_loss: 0.5432
Epoch 45/100
40/40 - 0s - 7ms/step - accuracy: 0.9511 - loss: 0.5585 - val_accuracy: 0.9940 - val_loss: 0.5329
Epoch 46/100
40/40 - 0s - 5ms/step - accuracy: 0.9564 - loss: 0.5410 - val_accuracy: 0.9810 - val_loss: 0.5148
Epoch 47/100
40/40 - 0s - 5ms/step - accuracy: 0.9619 - loss: 0.5246 - val_accuracy: 0.9940 - val_loss: 0.4967
Epoch 48/100
40/40 - 0s - 10ms/step - accuracy: 0.9668 - loss: 0.5093 - val_accuracy: 0.9960 - val_loss: 0.4807
Epoch 49/100
40/40 - 0s - 8ms/step - accuracy: 0.9701 - loss: 0.4958 - val_accuracy: 0.9970 - val_loss: 0.4687
Epoch 50/100
40/40 - 0s - 5ms/step - accuracy: 0.9732 - loss: 0.4809 - val_accuracy: 0.9980 - val_loss: 0.4477
Epoch 51/100
40/40 - 0s - 5ms/step - accuracy: 0.9754 - loss: 0.4688 - val_accuracy: 0.9980 - val_loss: 0.4330
Epoch 52/100
40/40 - 0s - 10ms/step - accuracy: 0.9806 - loss: 0.4505 - val_accuracy: 0.9990 - val_loss: 0.4189
Epoch 53/100
40/40 - 0s - 6ms/step - accuracy: 0.9811 - loss: 0.4382 - val_accuracy: 0.9990 - val_loss: 0.4103
Epoch 54/100
40/40 - 0s - 8ms/step - accuracy: 0.9838 - loss: 0.4230 - val_accuracy: 0.9980 - val_loss: 0.3908
Epoch 55/100
40/40 - 0s - 5ms/step - accuracy: 0.9854 - loss: 0.4105 - val_accuracy: 0.9990 - val_loss: 0.3812
Epoch 56/100
40/40 - 0s - 5ms/step - accuracy: 0.9877 - loss: 0.3968 - val_accuracy: 1.0000 - val_loss: 0.3657
Epoch 57/100
40/40 - 0s - 10ms/step - accuracy: 0.9866 - loss: 0.3845 - val_accuracy: 0.9980 - val_loss: 0.3545
Epoch 58/100
40/40 - 0s - 6ms/step - accuracy: 0.9886 - loss: 0.3738 - val_accuracy: 1.0000 - val_loss: 0.3423
Epoch 59/100
40/40 - 0s - 7ms/step - accuracy: 0.9898 - loss: 0.3629 - val_accuracy: 1.0000 - val_loss: 0.3402
Epoch 60/100
40/40 - 0s - 5ms/step - accuracy: 0.9887 - loss: 0.3513 - val_accuracy: 1.0000 - val_loss: 0.3264
Epoch 61/100
40/40 - 0s - 6ms/step - accuracy: 0.9922 - loss: 0.3391 - val_accuracy: 1.0000 - val_loss: 0.3090
Epoch 62/100
40/40 - 0s - 11ms/step - accuracy: 0.9908 - loss: 0.3284 - val_accuracy: 1.0000 - val_loss: 0.3057
Epoch 63/100
40/40 - 0s - 8ms/step - accuracy: 0.9935 - loss: 0.3184 - val_accuracy: 1.0000 - val_loss: 0.2906
Epoch 64/100
40/40 - 0s - 5ms/step - accuracy: 0.9927 - loss: 0.3073 - val_accuracy: 1.0000 - val_loss: 0.2766
Epoch 65/100
40/40 - 0s - 5ms/step - accuracy: 0.9937 - loss: 0.2984 - val_accuracy: 1.0000 - val_loss: 0.2674
Epoch 66/100
40/40 - 0s - 11ms/step - accuracy: 0.9947 - loss: 0.2881 - val_accuracy: 1.0000 - val_loss: 0.2669
Epoch 67/100
40/40 - 0s - 6ms/step - accuracy: 0.9938 - loss: 0.2782 - val_accuracy: 1.0000 - val_loss: 0.2488
Epoch 68/100
40/40 - 0s - 8ms/step - accuracy: 0.9948 - loss: 0.2699 - val_accuracy: 1.0000 - val_loss: 0.2475
Epoch 69/100
40/40 - 0s - 5ms/step - accuracy: 0.9956 - loss: 0.2610 - val_accuracy: 1.0000 - val_loss: 0.2340
Epoch 70/100
40/40 - 0s - 5ms/step - accuracy: 0.9971 - loss: 0.2509 - val_accuracy: 1.0000 - val_loss: 0.2276
Epoch 71/100
40/40 - 0s - 11ms/step - accuracy: 0.9958 - loss: 0.2433 - val_accuracy: 1.0000 - val_loss: 0.2171
Epoch 72/100
40/40 - 0s - 7ms/step - accuracy: 0.9963 - loss: 0.2342 - val_accuracy: 1.0000 - val_loss: 0.2109
Epoch 73/100
40/40 - 0s - 7ms/step - accuracy: 0.9967 - loss: 0.2277 - val_accuracy: 1.0000 - val_loss: 0.2041
Epoch 74/100
40/40 - 0s - 5ms/step - accuracy: 0.9977 - loss: 0.2181 - val_accuracy: 1.0000 - val_loss: 0.1905
Epoch 75/100
40/40 - 0s - 5ms/step - accuracy: 0.9971 - loss: 0.2115 - val_accuracy: 1.0000 - val_loss: 0.1863
Epoch 76/100
40/40 - 0s - 11ms/step - accuracy: 0.9978 - loss: 0.2035 - val_accuracy: 1.0000 - val_loss: 0.1873
Epoch 77/100
40/40 - 0s - 8ms/step - accuracy: 0.9982 - loss: 0.1966 - val_accuracy: 1.0000 - val_loss: 0.1773
Epoch 78/100
40/40 - 0s - 5ms/step - accuracy: 0.9983 - loss: 0.1895 - val_accuracy: 1.0000 - val_loss: 0.1608
Epoch 79/100
40/40 - 0s - 5ms/step - accuracy: 0.9978 - loss: 0.1821 - val_accuracy: 1.0000 - val_loss: 0.1594
Epoch 80/100
40/40 - 0s - 11ms/step - accuracy: 0.9981 - loss: 0.1772 - val_accuracy: 1.0000 - val_loss: 0.1495
Epoch 81/100
40/40 - 0s - 6ms/step - accuracy: 0.9985 - loss: 0.1709 - val_accuracy: 1.0000 - val_loss: 0.1450
Epoch 82/100
40/40 - 0s - 8ms/step - accuracy: 0.9982 - loss: 0.1644 - val_accuracy: 1.0000 - val_loss: 0.1368
Epoch 83/100
40/40 - 0s - 5ms/step - accuracy: 0.9991 - loss: 0.1576 - val_accuracy: 1.0000 - val_loss: 0.1357
Epoch 84/100
40/40 - 0s - 5ms/step - accuracy: 0.9983 - loss: 0.1523 - val_accuracy: 1.0000 - val_loss: 0.1264
Epoch 85/100
40/40 - 0s - 10ms/step - accuracy: 0.9986 - loss: 0.1459 - val_accuracy: 1.0000 - val_loss: 0.1247
Epoch 86/100
40/40 - 0s - 7ms/step - accuracy: 0.9992 - loss: 0.1409 - val_accuracy: 1.0000 - val_loss: 0.1215
Epoch 87/100
40/40 - 0s - 7ms/step - accuracy: 0.9987 - loss: 0.1364 - val_accuracy: 1.0000 - val_loss: 0.1136
Epoch 88/100
40/40 - 0s - 5ms/step - accuracy: 0.9988 - loss: 0.1306 - val_accuracy: 1.0000 - val_loss: 0.1084
Epoch 89/100
40/40 - 0s - 5ms/step - accuracy: 0.9985 - loss: 0.1261 - val_accuracy: 1.0000 - val_loss: 0.1044
Epoch 90/100
40/40 - 0s - 12ms/step - accuracy: 0.9993 - loss: 0.1218 - val_accuracy: 1.0000 - val_loss: 0.0978
Epoch 91/100
40/40 - 0s - 8ms/step - accuracy: 0.9990 - loss: 0.1178 - val_accuracy: 1.0000 - val_loss: 0.0953
Epoch 92/100
40/40 - 0s - 5ms/step - accuracy: 0.9990 - loss: 0.1132 - val_accuracy: 1.0000 - val_loss: 0.0901
Epoch 93/100
40/40 - 0s - 5ms/step - accuracy: 0.9993 - loss: 0.1091 - val_accuracy: 1.0000 - val_loss: 0.0868
Epoch 94/100
40/40 - 0s - 10ms/step - accuracy: 0.9994 - loss: 0.1053 - val_accuracy: 1.0000 - val_loss: 0.0868
Epoch 95/100
40/40 - 0s - 6ms/step - accuracy: 0.9992 - loss: 0.1014 - val_accuracy: 1.0000 - val_loss: 0.0788
Epoch 96/100
40/40 - 0s - 7ms/step - accuracy: 0.9991 - loss: 0.0979 - val_accuracy: 1.0000 - val_loss: 0.0757
Epoch 97/100
40/40 - 0s - 5ms/step - accuracy: 0.9996 - loss: 0.0945 - val_accuracy: 1.0000 - val_loss: 0.0712
Epoch 98/100
40/40 - 0s - 5ms/step - accuracy: 0.9992 - loss: 0.0901 - val_accuracy: 1.0000 - val_loss: 0.0730
Epoch 99/100
40/40 - 0s - 11ms/step - accuracy: 0.9995 - loss: 0.0874 - val_accuracy: 1.0000 - val_loss: 0.0647
Epoch 100/100
40/40 - 0s - 7ms/step - accuracy: 0.9996 - loss: 0.0833 - val_accuracy: 1.0000 - val_loss: 0.0637
<keras.src.callbacks.history.History at 0x7fc67107f770>
As we see, the validation accuracy reaches 100%: the network now predicts the last number essentially perfectly.