# Keras and the Last Number Problem
Let’s see if we can do better than our simple hidden-layer network on the last number problem.
```python
import numpy as np
import keras
from keras.utils import to_categorical
```
We’ll use the same data class as before:
```python
class ModelDataCategorical:
    """this is the model data for our "last number" training set.  We
    produce input of length N, consisting of numbers 0-9 and store
    the result in a 10-element array as categorical data.
    """

    def __init__(self, N=10):
        self.N = N

        # our model input data
        self.x = np.random.randint(0, high=10, size=N)
        self.x_scaled = self.x / 10 + 0.05

        # our scaled model output data
        self.y = np.array([self.x[-1]])
        self.y_scaled = np.zeros(10) + 0.01
        self.y_scaled[self.x[-1]] = 0.99

    def interpret_result(self, out):
        """take the network output and return the number we predict"""
        return np.argmax(out)
```
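As a quick illustration (a minimal sketch, just printing one instance of the class above), each object holds a random sequence, its scaled version, and the categorical target:

```python
m = ModelDataCategorical()
print(m.x)         # the raw sequence; the last entry is the answer
print(m.x_scaled)  # inputs scaled into [0.05, 0.95]
print(m.y_scaled)  # 10 elements: 0.99 at the last digit, 0.01 elsewhere
print(m.interpret_result(m.y_scaled))  # recovers the last digit
```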
For Keras, we need to pack the scaled data (both input and output) into arrays. We’ll use the Keras `to_categorical()` function to convert the integer labels into one-hot categorical arrays.
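For instance (a hypothetical label value, just to show the encoding), `to_categorical()` maps an integer class label to a one-hot row:

```python
to_categorical([3], 10)
# array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]])
```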
Let’s make both a training set and a test set:
```python
x_train = []
y_train = []
for _ in range(10000):
    m = ModelDataCategorical()
    x_train.append(m.x_scaled)
    y_train.append(m.y)

x_train = np.asarray(x_train)
y_train = to_categorical(y_train, 10)
```
```python
x_test = []
y_test = []
for _ in range(1000):
    m = ModelDataCategorical()
    x_test.append(m.x_scaled)
    y_test.append(m.y)

x_test = np.asarray(x_test)
y_test = to_categorical(y_test, 10)
```
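It’s also worth confirming the shapes: each row of `x_train` is one 10-number sequence, and each row of `y_train` is a 10-element one-hot label.

```python
print(x_train.shape, y_train.shape)  # (10000, 10) (10000, 10)
print(x_test.shape, y_test.shape)    # (1000, 10) (1000, 10)
```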
Check to make sure the data looks like we expect:
```python
x_train[0]
```

```
array([0.55, 0.45, 0.65, 0.45, 0.95, 0.25, 0.85, 0.55, 0.85, 0.35])
```
```python
y_train[0]
```

```
array([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
```
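We can cross-check the two: undoing the input scaling (`x_scaled = x / 10 + 0.05`) on the last element should give the same digit that the one-hot label encodes.

```python
# invert the scaling on the last input element and compare to the label
last_digit = int(round((x_train[0][-1] - 0.05) * 10))
print(last_digit, np.argmax(y_train[0]))  # both give 3 for the sample above
```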
## Creating the network
Now let’s build our network. We’ll use just a single hidden layer, but instead of the sigmoid used before, we’ll use a ReLU activation on the hidden layer and a softmax on the output layer.
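For reference, these are the standard definitions of the two activations: ReLU acts elementwise on the hidden layer, while softmax normalizes the 10 outputs into probabilities that sum to 1.

$$
\mathrm{ReLU}(x) = \max(0, x), \qquad
\mathrm{softmax}(z)_k = \frac{e^{z_k}}{\sum_{j} e^{z_j}}
$$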
```python
from keras.models import Sequential
from keras.layers import Input, Dense, Dropout, Activation
from keras.optimizers import RMSprop

model = Sequential()
model.add(Input((10,)))
model.add(Dense(100, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(10, activation="softmax"))
```
```python
rms = RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=rms, metrics=['accuracy'])
model.summary()
```
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓ ┃ Layer (type) ┃ Output Shape ┃ Param # ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩ │ dense (Dense) │ (None, 100) │ 1,100 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dropout (Dropout) │ (None, 100) │ 0 │ ├─────────────────────────────────┼────────────────────────┼───────────────┤ │ dense_1 (Dense) │ (None, 10) │ 1,010 │ └─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 2,110 (8.24 KB)
Trainable params: 2,110 (8.24 KB)
Non-trainable params: 0 (0.00 B)
Now we have ~2k parameters to fit: the hidden layer carries 10 × 100 weights plus 100 biases (1,100), and the output layer 100 × 10 weights plus 10 biases (1,010).
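The categorical cross-entropy loss we compiled with compares the one-hot target $y$ against the softmax output $\hat{y}$; for our 10 classes it is

$$
\mathcal{L} = -\sum_{k=0}^{9} y_k \log \hat{y}_k
$$

Since $y$ is one-hot, only the term for the true digit survives, so the loss is just $-\log \hat{y}_{\rm true}$.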
## Training
Now we can train, validating against the test set each epoch to see how we do:
```python
epochs = 100
batch_size = 256
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(x_test, y_test), verbose=2)
```

```
Epoch 1/100
40/40 - 1s - 17ms/step - accuracy: 0.1315 - loss: 2.2887 - val_accuracy: 0.1980 - val_loss: 2.2504
Epoch 2/100
40/40 - 0s - 3ms/step - accuracy: 0.2110 - loss: 2.2150 - val_accuracy: 0.2490 - val_loss: 2.1667
Epoch 3/100
40/40 - 0s - 3ms/step - accuracy: 0.2509 - loss: 2.1225 - val_accuracy: 0.2730 - val_loss: 2.0637
Epoch 4/100
40/40 - 0s - 3ms/step - accuracy: 0.2768 - loss: 2.0209 - val_accuracy: 0.2680 - val_loss: 1.9587
Epoch 5/100
40/40 - 0s - 3ms/step - accuracy: 0.2978 - loss: 1.9208 - val_accuracy: 0.3200 - val_loss: 1.8569
Epoch 6/100
40/40 - 0s - 3ms/step - accuracy: 0.3268 - loss: 1.8293 - val_accuracy: 0.3860 - val_loss: 1.7715
Epoch 7/100
40/40 - 0s - 3ms/step - accuracy: 0.3564 - loss: 1.7450 - val_accuracy: 0.4040 - val_loss: 1.6872
Epoch 8/100
40/40 - 0s - 3ms/step - accuracy: 0.3901 - loss: 1.6678 - val_accuracy: 0.4520 - val_loss: 1.6168
Epoch 9/100
40/40 - 0s - 3ms/step - accuracy: 0.4212 - loss: 1.5990 - val_accuracy: 0.4810 - val_loss: 1.5470
Epoch 10/100
40/40 - 0s - 3ms/step - accuracy: 0.4402 - loss: 1.5390 - val_accuracy: 0.5410 - val_loss: 1.4879
Epoch 11/100
40/40 - 0s - 3ms/step - accuracy: 0.4725 - loss: 1.4799 - val_accuracy: 0.5220 - val_loss: 1.4299
Epoch 12/100
40/40 - 0s - 3ms/step - accuracy: 0.4913 - loss: 1.4252 - val_accuracy: 0.5610 - val_loss: 1.3791
Epoch 13/100
40/40 - 0s - 3ms/step - accuracy: 0.5191 - loss: 1.3737 - val_accuracy: 0.5350 - val_loss: 1.3364
Epoch 14/100
40/40 - 0s - 3ms/step - accuracy: 0.5347 - loss: 1.3346 - val_accuracy: 0.5730 - val_loss: 1.2944
Epoch 15/100
40/40 - 0s - 3ms/step - accuracy: 0.5494 - loss: 1.2912 - val_accuracy: 0.5300 - val_loss: 1.2496
Epoch 16/100
40/40 - 0s - 3ms/step - accuracy: 0.5599 - loss: 1.2519 - val_accuracy: 0.7010 - val_loss: 1.2075
Epoch 17/100
40/40 - 0s - 3ms/step - accuracy: 0.5882 - loss: 1.2122 - val_accuracy: 0.6640 - val_loss: 1.1705
Epoch 18/100
40/40 - 0s - 3ms/step - accuracy: 0.6069 - loss: 1.1787 - val_accuracy: 0.7010 - val_loss: 1.1349
Epoch 19/100
40/40 - 0s - 3ms/step - accuracy: 0.6264 - loss: 1.1452 - val_accuracy: 0.7340 - val_loss: 1.1004
Epoch 20/100
40/40 - 0s - 3ms/step - accuracy: 0.6460 - loss: 1.1095 - val_accuracy: 0.7720 - val_loss: 1.0711
Epoch 21/100
40/40 - 0s - 3ms/step - accuracy: 0.6551 - loss: 1.0792 - val_accuracy: 0.8030 - val_loss: 1.0399
Epoch 22/100
40/40 - 0s - 3ms/step - accuracy: 0.6837 - loss: 1.0487 - val_accuracy: 0.7980 - val_loss: 1.0080
Epoch 23/100
40/40 - 0s - 3ms/step - accuracy: 0.6900 - loss: 1.0240 - val_accuracy: 0.8100 - val_loss: 0.9849
Epoch 24/100
40/40 - 0s - 3ms/step - accuracy: 0.7095 - loss: 0.9953 - val_accuracy: 0.8000 - val_loss: 0.9591
Epoch 25/100
40/40 - 0s - 3ms/step - accuracy: 0.7198 - loss: 0.9694 - val_accuracy: 0.8140 - val_loss: 0.9414
Epoch 26/100
40/40 - 0s - 3ms/step - accuracy: 0.7425 - loss: 0.9435 - val_accuracy: 0.8460 - val_loss: 0.9106
Epoch 27/100
40/40 - 0s - 3ms/step - accuracy: 0.7569 - loss: 0.9206 - val_accuracy: 0.7070 - val_loss: 0.8950
Epoch 28/100
40/40 - 0s - 3ms/step - accuracy: 0.7693 - loss: 0.8973 - val_accuracy: 0.8230 - val_loss: 0.8615
Epoch 29/100
40/40 - 0s - 3ms/step - accuracy: 0.7861 - loss: 0.8728 - val_accuracy: 0.8820 - val_loss: 0.8361
Epoch 30/100
40/40 - 0s - 3ms/step - accuracy: 0.8063 - loss: 0.8505 - val_accuracy: 0.8810 - val_loss: 0.8186
Epoch 31/100
40/40 - 0s - 3ms/step - accuracy: 0.8169 - loss: 0.8268 - val_accuracy: 0.9260 - val_loss: 0.7877
Epoch 32/100
40/40 - 0s - 3ms/step - accuracy: 0.8263 - loss: 0.8081 - val_accuracy: 0.9030 - val_loss: 0.7712
Epoch 33/100
40/40 - 0s - 3ms/step - accuracy: 0.8420 - loss: 0.7856 - val_accuracy: 0.9330 - val_loss: 0.7463
Epoch 34/100
40/40 - 0s - 3ms/step - accuracy: 0.8531 - loss: 0.7640 - val_accuracy: 0.9160 - val_loss: 0.7294
Epoch 35/100
40/40 - 0s - 3ms/step - accuracy: 0.8669 - loss: 0.7445 - val_accuracy: 0.8990 - val_loss: 0.7153
Epoch 36/100
40/40 - 0s - 3ms/step - accuracy: 0.8781 - loss: 0.7238 - val_accuracy: 0.9080 - val_loss: 0.6972
Epoch 37/100
40/40 - 0s - 3ms/step - accuracy: 0.8932 - loss: 0.7035 - val_accuracy: 0.9580 - val_loss: 0.6702
Epoch 38/100
40/40 - 0s - 3ms/step - accuracy: 0.9012 - loss: 0.6865 - val_accuracy: 0.9360 - val_loss: 0.6538
Epoch 39/100
40/40 - 0s - 3ms/step - accuracy: 0.9099 - loss: 0.6643 - val_accuracy: 0.9540 - val_loss: 0.6333
Epoch 40/100
40/40 - 0s - 3ms/step - accuracy: 0.9140 - loss: 0.6497 - val_accuracy: 0.9420 - val_loss: 0.6152
Epoch 41/100
40/40 - 0s - 3ms/step - accuracy: 0.9300 - loss: 0.6295 - val_accuracy: 0.9670 - val_loss: 0.6045
Epoch 42/100
40/40 - 0s - 3ms/step - accuracy: 0.9354 - loss: 0.6135 - val_accuracy: 0.9940 - val_loss: 0.5818
Epoch 43/100
40/40 - 0s - 3ms/step - accuracy: 0.9440 - loss: 0.5963 - val_accuracy: 0.9780 - val_loss: 0.5677
Epoch 44/100
40/40 - 0s - 3ms/step - accuracy: 0.9468 - loss: 0.5803 - val_accuracy: 0.9920 - val_loss: 0.5546
Epoch 45/100
40/40 - 0s - 3ms/step - accuracy: 0.9561 - loss: 0.5630 - val_accuracy: 0.9810 - val_loss: 0.5333
Epoch 46/100
40/40 - 0s - 3ms/step - accuracy: 0.9599 - loss: 0.5460 - val_accuracy: 0.9770 - val_loss: 0.5234
Epoch 47/100
40/40 - 0s - 3ms/step - accuracy: 0.9636 - loss: 0.5285 - val_accuracy: 0.9990 - val_loss: 0.4998
Epoch 48/100
40/40 - 0s - 3ms/step - accuracy: 0.9670 - loss: 0.5162 - val_accuracy: 1.0000 - val_loss: 0.4827
Epoch 49/100
40/40 - 0s - 3ms/step - accuracy: 0.9721 - loss: 0.4987 - val_accuracy: 0.9980 - val_loss: 0.4721
Epoch 50/100
40/40 - 0s - 3ms/step - accuracy: 0.9749 - loss: 0.4843 - val_accuracy: 1.0000 - val_loss: 0.4502
Epoch 51/100
40/40 - 0s - 3ms/step - accuracy: 0.9783 - loss: 0.4686 - val_accuracy: 0.9990 - val_loss: 0.4538
Epoch 52/100
40/40 - 0s - 3ms/step - accuracy: 0.9813 - loss: 0.4524 - val_accuracy: 1.0000 - val_loss: 0.4201
Epoch 53/100
40/40 - 0s - 3ms/step - accuracy: 0.9818 - loss: 0.4400 - val_accuracy: 0.9990 - val_loss: 0.4150
Epoch 54/100
40/40 - 0s - 3ms/step - accuracy: 0.9840 - loss: 0.4274 - val_accuracy: 1.0000 - val_loss: 0.3976
Epoch 55/100
40/40 - 0s - 3ms/step - accuracy: 0.9883 - loss: 0.4133 - val_accuracy: 0.9950 - val_loss: 0.3850
Epoch 56/100
40/40 - 0s - 3ms/step - accuracy: 0.9877 - loss: 0.4005 - val_accuracy: 1.0000 - val_loss: 0.3762
Epoch 57/100
40/40 - 0s - 3ms/step - accuracy: 0.9880 - loss: 0.3889 - val_accuracy: 1.0000 - val_loss: 0.3640
Epoch 58/100
40/40 - 0s - 3ms/step - accuracy: 0.9898 - loss: 0.3752 - val_accuracy: 1.0000 - val_loss: 0.3507
Epoch 59/100
40/40 - 0s - 3ms/step - accuracy: 0.9920 - loss: 0.3628 - val_accuracy: 1.0000 - val_loss: 0.3318
Epoch 60/100
40/40 - 0s - 3ms/step - accuracy: 0.9927 - loss: 0.3501 - val_accuracy: 1.0000 - val_loss: 0.3266
Epoch 61/100
40/40 - 0s - 3ms/step - accuracy: 0.9926 - loss: 0.3395 - val_accuracy: 1.0000 - val_loss: 0.3124
Epoch 62/100
40/40 - 0s - 3ms/step - accuracy: 0.9927 - loss: 0.3287 - val_accuracy: 1.0000 - val_loss: 0.3049
Epoch 63/100
40/40 - 0s - 3ms/step - accuracy: 0.9928 - loss: 0.3178 - val_accuracy: 1.0000 - val_loss: 0.2879
Epoch 64/100
40/40 - 0s - 3ms/step - accuracy: 0.9941 - loss: 0.3080 - val_accuracy: 1.0000 - val_loss: 0.2822
Epoch 65/100
40/40 - 0s - 3ms/step - accuracy: 0.9947 - loss: 0.2987 - val_accuracy: 1.0000 - val_loss: 0.2769
Epoch 66/100
40/40 - 0s - 3ms/step - accuracy: 0.9948 - loss: 0.2880 - val_accuracy: 1.0000 - val_loss: 0.2579
Epoch 67/100
40/40 - 0s - 3ms/step - accuracy: 0.9959 - loss: 0.2770 - val_accuracy: 1.0000 - val_loss: 0.2484
Epoch 68/100
40/40 - 0s - 3ms/step - accuracy: 0.9961 - loss: 0.2689 - val_accuracy: 1.0000 - val_loss: 0.2403
Epoch 69/100
40/40 - 0s - 3ms/step - accuracy: 0.9969 - loss: 0.2582 - val_accuracy: 1.0000 - val_loss: 0.2312
Epoch 70/100
40/40 - 0s - 3ms/step - accuracy: 0.9970 - loss: 0.2497 - val_accuracy: 1.0000 - val_loss: 0.2256
Epoch 71/100
40/40 - 0s - 3ms/step - accuracy: 0.9969 - loss: 0.2424 - val_accuracy: 1.0000 - val_loss: 0.2193
Epoch 72/100
40/40 - 0s - 3ms/step - accuracy: 0.9974 - loss: 0.2343 - val_accuracy: 1.0000 - val_loss: 0.2129
Epoch 73/100
40/40 - 0s - 3ms/step - accuracy: 0.9971 - loss: 0.2254 - val_accuracy: 1.0000 - val_loss: 0.1980
Epoch 74/100
40/40 - 0s - 3ms/step - accuracy: 0.9977 - loss: 0.2170 - val_accuracy: 1.0000 - val_loss: 0.1953
Epoch 75/100
40/40 - 0s - 3ms/step - accuracy: 0.9983 - loss: 0.2100 - val_accuracy: 1.0000 - val_loss: 0.1876
Epoch 76/100
40/40 - 0s - 3ms/step - accuracy: 0.9980 - loss: 0.2029 - val_accuracy: 1.0000 - val_loss: 0.1825
Epoch 77/100
40/40 - 0s - 3ms/step - accuracy: 0.9974 - loss: 0.1963 - val_accuracy: 1.0000 - val_loss: 0.1720
Epoch 78/100
40/40 - 0s - 3ms/step - accuracy: 0.9974 - loss: 0.1894 - val_accuracy: 1.0000 - val_loss: 0.1628
Epoch 79/100
40/40 - 0s - 3ms/step - accuracy: 0.9977 - loss: 0.1826 - val_accuracy: 1.0000 - val_loss: 0.1573
Epoch 80/100
40/40 - 0s - 3ms/step - accuracy: 0.9977 - loss: 0.1770 - val_accuracy: 1.0000 - val_loss: 0.1492
Epoch 81/100
40/40 - 0s - 3ms/step - accuracy: 0.9992 - loss: 0.1700 - val_accuracy: 1.0000 - val_loss: 0.1462
Epoch 82/100
40/40 - 0s - 3ms/step - accuracy: 0.9990 - loss: 0.1629 - val_accuracy: 1.0000 - val_loss: 0.1393
Epoch 83/100
40/40 - 0s - 3ms/step - accuracy: 0.9985 - loss: 0.1565 - val_accuracy: 1.0000 - val_loss: 0.1321
Epoch 84/100
40/40 - 0s - 3ms/step - accuracy: 0.9985 - loss: 0.1509 - val_accuracy: 1.0000 - val_loss: 0.1289
Epoch 85/100
40/40 - 0s - 3ms/step - accuracy: 0.9986 - loss: 0.1452 - val_accuracy: 1.0000 - val_loss: 0.1259
Epoch 86/100
40/40 - 0s - 3ms/step - accuracy: 0.9990 - loss: 0.1388 - val_accuracy: 1.0000 - val_loss: 0.1197
Epoch 87/100
40/40 - 0s - 3ms/step - accuracy: 0.9990 - loss: 0.1340 - val_accuracy: 1.0000 - val_loss: 0.1106
Epoch 88/100
40/40 - 0s - 3ms/step - accuracy: 0.9991 - loss: 0.1285 - val_accuracy: 1.0000 - val_loss: 0.1057
Epoch 89/100
40/40 - 0s - 3ms/step - accuracy: 0.9992 - loss: 0.1247 - val_accuracy: 1.0000 - val_loss: 0.1028
Epoch 90/100
40/40 - 0s - 3ms/step - accuracy: 0.9992 - loss: 0.1198 - val_accuracy: 1.0000 - val_loss: 0.1012
Epoch 91/100
40/40 - 0s - 3ms/step - accuracy: 0.9992 - loss: 0.1147 - val_accuracy: 1.0000 - val_loss: 0.0973
Epoch 92/100
40/40 - 0s - 3ms/step - accuracy: 0.9995 - loss: 0.1104 - val_accuracy: 1.0000 - val_loss: 0.0961
Epoch 93/100
40/40 - 0s - 3ms/step - accuracy: 0.9989 - loss: 0.1063 - val_accuracy: 1.0000 - val_loss: 0.0865
Epoch 94/100
40/40 - 0s - 3ms/step - accuracy: 0.9995 - loss: 0.1020 - val_accuracy: 1.0000 - val_loss: 0.0820
Epoch 95/100
40/40 - 0s - 3ms/step - accuracy: 0.9997 - loss: 0.0985 - val_accuracy: 1.0000 - val_loss: 0.0760
Epoch 96/100
40/40 - 0s - 3ms/step - accuracy: 0.9996 - loss: 0.0937 - val_accuracy: 1.0000 - val_loss: 0.0778
Epoch 97/100
40/40 - 0s - 3ms/step - accuracy: 0.9996 - loss: 0.0909 - val_accuracy: 1.0000 - val_loss: 0.0830
Epoch 98/100
40/40 - 0s - 3ms/step - accuracy: 0.9994 - loss: 0.0879 - val_accuracy: 1.0000 - val_loss: 0.0676
Epoch 99/100
40/40 - 0s - 3ms/step - accuracy: 0.9991 - loss: 0.0840 - val_accuracy: 1.0000 - val_loss: 0.0674
Epoch 100/100
40/40 - 0s - 3ms/step - accuracy: 0.9990 - loss: 0.0817 - val_accuracy: 1.0000 - val_loss: 0.0621
<keras.src.callbacks.history.History at 0x7fbbe1aefb90>
```
As we see, the network is essentially perfect now: the validation accuracy reaches 100%.
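As a final check (a minimal sketch reusing the model and the data class from above), we can generate a fresh sequence the network has never seen and compare its prediction with the true last number:

```python
# make a new random sequence and predict its last number
m = ModelDataCategorical()

# predict() expects a batch, so add a leading axis
res = model.predict(m.x_scaled.reshape(1, m.N), verbose=0)

print("sequence: ", m.x)
print("predicted:", m.interpret_result(res), " actual:", m.y[0])
```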