Keras and the Last Number Problem
Let’s see if we can do better on the last number problem than our simple hidden-layer network did.
import numpy as np
import keras
from keras.utils import to_categorical
We’ll use the same data class as before:
class ModelDataCategorical:
    """this is the model data for our "last number" training set.  We
    produce input of length N, consisting of numbers 0-9, and store
    the result in a 10-element array as categorical data.
    """

    def __init__(self, N=10):
        self.N = N

        # our model input data
        self.x = np.random.randint(0, high=10, size=N)
        self.x_scaled = self.x / 10 + 0.05

        # our scaled model output data
        self.y = np.array([self.x[-1]])
        self.y_scaled = np.zeros(10) + 0.01
        self.y_scaled[self.x[-1]] = 0.99

    def interpret_result(self, out):
        """take the network output and return the number we predict"""
        return np.argmax(out)
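Here’s a quick illustrative sketch of what this class produces (the sample is random, so the values shown in the comments are just examples):

m = ModelDataCategorical()
print(m.x)          # 10 random digits, e.g. [3 1 4 1 5 9 2 6 5 3]
print(m.y)          # the answer: the last digit, e.g. [3]
print(m.interpret_result(m.y_scaled))   # recovers that digit, e.g. 3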
For Keras, we need to pack the data into arrays: the scaled inputs go into an array directly, while the labels are one-hot encoded using the Keras to_categorical() function.
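For example, to_categorical(y, num_classes) turns a label (or a list of labels) into a one-hot array. A minimal check (expected output shown as a comment):

to_categorical([3], 10)
# array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]])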
Let’s make both a training set and a test set
x_train = []
y_train = []
for _ in range(10000):
    m = ModelDataCategorical()
    x_train.append(m.x_scaled)
    y_train.append(m.y)

x_train = np.asarray(x_train)
y_train = to_categorical(y_train, 10)

x_test = []
y_test = []
for _ in range(1000):
    m = ModelDataCategorical()
    x_test.append(m.x_scaled)
    y_test.append(m.y)

x_test = np.asarray(x_test)
y_test = to_categorical(y_test, 10)
Check to make sure the data looks like we expect:
x_train[0]
array([0.35, 0.35, 0.05, 0.35, 0.95, 0.65, 0.45, 0.35, 0.15, 0.05])
y_train[0]
array([1., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
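As one more consistency check (a minimal sketch, relying on the x/10 + 0.05 scaling from the data class), the digit recovered from the last scaled input should match the position of the 1 in the one-hot label:

# undo the input scaling on the last element and compare to the label
digit = int(round((x_train[0][-1] - 0.05) * 10))
assert digit == np.argmax(y_train[0])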
Creating the network
Now let’s build our network. We’ll use just a single hidden layer, but instead of the sigmoid we used before, we’ll use ReLU for the hidden layer and softmax for the output.
from keras.models import Sequential
from keras.layers import Input, Dense, Dropout
from keras.optimizers import RMSprop

model = Sequential()
model.add(Input((10,)))
model.add(Dense(100, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(10, activation="softmax"))
rms = RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=rms, metrics=['accuracy'])
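Categorical cross-entropy for a single sample is -Σ yᵢ log(ŷᵢ), which for one-hot labels reduces to -log of the probability the network assigns to the true class. A small NumPy sketch with made-up probabilities:

# hypothetical softmax output for a sample whose true class is 3
y_true = np.zeros(10)
y_true[3] = 1.0
y_pred = np.full(10, 0.01)
y_pred[3] = 0.91                # probabilities sum to 1
loss = -np.sum(y_true * np.log(y_pred))
print(loss)                     # -log(0.91) ≈ 0.094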
model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 100)            │         1,100 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 100)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 10)             │         1,010 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
Total params: 2,110 (8.24 KB)
Trainable params: 2,110 (8.24 KB)
Non-trainable params: 0 (0.00 B)
Now we have ~2,000 parameters to fit.
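That count is easy to verify by hand: a Dense layer holds (n_inputs + 1) × n_outputs parameters, where the +1 accounts for the bias:

hidden = (10 + 1) * 100    # 1,100 parameters
output = (100 + 1) * 10    # 1,010 parameters
print(hidden + output)     # 2,110 -- matches model.summary()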
Training
Now we can train, validating against the test set after each epoch to see how we do:
epochs = 100
batch_size = 256

model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(x_test, y_test), verbose=2)
Epoch 1/100
40/40 - 1s - 17ms/step - accuracy: 0.1742 - loss: 2.2472 - val_accuracy: 0.1990 - val_loss: 2.1950
Epoch 2/100
40/40 - 0s - 3ms/step - accuracy: 0.2380 - loss: 2.1491 - val_accuracy: 0.2850 - val_loss: 2.0967
Epoch 3/100
40/40 - 0s - 3ms/step - accuracy: 0.2729 - loss: 2.0455 - val_accuracy: 0.2960 - val_loss: 1.9831
Epoch 4/100
40/40 - 0s - 3ms/step - accuracy: 0.2918 - loss: 1.9467 - val_accuracy: 0.3440 - val_loss: 1.8912
Epoch 5/100
40/40 - 0s - 3ms/step - accuracy: 0.3154 - loss: 1.8537 - val_accuracy: 0.3660 - val_loss: 1.7969
Epoch 6/100
40/40 - 0s - 3ms/step - accuracy: 0.3454 - loss: 1.7669 - val_accuracy: 0.3130 - val_loss: 1.7152
Epoch 7/100
40/40 - 0s - 3ms/step - accuracy: 0.3591 - loss: 1.6922 - val_accuracy: 0.4250 - val_loss: 1.6445
Epoch 8/100
40/40 - 0s - 3ms/step - accuracy: 0.3974 - loss: 1.6268 - val_accuracy: 0.3930 - val_loss: 1.5825
Epoch 9/100
40/40 - 0s - 3ms/step - accuracy: 0.4198 - loss: 1.5601 - val_accuracy: 0.4900 - val_loss: 1.5174
Epoch 10/100
40/40 - 0s - 3ms/step - accuracy: 0.4577 - loss: 1.5016 - val_accuracy: 0.5000 - val_loss: 1.4580
Epoch 11/100
40/40 - 0s - 3ms/step - accuracy: 0.4777 - loss: 1.4496 - val_accuracy: 0.5370 - val_loss: 1.4057
Epoch 12/100
40/40 - 0s - 2ms/step - accuracy: 0.4943 - loss: 1.3980 - val_accuracy: 0.5090 - val_loss: 1.3666
Epoch 13/100
40/40 - 0s - 2ms/step - accuracy: 0.5164 - loss: 1.3546 - val_accuracy: 0.5650 - val_loss: 1.3137
Epoch 14/100
40/40 - 0s - 2ms/step - accuracy: 0.5374 - loss: 1.3122 - val_accuracy: 0.6160 - val_loss: 1.2747
Epoch 15/100
40/40 - 0s - 3ms/step - accuracy: 0.5537 - loss: 1.2735 - val_accuracy: 0.5700 - val_loss: 1.2371
Epoch 16/100
40/40 - 0s - 3ms/step - accuracy: 0.5687 - loss: 1.2339 - val_accuracy: 0.5790 - val_loss: 1.2010
Epoch 17/100
40/40 - 0s - 3ms/step - accuracy: 0.6003 - loss: 1.1948 - val_accuracy: 0.6540 - val_loss: 1.1573
Epoch 18/100
40/40 - 0s - 2ms/step - accuracy: 0.6168 - loss: 1.1615 - val_accuracy: 0.6560 - val_loss: 1.1215
Epoch 19/100
40/40 - 0s - 2ms/step - accuracy: 0.6376 - loss: 1.1321 - val_accuracy: 0.6810 - val_loss: 1.0977
Epoch 20/100
40/40 - 0s - 2ms/step - accuracy: 0.6488 - loss: 1.1012 - val_accuracy: 0.7360 - val_loss: 1.0648
Epoch 21/100
40/40 - 0s - 3ms/step - accuracy: 0.6681 - loss: 1.0724 - val_accuracy: 0.6440 - val_loss: 1.0471
Epoch 22/100
40/40 - 0s - 3ms/step - accuracy: 0.6801 - loss: 1.0431 - val_accuracy: 0.7570 - val_loss: 1.0064
Epoch 23/100
40/40 - 0s - 3ms/step - accuracy: 0.7030 - loss: 1.0143 - val_accuracy: 0.7940 - val_loss: 0.9802
Epoch 24/100
40/40 - 0s - 3ms/step - accuracy: 0.7163 - loss: 0.9894 - val_accuracy: 0.7930 - val_loss: 0.9591
Epoch 25/100
40/40 - 0s - 3ms/step - accuracy: 0.7313 - loss: 0.9647 - val_accuracy: 0.8310 - val_loss: 0.9241
Epoch 26/100
40/40 - 0s - 3ms/step - accuracy: 0.7500 - loss: 0.9411 - val_accuracy: 0.8380 - val_loss: 0.9011
Epoch 27/100
40/40 - 0s - 3ms/step - accuracy: 0.7705 - loss: 0.9171 - val_accuracy: 0.8620 - val_loss: 0.8764
Epoch 28/100
40/40 - 0s - 3ms/step - accuracy: 0.7768 - loss: 0.8949 - val_accuracy: 0.8780 - val_loss: 0.8522
Epoch 29/100
40/40 - 0s - 3ms/step - accuracy: 0.7954 - loss: 0.8698 - val_accuracy: 0.8230 - val_loss: 0.8414
Epoch 30/100
40/40 - 0s - 3ms/step - accuracy: 0.8047 - loss: 0.8486 - val_accuracy: 0.8780 - val_loss: 0.8195
Epoch 31/100
40/40 - 0s - 3ms/step - accuracy: 0.8167 - loss: 0.8271 - val_accuracy: 0.9070 - val_loss: 0.7887
Epoch 32/100
40/40 - 0s - 3ms/step - accuracy: 0.8266 - loss: 0.8065 - val_accuracy: 0.9050 - val_loss: 0.7670
Epoch 33/100
40/40 - 0s - 3ms/step - accuracy: 0.8428 - loss: 0.7853 - val_accuracy: 0.8820 - val_loss: 0.7565
Epoch 34/100
40/40 - 0s - 2ms/step - accuracy: 0.8523 - loss: 0.7675 - val_accuracy: 0.9250 - val_loss: 0.7297
Epoch 35/100
40/40 - 0s - 3ms/step - accuracy: 0.8689 - loss: 0.7460 - val_accuracy: 0.8960 - val_loss: 0.7105
Epoch 36/100
40/40 - 0s - 3ms/step - accuracy: 0.8773 - loss: 0.7224 - val_accuracy: 0.9370 - val_loss: 0.6896
Epoch 37/100
40/40 - 0s - 3ms/step - accuracy: 0.8881 - loss: 0.7042 - val_accuracy: 0.9520 - val_loss: 0.6687
Epoch 38/100
40/40 - 0s - 3ms/step - accuracy: 0.8952 - loss: 0.6858 - val_accuracy: 0.9210 - val_loss: 0.6481
Epoch 39/100
40/40 - 0s - 3ms/step - accuracy: 0.9057 - loss: 0.6663 - val_accuracy: 0.9650 - val_loss: 0.6359
Epoch 40/100
40/40 - 0s - 3ms/step - accuracy: 0.9158 - loss: 0.6485 - val_accuracy: 0.9590 - val_loss: 0.6247
Epoch 41/100
40/40 - 0s - 3ms/step - accuracy: 0.9216 - loss: 0.6329 - val_accuracy: 0.9490 - val_loss: 0.6051
Epoch 42/100
40/40 - 0s - 3ms/step - accuracy: 0.9291 - loss: 0.6169 - val_accuracy: 0.9910 - val_loss: 0.5790
Epoch 43/100
40/40 - 0s - 3ms/step - accuracy: 0.9376 - loss: 0.5981 - val_accuracy: 0.9850 - val_loss: 0.5699
Epoch 44/100
40/40 - 0s - 3ms/step - accuracy: 0.9415 - loss: 0.5832 - val_accuracy: 0.9850 - val_loss: 0.5473
Epoch 45/100
40/40 - 0s - 3ms/step - accuracy: 0.9506 - loss: 0.5676 - val_accuracy: 0.9910 - val_loss: 0.5395
Epoch 46/100
40/40 - 0s - 3ms/step - accuracy: 0.9548 - loss: 0.5493 - val_accuracy: 0.9950 - val_loss: 0.5130
Epoch 47/100
40/40 - 0s - 3ms/step - accuracy: 0.9600 - loss: 0.5332 - val_accuracy: 0.9950 - val_loss: 0.5040
Epoch 48/100
40/40 - 0s - 3ms/step - accuracy: 0.9657 - loss: 0.5173 - val_accuracy: 0.9910 - val_loss: 0.4886
Epoch 49/100
40/40 - 0s - 3ms/step - accuracy: 0.9705 - loss: 0.5025 - val_accuracy: 0.9990 - val_loss: 0.4737
Epoch 50/100
40/40 - 0s - 2ms/step - accuracy: 0.9735 - loss: 0.4896 - val_accuracy: 1.0000 - val_loss: 0.4571
Epoch 51/100
40/40 - 0s - 2ms/step - accuracy: 0.9750 - loss: 0.4741 - val_accuracy: 0.9980 - val_loss: 0.4461
Epoch 52/100
40/40 - 0s - 2ms/step - accuracy: 0.9766 - loss: 0.4607 - val_accuracy: 0.9990 - val_loss: 0.4307
Epoch 53/100
40/40 - 0s - 2ms/step - accuracy: 0.9771 - loss: 0.4494 - val_accuracy: 1.0000 - val_loss: 0.4296
Epoch 54/100
40/40 - 0s - 2ms/step - accuracy: 0.9802 - loss: 0.4357 - val_accuracy: 1.0000 - val_loss: 0.4048
Epoch 55/100
40/40 - 0s - 2ms/step - accuracy: 0.9829 - loss: 0.4220 - val_accuracy: 0.9990 - val_loss: 0.3937
Epoch 56/100
40/40 - 0s - 2ms/step - accuracy: 0.9850 - loss: 0.4096 - val_accuracy: 1.0000 - val_loss: 0.3874
Epoch 57/100
40/40 - 0s - 2ms/step - accuracy: 0.9846 - loss: 0.3980 - val_accuracy: 1.0000 - val_loss: 0.3691
Epoch 58/100
40/40 - 0s - 2ms/step - accuracy: 0.9863 - loss: 0.3854 - val_accuracy: 1.0000 - val_loss: 0.3528
Epoch 59/100
40/40 - 0s - 2ms/step - accuracy: 0.9871 - loss: 0.3744 - val_accuracy: 1.0000 - val_loss: 0.3502
Epoch 60/100
40/40 - 0s - 2ms/step - accuracy: 0.9880 - loss: 0.3644 - val_accuracy: 1.0000 - val_loss: 0.3345
Epoch 61/100
40/40 - 0s - 2ms/step - accuracy: 0.9903 - loss: 0.3526 - val_accuracy: 1.0000 - val_loss: 0.3282
Epoch 62/100
40/40 - 0s - 2ms/step - accuracy: 0.9901 - loss: 0.3418 - val_accuracy: 1.0000 - val_loss: 0.3110
Epoch 63/100
40/40 - 0s - 2ms/step - accuracy: 0.9924 - loss: 0.3299 - val_accuracy: 1.0000 - val_loss: 0.3088
Epoch 64/100
40/40 - 0s - 2ms/step - accuracy: 0.9930 - loss: 0.3201 - val_accuracy: 1.0000 - val_loss: 0.3056
Epoch 65/100
40/40 - 0s - 2ms/step - accuracy: 0.9935 - loss: 0.3113 - val_accuracy: 1.0000 - val_loss: 0.2783
Epoch 66/100
40/40 - 0s - 2ms/step - accuracy: 0.9938 - loss: 0.2999 - val_accuracy: 1.0000 - val_loss: 0.2741
Epoch 67/100
40/40 - 0s - 2ms/step - accuracy: 0.9941 - loss: 0.2915 - val_accuracy: 1.0000 - val_loss: 0.2618
Epoch 68/100
40/40 - 0s - 3ms/step - accuracy: 0.9956 - loss: 0.2818 - val_accuracy: 1.0000 - val_loss: 0.2645
Epoch 69/100
40/40 - 0s - 2ms/step - accuracy: 0.9950 - loss: 0.2737 - val_accuracy: 1.0000 - val_loss: 0.2449
Epoch 70/100
40/40 - 0s - 2ms/step - accuracy: 0.9947 - loss: 0.2645 - val_accuracy: 1.0000 - val_loss: 0.2346
Epoch 71/100
40/40 - 0s - 2ms/step - accuracy: 0.9954 - loss: 0.2554 - val_accuracy: 1.0000 - val_loss: 0.2252
Epoch 72/100
40/40 - 0s - 2ms/step - accuracy: 0.9969 - loss: 0.2462 - val_accuracy: 1.0000 - val_loss: 0.2181
Epoch 73/100
40/40 - 0s - 2ms/step - accuracy: 0.9968 - loss: 0.2385 - val_accuracy: 1.0000 - val_loss: 0.2122
Epoch 74/100
40/40 - 0s - 3ms/step - accuracy: 0.9967 - loss: 0.2292 - val_accuracy: 1.0000 - val_loss: 0.2045
Epoch 75/100
40/40 - 0s - 3ms/step - accuracy: 0.9970 - loss: 0.2210 - val_accuracy: 1.0000 - val_loss: 0.1951
Epoch 76/100
40/40 - 0s - 2ms/step - accuracy: 0.9971 - loss: 0.2141 - val_accuracy: 1.0000 - val_loss: 0.1915
Epoch 77/100
40/40 - 0s - 2ms/step - accuracy: 0.9973 - loss: 0.2065 - val_accuracy: 1.0000 - val_loss: 0.1900
Epoch 78/100
40/40 - 0s - 2ms/step - accuracy: 0.9972 - loss: 0.1998 - val_accuracy: 1.0000 - val_loss: 0.1745
Epoch 79/100
40/40 - 0s - 3ms/step - accuracy: 0.9984 - loss: 0.1923 - val_accuracy: 1.0000 - val_loss: 0.1674
Epoch 80/100
40/40 - 0s - 2ms/step - accuracy: 0.9974 - loss: 0.1851 - val_accuracy: 1.0000 - val_loss: 0.1722
Epoch 81/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1805 - val_accuracy: 1.0000 - val_loss: 0.1523
Epoch 82/100
40/40 - 0s - 2ms/step - accuracy: 0.9978 - loss: 0.1741 - val_accuracy: 1.0000 - val_loss: 0.1531
Epoch 83/100
40/40 - 0s - 2ms/step - accuracy: 0.9977 - loss: 0.1673 - val_accuracy: 1.0000 - val_loss: 0.1409
Epoch 84/100
40/40 - 0s - 2ms/step - accuracy: 0.9989 - loss: 0.1616 - val_accuracy: 1.0000 - val_loss: 0.1377
Epoch 85/100
40/40 - 0s - 2ms/step - accuracy: 0.9976 - loss: 0.1564 - val_accuracy: 1.0000 - val_loss: 0.1316
Epoch 86/100
40/40 - 0s - 2ms/step - accuracy: 0.9983 - loss: 0.1497 - val_accuracy: 1.0000 - val_loss: 0.1255
Epoch 87/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1450 - val_accuracy: 1.0000 - val_loss: 0.1217
Epoch 88/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1395 - val_accuracy: 1.0000 - val_loss: 0.1166
Epoch 89/100
40/40 - 0s - 2ms/step - accuracy: 0.9993 - loss: 0.1351 - val_accuracy: 1.0000 - val_loss: 0.1136
Epoch 90/100
40/40 - 0s - 2ms/step - accuracy: 0.9994 - loss: 0.1293 - val_accuracy: 1.0000 - val_loss: 0.1085
Epoch 91/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1252 - val_accuracy: 1.0000 - val_loss: 0.1017
Epoch 92/100
40/40 - 0s - 2ms/step - accuracy: 0.9992 - loss: 0.1196 - val_accuracy: 1.0000 - val_loss: 0.0977
Epoch 93/100
40/40 - 0s - 3ms/step - accuracy: 0.9987 - loss: 0.1168 - val_accuracy: 1.0000 - val_loss: 0.0949
Epoch 94/100
40/40 - 0s - 3ms/step - accuracy: 0.9992 - loss: 0.1118 - val_accuracy: 1.0000 - val_loss: 0.0914
Epoch 95/100
40/40 - 0s - 2ms/step - accuracy: 0.9992 - loss: 0.1083 - val_accuracy: 1.0000 - val_loss: 0.0862
Epoch 96/100
40/40 - 0s - 2ms/step - accuracy: 0.9994 - loss: 0.1046 - val_accuracy: 1.0000 - val_loss: 0.0852
Epoch 97/100
40/40 - 0s - 2ms/step - accuracy: 0.9991 - loss: 0.0991 - val_accuracy: 1.0000 - val_loss: 0.0780
Epoch 98/100
40/40 - 0s - 3ms/step - accuracy: 0.9986 - loss: 0.0975 - val_accuracy: 1.0000 - val_loss: 0.0754
Epoch 99/100
40/40 - 0s - 3ms/step - accuracy: 0.9992 - loss: 0.0933 - val_accuracy: 1.0000 - val_loss: 0.0732
Epoch 100/100
40/40 - 0s - 3ms/step - accuracy: 0.9988 - loss: 0.0896 - val_accuracy: 1.0000 - val_loss: 0.0699
<keras.src.callbacks.history.History at 0x7fbe30acf4d0>
As we see, the validation accuracy reaches 100%: the network is now essentially perfect at predicting the last number.
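As a final spot check (a minimal sketch; the sample is random, so the printed values will vary), we can feed the trained model a fresh sample and compare its prediction against the true last number:

m = ModelDataCategorical()
res = model.predict(m.x_scaled.reshape(1, m.N), verbose=0)
print("prediction:", m.interpret_result(res), "truth:", m.y[0])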