Keras and the Last Number Problem#

Let’s see if we can do better on the last number problem than our simple hidden-layer network.

import numpy as np
import keras
from keras.utils import to_categorical

We’ll use the same data class as before:

class ModelDataCategorical:
    """A single sample for our "last number" training set.  The
    input has length N and consists of digits 0-9; the result is
    stored as categorical data in a 10-element array.

    """
    def __init__(self, N=10):
        self.N = N

        # our model input data, scaled to lie in (0, 1)
        self.x = np.random.randint(0, high=10, size=N)
        self.x_scaled = self.x / 10 + 0.05

        # our model output data: the raw answer plus a scaled
        # 10-element categorical version
        self.y = np.array([self.x[-1]])
        self.y_scaled = np.zeros(10) + 0.01
        self.y_scaled[self.x[-1]] = 0.99

    def interpret_result(self, out):
        """take the network output and return the number we predict"""
        return np.argmax(out)
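
To see concretely what one sample encodes, we can replicate the class’s scaling by hand (the seeded generator here is just for reproducibility; it is not part of the class):

```python
import numpy as np

rng = np.random.default_rng(0)

# replicate ModelDataCategorical's scaling for one sample by hand
x = rng.integers(0, 10, size=10)      # ten random digits 0-9
x_scaled = x / 10 + 0.05              # digit d maps to d/10 + 0.05, so inputs lie in (0, 1)
y_scaled = np.zeros(10) + 0.01
y_scaled[x[-1]] = 0.99                # near-one-hot target marking the last digit

# interpret_result() just takes the argmax, which recovers the last digit
assert np.argmax(y_scaled) == x[-1]
```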

For Keras, we need to pack the scaled data (both input and output) into arrays. We’ll use Keras’s to_categorical() to one-hot encode the labels.

Let’s make both a training set and a test set:

x_train = []
y_train = []
for _ in range(10000):
    m = ModelDataCategorical()
    x_train.append(m.x_scaled)
    y_train.append(m.y)

x_train = np.asarray(x_train)
y_train = to_categorical(y_train, 10)
x_test = []
y_test = []
for _ in range(1000):
    m = ModelDataCategorical()
    x_test.append(m.x_scaled)
    y_test.append(m.y)

x_test = np.asarray(x_test)
y_test = to_categorical(y_test, 10)
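
Note that to_categorical(labels, 10) simply one-hot encodes: each row gets a 1 in the column named by the label and 0 elsewhere, which is equivalent to indexing a 10×10 identity matrix. A numpy-only sketch of the same encoding:

```python
import numpy as np

labels = np.array([3, 7, 0])      # example "last number" labels
onehot = np.eye(10)[labels]       # same rows to_categorical(labels, 10) produces

assert onehot.shape == (3, 10)
assert onehot[0, 3] == 1.0 and onehot[0].sum() == 1.0
```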

Check to make sure the data looks like we expect:

x_train[0]
array([0.35, 0.65, 0.15, 0.75, 0.25, 0.15, 0.35, 0.55, 0.45, 0.55])
y_train[0]
array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])
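
The two agree: undoing the input scaling on the last element of x_train[0] recovers the digit 5, which is exactly where y_train[0] has its 1. As a quick check, with the values copied from the output above:

```python
import numpy as np

# values copied from the x_train[0] / y_train[0] output above
x0 = np.array([0.35, 0.65, 0.15, 0.75, 0.25, 0.15, 0.35, 0.55, 0.45, 0.55])
y0 = np.array([0., 0., 0., 0., 0., 1., 0., 0., 0., 0.])

# invert the input scaling: digit = (scaled - 0.05) * 10
last_digit = int(round((x0[-1] - 0.05) * 10))

assert last_digit == 5            # the last input digit is 5 ...
assert np.argmax(y0) == 5         # ... and that is where y_train[0] has its 1
```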

Creating the network#

Now let’s build our network. We’ll use just a single hidden layer, but instead of the sigmoid used before, we’ll use a ReLU activation on the hidden layer and a softmax on the output layer.

from keras.models import Sequential
from keras.layers import Input, Dense, Dropout, Activation
from keras.optimizers import RMSprop
model = Sequential()
model.add(Input((10,)))
model.add(Dense(100, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(10, activation="softmax"))
rms = RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=rms, metrics=['accuracy'])
model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 100)            │         1,100 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 100)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 10)             │         1,010 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 2,110 (8.24 KB)
 Trainable params: 2,110 (8.24 KB)
 Non-trainable params: 0 (0.00 B)

Now we have ~2,100 parameters to fit.
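
The count is easy to verify by hand: a Dense layer holds inputs × units weights plus one bias per unit, and the Dropout layer adds no parameters:

```python
# Dense layer parameters: (inputs * units) weights + one bias per unit
hidden = 10 * 100 + 100    # 1,100 parameters in the hidden layer
output = 100 * 10 + 10     # 1,010 parameters in the output layer

assert hidden + output == 2110   # matches the total from model.summary()
```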

Training#

Now we can train, evaluating against the test set after each epoch to see how we do:

epochs = 100
batch_size = 256
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(x_test, y_test), verbose=2)
Epoch 1/100
40/40 - 1s - 16ms/step - accuracy: 0.1318 - loss: 2.2748 - val_accuracy: 0.2060 - val_loss: 2.2240
Epoch 2/100
40/40 - 0s - 2ms/step - accuracy: 0.2108 - loss: 2.1842 - val_accuracy: 0.2210 - val_loss: 2.1332
Epoch 3/100
40/40 - 0s - 3ms/step - accuracy: 0.2505 - loss: 2.0883 - val_accuracy: 0.2530 - val_loss: 2.0324
Epoch 4/100
40/40 - 0s - 2ms/step - accuracy: 0.2744 - loss: 1.9895 - val_accuracy: 0.3240 - val_loss: 1.9348
Epoch 5/100
40/40 - 0s - 2ms/step - accuracy: 0.3015 - loss: 1.8975 - val_accuracy: 0.3600 - val_loss: 1.8366
Epoch 6/100
40/40 - 0s - 2ms/step - accuracy: 0.3295 - loss: 1.8084 - val_accuracy: 0.3350 - val_loss: 1.7551
Epoch 7/100
40/40 - 0s - 2ms/step - accuracy: 0.3577 - loss: 1.7289 - val_accuracy: 0.3820 - val_loss: 1.6788
Epoch 8/100
40/40 - 0s - 2ms/step - accuracy: 0.3936 - loss: 1.6558 - val_accuracy: 0.4170 - val_loss: 1.6107
Epoch 9/100
40/40 - 0s - 2ms/step - accuracy: 0.4153 - loss: 1.5938 - val_accuracy: 0.4480 - val_loss: 1.5469
Epoch 10/100
40/40 - 0s - 2ms/step - accuracy: 0.4368 - loss: 1.5336 - val_accuracy: 0.5160 - val_loss: 1.4873
Epoch 11/100
40/40 - 0s - 2ms/step - accuracy: 0.4683 - loss: 1.4771 - val_accuracy: 0.5850 - val_loss: 1.4317
Epoch 12/100
40/40 - 0s - 2ms/step - accuracy: 0.4996 - loss: 1.4233 - val_accuracy: 0.5020 - val_loss: 1.3799
Epoch 13/100
40/40 - 0s - 3ms/step - accuracy: 0.5069 - loss: 1.3785 - val_accuracy: 0.5940 - val_loss: 1.3303
Epoch 14/100
40/40 - 0s - 2ms/step - accuracy: 0.5262 - loss: 1.3341 - val_accuracy: 0.6030 - val_loss: 1.2867
Epoch 15/100
40/40 - 0s - 2ms/step - accuracy: 0.5506 - loss: 1.2927 - val_accuracy: 0.6370 - val_loss: 1.2436
Epoch 16/100
40/40 - 0s - 2ms/step - accuracy: 0.5667 - loss: 1.2530 - val_accuracy: 0.6240 - val_loss: 1.2044
Epoch 17/100
40/40 - 0s - 2ms/step - accuracy: 0.5835 - loss: 1.2136 - val_accuracy: 0.7290 - val_loss: 1.1693
Epoch 18/100
40/40 - 0s - 2ms/step - accuracy: 0.6050 - loss: 1.1802 - val_accuracy: 0.7710 - val_loss: 1.1308
Epoch 19/100
40/40 - 0s - 2ms/step - accuracy: 0.6230 - loss: 1.1472 - val_accuracy: 0.6580 - val_loss: 1.1059
Epoch 20/100
40/40 - 0s - 2ms/step - accuracy: 0.6388 - loss: 1.1166 - val_accuracy: 0.7410 - val_loss: 1.0686
Epoch 21/100
40/40 - 0s - 2ms/step - accuracy: 0.6506 - loss: 1.0890 - val_accuracy: 0.7420 - val_loss: 1.0418
Epoch 22/100
40/40 - 0s - 2ms/step - accuracy: 0.6775 - loss: 1.0570 - val_accuracy: 0.7410 - val_loss: 1.0163
Epoch 23/100
40/40 - 0s - 3ms/step - accuracy: 0.6927 - loss: 1.0275 - val_accuracy: 0.7840 - val_loss: 0.9866
Epoch 24/100
40/40 - 0s - 2ms/step - accuracy: 0.7145 - loss: 0.9994 - val_accuracy: 0.7940 - val_loss: 0.9520
Epoch 25/100
40/40 - 0s - 2ms/step - accuracy: 0.7206 - loss: 0.9763 - val_accuracy: 0.8650 - val_loss: 0.9320
Epoch 26/100
40/40 - 0s - 2ms/step - accuracy: 0.7435 - loss: 0.9485 - val_accuracy: 0.8610 - val_loss: 0.9079
Epoch 27/100
40/40 - 0s - 2ms/step - accuracy: 0.7556 - loss: 0.9250 - val_accuracy: 0.8420 - val_loss: 0.8845
Epoch 28/100
40/40 - 0s - 2ms/step - accuracy: 0.7731 - loss: 0.9016 - val_accuracy: 0.8540 - val_loss: 0.8663
Epoch 29/100
40/40 - 0s - 2ms/step - accuracy: 0.7777 - loss: 0.8802 - val_accuracy: 0.8780 - val_loss: 0.8368
Epoch 30/100
40/40 - 0s - 2ms/step - accuracy: 0.8048 - loss: 0.8532 - val_accuracy: 0.9220 - val_loss: 0.8167
Epoch 31/100
40/40 - 0s - 2ms/step - accuracy: 0.8098 - loss: 0.8357 - val_accuracy: 0.9180 - val_loss: 0.7945
Epoch 32/100
40/40 - 0s - 2ms/step - accuracy: 0.8249 - loss: 0.8137 - val_accuracy: 0.9220 - val_loss: 0.7687
Epoch 33/100
40/40 - 0s - 3ms/step - accuracy: 0.8430 - loss: 0.7910 - val_accuracy: 0.9090 - val_loss: 0.7479
Epoch 34/100
40/40 - 0s - 3ms/step - accuracy: 0.8534 - loss: 0.7721 - val_accuracy: 0.9240 - val_loss: 0.7390
Epoch 35/100
40/40 - 0s - 2ms/step - accuracy: 0.8703 - loss: 0.7520 - val_accuracy: 0.9490 - val_loss: 0.7089
Epoch 36/100
40/40 - 0s - 2ms/step - accuracy: 0.8830 - loss: 0.7302 - val_accuracy: 0.9680 - val_loss: 0.6910
Epoch 37/100
40/40 - 0s - 2ms/step - accuracy: 0.8902 - loss: 0.7117 - val_accuracy: 0.9420 - val_loss: 0.6794
Epoch 38/100
40/40 - 0s - 2ms/step - accuracy: 0.9063 - loss: 0.6910 - val_accuracy: 0.9750 - val_loss: 0.6546
Epoch 39/100
40/40 - 0s - 2ms/step - accuracy: 0.9132 - loss: 0.6741 - val_accuracy: 0.9520 - val_loss: 0.6401
Epoch 40/100
40/40 - 0s - 2ms/step - accuracy: 0.9223 - loss: 0.6536 - val_accuracy: 0.9840 - val_loss: 0.6217
Epoch 41/100
40/40 - 0s - 2ms/step - accuracy: 0.9308 - loss: 0.6357 - val_accuracy: 0.9880 - val_loss: 0.6020
Epoch 42/100
40/40 - 0s - 2ms/step - accuracy: 0.9342 - loss: 0.6182 - val_accuracy: 0.9600 - val_loss: 0.5829
Epoch 43/100
40/40 - 0s - 2ms/step - accuracy: 0.9442 - loss: 0.6012 - val_accuracy: 0.9950 - val_loss: 0.5608
Epoch 44/100
40/40 - 0s - 3ms/step - accuracy: 0.9528 - loss: 0.5834 - val_accuracy: 0.9900 - val_loss: 0.5533
Epoch 45/100
40/40 - 0s - 2ms/step - accuracy: 0.9576 - loss: 0.5659 - val_accuracy: 0.9880 - val_loss: 0.5334
Epoch 46/100
40/40 - 0s - 2ms/step - accuracy: 0.9634 - loss: 0.5511 - val_accuracy: 0.9910 - val_loss: 0.5177
Epoch 47/100
40/40 - 0s - 2ms/step - accuracy: 0.9653 - loss: 0.5336 - val_accuracy: 0.9990 - val_loss: 0.4928
Epoch 48/100
40/40 - 0s - 2ms/step - accuracy: 0.9649 - loss: 0.5198 - val_accuracy: 0.9990 - val_loss: 0.4903
Epoch 49/100
40/40 - 0s - 2ms/step - accuracy: 0.9728 - loss: 0.5026 - val_accuracy: 0.9990 - val_loss: 0.4670
Epoch 50/100
40/40 - 0s - 2ms/step - accuracy: 0.9763 - loss: 0.4885 - val_accuracy: 0.9940 - val_loss: 0.4550
Epoch 51/100
40/40 - 0s - 2ms/step - accuracy: 0.9771 - loss: 0.4756 - val_accuracy: 0.9990 - val_loss: 0.4384
Epoch 52/100
40/40 - 0s - 2ms/step - accuracy: 0.9775 - loss: 0.4600 - val_accuracy: 1.0000 - val_loss: 0.4332
Epoch 53/100
40/40 - 0s - 2ms/step - accuracy: 0.9810 - loss: 0.4460 - val_accuracy: 1.0000 - val_loss: 0.4178
Epoch 54/100
40/40 - 0s - 3ms/step - accuracy: 0.9822 - loss: 0.4311 - val_accuracy: 0.9820 - val_loss: 0.4261
Epoch 55/100
40/40 - 0s - 2ms/step - accuracy: 0.9845 - loss: 0.4201 - val_accuracy: 0.9990 - val_loss: 0.3920
Epoch 56/100
40/40 - 0s - 2ms/step - accuracy: 0.9849 - loss: 0.4061 - val_accuracy: 0.9970 - val_loss: 0.3850
Epoch 57/100
40/40 - 0s - 2ms/step - accuracy: 0.9863 - loss: 0.3941 - val_accuracy: 0.9980 - val_loss: 0.3764
Epoch 58/100
40/40 - 0s - 2ms/step - accuracy: 0.9860 - loss: 0.3841 - val_accuracy: 1.0000 - val_loss: 0.3511
Epoch 59/100
40/40 - 0s - 2ms/step - accuracy: 0.9877 - loss: 0.3701 - val_accuracy: 0.9990 - val_loss: 0.3381
Epoch 60/100
40/40 - 0s - 2ms/step - accuracy: 0.9921 - loss: 0.3575 - val_accuracy: 1.0000 - val_loss: 0.3288
Epoch 61/100
40/40 - 0s - 2ms/step - accuracy: 0.9903 - loss: 0.3476 - val_accuracy: 1.0000 - val_loss: 0.3160
Epoch 62/100
40/40 - 0s - 2ms/step - accuracy: 0.9919 - loss: 0.3352 - val_accuracy: 1.0000 - val_loss: 0.3093
Epoch 63/100
40/40 - 0s - 2ms/step - accuracy: 0.9918 - loss: 0.3245 - val_accuracy: 1.0000 - val_loss: 0.2982
Epoch 64/100
40/40 - 0s - 2ms/step - accuracy: 0.9939 - loss: 0.3129 - val_accuracy: 1.0000 - val_loss: 0.2832
Epoch 65/100
40/40 - 0s - 2ms/step - accuracy: 0.9926 - loss: 0.3044 - val_accuracy: 1.0000 - val_loss: 0.2727
Epoch 66/100
40/40 - 0s - 2ms/step - accuracy: 0.9928 - loss: 0.2936 - val_accuracy: 1.0000 - val_loss: 0.2625
Epoch 67/100
40/40 - 0s - 2ms/step - accuracy: 0.9943 - loss: 0.2826 - val_accuracy: 1.0000 - val_loss: 0.2605
Epoch 68/100
40/40 - 0s - 2ms/step - accuracy: 0.9948 - loss: 0.2743 - val_accuracy: 0.9980 - val_loss: 0.2524
Epoch 69/100
40/40 - 0s - 2ms/step - accuracy: 0.9953 - loss: 0.2646 - val_accuracy: 1.0000 - val_loss: 0.2401
Epoch 70/100
40/40 - 0s - 2ms/step - accuracy: 0.9962 - loss: 0.2541 - val_accuracy: 1.0000 - val_loss: 0.2296
Epoch 71/100
40/40 - 0s - 2ms/step - accuracy: 0.9952 - loss: 0.2474 - val_accuracy: 1.0000 - val_loss: 0.2149
Epoch 72/100
40/40 - 0s - 2ms/step - accuracy: 0.9959 - loss: 0.2383 - val_accuracy: 1.0000 - val_loss: 0.2141
Epoch 73/100
40/40 - 0s - 2ms/step - accuracy: 0.9973 - loss: 0.2294 - val_accuracy: 1.0000 - val_loss: 0.2108
Epoch 74/100
40/40 - 0s - 3ms/step - accuracy: 0.9970 - loss: 0.2221 - val_accuracy: 1.0000 - val_loss: 0.1960
Epoch 75/100
40/40 - 0s - 2ms/step - accuracy: 0.9962 - loss: 0.2142 - val_accuracy: 1.0000 - val_loss: 0.1859
Epoch 76/100
40/40 - 0s - 2ms/step - accuracy: 0.9965 - loss: 0.2067 - val_accuracy: 1.0000 - val_loss: 0.1889
Epoch 77/100
40/40 - 0s - 2ms/step - accuracy: 0.9974 - loss: 0.1997 - val_accuracy: 1.0000 - val_loss: 0.1795
Epoch 78/100
40/40 - 0s - 2ms/step - accuracy: 0.9968 - loss: 0.1925 - val_accuracy: 1.0000 - val_loss: 0.1688
Epoch 79/100
40/40 - 0s - 2ms/step - accuracy: 0.9984 - loss: 0.1846 - val_accuracy: 1.0000 - val_loss: 0.1583
Epoch 80/100
40/40 - 0s - 2ms/step - accuracy: 0.9981 - loss: 0.1787 - val_accuracy: 1.0000 - val_loss: 0.1608
Epoch 81/100
40/40 - 0s - 2ms/step - accuracy: 0.9970 - loss: 0.1728 - val_accuracy: 1.0000 - val_loss: 0.1455
Epoch 82/100
40/40 - 0s - 2ms/step - accuracy: 0.9986 - loss: 0.1650 - val_accuracy: 1.0000 - val_loss: 0.1498
Epoch 83/100
40/40 - 0s - 2ms/step - accuracy: 0.9976 - loss: 0.1601 - val_accuracy: 1.0000 - val_loss: 0.1328
Epoch 84/100
40/40 - 0s - 2ms/step - accuracy: 0.9983 - loss: 0.1532 - val_accuracy: 1.0000 - val_loss: 0.1297
Epoch 85/100
40/40 - 0s - 2ms/step - accuracy: 0.9978 - loss: 0.1486 - val_accuracy: 1.0000 - val_loss: 0.1247
Epoch 86/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1428 - val_accuracy: 1.0000 - val_loss: 0.1182
Epoch 87/100
40/40 - 0s - 2ms/step - accuracy: 0.9986 - loss: 0.1369 - val_accuracy: 1.0000 - val_loss: 0.1131
Epoch 88/100
40/40 - 0s - 2ms/step - accuracy: 0.9983 - loss: 0.1323 - val_accuracy: 1.0000 - val_loss: 0.1098
Epoch 89/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1276 - val_accuracy: 1.0000 - val_loss: 0.1061
Epoch 90/100
40/40 - 0s - 2ms/step - accuracy: 0.9993 - loss: 0.1225 - val_accuracy: 1.0000 - val_loss: 0.0977
Epoch 91/100
40/40 - 0s - 2ms/step - accuracy: 0.9987 - loss: 0.1179 - val_accuracy: 1.0000 - val_loss: 0.0924
Epoch 92/100
40/40 - 0s - 2ms/step - accuracy: 0.9988 - loss: 0.1141 - val_accuracy: 1.0000 - val_loss: 0.0886
Epoch 93/100
40/40 - 0s - 2ms/step - accuracy: 0.9988 - loss: 0.1098 - val_accuracy: 1.0000 - val_loss: 0.0881
Epoch 94/100
40/40 - 0s - 3ms/step - accuracy: 0.9991 - loss: 0.1059 - val_accuracy: 1.0000 - val_loss: 0.0836
Epoch 95/100
40/40 - 0s - 2ms/step - accuracy: 0.9988 - loss: 0.1025 - val_accuracy: 1.0000 - val_loss: 0.0793
Epoch 96/100
40/40 - 0s - 2ms/step - accuracy: 0.9994 - loss: 0.0978 - val_accuracy: 1.0000 - val_loss: 0.0762
Epoch 97/100
40/40 - 0s - 2ms/step - accuracy: 0.9991 - loss: 0.0937 - val_accuracy: 1.0000 - val_loss: 0.0747
Epoch 98/100
40/40 - 0s - 2ms/step - accuracy: 0.9994 - loss: 0.0914 - val_accuracy: 1.0000 - val_loss: 0.0713
Epoch 99/100
40/40 - 0s - 2ms/step - accuracy: 0.9991 - loss: 0.0873 - val_accuracy: 1.0000 - val_loss: 0.0662
Epoch 100/100
40/40 - 0s - 3ms/step - accuracy: 0.9996 - loss: 0.0838 - val_accuracy: 1.0000 - val_loss: 0.0623
<keras.src.callbacks.history.History at 0x7f2a089ba190>
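
The 40/40 in each epoch’s log line is just the batch count: 10,000 training samples split into batches of 256 gives ceil(10000/256) = 40 batches per epoch:

```python
import math

# 10,000 training samples split into batches of 256
assert math.ceil(10000 / 256) == 40
```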

As we see, the network is now essentially perfect: by the final epochs it reaches 100% accuracy on the validation set, a clear improvement over our hand-written hidden-layer network.