Keras and the Last Number Problem#

Let’s see if we can do better than our simple hidden layer NN with the last number problem.

import numpy as np
import keras
from keras.utils import to_categorical

We’ll use the same data class as before:

class ModelDataCategorical:
    """this is the model data for our "last number" training set.  We
    produce input of length N, consisting of numbers 0-9 and store
    the result in a 10-element array as categorical data.

    """
    def __init__(self, N=10):
        self.N = N
        
        # our model input data
        self.x = np.random.randint(0, high=10, size=N)
        self.x_scaled = self.x / 10 + 0.05
        
        # our scaled model output data
        self.y = np.array([self.x[-1]])
        self.y_scaled = np.zeros(10) + 0.01
        self.y_scaled[self.x[-1]] = 0.99
        
    def interpret_result(self, out):
        """take the network output and return the number we predict"""
        return np.argmax(out)
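The scaling in the constructor maps each digit d to d/10 + 0.05, keeping the inputs in (0, 1) and away from the endpoints. A quick standalone check (using NumPy directly rather than the class) illustrates the mapping:

```python
import numpy as np

# digits 0-9 map to 0.05, 0.15, ..., 0.95 under d/10 + 0.05
digits = np.arange(10)
scaled = digits / 10 + 0.05

print(scaled)
```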

For Keras, we need to pack the data into arrays: the scaled inputs, and the (unscaled) labels converted to one-hot form with the Keras to_categorical() function.
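To see what to_categorical() does, here is a minimal NumPy equivalent (just an illustration of the one-hot encoding, not the Keras implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Return a (len(labels), num_classes) array with a 1 at each label's index."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

print(one_hot([3, 7], 10))
```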

Let’s make both a training set and a test set:

x_train = []
y_train = []
for _ in range(10000):
    m = ModelDataCategorical()
    x_train.append(m.x_scaled)
    y_train.append(m.y)

x_train = np.asarray(x_train)
y_train = to_categorical(y_train, 10)
x_test = []
y_test = []
for _ in range(1000):
    m = ModelDataCategorical()
    x_test.append(m.x_scaled)
    y_test.append(m.y)

x_test = np.asarray(x_test)
y_test = to_categorical(y_test, 10)

Check to make sure the data looks like we expect:

x_train[0]
array([0.45, 0.35, 0.85, 0.75, 0.55, 0.15, 0.45, 0.15, 0.75, 0.35])
y_train[0]
array([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])

Creating the network#

Now let’s build our network. We’ll use just a single hidden layer, but instead of the sigmoid used before, we’ll use ReLU for the hidden layer and softmax for the output.

from keras.models import Sequential
from keras.layers import Input, Dense, Dropout, Activation
from keras.optimizers import RMSprop
model = Sequential()
model.add(Input((10,)))
model.add(Dense(100, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(10, activation="softmax"))
rms = RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=rms, metrics=['accuracy'])
model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 100)            │         1,100 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 100)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 10)             │         1,010 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 2,110 (8.24 KB)
 Trainable params: 2,110 (8.24 KB)
 Non-trainable params: 0 (0.00 B)

Now we have 2,110 parameters to fit.
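The parameter count is easy to verify by hand: a Dense layer with n inputs and m outputs has n × m weights plus m biases.

```python
# Dense(100) fed by 10 inputs: 10*100 weights + 100 biases
hidden = 10 * 100 + 100      # 1100
# Dense(10) fed by 100 hidden units: 100*10 weights + 10 biases
output = 100 * 10 + 10       # 1010
print(hidden + output)       # 2110
```

This matches the totals reported by model.summary() above (the Dropout layer has no parameters).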

Training#

Now we can train, validating against the test set after each epoch to see how we do:

epochs = 100
batch_size = 256
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(x_test, y_test), verbose=2)
Epoch 1/100
40/40 - 1s - 17ms/step - accuracy: 0.1446 - loss: 2.2652 - val_accuracy: 0.1720 - val_loss: 2.2065
Epoch 2/100
40/40 - 0s - 3ms/step - accuracy: 0.2026 - loss: 2.1707 - val_accuracy: 0.2180 - val_loss: 2.1205
Epoch 3/100
40/40 - 0s - 3ms/step - accuracy: 0.2456 - loss: 2.0739 - val_accuracy: 0.2440 - val_loss: 2.0183
Epoch 4/100
40/40 - 0s - 3ms/step - accuracy: 0.2666 - loss: 1.9782 - val_accuracy: 0.2980 - val_loss: 1.9241
Epoch 5/100
40/40 - 0s - 3ms/step - accuracy: 0.3101 - loss: 1.8781 - val_accuracy: 0.3310 - val_loss: 1.8328
Epoch 6/100
40/40 - 0s - 3ms/step - accuracy: 0.3337 - loss: 1.7949 - val_accuracy: 0.3360 - val_loss: 1.7563
Epoch 7/100
40/40 - 0s - 3ms/step - accuracy: 0.3643 - loss: 1.7159 - val_accuracy: 0.3870 - val_loss: 1.6786
Epoch 8/100
40/40 - 0s - 3ms/step - accuracy: 0.3921 - loss: 1.6459 - val_accuracy: 0.4120 - val_loss: 1.6094
Epoch 9/100
40/40 - 0s - 3ms/step - accuracy: 0.4135 - loss: 1.5827 - val_accuracy: 0.4840 - val_loss: 1.5507
Epoch 10/100
40/40 - 0s - 3ms/step - accuracy: 0.4452 - loss: 1.5217 - val_accuracy: 0.4510 - val_loss: 1.4919
Epoch 11/100
40/40 - 0s - 3ms/step - accuracy: 0.4542 - loss: 1.4661 - val_accuracy: 0.4860 - val_loss: 1.4360
Epoch 12/100
40/40 - 0s - 3ms/step - accuracy: 0.4894 - loss: 1.4139 - val_accuracy: 0.5450 - val_loss: 1.3869
Epoch 13/100
40/40 - 0s - 3ms/step - accuracy: 0.5034 - loss: 1.3650 - val_accuracy: 0.5460 - val_loss: 1.3451
Epoch 14/100
40/40 - 0s - 3ms/step - accuracy: 0.5311 - loss: 1.3228 - val_accuracy: 0.4770 - val_loss: 1.3025
Epoch 15/100
40/40 - 0s - 3ms/step - accuracy: 0.5456 - loss: 1.2810 - val_accuracy: 0.5870 - val_loss: 1.2589
Epoch 16/100
40/40 - 0s - 3ms/step - accuracy: 0.5668 - loss: 1.2447 - val_accuracy: 0.5900 - val_loss: 1.2242
Epoch 17/100
40/40 - 0s - 3ms/step - accuracy: 0.5886 - loss: 1.2065 - val_accuracy: 0.6450 - val_loss: 1.1878
Epoch 18/100
40/40 - 0s - 3ms/step - accuracy: 0.6107 - loss: 1.1733 - val_accuracy: 0.6540 - val_loss: 1.1574
Epoch 19/100
40/40 - 0s - 3ms/step - accuracy: 0.6264 - loss: 1.1413 - val_accuracy: 0.5930 - val_loss: 1.1285
Epoch 20/100
40/40 - 0s - 3ms/step - accuracy: 0.6436 - loss: 1.1071 - val_accuracy: 0.7590 - val_loss: 1.0879
Epoch 21/100
40/40 - 0s - 3ms/step - accuracy: 0.6623 - loss: 1.0784 - val_accuracy: 0.7020 - val_loss: 1.0622
Epoch 22/100
40/40 - 0s - 3ms/step - accuracy: 0.6727 - loss: 1.0498 - val_accuracy: 0.7730 - val_loss: 1.0302
Epoch 23/100
40/40 - 0s - 3ms/step - accuracy: 0.6944 - loss: 1.0239 - val_accuracy: 0.8050 - val_loss: 1.0050
Epoch 24/100
40/40 - 0s - 3ms/step - accuracy: 0.7173 - loss: 0.9959 - val_accuracy: 0.8280 - val_loss: 0.9743
Epoch 25/100
40/40 - 0s - 3ms/step - accuracy: 0.7310 - loss: 0.9702 - val_accuracy: 0.7970 - val_loss: 0.9514
Epoch 26/100
40/40 - 0s - 3ms/step - accuracy: 0.7450 - loss: 0.9462 - val_accuracy: 0.7830 - val_loss: 0.9302
Epoch 27/100
40/40 - 0s - 3ms/step - accuracy: 0.7606 - loss: 0.9187 - val_accuracy: 0.8430 - val_loss: 0.9021
Epoch 28/100
40/40 - 0s - 3ms/step - accuracy: 0.7714 - loss: 0.8973 - val_accuracy: 0.8210 - val_loss: 0.8837
Epoch 29/100
40/40 - 0s - 3ms/step - accuracy: 0.7853 - loss: 0.8754 - val_accuracy: 0.7820 - val_loss: 0.8685
Epoch 30/100
40/40 - 0s - 3ms/step - accuracy: 0.7960 - loss: 0.8514 - val_accuracy: 0.8040 - val_loss: 0.8426
Epoch 31/100
40/40 - 0s - 3ms/step - accuracy: 0.8133 - loss: 0.8293 - val_accuracy: 0.9060 - val_loss: 0.8105
Epoch 32/100
40/40 - 0s - 3ms/step - accuracy: 0.8279 - loss: 0.8077 - val_accuracy: 0.8340 - val_loss: 0.8078
Epoch 33/100
40/40 - 0s - 3ms/step - accuracy: 0.8327 - loss: 0.7875 - val_accuracy: 0.8720 - val_loss: 0.7709
Epoch 34/100
40/40 - 0s - 3ms/step - accuracy: 0.8454 - loss: 0.7669 - val_accuracy: 0.8970 - val_loss: 0.7501
Epoch 35/100
40/40 - 0s - 3ms/step - accuracy: 0.8582 - loss: 0.7449 - val_accuracy: 0.9260 - val_loss: 0.7296
Epoch 36/100
40/40 - 0s - 3ms/step - accuracy: 0.8704 - loss: 0.7248 - val_accuracy: 0.8800 - val_loss: 0.7226
Epoch 37/100
40/40 - 0s - 3ms/step - accuracy: 0.8751 - loss: 0.7050 - val_accuracy: 0.9110 - val_loss: 0.6916
Epoch 38/100
40/40 - 0s - 3ms/step - accuracy: 0.8833 - loss: 0.6889 - val_accuracy: 0.9300 - val_loss: 0.6714
Epoch 39/100
40/40 - 0s - 3ms/step - accuracy: 0.8909 - loss: 0.6688 - val_accuracy: 0.9470 - val_loss: 0.6559
Epoch 40/100
40/40 - 0s - 3ms/step - accuracy: 0.9050 - loss: 0.6514 - val_accuracy: 0.9330 - val_loss: 0.6404
Epoch 41/100
40/40 - 0s - 3ms/step - accuracy: 0.9113 - loss: 0.6336 - val_accuracy: 0.9560 - val_loss: 0.6150
Epoch 42/100
40/40 - 0s - 3ms/step - accuracy: 0.9185 - loss: 0.6152 - val_accuracy: 0.9560 - val_loss: 0.6048
Epoch 43/100
40/40 - 0s - 3ms/step - accuracy: 0.9283 - loss: 0.5986 - val_accuracy: 0.9620 - val_loss: 0.5852
Epoch 44/100
40/40 - 0s - 3ms/step - accuracy: 0.9321 - loss: 0.5813 - val_accuracy: 0.9740 - val_loss: 0.5718
Epoch 45/100
40/40 - 0s - 3ms/step - accuracy: 0.9428 - loss: 0.5656 - val_accuracy: 0.9930 - val_loss: 0.5468
Epoch 46/100
40/40 - 0s - 3ms/step - accuracy: 0.9503 - loss: 0.5467 - val_accuracy: 0.9870 - val_loss: 0.5390
Epoch 47/100
40/40 - 0s - 3ms/step - accuracy: 0.9485 - loss: 0.5318 - val_accuracy: 0.9810 - val_loss: 0.5275
Epoch 48/100
40/40 - 0s - 3ms/step - accuracy: 0.9590 - loss: 0.5175 - val_accuracy: 0.9840 - val_loss: 0.5035
Epoch 49/100
40/40 - 0s - 3ms/step - accuracy: 0.9611 - loss: 0.5002 - val_accuracy: 0.9930 - val_loss: 0.4956
Epoch 50/100
40/40 - 0s - 3ms/step - accuracy: 0.9676 - loss: 0.4866 - val_accuracy: 0.9550 - val_loss: 0.4884
Epoch 51/100
40/40 - 0s - 3ms/step - accuracy: 0.9670 - loss: 0.4724 - val_accuracy: 0.9790 - val_loss: 0.4631
Epoch 52/100
40/40 - 0s - 3ms/step - accuracy: 0.9723 - loss: 0.4585 - val_accuracy: 0.9990 - val_loss: 0.4486
Epoch 53/100
40/40 - 0s - 3ms/step - accuracy: 0.9766 - loss: 0.4450 - val_accuracy: 1.0000 - val_loss: 0.4319
Epoch 54/100
40/40 - 0s - 3ms/step - accuracy: 0.9789 - loss: 0.4336 - val_accuracy: 0.9980 - val_loss: 0.4248
Epoch 55/100
40/40 - 0s - 3ms/step - accuracy: 0.9792 - loss: 0.4223 - val_accuracy: 0.9990 - val_loss: 0.4134
Epoch 56/100
40/40 - 0s - 3ms/step - accuracy: 0.9818 - loss: 0.4080 - val_accuracy: 0.9980 - val_loss: 0.4052
Epoch 57/100
40/40 - 0s - 3ms/step - accuracy: 0.9841 - loss: 0.3959 - val_accuracy: 0.9950 - val_loss: 0.3890
Epoch 58/100
40/40 - 0s - 3ms/step - accuracy: 0.9849 - loss: 0.3844 - val_accuracy: 1.0000 - val_loss: 0.3721
Epoch 59/100
40/40 - 0s - 3ms/step - accuracy: 0.9866 - loss: 0.3715 - val_accuracy: 0.9960 - val_loss: 0.3673
Epoch 60/100
40/40 - 0s - 3ms/step - accuracy: 0.9872 - loss: 0.3610 - val_accuracy: 1.0000 - val_loss: 0.3458
Epoch 61/100
40/40 - 0s - 3ms/step - accuracy: 0.9893 - loss: 0.3489 - val_accuracy: 1.0000 - val_loss: 0.3407
Epoch 62/100
40/40 - 0s - 3ms/step - accuracy: 0.9899 - loss: 0.3380 - val_accuracy: 1.0000 - val_loss: 0.3235
Epoch 63/100
40/40 - 0s - 3ms/step - accuracy: 0.9904 - loss: 0.3284 - val_accuracy: 1.0000 - val_loss: 0.3153
Epoch 64/100
40/40 - 0s - 3ms/step - accuracy: 0.9931 - loss: 0.3160 - val_accuracy: 1.0000 - val_loss: 0.3025
Epoch 65/100
40/40 - 0s - 3ms/step - accuracy: 0.9948 - loss: 0.3066 - val_accuracy: 1.0000 - val_loss: 0.2945
Epoch 66/100
40/40 - 0s - 3ms/step - accuracy: 0.9930 - loss: 0.2961 - val_accuracy: 1.0000 - val_loss: 0.2876
Epoch 67/100
40/40 - 0s - 3ms/step - accuracy: 0.9945 - loss: 0.2862 - val_accuracy: 1.0000 - val_loss: 0.2777
Epoch 68/100
40/40 - 0s - 3ms/step - accuracy: 0.9940 - loss: 0.2795 - val_accuracy: 1.0000 - val_loss: 0.2682
Epoch 69/100
40/40 - 0s - 3ms/step - accuracy: 0.9949 - loss: 0.2685 - val_accuracy: 1.0000 - val_loss: 0.2552
Epoch 70/100
40/40 - 0s - 3ms/step - accuracy: 0.9953 - loss: 0.2615 - val_accuracy: 1.0000 - val_loss: 0.2500
Epoch 71/100
40/40 - 0s - 3ms/step - accuracy: 0.9949 - loss: 0.2516 - val_accuracy: 1.0000 - val_loss: 0.2384
Epoch 72/100
40/40 - 0s - 3ms/step - accuracy: 0.9952 - loss: 0.2428 - val_accuracy: 1.0000 - val_loss: 0.2356
Epoch 73/100
40/40 - 0s - 3ms/step - accuracy: 0.9965 - loss: 0.2353 - val_accuracy: 1.0000 - val_loss: 0.2267
Epoch 74/100
40/40 - 0s - 3ms/step - accuracy: 0.9956 - loss: 0.2272 - val_accuracy: 1.0000 - val_loss: 0.2142
Epoch 75/100
40/40 - 0s - 3ms/step - accuracy: 0.9966 - loss: 0.2195 - val_accuracy: 1.0000 - val_loss: 0.2058
Epoch 76/100
40/40 - 0s - 3ms/step - accuracy: 0.9969 - loss: 0.2113 - val_accuracy: 1.0000 - val_loss: 0.2086
Epoch 77/100
40/40 - 0s - 3ms/step - accuracy: 0.9957 - loss: 0.2067 - val_accuracy: 1.0000 - val_loss: 0.1972
Epoch 78/100
40/40 - 0s - 3ms/step - accuracy: 0.9985 - loss: 0.1978 - val_accuracy: 1.0000 - val_loss: 0.1846
Epoch 79/100
40/40 - 0s - 3ms/step - accuracy: 0.9983 - loss: 0.1906 - val_accuracy: 1.0000 - val_loss: 0.1828
Epoch 80/100
40/40 - 0s - 3ms/step - accuracy: 0.9971 - loss: 0.1845 - val_accuracy: 1.0000 - val_loss: 0.1730
Epoch 81/100
40/40 - 0s - 3ms/step - accuracy: 0.9984 - loss: 0.1777 - val_accuracy: 1.0000 - val_loss: 0.1752
Epoch 82/100
40/40 - 0s - 3ms/step - accuracy: 0.9980 - loss: 0.1725 - val_accuracy: 1.0000 - val_loss: 0.1560
Epoch 83/100
40/40 - 0s - 3ms/step - accuracy: 0.9984 - loss: 0.1633 - val_accuracy: 1.0000 - val_loss: 0.1506
Epoch 84/100
40/40 - 0s - 3ms/step - accuracy: 0.9990 - loss: 0.1583 - val_accuracy: 1.0000 - val_loss: 0.1477
Epoch 85/100
40/40 - 0s - 3ms/step - accuracy: 0.9984 - loss: 0.1525 - val_accuracy: 1.0000 - val_loss: 0.1393
Epoch 86/100
40/40 - 0s - 3ms/step - accuracy: 0.9983 - loss: 0.1461 - val_accuracy: 1.0000 - val_loss: 0.1384
Epoch 87/100
40/40 - 0s - 3ms/step - accuracy: 0.9989 - loss: 0.1410 - val_accuracy: 1.0000 - val_loss: 0.1295
Epoch 88/100
40/40 - 0s - 3ms/step - accuracy: 0.9991 - loss: 0.1359 - val_accuracy: 1.0000 - val_loss: 0.1266
Epoch 89/100
40/40 - 0s - 3ms/step - accuracy: 0.9995 - loss: 0.1312 - val_accuracy: 1.0000 - val_loss: 0.1164
Epoch 90/100
40/40 - 0s - 3ms/step - accuracy: 0.9990 - loss: 0.1253 - val_accuracy: 1.0000 - val_loss: 0.1117
Epoch 91/100
40/40 - 0s - 3ms/step - accuracy: 0.9995 - loss: 0.1212 - val_accuracy: 1.0000 - val_loss: 0.1136
Epoch 92/100
40/40 - 0s - 3ms/step - accuracy: 0.9989 - loss: 0.1172 - val_accuracy: 1.0000 - val_loss: 0.1086
Epoch 93/100
40/40 - 0s - 3ms/step - accuracy: 0.9994 - loss: 0.1114 - val_accuracy: 1.0000 - val_loss: 0.1002
Epoch 94/100
40/40 - 0s - 3ms/step - accuracy: 0.9993 - loss: 0.1080 - val_accuracy: 1.0000 - val_loss: 0.1007
Epoch 95/100
40/40 - 0s - 3ms/step - accuracy: 0.9991 - loss: 0.1046 - val_accuracy: 1.0000 - val_loss: 0.0884
Epoch 96/100
40/40 - 0s - 3ms/step - accuracy: 0.9992 - loss: 0.0998 - val_accuracy: 1.0000 - val_loss: 0.0872
Epoch 97/100
40/40 - 0s - 3ms/step - accuracy: 0.9991 - loss: 0.0961 - val_accuracy: 1.0000 - val_loss: 0.0818
Epoch 98/100
40/40 - 0s - 3ms/step - accuracy: 0.9995 - loss: 0.0922 - val_accuracy: 1.0000 - val_loss: 0.0792
Epoch 99/100
40/40 - 0s - 3ms/step - accuracy: 1.0000 - loss: 0.0876 - val_accuracy: 1.0000 - val_loss: 0.0736
Epoch 100/100
40/40 - 0s - 3ms/step - accuracy: 0.9991 - loss: 0.0854 - val_accuracy: 1.0000 - val_loss: 0.0711
<keras.src.callbacks.history.History at 0x7f6e9ed17c10>

As we can see, the network now predicts the last number essentially perfectly, reaching 100% accuracy on the validation set.
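To use the trained model on a fresh sample, we would scale the input the same way as the training data and take the argmax of the softmax output, just as interpret_result() does. One detail to note is that predict() expects a batch axis, so a single sample must be reshaped (the model.predict call below is commented out and assumes the trained model from above is in scope):

```python
import numpy as np

# a fresh input sequence, scaled like the training data
x = np.random.randint(0, 10, size=10)
x_scaled = (x / 10 + 0.05).reshape(1, -1)   # predict() expects a batch axis

# prediction = np.argmax(model.predict(x_scaled))  # with the trained model above
print(x_scaled.shape)   # (1, 10)
```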