Keras and the Last Number Problem

Let’s see if we can do better than our simple hidden layer NN with the last number problem.

import numpy as np
import keras
from keras.utils import to_categorical

We’ll use the same data class as before:

class ModelDataCategorical:
    """this is the model data for our "last number" training set.  We
    produce input of length N, consisting of numbers 0-9 and store
    the result in a 10-element array as categorical data.

    """
    def __init__(self, N=10):
        self.N = N
        
        # our model input data
        self.x = np.random.randint(0, high=10, size=N)
        self.x_scaled = self.x / 10 + 0.05
        
        # our scaled model output data
        self.y = np.array([self.x[-1]])
        self.y_scaled = np.zeros(10) + 0.01
        self.y_scaled[self.x[-1]] = 0.99
        
    def interpret_result(self, out):
        """take the network output and return the number we predict"""
        return np.argmax(out)

For Keras, we need to pack the scaled input data into arrays, and we’ll use the Keras to_categorical() function to convert the labels into one-hot categorical form.
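
to_categorical(labels, 10) produces standard one-hot rows. A NumPy-only sketch of the same transformation, using np.eye so it runs without TensorFlow:

```python
import numpy as np

labels = np.array([3, 0, 7])
# equivalent to keras.utils.to_categorical(labels, 10)
one_hot = np.eye(10, dtype="float32")[labels]
# each row has a single 1 at the label's index, e.g. one_hot[0, 3] == 1.0
```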

Let’s make both a training set and a test set:

x_train = []
y_train = []
for _ in range(10000):
    m = ModelDataCategorical()
    x_train.append(m.x_scaled)
    y_train.append(m.y)

x_train = np.asarray(x_train)
y_train = to_categorical(y_train, 10)
x_test = []
y_test = []
for _ in range(1000):
    m = ModelDataCategorical()
    x_test.append(m.x_scaled)
    y_test.append(m.y)

x_test = np.asarray(x_test)
y_test = to_categorical(y_test, 10)

Check to make sure the data looks like we expect:

x_train[0]
array([0.75, 0.25, 0.05, 0.55, 0.65, 0.05, 0.25, 0.95, 0.25, 0.15])
y_train[0]
array([0., 1., 0., 0., 0., 0., 0., 0., 0., 0.])

Creating the network

Now let’s build our network. We’ll use just a single hidden layer, but instead of the sigmoid we used before, we’ll use ReLU for the hidden layer and softmax for the output.

from keras.models import Sequential
from keras.layers import Input, Dense, Dropout, Activation
from keras.optimizers import RMSprop
model = Sequential()
model.add(Input((10,)))
model.add(Dense(100, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(10, activation="softmax"))
rms = RMSprop()
model.compile(loss='categorical_crossentropy',
              optimizer=rms, metrics=['accuracy'])
model.summary()
Model: "sequential"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
│ dense (Dense)                   │ (None, 100)            │         1,100 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dropout (Dropout)               │ (None, 100)            │             0 │
├─────────────────────────────────┼────────────────────────┼───────────────┤
│ dense_1 (Dense)                 │ (None, 10)             │         1,010 │
└─────────────────────────────────┴────────────────────────┴───────────────┘
 Total params: 2,110 (8.24 KB)
 Trainable params: 2,110 (8.24 KB)
 Non-trainable params: 0 (0.00 B)

Now we have ~2,100 parameters to fit.
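
The count checks out by hand: each Dense layer has (inputs × outputs) weights plus one bias per output, and the Dropout layer adds no parameters:

```python
hidden = 10 * 100 + 100   # input -> hidden weights + biases = 1,100
output = 100 * 10 + 10    # hidden -> output weights + biases = 1,010
total = hidden + output
print(total)              # 2110, matching model.summary()
```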

Training

Now we can train, evaluating against the test set each epoch to see how we do:

epochs = 100
batch_size = 256
model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size,
          validation_data=(x_test, y_test), verbose=2)
Epoch 1/100
40/40 - 1s - 16ms/step - accuracy: 0.1674 - loss: 2.2674 - val_accuracy: 0.2030 - val_loss: 2.2230
Epoch 2/100
40/40 - 0s - 2ms/step - accuracy: 0.2206 - loss: 2.1873 - val_accuracy: 0.2610 - val_loss: 2.1446
Epoch 3/100
40/40 - 0s - 2ms/step - accuracy: 0.2504 - loss: 2.0997 - val_accuracy: 0.2300 - val_loss: 2.0604
Epoch 4/100
40/40 - 0s - 2ms/step - accuracy: 0.2679 - loss: 2.0104 - val_accuracy: 0.3000 - val_loss: 1.9653
Epoch 5/100
40/40 - 0s - 2ms/step - accuracy: 0.3000 - loss: 1.9163 - val_accuracy: 0.3300 - val_loss: 1.8764
Epoch 6/100
40/40 - 0s - 3ms/step - accuracy: 0.3212 - loss: 1.8283 - val_accuracy: 0.2770 - val_loss: 1.7952
Epoch 7/100
40/40 - 0s - 2ms/step - accuracy: 0.3471 - loss: 1.7500 - val_accuracy: 0.3530 - val_loss: 1.7175
Epoch 8/100
40/40 - 0s - 2ms/step - accuracy: 0.3665 - loss: 1.6811 - val_accuracy: 0.4240 - val_loss: 1.6454
Epoch 9/100
40/40 - 0s - 2ms/step - accuracy: 0.4019 - loss: 1.6160 - val_accuracy: 0.4090 - val_loss: 1.5854
Epoch 10/100
40/40 - 0s - 2ms/step - accuracy: 0.4298 - loss: 1.5563 - val_accuracy: 0.4390 - val_loss: 1.5317
Epoch 11/100
40/40 - 0s - 2ms/step - accuracy: 0.4516 - loss: 1.5004 - val_accuracy: 0.4570 - val_loss: 1.4726
Epoch 12/100
40/40 - 0s - 2ms/step - accuracy: 0.4688 - loss: 1.4495 - val_accuracy: 0.5350 - val_loss: 1.4236
Epoch 13/100
40/40 - 0s - 2ms/step - accuracy: 0.4983 - loss: 1.4017 - val_accuracy: 0.5510 - val_loss: 1.3761
Epoch 14/100
40/40 - 0s - 2ms/step - accuracy: 0.5203 - loss: 1.3562 - val_accuracy: 0.6060 - val_loss: 1.3291
Epoch 15/100
40/40 - 0s - 2ms/step - accuracy: 0.5433 - loss: 1.3160 - val_accuracy: 0.5650 - val_loss: 1.2910
Epoch 16/100
40/40 - 0s - 2ms/step - accuracy: 0.5591 - loss: 1.2746 - val_accuracy: 0.6260 - val_loss: 1.2444
Epoch 17/100
40/40 - 0s - 2ms/step - accuracy: 0.5720 - loss: 1.2366 - val_accuracy: 0.7210 - val_loss: 1.2088
Epoch 18/100
40/40 - 0s - 3ms/step - accuracy: 0.5949 - loss: 1.2031 - val_accuracy: 0.6230 - val_loss: 1.1733
Epoch 19/100
40/40 - 0s - 2ms/step - accuracy: 0.6113 - loss: 1.1656 - val_accuracy: 0.7060 - val_loss: 1.1393
Epoch 20/100
40/40 - 0s - 2ms/step - accuracy: 0.6352 - loss: 1.1359 - val_accuracy: 0.7120 - val_loss: 1.1047
Epoch 21/100
40/40 - 0s - 2ms/step - accuracy: 0.6433 - loss: 1.1065 - val_accuracy: 0.7590 - val_loss: 1.0760
Epoch 22/100
40/40 - 0s - 2ms/step - accuracy: 0.6712 - loss: 1.0750 - val_accuracy: 0.7110 - val_loss: 1.0466
Epoch 23/100
40/40 - 0s - 2ms/step - accuracy: 0.6755 - loss: 1.0483 - val_accuracy: 0.6970 - val_loss: 1.0228
Epoch 24/100
40/40 - 0s - 2ms/step - accuracy: 0.6936 - loss: 1.0243 - val_accuracy: 0.8260 - val_loss: 0.9902
Epoch 25/100
40/40 - 0s - 2ms/step - accuracy: 0.7168 - loss: 0.9954 - val_accuracy: 0.8280 - val_loss: 0.9610
Epoch 26/100
40/40 - 0s - 3ms/step - accuracy: 0.7267 - loss: 0.9693 - val_accuracy: 0.8200 - val_loss: 0.9386
Epoch 27/100
40/40 - 0s - 3ms/step - accuracy: 0.7400 - loss: 0.9481 - val_accuracy: 0.8500 - val_loss: 0.9183
Epoch 28/100
40/40 - 0s - 2ms/step - accuracy: 0.7598 - loss: 0.9224 - val_accuracy: 0.8570 - val_loss: 0.9025
Epoch 29/100
40/40 - 0s - 2ms/step - accuracy: 0.7703 - loss: 0.9027 - val_accuracy: 0.8880 - val_loss: 0.8750
Epoch 30/100
40/40 - 0s - 2ms/step - accuracy: 0.7820 - loss: 0.8800 - val_accuracy: 0.8820 - val_loss: 0.8552
Epoch 31/100
40/40 - 0s - 2ms/step - accuracy: 0.7980 - loss: 0.8589 - val_accuracy: 0.9180 - val_loss: 0.8281
Epoch 32/100
40/40 - 0s - 2ms/step - accuracy: 0.8053 - loss: 0.8401 - val_accuracy: 0.9040 - val_loss: 0.8132
Epoch 33/100
40/40 - 0s - 2ms/step - accuracy: 0.8192 - loss: 0.8182 - val_accuracy: 0.9290 - val_loss: 0.7941
Epoch 34/100
40/40 - 0s - 2ms/step - accuracy: 0.8208 - loss: 0.7993 - val_accuracy: 0.9000 - val_loss: 0.7663
Epoch 35/100
40/40 - 0s - 2ms/step - accuracy: 0.8419 - loss: 0.7768 - val_accuracy: 0.9380 - val_loss: 0.7497
Epoch 36/100
40/40 - 0s - 2ms/step - accuracy: 0.8451 - loss: 0.7576 - val_accuracy: 0.8980 - val_loss: 0.7324
Epoch 37/100
40/40 - 0s - 2ms/step - accuracy: 0.8585 - loss: 0.7399 - val_accuracy: 0.9180 - val_loss: 0.7127
Epoch 38/100
40/40 - 0s - 2ms/step - accuracy: 0.8609 - loss: 0.7231 - val_accuracy: 0.8950 - val_loss: 0.6991
Epoch 39/100
40/40 - 0s - 2ms/step - accuracy: 0.8674 - loss: 0.7041 - val_accuracy: 0.9670 - val_loss: 0.6752
Epoch 40/100
40/40 - 0s - 2ms/step - accuracy: 0.8775 - loss: 0.6879 - val_accuracy: 0.9490 - val_loss: 0.6622
Epoch 41/100
40/40 - 0s - 2ms/step - accuracy: 0.8866 - loss: 0.6703 - val_accuracy: 0.9500 - val_loss: 0.6440
Epoch 42/100
40/40 - 0s - 2ms/step - accuracy: 0.8988 - loss: 0.6509 - val_accuracy: 0.9490 - val_loss: 0.6328
Epoch 43/100
40/40 - 0s - 2ms/step - accuracy: 0.9042 - loss: 0.6351 - val_accuracy: 0.9760 - val_loss: 0.6059
Epoch 44/100
40/40 - 0s - 2ms/step - accuracy: 0.9087 - loss: 0.6198 - val_accuracy: 0.9760 - val_loss: 0.5901
Epoch 45/100
40/40 - 0s - 2ms/step - accuracy: 0.9154 - loss: 0.6018 - val_accuracy: 0.9700 - val_loss: 0.5747
Epoch 46/100
40/40 - 0s - 2ms/step - accuracy: 0.9209 - loss: 0.5878 - val_accuracy: 0.9700 - val_loss: 0.5632
Epoch 47/100
40/40 - 0s - 2ms/step - accuracy: 0.9263 - loss: 0.5723 - val_accuracy: 0.9280 - val_loss: 0.5589
Epoch 48/100
40/40 - 0s - 2ms/step - accuracy: 0.9310 - loss: 0.5571 - val_accuracy: 0.9870 - val_loss: 0.5334
Epoch 49/100
40/40 - 0s - 2ms/step - accuracy: 0.9414 - loss: 0.5402 - val_accuracy: 0.9560 - val_loss: 0.5277
Epoch 50/100
40/40 - 0s - 2ms/step - accuracy: 0.9391 - loss: 0.5302 - val_accuracy: 0.9790 - val_loss: 0.5061
Epoch 51/100
40/40 - 0s - 2ms/step - accuracy: 0.9491 - loss: 0.5143 - val_accuracy: 0.9900 - val_loss: 0.4906
Epoch 52/100
40/40 - 0s - 2ms/step - accuracy: 0.9532 - loss: 0.4998 - val_accuracy: 0.9870 - val_loss: 0.4775
Epoch 53/100
40/40 - 0s - 2ms/step - accuracy: 0.9585 - loss: 0.4864 - val_accuracy: 0.9980 - val_loss: 0.4573
Epoch 54/100
40/40 - 0s - 2ms/step - accuracy: 0.9622 - loss: 0.4718 - val_accuracy: 0.9950 - val_loss: 0.4531
Epoch 55/100
40/40 - 0s - 2ms/step - accuracy: 0.9637 - loss: 0.4585 - val_accuracy: 0.9980 - val_loss: 0.4317
Epoch 56/100
40/40 - 0s - 2ms/step - accuracy: 0.9689 - loss: 0.4472 - val_accuracy: 0.9940 - val_loss: 0.4222
Epoch 57/100
40/40 - 0s - 2ms/step - accuracy: 0.9696 - loss: 0.4341 - val_accuracy: 0.9990 - val_loss: 0.4140
Epoch 58/100
40/40 - 0s - 2ms/step - accuracy: 0.9742 - loss: 0.4198 - val_accuracy: 1.0000 - val_loss: 0.3998
Epoch 59/100
40/40 - 0s - 2ms/step - accuracy: 0.9743 - loss: 0.4090 - val_accuracy: 0.9990 - val_loss: 0.3874
Epoch 60/100
40/40 - 0s - 2ms/step - accuracy: 0.9781 - loss: 0.3973 - val_accuracy: 1.0000 - val_loss: 0.3788
Epoch 61/100
40/40 - 0s - 2ms/step - accuracy: 0.9777 - loss: 0.3874 - val_accuracy: 1.0000 - val_loss: 0.3627
Epoch 62/100
40/40 - 0s - 2ms/step - accuracy: 0.9798 - loss: 0.3774 - val_accuracy: 1.0000 - val_loss: 0.3549
Epoch 63/100
40/40 - 0s - 2ms/step - accuracy: 0.9831 - loss: 0.3643 - val_accuracy: 1.0000 - val_loss: 0.3429
Epoch 64/100
40/40 - 0s - 2ms/step - accuracy: 0.9815 - loss: 0.3544 - val_accuracy: 1.0000 - val_loss: 0.3309
Epoch 65/100
40/40 - 0s - 2ms/step - accuracy: 0.9869 - loss: 0.3438 - val_accuracy: 0.9990 - val_loss: 0.3292
Epoch 66/100
40/40 - 0s - 2ms/step - accuracy: 0.9861 - loss: 0.3340 - val_accuracy: 1.0000 - val_loss: 0.3181
Epoch 67/100
40/40 - 0s - 2ms/step - accuracy: 0.9892 - loss: 0.3238 - val_accuracy: 1.0000 - val_loss: 0.3089
Epoch 68/100
40/40 - 0s - 2ms/step - accuracy: 0.9880 - loss: 0.3128 - val_accuracy: 1.0000 - val_loss: 0.2940
Epoch 69/100
40/40 - 0s - 2ms/step - accuracy: 0.9914 - loss: 0.3039 - val_accuracy: 1.0000 - val_loss: 0.2839
Epoch 70/100
40/40 - 0s - 2ms/step - accuracy: 0.9895 - loss: 0.2955 - val_accuracy: 1.0000 - val_loss: 0.2754
Epoch 71/100
40/40 - 0s - 2ms/step - accuracy: 0.9905 - loss: 0.2880 - val_accuracy: 1.0000 - val_loss: 0.2744
Epoch 72/100
40/40 - 0s - 2ms/step - accuracy: 0.9908 - loss: 0.2777 - val_accuracy: 1.0000 - val_loss: 0.2596
Epoch 73/100
40/40 - 0s - 2ms/step - accuracy: 0.9929 - loss: 0.2694 - val_accuracy: 1.0000 - val_loss: 0.2474
Epoch 74/100
40/40 - 0s - 2ms/step - accuracy: 0.9926 - loss: 0.2605 - val_accuracy: 1.0000 - val_loss: 0.2394
Epoch 75/100
40/40 - 0s - 2ms/step - accuracy: 0.9934 - loss: 0.2538 - val_accuracy: 1.0000 - val_loss: 0.2352
Epoch 76/100
40/40 - 0s - 2ms/step - accuracy: 0.9966 - loss: 0.2447 - val_accuracy: 1.0000 - val_loss: 0.2315
Epoch 77/100
40/40 - 0s - 2ms/step - accuracy: 0.9958 - loss: 0.2368 - val_accuracy: 1.0000 - val_loss: 0.2148
Epoch 78/100
40/40 - 0s - 2ms/step - accuracy: 0.9957 - loss: 0.2309 - val_accuracy: 1.0000 - val_loss: 0.2098
Epoch 79/100
40/40 - 0s - 2ms/step - accuracy: 0.9960 - loss: 0.2224 - val_accuracy: 1.0000 - val_loss: 0.2032
Epoch 80/100
40/40 - 0s - 2ms/step - accuracy: 0.9967 - loss: 0.2142 - val_accuracy: 1.0000 - val_loss: 0.1980
Epoch 81/100
40/40 - 0s - 2ms/step - accuracy: 0.9972 - loss: 0.2078 - val_accuracy: 1.0000 - val_loss: 0.1851
Epoch 82/100
40/40 - 0s - 2ms/step - accuracy: 0.9969 - loss: 0.2014 - val_accuracy: 1.0000 - val_loss: 0.1941
Epoch 83/100
40/40 - 0s - 3ms/step - accuracy: 0.9970 - loss: 0.1941 - val_accuracy: 1.0000 - val_loss: 0.1820
Epoch 84/100
40/40 - 0s - 2ms/step - accuracy: 0.9976 - loss: 0.1866 - val_accuracy: 1.0000 - val_loss: 0.1703
Epoch 85/100
40/40 - 0s - 2ms/step - accuracy: 0.9969 - loss: 0.1808 - val_accuracy: 1.0000 - val_loss: 0.1619
Epoch 86/100
40/40 - 0s - 2ms/step - accuracy: 0.9978 - loss: 0.1734 - val_accuracy: 1.0000 - val_loss: 0.1556
Epoch 87/100
40/40 - 0s - 2ms/step - accuracy: 0.9978 - loss: 0.1693 - val_accuracy: 1.0000 - val_loss: 0.1490
Epoch 88/100
40/40 - 0s - 2ms/step - accuracy: 0.9980 - loss: 0.1626 - val_accuracy: 1.0000 - val_loss: 0.1429
Epoch 89/100
40/40 - 0s - 2ms/step - accuracy: 0.9982 - loss: 0.1561 - val_accuracy: 1.0000 - val_loss: 0.1348
Epoch 90/100
40/40 - 0s - 2ms/step - accuracy: 0.9986 - loss: 0.1508 - val_accuracy: 1.0000 - val_loss: 0.1453
Epoch 91/100
40/40 - 0s - 2ms/step - accuracy: 0.9978 - loss: 0.1470 - val_accuracy: 1.0000 - val_loss: 0.1260
Epoch 92/100
40/40 - 0s - 2ms/step - accuracy: 0.9986 - loss: 0.1410 - val_accuracy: 1.0000 - val_loss: 0.1302
Epoch 93/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1366 - val_accuracy: 1.0000 - val_loss: 0.1163
Epoch 94/100
40/40 - 0s - 2ms/step - accuracy: 0.9988 - loss: 0.1308 - val_accuracy: 1.0000 - val_loss: 0.1161
Epoch 95/100
40/40 - 0s - 2ms/step - accuracy: 0.9984 - loss: 0.1269 - val_accuracy: 1.0000 - val_loss: 0.1100
Epoch 96/100
40/40 - 0s - 2ms/step - accuracy: 0.9986 - loss: 0.1228 - val_accuracy: 1.0000 - val_loss: 0.1100
Epoch 97/100
40/40 - 0s - 2ms/step - accuracy: 0.9990 - loss: 0.1184 - val_accuracy: 1.0000 - val_loss: 0.1010
Epoch 98/100
40/40 - 0s - 2ms/step - accuracy: 0.9986 - loss: 0.1145 - val_accuracy: 1.0000 - val_loss: 0.0977
Epoch 99/100
40/40 - 0s - 2ms/step - accuracy: 0.9986 - loss: 0.1102 - val_accuracy: 1.0000 - val_loss: 0.0949
Epoch 100/100
40/40 - 0s - 2ms/step - accuracy: 0.9985 - loss: 0.1066 - val_accuracy: 1.0000 - val_loss: 0.0890
<keras.src.callbacks.history.History at 0x7ff8013bfa50>

As we see, the network is now essentially perfect: the validation accuracy reaches 100% by around epoch 60 and stays there.
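
To recover a digit from the trained network we take the argmax of the softmax output, exactly as interpret_result() does. A minimal NumPy illustration, where probs stands in for one row of the network's output (the real call would be something like model.predict(m.x_scaled.reshape(1, -1))):

```python
import numpy as np

# stand-in for one row of model.predict() output (a softmax distribution)
probs = np.array([0.01, 0.02, 0.01, 0.85, 0.03, 0.02,
                  0.02, 0.02, 0.01, 0.01])
prediction = np.argmax(probs)   # -> 3, the most probable digit
```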