As can be seen in the plots, at around epoch 60 my validation loss starts to increase while my validation accuracy stays flat. It looks like the model begins to overfit around then, but if it were simply memorizing my training data, wouldn't the training loss keep decreasing toward zero? My model also seems too small to overfit (I'm trying to classify FFT data). Is there something I'm blatantly doing wrong?
Here is my model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dropout, GlobalMaxPooling1D, Dense

model = Sequential()
# 1D convolution over the FFT bins; size is the length of each sample (2206 here)
model.add(Conv1D(filters=32, kernel_size=3, activation='relu', input_shape=(size, 1)))
model.add(Dropout(dropout))
model.add(GlobalMaxPooling1D())  # collapse each feature map to its max activation
model.add(Dropout(dropout))
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))  # output layer: single sigmoid for binary classification
My training data shape:
x: (1038, 2206)
y: (1038, 1)
My parameters:
EPOCHS = 300
LR = 1e-3
DROPOUT = 0.5
BATCH_SIZE = 128
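For reference, the compile/fit step looks roughly like this (a minimal sketch: the Adam optimizer, the binary_crossentropy loss, and the explicit x_val/y_val arrays are assumptions filling in what isn't shown above):

from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=LR),
              loss='binary_crossentropy',  # single sigmoid output -> binary cross-entropy
              metrics=['accuracy'])

# Conv1D expects a channel axis, so x goes from (1038, 2206) to (1038, 2206, 1)
history = model.fit(x.reshape(-1, size, 1), y,
                    epochs=EPOCHS,
                    batch_size=BATCH_SIZE,
                    validation_data=(x_val.reshape(-1, size, 1), y_val))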
On a side note, my validation accuracy is around 98%, yet when I run the trained model on that same validation data myself, I get incorrect outputs. I don't believe my validation data is malformed, since I built it in exactly the same way as my training data.
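This is roughly how I check the model by hand (again a sketch; thresholding the sigmoid output at 0.5 and the x_val/y_val names are assumptions):

import numpy as np

# predict() returns sigmoid probabilities with shape (n_samples, 1)
probs = model.predict(x_val.reshape(-1, size, 1))
preds = (probs > 0.5).astype(int)  # 0.5 threshold -> hard class labels

# compare against the labels; this is where I see the mismatch with val_accuracy
manual_acc = np.mean(preds == y_val)
print(f"manual accuracy: {manual_acc:.3f}")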