1. Compiling the Model
Before training starts, the last step is to decide on the loss function, the optimization algorithm, and the evaluation metric used to monitor training, and to compile the model with these settings.
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
loss: categorical_crossentropy, the cross-entropy loss for multi-class classification with one-hot labels.
optimizer: rmsprop, the RMSprop optimization algorithm.
metrics: accuracy, so classification accuracy is reported during training and validation.
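If you want more control, for example over the learning rate, the string 'rmsprop' can be replaced by an explicit optimizer object. A minimal sketch (the learning rate shown is simply RMSprop's default, not a tuned value; in older standalone Keras the argument is lr rather than learning_rate):

from keras.optimizers import RMSprop

# Same compile call, but with an optimizer object so the learning rate is explicit.
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['accuracy'])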
2. Training the Model
# Train the model
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1, save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_valid, y_valid), callbacks=[checkpointer], verbose=2, shuffle=True)
fit() trains the model. Because save_best_only=True, ModelCheckpoint writes the weights to model.weights.best.hdf5 only when the monitored validation loss (val_loss, the default monitor) improves.
Training runs for only 10 epochs.
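As an optional extension (not used in the run logged below), an EarlyStopping callback can be added next to the checkpoint so training stops automatically once val_loss stops improving. A minimal sketch; patience=3 is an assumed value, not taken from the original run:

from keras.callbacks import EarlyStopping

# Stop training when val_loss has not improved for 3 consecutive epochs,
# and roll back to the best weights seen so far.
early_stopping = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

hist = model.fit(x_train, y_train, batch_size=32, epochs=10,
                 validation_data=(x_valid, y_valid),
                 callbacks=[checkpointer, early_stopping],
                 verbose=2, shuffle=True)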
Epoch 1/10
Epoch 1: val_loss improved from inf to 1.27923, saving model to model.weights.best.hdf5
1407/1407 - 22s - loss: 1.6179 - accuracy: 0.4123 - val_loss: 1.2792 - val_accuracy: 0.5384 - 22s/epoch - 16ms/step
Epoch 2/10
Epoch 2: val_loss improved from 1.27923 to 1.08214, saving model to model.weights.best.hdf5
1407/1407 - 24s - loss: 1.2749 - accuracy: 0.5456 - val_loss: 1.0821 - val_accuracy: 0.6288 - 24s/epoch - 17ms/step
Epoch 3/10
Epoch 3: val_loss did not improve from 1.08214
1407/1407 - 24s - loss: 1.1367 - accuracy: 0.5971 - val_loss: 1.1055 - val_accuracy: 0.6166 - 24s/epoch - 17ms/step
Epoch 4/10
Epoch 4: val_loss improved from 1.08214 to 0.97848, saving model to model.weights.best.hdf5
1407/1407 - 25s - loss: 1.0570 - accuracy: 0.6287 - val_loss: 0.9785 - val_accuracy: 0.6512 - 25s/epoch - 18ms/step
Epoch 5/10
Epoch 5: val_loss improved from 0.97848 to 0.92089, saving model to model.weights.best.hdf5
1407/1407 - 24s - loss: 1.0162 - accuracy: 0.6426 - val_loss: 0.9209 - val_accuracy: 0.6800 - 24s/epoch - 17ms/step
Epoch 6/10
Epoch 6: val_loss improved from 0.92089 to 0.91063, saving model to model.weights.best.hdf5
1407/1407 - 23s - loss: 0.9763 - accuracy: 0.6598 - val_loss: 0.9106 - val_accuracy: 0.6840 - 23s/epoch - 16ms/step
Epoch 7/10
Epoch 7: val_loss did not improve from 0.91063
1407/1407 - 22s - loss: 0.9572 - accuracy: 0.6663 - val_loss: 0.9952 - val_accuracy: 0.6618 - 22s/epoch - 16ms/step
Epoch 8/10
Epoch 8: val_loss did not improve from 0.91063
1407/1407 - 22s - loss: 0.9367 - accuracy: 0.6789 - val_loss: 0.9809 - val_accuracy: 0.6666 - 22s/epoch - 16ms/step
Epoch 9/10
Epoch 9: val_loss did not improve from 0.91063
1407/1407 - 23s - loss: 0.9295 - accuracy: 0.6830 - val_loss: 0.9230 - val_accuracy: 0.6864 - 23s/epoch - 16ms/step
Epoch 10/10
Epoch 10: val_loss did not improve from 0.91063
1407/1407 - 22s - loss: 0.9229 - accuracy: 0.6864 - val_loss: 0.9174 - val_accuracy: 0.6894 - 22s/epoch - 16ms/step
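In the log above, val_loss stops improving after epoch 6, so the checkpoint file is not updated during epochs 7 to 10. The hist object returned by fit() keeps these per-epoch values in hist.history, so the learning curves can be plotted directly. A minimal sketch using matplotlib (the same library imported in the full code below):

import matplotlib.pyplot as plt

# Plot training vs. validation loss per epoch from the history returned by fit().
plt.figure(figsize=(8, 4))
plt.plot(hist.history['loss'], label='train loss')
plt.plot(hist.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()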
3. Loading the Weights with the Best val_loss
# Load the weights with the best val_loss
model.load_weights('model.weights.best.hdf5')
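Note that this file stores only weights, so the Sequential model built above must still exist before load_weights() is called. If you want to reload everything in a fresh session, model.save() stores the architecture, weights, and optimizer state together. A minimal sketch (the filename cifar10_cnn.h5 is just an example name):

# Save the full model (architecture + weights + optimizer state)
# so it can be restored later without rebuilding the layers by hand.
model.save('cifar10_cnn.h5')

from keras.models import load_model
restored_model = load_model('cifar10_cnn.h5')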
4. Evaluating the Model
# Evaluate the model
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])
Test accuracy: 0.6984999775886536
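Beyond the single accuracy number, it is easy to look at individual predictions. A minimal sketch that assumes the standard CIFAR-10 class names in label order; model.predict() returns class probabilities, and argmax picks the most likely class:

import numpy as np

# CIFAR-10 class names in label order (0-9).
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

# Predicted and true class indices for the test set.
pred_labels = np.argmax(model.predict(x_test), axis=1)
true_labels = np.argmax(y_test, axis=1)

# Show the first few predictions next to the ground truth.
for i in range(5):
    print('predicted:', class_names[pred_labels[i]], '| actual:', class_names[true_labels[i]])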
5. Full Code
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(20,5))
for i in range(36):
    ax = fig.add_subplot(3, 12, i+1, xticks=[], yticks=[])
    ax.imshow(np.squeeze(x_train[i]))
x_train = x_train.astype('float32')/255
x_test = x_test.astype('float32')/255
import keras
num_classes = len(np.unique(y_train))
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
(x_train, x_valid) = x_train[5000:], x_train[:5000]
(y_train, y_valid) = y_train[5000:], y_train[:5000]
print("x_train shape:", x_train.shape)
print("y_train shape:", y_train.shape)
print("x_test shape:", x_test.shape)
print("y_test shape:", y_test.shape)
print("x_valid shape:", x_valid.shape)
print("y_valid shape:", y_valid.shape)
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=2, padding='same', activation='relu', input_shape=(32,32,3)))
model.add(MaxPool2D(pool_size=2))
model.add(Conv2D(filters=32, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=2))
model.add(Conv2D(filters=64, kernel_size=2, padding='same', activation='relu'))
model.add(MaxPool2D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(500, activation='relu'))
model.add(Dropout(0.4))
model.add(Dense(10, activation='softmax'))
model.summary()
# Compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
# Train the model
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath='model.weights.best.hdf5', verbose=1, save_best_only=True)
hist = model.fit(x_train, y_train, batch_size=32, epochs=10, validation_data=(x_valid, y_valid), callbacks=[checkpointer], verbose=2, shuffle=True)
# Load the weights with the best val_loss
model.load_weights('model.weights.best.hdf5')
# Evaluate the model
score = model.evaluate(x_test, y_test, verbose=0)
print('\n', 'Test accuracy:', score[1])