As shown below:
keras.callbacks.ModelCheckpoint(self.checkpoint_path,
                                verbose=0, save_weights_only=True,
                                mode="max", save_best_only=True),
By default a checkpoint is written after every epoch, but saving that often quickly exhausts disk space.
Setting save_best_only=True makes it keep only the best model. Worth noting: whether a result counts as "best" is decided by an internal monitor_op comparison; the monitored quantity defaults to "val_loss", and the implementation initializes self.best to -np.Inf (in max mode), so the result of the first epoch is always saved.
The mode argument can be 'auto', 'max', or 'min'. In 'auto' the direction is inferred from the monitored metric's name (metrics containing 'acc' are maximized, everything else minimized); 'min' is the right choice for a loss that has not been negated, where smaller is better.
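A minimal sketch of that bookkeeping, assuming the behaviour described above (the class name BestOnlySaver is made up; the real logic lives inside keras.callbacks.ModelCheckpoint):

import numpy as np

# Illustrative sketch of ModelCheckpoint's save_best_only bookkeeping.
class BestOnlySaver:
    def __init__(self, mode="max"):
        if mode == "max":
            self.monitor_op = np.greater   # improvement = larger value
            self.best = -np.inf            # so the first epoch always "improves"
        else:
            self.monitor_op = np.less      # improvement = smaller value
            self.best = np.inf

    def should_save(self, current):
        # Save only when the monitored value beats the best seen so far.
        if self.monitor_op(current, self.best):
            self.best = current
            return True
        return False

On the first call, BestOnlySaver("max").should_save(0.91) returns True, matching the observation above that the first training result is always saved.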
See the callbacks documentation at https://keras.io/callbacks/; the LambdaCallback examples below are taken from that page.
# Print the batch number at the beginning of every batch.
batch_print_callback = LambdaCallback(
    on_batch_begin=lambda batch, logs: print(batch))

# Stream the epoch loss to a file in JSON format. The file content
# is not well-formed JSON but rather has a JSON object per line.
import json
json_log = open('loss_log.json', mode='wt', buffering=1)
json_logging_callback = LambdaCallback(
    on_epoch_end=lambda epoch, logs: json_log.write(
        json.dumps({'epoch': epoch, 'loss': logs['loss']}) + '\n'),
    on_train_end=lambda logs: json_log.close()
)

# Terminate some processes after having finished model training.
processes = ...
cleanup_callback = LambdaCallback(
    on_train_end=lambda logs: [
        p.terminate() for p in processes if p.is_alive()])

model.fit(...,
          callbacks=[batch_print_callback,
                     json_logging_callback,
                     cleanup_callback])
Keras callbacks are usually passed to the model.fit function; thanks to Keras's convenience, there are many ready-made strategies for model saving and logging.
For example, stopping training when the loss stops improving:
keras.callbacks.EarlyStopping(monitor='val_loss', min_delta=0, patience=0, verbose=0, mode='auto', baseline=None, restore_best_weights=False)
Other examples include streaming logs to a remote server, and adaptive learning-rate schedulers.
Really convenient.
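As a quick illustration of the scheduler side, here is a minimal sketch using keras.callbacks.LearningRateScheduler; the step-decay schedule and the 1e-3 starting rate are assumptions for the example:

from keras.callbacks import LearningRateScheduler

# Hypothetical step decay: halve the learning rate every 10 epochs.
# The base rate of 1e-3 is an assumption for illustration.
def step_decay(epoch):
    return 1e-3 * (0.5 ** (epoch // 10))

lr_scheduler = LearningRateScheduler(step_decay, verbose=1)
# model.fit(x_train, y_train, epochs=40, callbacks=[lr_scheduler])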
Supplementary notes: commonly used keras callbacks such as ModelCheckpoint, ReduceLROnPlateau, and EarlyStopping
ModelCheckpoint:
keras.callbacks.ModelCheckpoint(filepath,monitor='val_loss',verbose=0,save_best_only=False, save_weights_only=False, mode='auto', period=1)
Parameters:
filepath: string, the path where the model is saved (metrics such as accuracy and loss can be written into the path via format placeholders, e.g.:)
ModelCheckpoint('model_check/'+'ep{epoch:d}-acc{acc:.3f}-val_acc{val_acc:.3f}.h5', monitor='val_loss')
Loss values can be added in the same way, e.g.
'loss{loss:.3f}-val_loss{val_loss:.3f}'
(a small formatting sketch follows this parameter list)
monitor: the quantity to monitor, e.g. validation loss or training loss
save_best_only: when True, the current model is saved only when the monitored quantity improves
verbose: verbosity mode, 0 or 1 (with 1, a message is printed each time a checkpoint is saved)
mode: one of 'auto', 'min', 'max'. When save_best_only=True this decides how the best model is judged: for a monitored value like val_acc the mode should be max, for val_loss it should be min. In auto mode the direction is inferred from the name of the monitored quantity.
save_weights_only: if True, only the model weights are saved; otherwise the whole model is saved
period: number of epochs between checkpoints
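A minimal sketch of how those placeholders get filled in at the end of an epoch (plain str.format, with made-up metric values; Keras does something equivalent internally):

# Made-up metric values, purely to show the resulting filename.
filepath = 'model_check/ep{epoch:d}-acc{acc:.3f}-val_acc{val_acc:.3f}.h5'
logs = {'acc': 0.912, 'val_acc': 0.874}
print(filepath.format(epoch=5, **logs))
# -> model_check/ep5-acc0.912-val_acc0.874.h5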
Reference code is shown below; in use, simply pass the callback to the callbacks argument of fit:
checkpoint = ModelCheckpoint(
    log_dir + "ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5",
    monitor='val_loss', save_weights_only=True,
    save_best_only=True, period=1)
train_history = model.fit_generator(
    data_generator_wrap(),
    steps_per_epoch=max(1, num_train // batch_size),
    validation_data=data_generator_wrap(),
    validation_steps=max(1, num_val // batch_size),
    epochs=40, initial_epoch=0,
    callbacks=[logging, reduce_lr, checkpoint])
ReduceLROnPlateau:
keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto', epsilon=0.0001, cooldown=0, min_lr=0)
Reduce the learning rate when a monitored metric has stopped improving.
When learning stagnates, cutting the learning rate by a factor of 2 to 10 often yields better results. This callback monitors the given quantity, and if no improvement is seen for patience epochs, the learning rate is reduced.
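A tiny arithmetic sketch of the decay sequence implied by factor and min_lr (pure Python, not Keras; the starting values are assumptions):

# Starting values are assumptions for illustration.
lr, factor, min_lr = 1e-3, 0.1, 1e-6
for reduction in range(4):        # four triggered reductions
    lr = max(lr * factor, min_lr)
    print(reduction, lr)          # ~1e-4, 1e-5, 1e-6, 1e-6 (clamped at min_lr)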
Parameters:
monitor: the quantity to monitor
factor: factor by which the learning rate is reduced each time, i.e. new lr = lr * factor
patience: number of epochs with no improvement after which the learning-rate reduction is triggered
mode: one of 'auto', 'min', 'max'. In min mode the reduction is triggered when the monitored value stops decreasing; in max mode, when it stops increasing.
epsilon: threshold used to decide whether the monitored value has entered a "plateau"
cooldown: after a reduction, wait cooldown epochs before resuming normal operation
min_lr: lower bound on the learning rate
Reference code:
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                              patience=3, verbose=1)
train_history = model.fit(data(), validation_data=datae_g(),
                          epochs=40,
                          callbacks=[logging, reduce_lr, checkpoint])

EarlyStopping:
keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')
This callback stops training when the monitored value stops improving.
Parameters:
monitor: the quantity to monitor
patience: once early stopping is activated (e.g. the loss fails to decrease compared with the previous epoch), training stops after patience further epochs without improvement.
verbose: verbosity mode
mode: one of 'auto', 'min', 'max'. In min mode training stops when the monitored value stops decreasing; in max mode, when it stops increasing.
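A minimal usage sketch, assuming a Keras version new enough to support restore_best_weights (the min_delta and patience values are arbitrary):

from keras.callbacks import EarlyStopping

# Stop when val_loss has not improved by at least 0.001 for 5 epochs,
# then roll back to the best weights seen during training.
early_stop = EarlyStopping(monitor='val_loss', min_delta=0.001,
                           patience=5, mode='min',
                           restore_best_weights=True, verbose=1)
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=100, callbacks=[early_stop])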
That concludes this short discussion of model-saving strategies with keras.callbacks; hopefully it serves as a useful reference, and please keep supporting 服务器之家.
Original article: https://blog.csdn.net/dayuqi/article/details/85090353