keras.preprocessing.image.ImageDataGenerator(featurewise_center=False, samplewise_center=False, featurewise_std_normalization=False, samplewise_std_normalization=False, zca_whitening=False, zca_epsilon=1e-6, rotation_range=0., width_shift_range=0., height_shift_range=0., shear_range=0., zoom_range=0., channel_shift_range=0., fill_mode='nearest', cval=0., horizontal_flip=False, vertical_flip=False, rescale=None, preprocessing_function=None, data_format=K.image_data_format())
Generates batches of image data with real-time data augmentation. During training, the generator loops over the data indefinitely, until the specified number of epochs has been reached.
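As a rough illustration of two of the constructor arguments above, `rescale` multiplies every pixel by the given factor before any other transform, and `horizontal_flip` randomly mirrors images along the width axis. A NumPy sketch of these two operations (illustrative only, not Keras internals):

```python
import numpy as np

# one tiny "RGB image" in the rank-4 layout the generator expects:
# (samples, height, width, channels)
img = np.arange(12, dtype="float32").reshape(1, 2, 2, 3)

rescaled = img * (1. / 255)     # what rescale=1./255 applies to each pixel
flipped = img[:, :, ::-1, :]    # what horizontal_flip applies (when triggered)

print(rescaled.max() <= 1.0)    # True: pixel values now lie in [0, 1]
print(flipped.shape == img.shape)  # True: flipping preserves the shape
```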
data_format: defaults to the value set in ~/.keras/keras.json; if it has never been set, the default is "channels_last".

fit(x, augment=False, rounds=1, seed=None): computes statistics from the sample data. Calling this method is required when featurewise_center, featurewise_std_normalization, or zca_whitening is enabled.

x: numpy array of sample data; its rank should be 4. The channel axis is 1 for grayscale images and 3 for color images.
augment: boolean; whether to also fit on randomly augmented data.
rounds: if augment=True, how many augmentation passes over the data to perform; defaults to 1.
seed: integer; random seed.
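The per-channel statistics that fit() gathers for featurewise normalization can be sketched in plain NumPy. This is a conceptual illustration, not Keras's actual implementation; the array x below is synthetic data:

```python
import numpy as np

# synthetic batch of 8 RGB images, 16x16, rank 4 as fit() requires
rng = np.random.RandomState(0)  # fixed seed, mirroring the seed argument
x = rng.uniform(0, 255, size=(8, 16, 16, 3))

# featurewise_center / featurewise_std_normalization rely on per-channel
# statistics computed over the whole dataset
mean = x.mean(axis=(0, 1, 2), keepdims=True)
std = x.std(axis=(0, 1, 2), keepdims=True)

x_norm = (x - mean) / (std + 1e-6)  # small epsilon avoids division by zero

# after normalization each channel has (approximately) zero mean, unit std
print(np.allclose(x_norm.mean(axis=(0, 1, 2)), 0, atol=1e-6))  # True
print(np.allclose(x_norm.std(axis=(0, 1, 2)), 1, atol=1e-3))   # True
```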
flow(x, y): takes numpy arrays of data and labels and generates batches of augmented/normalized data, looping indefinitely.

x: sample data; its rank should be 4. The channel axis is 1 for grayscale images and 3 for color images.
y: labels.
batch_size: integer; defaults to 32.
shuffle: boolean; whether to shuffle the data. Defaults to True.
save_to_dir: None or str; lets you save the augmented images to disk, e.g. to visualize them.
save_prefix: string; prefix for the filenames of saved images. Only takes effect when save_to_dir is set.
save_format: one of "png" or "jpeg"; the format in which to save images. Defaults to "jpeg".
yields: tuples of (x, y), where x is a numpy array of image data and y is a numpy array of the corresponding labels. The iterator loops indefinitely.
seed: integer; random seed.
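The contract of the flow() iterator, fixed-size (x, y) batches, optional shuffling, looping forever, can be mimicked in a few lines of NumPy. This is a simplified sketch without any augmentation; simple_flow is an illustrative name, not a Keras API:

```python
import numpy as np

def simple_flow(x, y, batch_size=32, shuffle=True, seed=None):
    """Minimal stand-in for ImageDataGenerator.flow(): yields (x, y)
    batches indefinitely, reshuffling at the start of every pass."""
    rng = np.random.RandomState(seed)
    n = len(x)
    while True:  # the real iterator also loops forever
        idx = rng.permutation(n) if shuffle else np.arange(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            yield x[batch], y[batch]

x = np.zeros((100, 16, 16, 3))   # rank-4 sample data
y = np.arange(100)
gen = simple_flow(x, y, batch_size=32, seed=1)
xb, yb = next(gen)
print(xb.shape)  # (32, 16, 16, 3)
```

Note that the last batch of each pass is smaller when batch_size does not divide the dataset size (here: 32, 32, 32, 4), which is why the manual training loop in the example below counts batches and breaks by hand.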
flow_from_directory(directory): the list of classes is inferred automatically from the names/structure of the subfolders under directory; each subfolder is treated as a separate class (classes are mapped to label indices in alphabetical order). The mapping from subfolder names to class indices can be obtained through the class_indices attribute, which is useful when interpreting the output of functions such as model.predict_generator() and model.evaluate_generator(). As in flow(), save_prefix only takes effect when save_to_dir is set.

Example of using .flow()
```python
import numpy as np
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import np_utils

# num_classes, epochs and model are assumed to be defined elsewhere
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)

datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

# compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(x_train)

# fits the model on batches with real-time data augmentation:
model.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
                    steps_per_epoch=len(x_train) // 32, epochs=epochs)

# here's a more "manual" example
for e in range(epochs):
    print('Epoch', e)
    batches = 0
    for x_batch, y_batch in datagen.flow(x_train, y_train, batch_size=32):
        loss = model.train_on_batch(x_batch, y_batch)
        batches += 1
        if batches >= len(x_train) / 32:
            # we need to break the loop by hand because
            # the generator loops indefinitely
            break
```
Example of using .flow_from_directory(directory)
```python
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
    'data/validation',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=2000,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=800)
```

Transforming images and masks together

```python
# we create two instances with the same arguments
data_gen_args = dict(featurewise_center=True,
                     featurewise_std_normalization=True,
                     rotation_range=90.,
                     width_shift_range=0.1,
                     height_shift_range=0.1,
                     zoom_range=0.2)
image_datagen = ImageDataGenerator(**data_gen_args)
mask_datagen = ImageDataGenerator(**data_gen_args)

# Provide the same seed and keyword arguments to the fit and flow methods
seed = 1
image_datagen.fit(images, augment=True, seed=seed)
mask_datagen.fit(masks, augment=True, seed=seed)

image_generator = image_datagen.flow_from_directory(
    'data/images',
    class_mode=None,
    seed=seed)

mask_generator = mask_datagen.flow_from_directory(
    'data/masks',
    class_mode=None,
    seed=seed)

# combine generators into one which yields image and masks
train_generator = zip(image_generator, mask_generator)

model.fit_generator(
    train_generator,
    steps_per_epoch=2000,
    epochs=50)
```
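The image and mask generators stay aligned because both receive the same seed, so their random transform parameters are drawn identically. A NumPy sketch of that idea (illustrative, not Keras internals):

```python
import numpy as np

seed = 1
# two independent RNGs initialized with the same seed produce the same
# "random" draws, which is what keeps images and masks transformed in sync
image_rng = np.random.RandomState(seed)
mask_rng = np.random.RandomState(seed)

# e.g. rotation angles drawn for rotation_range=90.
image_angles = image_rng.uniform(-90, 90, size=5)
mask_angles = mask_rng.uniform(-90, 90, size=5)

print(np.array_equal(image_angles, mask_angles))  # True
```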