This tutorial demonstrates how to find "adversarial noise" for MNIST images, and how to make a neural network immune to that noise.
01 - Simple Linear Model | 02 - Convolutional Neural Network | 03 - PrettyTensor | 04 - Save & Restore
05 - Ensemble Learning | 06 - CIFAR-10 | 07 - Inception Model | 08 - Transfer Learning
09 - Video Data | 11 - Adversarial Examples
by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube
Chinese translation by thrillerist / GitHub
If you reprint this article, please include a link to it.
The previous Tutorial #11 showed how to find adversarial examples for a state-of-the-art neural network: noise that causes the network to mis-classify images even though they look unchanged to the human eye. For example, after adversarial noise was added, an image of a parrot was mis-classified as a bookcase, although the image looked completely unchanged to a human.
Tutorial #11 found the adversarial noise through an optimization process for each individual image. Because the noise was generated specifically for one image, it may not be universal and may not work on other images.
This tutorial instead finds adversarial noise that causes nearly all input images to be mis-classified as a desired target-class, using the MNIST data-set of hand-written digits as an example. The noise is now clearly visible to the human eye, but the digits are still easily identified by a human, while the neural network mis-classifies almost all the images.
In this tutorial we will also try to make the neural network immune to adversarial noise.
Tutorial #11 used NumPy for the adversarial optimization. In this tutorial we implement the optimization process directly in TensorFlow. This is faster, especially when using a GPU, because the data does not have to be copied to and from the GPU in each iteration.
It is recommended that you study Tutorial #11 first. You also need a basic understanding of neural networks, see Tutorials #01 and #02.
The flowchart below shows roughly how data flows through the convolutional neural network that is implemented below.
The example shows an input image of the digit 7. Adversarial noise is then added to the image. Red noise-points are positive values that make the pixels darker, and blue noise-points are negative values that make the input image lighter at those pixels.
The noisy image is fed into the neural network, which outputs a predicted digit. In this case the adversarial noise makes the network believe that this image of a 7 shows a 3. The noise is clearly visible to a human, who can still easily identify the digit 7.
The remarkable point here is that a single noise pattern causes the neural network to mis-classify almost all input images as the desired target-class.
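The noise-addition step described above can be sketched in NumPy. This is only a minimal illustration, not the TensorFlow implementation used later; the image and noise arrays here are made-up examples:

```python
import numpy as np

# A made-up 28x28 "image" with pixel-values in [0, 1].
image = np.random.RandomState(0).rand(28, 28)

# A single universal noise pattern, limited to +/- 0.35,
# which would be added to every input image.
noise = np.random.RandomState(1).uniform(-0.35, 0.35, size=(28, 28))

# Add the noise and clip so the result is still a valid image.
noisy_image = np.clip(image + noise, 0.0, 1.0)

print(noisy_image.min() >= 0.0, noisy_image.max() <= 1.0)  # True True
```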
There are two separate optimization procedures for this neural network. First we optimize the variables of the neural network to classify the images in the training-set. This is the normal optimization procedure. Once the classification accuracy is sufficiently high, we switch to the second optimization procedure, which finds a single pattern of adversarial noise that causes all input images to be mis-classified as the target-class.
The two optimization procedures are completely separate. The first only modifies the variables of the neural network, while the second only modifies the adversarial noise.
from IPython.display import Image
Image('images/12_adversarial_noise_flowchart.png')
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import time
from datetime import timedelta
import math
# We also need PrettyTensor.
import prettytensor as pt
This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
tf.__version__
'0.12.0-rc0'
PrettyTensor version:
pt.__version__
'0.7.1'
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
The MNIST data-set has now been loaded and consists of 70,000 images and the associated labels (i.e. classifications of the images). The data-set is split into three mutually exclusive sub-sets. We will only use the training- and test-sets in this tutorial.
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
The class-labels are One-Hot encoded, which means each label is a vector of length 10 with all elements zero except one. The index of that one element is the class-number, that is, the digit shown in the corresponding image. We also need the class-numbers of the test-set as integers, so we calculate them now.
data.test.cls = np.argmax(data.test.labels, axis=1)
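As a small illustrative sketch (not part of the original notebook), this is how argmax recovers the class-number from a One-Hot vector:

```python
import numpy as np

# One-Hot encoded label for class 7: only index 7 is non-zero.
label = np.array([0., 0., 0., 0., 0., 0., 0., 1., 0., 0.])

# argmax returns the index of the largest element,
# which is the integer class-number.
cls = np.argmax(label)
print(cls)  # 7
```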
The data dimensions are used in several places in the source-code below. They are defined once, so we can use these variables instead of magic numbers throughout the code.
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
This function is used to plot 9 images in a 3x3 grid, and to write the true and predicted classes below each image. If noise is supplied, it is added to all the images.
def plot_images(images, cls_true, cls_pred=None, noise=0.0):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Get the i'th image and reshape the array.
image = images[i].reshape(img_shape)
# Add the adversarial noise to the image.
image += noise
# Ensure the noisy pixel-values are between 0 and 1.
image = np.clip(image, 0.0, 1.0)
# Plot image.
ax.imshow(image,
cmap='binary', interpolation='nearest')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
The computational graph for the neural network will now be constructed using TensorFlow and PrettyTensor. As usual, we create placeholder variables for the input images that are fed into the graph, and then add the adversarial noise to the images. The noisy images are then used as input to the convolutional neural network.
There are two separate optimization procedures for this network: the normal optimization of the neural network's own variables, and another optimization of the adversarial noise. Both are implemented directly in TensorFlow.
Placeholder variables provide the input to the computational graph, which we may change each time we execute the graph. We call this feeding the placeholder variables.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means the tensor may hold an arbitrary number of images, each image being a vector of length img_size_flat.
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
The convolutional layers expect x to be encoded as a 4-dim tensor, so we have to reshape it to [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size, and num_images is inferred automatically when the first dimension is set to -1. The reshape operation is:
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
Next we define the placeholder variable for the true labels associated with the images that are input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes], which means it may hold an arbitrary number of labels, each label being a vector of length num_classes, which is 10 in this case.
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
We could also have a placeholder variable for the class-numbers, but we calculate them here using argmax instead. Note that this is just a TensorFlow operator, so nothing is calculated at this point.
y_true_cls = tf.argmax(y_true, dimension=1)
The input images have pixel-values between 0.0 and 1.0. The adversarial noise is a value that is added to or subtracted from the pixels of the input images.
The limit of the adversarial noise is set to 0.35, so the noise will be between ±0.35.
noise_limit = 0.35
The optimizer for the adversarial noise will try to minimize two loss-measures: (1) the normal loss-measure for the neural network, so we find the noise that gives the best classification accuracy for the target-class; and (2) the so-called L2-loss for the noise, which tries to keep the noise as low as possible.
The following weight determines how important the L2-loss is compared to the normal loss-measure. An L2-weight close to zero usually performs best.
noise_l2_weight = 0.02
When we create the variable for the noise, we must inform TensorFlow which variable-collections it belongs to, so that we can later tell the two optimizers which variables to update.
First we define a name for our variable-collection. This is just a string.
ADVERSARY_VARIABLES = 'adversary_variables'
Next we create a list of the collections that the noise variable belongs to. If we add the noise variable to the collection tf.GraphKeys.VARIABLES, it will be initialized along with the other variables in the TensorFlow graph, but it will not be optimized. This is a bit confusing.
collections = [tf.GraphKeys.VARIABLES, ADVERSARY_VARIABLES]
Now we can create the variable for the adversarial noise. It is initialized to zero. It is not trainable, so it will not be optimized along with the other variables of the neural network. This allows us to create two separate optimization procedures.
x_noise = tf.Variable(tf.zeros([img_size, img_size, num_channels]),
name='x_noise', trainable=False,
collections=collections)
The adversarial noise will be limited / clipped to the noise-limit we set above. Note that this is not actually executed in the graph at this point; it will be executed after the optimization-step, see below.
x_noise_clip = tf.assign(x_noise, tf.clip_by_value(x_noise,
-noise_limit,
noise_limit))
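The effect of this clipping can be sketched in NumPy: after each optimization step on the noise, the result is projected back into the allowed range. This is only a simplified illustration of what the tf.assign / tf.clip_by_value combination does in the graph; the "gradient step" below is made up:

```python
import numpy as np

noise_limit = 0.35
noise = np.zeros((28, 28))

# A made-up gradient step on the noise that overshoots the limit.
step = np.random.RandomState(2).uniform(-1.0, 1.0, size=(28, 28))
noise = noise - step  # some values now lie outside [-0.35, 0.35]

# Project the noise back into the allowed range, as x_noise_clip
# does in the graph after each adversarial optimization step.
noise = np.clip(noise, -noise_limit, noise_limit)

print(np.abs(noise).max() <= noise_limit)  # True
```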
The noisy image is just the sum of the input image and the adversarial noise.
x_noisy_image = x_image + x_noise
Adding the noise to the input image may overflow the boundaries for valid pixel-values, so we clip / limit the noisy image to ensure its pixel-values are between 0 and 1.
x_noisy_image = tf.clip_by_value(x_noisy_image, 0.0, 1.0)
We will use PrettyTensor to construct the convolutional neural network. First we need to wrap the tensor for the noisy image in a PrettyTensor object, which provides functions that construct the neural network.
x_pretty = pt.wrap(x_noisy_image)
Once we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of code.
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
Note that pt.defaults_scope(activation_fn=tf.nn.relu) in the with-block supplies activation_fn=tf.nn.relu as an argument to each of the layers, so Rectified Linear Units (ReLU) are used throughout. The defaults_scope makes it easy to change arguments for all of the layers.
This is the list of the neural network's variables that will be trained during the normal optimization procedure. Note that 'x_noise:0' is not in the list, so this procedure does not optimize the adversarial noise.
[var.name for var in tf.trainable_variables()]
['layer_conv1/weights:0',
'layer_conv1/bias:0',
'layer_conv2/weights:0',
'layer_conv2/bias:0',
'layer_fc1/weights:0',
'layer_fc1/bias:0',
'fully_connected/weights:0',
'fully_connected/bias:0']
Optimization of these variables of the neural network is done with the Adam-optimizer, using the loss-measure that was returned from the neural network constructed by PrettyTensor above.
Note that no optimization is performed at this point. In fact, nothing is calculated at all; we just add the optimizer-object to the TensorFlow graph for later execution.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
Get the list of variables that will be optimized in the second procedure for the adversarial noise.
adversary_variables = tf.get_collection(ADVERSARY_VARIABLES)
Show the list of variable-names. There is only one element: the adversarial noise variable we created above.
[var.name for var in adversary_variables]
['x_noise:0']
We will combine the normal loss-function with the so-called L2-loss. This should give the minimum amount of adversarial noise with the best classification accuracy.
The L2-loss is scaled by a weight that is typically set close to zero.
l2_loss_noise = noise_l2_weight * tf.nn.l2_loss(x_noise)
Combine the normal loss-function with the L2-loss for the adversarial noise.
loss_adversary = loss + l2_loss_noise
We can now create the optimizer for the adversarial noise. Because this optimizer is not supposed to update all the variables of the neural network, we must give it a list of the variables it may update, namely the adversarial noise variable. Note that the learning-rate is much larger here than for the normal optimizer above.
optimizer_adversary = tf.train.AdamOptimizer(learning_rate=1e-2).minimize(loss_adversary, var_list=adversary_variables)
We now have two optimizers for the neural network: one for the network's own variables and one for the single variable holding the adversarial noise.
We need a few more operations in the TensorFlow graph so we can display the progress to the user during optimization.
First we calculate the predicted class-number from the output of the neural network y_pred, which is a vector with 10 elements. The class-number is the index of the largest element.
y_pred_cls = tf.argmax(y_pred, dimension=1)
Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
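The same cast-and-average calculation can be illustrated with a tiny NumPy sketch (the boolean vector here is a made-up example):

```python
import numpy as np

# Boolean vector: whether each prediction matched the true class.
correct_prediction = np.array([True, False, True, True])

# Cast booleans to floats (False -> 0.0, True -> 1.0) and average.
accuracy = correct_prediction.astype(np.float32).mean()
print(accuracy)  # 0.75
```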
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
The variables for the weights and biases must be initialized before we start optimizing them.
session.run(tf.global_variables_initializer())
Helper-function for initializing / resetting the adversarial noise to zero.
def init_noise():
session.run(tf.variables_initializer([x_noise]))
Call the function to initialize the adversarial noise.
init_noise()
There are 55,000 images in the training-set. It would take a long time to calculate the gradient of the model using all these images, so we only use a small batch of images in each iteration of the optimizer.
If your computer crashes or becomes very slow because it runs out of RAM, try lowering this number, but you may then need to run more optimization iterations.
train_batch_size = 64
The function below performs a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set, and TensorFlow executes the optimizer on those training samples. Progress is printed every 100 iterations.
This function is similar to those in the previous tutorials, except that it now takes an argument for the adversarial target-class. When this is set to an integer, it is used in place of the true class-numbers from the training-set. The adversarial optimizer is used instead of the normal optimizer, and after each optimization step the noise is limited / clipped to the allowed range. This optimizes the adversarial noise while ignoring the other variables of the neural network.
def optimize(num_iterations, adversary_target_cls=None):
# Start-time used for printing time-usage below.
start_time = time.time()
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# If we are searching for the adversarial noise, then
# use the adversarial target-class instead.
if adversary_target_cls is not None:
# The class-labels are One-Hot encoded.
# Set all the class-labels to zero.
y_true_batch = np.zeros_like(y_true_batch)
# Set the element for the adversarial target-class to 1.
y_true_batch[:, adversary_target_cls] = 1.0
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# If doing normal optimization of the neural network.
if adversary_target_cls is None:
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
else:
# Run the adversarial optimizer instead.
# Note that we have 'faked' the class above to be
# the adversarial target-class instead of the true class.
session.run(optimizer_adversary, feed_dict=feed_dict_train)
# Clip / limit the adversarial noise. This executes
# another TensorFlow operation. It cannot be executed
# in the same session.run() as the optimizer, because
# it may run in parallel so the execution order is not
# guaranteed. We need the clip to run after the optimizer.
session.run(x_noise_clip)
# Print status every 100 iterations.
if (i % 100 == 0) or (i == num_iterations - 1):
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i, acc))
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time usage: " + str(timedelta(seconds=int(round(time_dif)))))
This function gets the adversarial noise from inside the TensorFlow graph.
def get_noise():
# Run the TensorFlow session to retrieve the contents of
# the x_noise variable inside the graph.
noise = session.run(x_noise)
return np.squeeze(noise)
This function plots the adversarial noise and prints some statistics.
def plot_noise():
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Print statistics.
print("Noise:")
print("- Min:", noise.min())
print("- Max:", noise.max())
print("- Std:", noise.std())
# Plot the noise.
plt.imshow(noise, interpolation='nearest', cmap='seismic',
vmin=-1.0, vmax=1.0)
Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9],
noise=noise)
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
Function for printing the classification accuracy on the test-set.
It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the functions above directly from this function, so the classifications do not have to be recalculated.
Note that this function may use a lot of computer memory, which is why the test-set is split into smaller batches. If your computer has little RAM or crashes, try lowering the batch-size.
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False, show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
The adversarial noise has no effect at this point, because it was merely initialized to zero above and is not updated during this optimization.
optimize(num_iterations=1000)
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 200, Training Accuracy: 84.4%
Optimization Iteration: 300, Training Accuracy: 84.4%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 500, Training Accuracy: 87.5%
Optimization Iteration: 600, Training Accuracy: 93.8%
Optimization Iteration: 700, Training Accuracy: 93.8%
Optimization Iteration: 800, Training Accuracy: 93.8%
Optimization Iteration: 900, Training Accuracy: 96.9%
Optimization Iteration: 999, Training Accuracy: 92.2%
Time usage: 0:00:03
The classification accuracy on the test-set is about 96-97%. (The results vary each time this Python Notebook is run.)
print_test_accuracy(show_example_errors=True)
Accuracy on Test-Set: 96.3% (9633 / 10000)
Example errors:
Before we start optimizing the adversarial noise, we first initialize it to zero. This was already done above, but it is repeated here in case you re-run the code with another target-class.
init_noise()
Now perform the optimization of the adversarial noise. This uses the adversarial optimizer instead of the normal optimizer, which means it only optimizes the adversarial noise variable while ignoring the other variables of the neural network.
optimize(num_iterations=1000, adversary_target_cls=3)
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 96.9%
Optimization Iteration: 300, Training Accuracy: 98.4%
Optimization Iteration: 400, Training Accuracy: 95.3%
Optimization Iteration: 500, Training Accuracy: 96.9%
Optimization Iteration: 600, Training Accuracy: 100.0%
Optimization Iteration: 700, Training Accuracy: 98.4%
Optimization Iteration: 800, Training Accuracy: 95.3%
Optimization Iteration: 900, Training Accuracy: 93.8%
Optimization Iteration: 999, Training Accuracy: 100.0%
Time usage: 0:00:03
The adversarial noise has now been optimized and can be shown in an image. Red pixels show positive noise values and blue pixels show negative noise values. This noise pattern is added to every input image. Positive (red) noise values make the pixels darker, and negative (blue) noise values make the pixels lighter, as shown below.
plot_noise()
Noise:
- Min: -0.35
- Max: 0.35
- Std: 0.195455
When this noise is added to all the images in the test-set, the classification accuracy is typically between 10-15% depending on the chosen target-class. We can also see from the confusion matrix that most of the images in the test-set are now classified as the desired target-class, although some target-classes require more adversarial noise than others.
So we have found adversarial noise that makes the neural network mis-classify almost all the images in the test-set as the desired target-class.
We can also plot some examples of mis-classified images with the adversarial noise. The noise is clearly visible, but the digits are still easily identified by the human eye.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
Accuracy on Test-Set: 13.2% (1323 / 10000)
Example errors:
Confusion Matrix:
[[ 85 0 0 895 0 0 0 0 0 0]
[ 0 0 0 1135 0 0 0 0 0 0]
[ 0 0 46 986 0 0 0 0 0 0]
[ 0 0 0 1010 0 0 0 0 0 0]
[ 0 0 0 959 20 0 0 0 3 0]
[ 0 0 0 847 0 45 0 0 0 0]
[ 0 0 0 914 0 1 42 0 1 0]
[ 0 0 0 977 0 0 0 51 0 0]
[ 0 0 0 952 0 0 0 0 22 0]
[ 0 0 1 1006 0 0 0 0 0 2]]
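In the confusion matrix, each row is a true class and each column a predicted class, so the fraction of test images captured by the target-class can be read off by summing the target column. A small sketch using the matrix printed above:

```python
import numpy as np

# The confusion matrix printed above (rows: true class, cols: predicted).
cm = np.array([[  85, 0,  0,  895,  0,  0,  0,  0,  0, 0],
               [   0, 0,  0, 1135,  0,  0,  0,  0,  0, 0],
               [   0, 0, 46,  986,  0,  0,  0,  0,  0, 0],
               [   0, 0,  0, 1010,  0,  0,  0,  0,  0, 0],
               [   0, 0,  0,  959, 20,  0,  0,  0,  3, 0],
               [   0, 0,  0,  847,  0, 45,  0,  0,  0, 0],
               [   0, 0,  0,  914,  0,  1, 42,  0,  1, 0],
               [   0, 0,  0,  977,  0,  0,  0, 51,  0, 0],
               [   0, 0,  0,  952,  0,  0,  0,  0, 22, 0],
               [   0, 0,  1, 1006,  0,  0,  0,  0,  0, 2]])

# Fraction of all test images classified as the target-class 3.
print(cm[:, 3].sum() / cm.sum())   # 0.9681

# Overall accuracy is the diagonal divided by the total.
print(np.trace(cm) / cm.sum())     # 0.1323, matching the 13.2% above
```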
This is a helper-function for finding the adversarial noise for all target-classes. The function loops over the class-numbers 0 through 9, runs the optimization above, and stores the resulting noise in an array.
def find_all_noise(num_iterations=1000):
# Adversarial noise for all target-classes.
all_noise = []
# For each target-class.
for i in range(num_classes):
print("Finding adversarial noise for target-class:", i)
# Reset the adversarial noise to zero.
init_noise()
# Optimize the adversarial noise.
optimize(num_iterations=num_iterations,
adversary_target_cls=i)
# Get the adversarial noise from inside the TensorFlow graph.
noise = get_noise()
# Append the noise to the array.
all_noise.append(noise)
# Print newline.
print()
return all_noise
all_noise = find_all_noise(num_iterations=300)
Finding adversarial noise for target-class: 0
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 200, Training Accuracy: 92.2%
Optimization Iteration: 299, Training Accuracy: 93.8%
Time usage: 0:00:01
Finding adversarial noise for target-class: 1
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 62.5%
Optimization Iteration: 200, Training Accuracy: 62.5%
Optimization Iteration: 299, Training Accuracy: 75.0%
Time usage: 0:00:01
Finding adversarial noise for target-class: 2
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 95.3%
Optimization Iteration: 299, Training Accuracy: 96.9%
Time usage: 0:00:01
Finding adversarial noise for target-class: 3
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 200, Training Accuracy: 96.9%
Optimization Iteration: 299, Training Accuracy: 98.4%
Time usage: 0:00:01
Finding adversarial noise for target-class: 4
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 81.2%
Optimization Iteration: 200, Training Accuracy: 82.8%
Optimization Iteration: 299, Training Accuracy: 82.8%
Time usage: 0:00:01
Finding adversarial noise for target-class: 5
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 200, Training Accuracy: 96.9%
Optimization Iteration: 299, Training Accuracy: 98.4%
Time usage: 0:00:01
Finding adversarial noise for target-class: 6
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 92.2%
Optimization Iteration: 299, Training Accuracy: 96.9%
Time usage: 0:00:01
Finding adversarial noise for target-class: 7
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 299, Training Accuracy: 92.2%
Time usage: 0:00:01
Finding adversarial noise for target-class: 8
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 299, Training Accuracy: 96.9%
Time usage: 0:00:01
Finding adversarial noise for target-class: 9
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 84.4%
Optimization Iteration: 200, Training Accuracy: 87.5%
Optimization Iteration: 299, Training Accuracy: 90.6%
Time usage: 0:00:01
This is a helper-function for plotting the adversarial noise for all target-classes (0 to 9) in a grid.
def plot_all_noise(all_noise):
# Create figure with 10 sub-plots.
fig, axes = plt.subplots(2, 5)
fig.subplots_adjust(hspace=0.2, wspace=0.1)
# For each sub-plot.
for i, ax in enumerate(axes.flat):
# Get the adversarial noise for the i'th target-class.
noise = all_noise[i]
# Plot the noise.
ax.imshow(noise,
cmap='seismic', interpolation='nearest',
vmin=-1.0, vmax=1.0)
# Show the classes as the label on the x-axis.
ax.set_xlabel(i)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
plot_all_noise(all_noise)
Red pixels show positive noise values, and blue pixels show negative noise values.
In some of these noise-images you can see traces of the digits. For example, the noise for target-class 0 shows a red circle surrounded by blue. This means that noise in the shape of a circle is added to the image while other pixels are suppressed, which is enough for most of the images in the MNIST data-set to be mis-classified as 0. Another example is the noise for 3, whose red pixels also show traces of the digit 3. The noise for the other classes is less obvious.
We will now try to make the neural network immune to adversarial noise by re-training it to ignore the noise. This process can be repeated a number of times.
This is a helper-function for making the neural network immune to adversarial noise. It first runs the optimization to find the adversarial noise, and then runs the normal optimization to make the network immune to that noise.
def make_immune(target_cls, num_iterations_adversary=500, num_iterations_immune=200):
print("Target-class:", target_cls)
print("Finding adversarial noise ...")
# Find the adversarial noise.
optimize(num_iterations=num_iterations_adversary,
adversary_target_cls=target_cls)
# Newline.
print()
# Print classification accuracy.
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
# Newline.
print()
print("Making the neural network immune to the noise ...")
# Try and make the neural network immune to this noise.
# Note that the adversarial noise has not been reset to zero
# so the x_noise variable still holds the noise.
# So we are training the neural network to ignore the noise.
optimize(num_iterations=num_iterations_immune)
# Newline.
print()
# Print classification accuracy.
print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False)
First try to make the neural network immune to the adversarial noise for target-class 3.
We first find the adversarial noise that causes the neural network to mis-classify most of the images in the test-set. Then we run the normal optimization, which fine-tunes the network's variables to ignore the noise, bringing the classification accuracy back up to 95-97%.
make_immune(target_cls=3)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 3.1%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 300, Training Accuracy: 96.9%
Optimization Iteration: 400, Training Accuracy: 96.9%
Optimization Iteration: 499, Training Accuracy: 96.9%
Time usage: 0:00:02
Accuracy on Test-Set: 14.4% (1443 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 42.2%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 199, Training Accuracy: 89.1%
Time usage: 0:00:01
Accuracy on Test-Set: 95.3% (9529 / 10000)
Now try to run it again. It is now more difficult to find adversarial noise for target-class 3; the neural network seems to have become somewhat immune to adversarial noise.
make_immune(target_cls=3)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 32.8%
Optimization Iteration: 200, Training Accuracy: 32.8%
Optimization Iteration: 300, Training Accuracy: 29.7%
Optimization Iteration: 400, Training Accuracy: 34.4%
Optimization Iteration: 499, Training Accuracy: 26.6%
Time usage: 0:00:02
Accuracy on Test-Set: 72.1% (7207 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 75.0%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 92.2%
Time usage: 0:00:01
Accuracy on Test-Set: 95.2% (9519 / 10000)
Now try to make the neural network immune to the noise for all target-classes. Unfortunately, this does not seem to work very well.
for i in range(10):
make_immune(target_cls=i)
# Print newline.
print()
Target-class: 0
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 73.4%
Optimization Iteration: 200, Training Accuracy: 75.0%
Optimization Iteration: 300, Training Accuracy: 85.9%
Optimization Iteration: 400, Training Accuracy: 81.2%
Optimization Iteration: 499, Training Accuracy: 90.6%
Time usage: 0:00:02
Accuracy on Test-Set: 23.3% (2326 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 34.4%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 95.6% (9559 / 10000)
Target-class: 1
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 57.8%
Optimization Iteration: 200, Training Accuracy: 62.5%
Optimization Iteration: 300, Training Accuracy: 62.5%
Optimization Iteration: 400, Training Accuracy: 67.2%
Optimization Iteration: 499, Training Accuracy: 67.2%
Time usage: 0:00:02
Accuracy on Test-Set: 42.2% (4218 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 59.4%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 95.5% (9555 / 10000)
Target-class: 2
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 43.8%
Optimization Iteration: 200, Training Accuracy: 57.8%
Optimization Iteration: 300, Training Accuracy: 70.3%
Optimization Iteration: 400, Training Accuracy: 68.8%
Optimization Iteration: 499, Training Accuracy: 71.9%
Time usage: 0:00:02
Accuracy on Test-Set: 46.4% (4639 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 59.4%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 92.2%
Time usage: 0:00:01
Accuracy on Test-Set: 95.5% (9545 / 10000)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 48.4%
Optimization Iteration: 200, Training Accuracy: 46.9%
Optimization Iteration: 300, Training Accuracy: 53.1%
Optimization Iteration: 400, Training Accuracy: 50.0%
Optimization Iteration: 499, Training Accuracy: 48.4%
Time usage: 0:00:02
Accuracy on Test-Set: 56.5% (5648 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 54.7%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 95.8% (9581 / 10000)
Target-class: 4
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 85.9%
Optimization Iteration: 200, Training Accuracy: 85.9%
Optimization Iteration: 300, Training Accuracy: 87.5%
Optimization Iteration: 400, Training Accuracy: 95.3%
Optimization Iteration: 499, Training Accuracy: 92.2%
Time usage: 0:00:02
Accuracy on Test-Set: 15.6% (1557 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 18.8%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 95.6% (9557 / 10000)
Target-class: 5
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 18.8%
Optimization Iteration: 100, Training Accuracy: 71.9%
Optimization Iteration: 200, Training Accuracy: 90.6%
Optimization Iteration: 300, Training Accuracy: 95.3%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 499, Training Accuracy: 92.2%
Time usage: 0:00:02
Accuracy on Test-Set: 17.4% (1745 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 15.6%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.0% (9601 / 10000)
Target-class: 6
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 10.9%
Optimization Iteration: 100, Training Accuracy: 81.2%
Optimization Iteration: 200, Training Accuracy: 93.8%
Optimization Iteration: 300, Training Accuracy: 92.2%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 499, Training Accuracy: 92.2%
Time usage: 0:00:02
Accuracy on Test-Set: 17.6% (1762 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 20.3%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 95.7% (9570 / 10000)
Target-class: 7
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 14.1%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 200, Training Accuracy: 98.4%
Optimization Iteration: 300, Training Accuracy: 100.0%
Optimization Iteration: 400, Training Accuracy: 96.9%
Optimization Iteration: 499, Training Accuracy: 100.0%
Time usage: 0:00:02
Accuracy on Test-Set: 12.8% (1281 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 95.9% (9587 / 10000)
Target-class: 8
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 64.1%
Optimization Iteration: 200, Training Accuracy: 81.2%
Optimization Iteration: 300, Training Accuracy: 71.9%
Optimization Iteration: 400, Training Accuracy: 78.1%
Optimization Iteration: 499, Training Accuracy: 84.4%
Time usage: 0:00:02
Accuracy on Test-Set: 24.9% (2493 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 25.0%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 96.0% (9601 / 10000)
Target-class: 9
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 48.4%
Optimization Iteration: 200, Training Accuracy: 50.0%
Optimization Iteration: 300, Training Accuracy: 53.1%
Optimization Iteration: 400, Training Accuracy: 64.1%
Optimization Iteration: 499, Training Accuracy: 65.6%
Time usage: 0:00:02
Accuracy on Test-Set: 45.5% (4546 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 51.6%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.2% (9615 / 10000)
Now try performing this twice for each target-class, to make the neural network immune to the noise for all target-classes. Unfortunately, the results are not much better.
Making the neural network immune to one adversarial target-class seems to make it lose its immunity against another target-class.
for i in range(10):
make_immune(target_cls=i)
# Print newline.
print()
make_immune(target_cls=i)
# Print newline.
print()
Target-class: 0
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 53.1%
Optimization Iteration: 200, Training Accuracy: 73.4%
Optimization Iteration: 300, Training Accuracy: 79.7%
Optimization Iteration: 400, Training Accuracy: 84.4%
Optimization Iteration: 499, Training Accuracy: 95.3%
Time usage: 0:00:02
Accuracy on Test-Set: 29.2% (2921 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 29.7%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.2% (9619 / 10000)
Target-class: 0
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 1.6%
Optimization Iteration: 100, Training Accuracy: 12.5%
Optimization Iteration: 200, Training Accuracy: 7.8%
Optimization Iteration: 300, Training Accuracy: 18.8%
Optimization Iteration: 400, Training Accuracy: 9.4%
Optimization Iteration: 499, Training Accuracy: 9.4%
Time usage: 0:00:02
Accuracy on Test-Set: 94.4% (9437 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 89.1%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.4% (9635 / 10000)
Target-class: 1
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 42.2%
Optimization Iteration: 200, Training Accuracy: 60.9%
Optimization Iteration: 300, Training Accuracy: 75.0%
Optimization Iteration: 400, Training Accuracy: 70.3%
Optimization Iteration: 499, Training Accuracy: 85.9%
Time usage: 0:00:02
Accuracy on Test-Set: 28.7% (2875 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 39.1%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.4% (9643 / 10000)
Target-class: 1
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 15.6%
Optimization Iteration: 200, Training Accuracy: 18.8%
Optimization Iteration: 300, Training Accuracy: 12.5%
Optimization Iteration: 400, Training Accuracy: 9.4%
Optimization Iteration: 499, Training Accuracy: 12.5%
Time usage: 0:00:02
Accuracy on Test-Set: 94.3% (9428 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 95.3%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 92.2%
Time usage: 0:00:01
Accuracy on Test-Set: 96.9% (9685 / 10000)
Target-class: 2
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 60.9%
Optimization Iteration: 200, Training Accuracy: 64.1%
Optimization Iteration: 300, Training Accuracy: 71.9%
Optimization Iteration: 400, Training Accuracy: 75.0%
Optimization Iteration: 499, Training Accuracy: 82.8%
Time usage: 0:00:02
Accuracy on Test-Set: 34.3% (3427 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 31.2%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 96.6% (9657 / 10000)
Target-class: 2
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 9.4%
Optimization Iteration: 200, Training Accuracy: 14.1%
Optimization Iteration: 300, Training Accuracy: 10.9%
Optimization Iteration: 400, Training Accuracy: 7.8%
Optimization Iteration: 499, Training Accuracy: 17.2%
Time usage: 0:00:02
Accuracy on Test-Set: 94.3% (9435 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 96.9%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 96.6% (9664 / 10000)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 14.1%
Optimization Iteration: 100, Training Accuracy: 20.3%
Optimization Iteration: 200, Training Accuracy: 40.6%
Optimization Iteration: 300, Training Accuracy: 57.8%
Optimization Iteration: 400, Training Accuracy: 54.7%
Optimization Iteration: 499, Training Accuracy: 64.1%
Time usage: 0:00:02
Accuracy on Test-Set: 48.4% (4837 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 54.7%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 100.0%
Time usage: 0:00:01
Accuracy on Test-Set: 96.5% (9650 / 10000)
Target-class: 3
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 10.9%
Optimization Iteration: 200, Training Accuracy: 17.2%
Optimization Iteration: 300, Training Accuracy: 15.6%
Optimization Iteration: 400, Training Accuracy: 1.6%
Optimization Iteration: 499, Training Accuracy: 9.4%
Time usage: 0:00:02
Accuracy on Test-Set: 95.7% (9570 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 95.3%
Optimization Iteration: 100, Training Accuracy: 90.6%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 96.7% (9667 / 10000)
Target-class: 4
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 67.2%
Optimization Iteration: 200, Training Accuracy: 78.1%
Optimization Iteration: 300, Training Accuracy: 79.7%
Optimization Iteration: 400, Training Accuracy: 81.2%
Optimization Iteration: 499, Training Accuracy: 96.9%
Time usage: 0:00:02
Accuracy on Test-Set: 23.7% (2373 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 26.6%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 96.9%
Time usage: 0:00:01
Accuracy on Test-Set: 96.3% (9632 / 10000)
Target-class: 4
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 7.8%
Optimization Iteration: 200, Training Accuracy: 12.5%
Optimization Iteration: 300, Training Accuracy: 15.6%
Optimization Iteration: 400, Training Accuracy: 7.8%
Optimization Iteration: 499, Training Accuracy: 14.1%
Time usage: 0:00:02
Accuracy on Test-Set: 92.0% (9197 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 92.2%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.3% (9632 / 10000)
Target-class: 5
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 57.8%
Optimization Iteration: 200, Training Accuracy: 76.6%
Optimization Iteration: 300, Training Accuracy: 85.9%
Optimization Iteration: 400, Training Accuracy: 89.1%
Optimization Iteration: 499, Training Accuracy: 85.9%
Time usage: 0:00:02
Accuracy on Test-Set: 23.0% (2297 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 28.1%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 96.6% (9663 / 10000)
Target-class: 5
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 6.2%
Optimization Iteration: 100, Training Accuracy: 10.9%
Optimization Iteration: 200, Training Accuracy: 18.8%
Optimization Iteration: 300, Training Accuracy: 18.8%
Optimization Iteration: 400, Training Accuracy: 20.3%
Optimization Iteration: 499, Training Accuracy: 21.9%
Time usage: 0:00:02
Accuracy on Test-Set: 88.2% (8824 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 93.8%
Optimization Iteration: 100, Training Accuracy: 93.8%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.7% (9665 / 10000)
Target-class: 6
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 40.6%
Optimization Iteration: 200, Training Accuracy: 53.1%
Optimization Iteration: 300, Training Accuracy: 51.6%
Optimization Iteration: 400, Training Accuracy: 56.2%
Optimization Iteration: 499, Training Accuracy: 62.5%
Time usage: 0:00:02
Accuracy on Test-Set: 44.0% (4400 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 39.1%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.4% (9642 / 10000)
Target-class: 6
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 17.2%
Optimization Iteration: 200, Training Accuracy: 12.5%
Optimization Iteration: 300, Training Accuracy: 14.1%
Optimization Iteration: 400, Training Accuracy: 20.3%
Optimization Iteration: 499, Training Accuracy: 7.8%
Time usage: 0:00:02
Accuracy on Test-Set: 94.6% (9457 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 93.8%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 96.8% (9682 / 10000)
Target-class: 7
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 4.7%
Optimization Iteration: 100, Training Accuracy: 65.6%
Optimization Iteration: 200, Training Accuracy: 89.1%
Optimization Iteration: 300, Training Accuracy: 82.8%
Optimization Iteration: 400, Training Accuracy: 85.9%
Optimization Iteration: 499, Training Accuracy: 90.6%
Time usage: 0:00:02
Accuracy on Test-Set: 18.1% (1809 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 23.4%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 96.8% (9682 / 10000)
Target-class: 7
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 12.5%
Optimization Iteration: 100, Training Accuracy: 10.9%
Optimization Iteration: 200, Training Accuracy: 18.8%
Optimization Iteration: 300, Training Accuracy: 18.8%
Optimization Iteration: 400, Training Accuracy: 28.1%
Optimization Iteration: 499, Training Accuracy: 18.8%
Time usage: 0:00:02
Accuracy on Test-Set: 84.1% (8412 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 84.4%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 100.0%
Time usage: 0:00:01
Accuracy on Test-Set: 97.0% (9699 / 10000)
Target-class: 8
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 48.4%
Optimization Iteration: 200, Training Accuracy: 46.9%
Optimization Iteration: 300, Training Accuracy: 71.9%
Optimization Iteration: 400, Training Accuracy: 70.3%
Optimization Iteration: 499, Training Accuracy: 75.0%
Time usage: 0:00:02
Accuracy on Test-Set: 36.8% (3678 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 48.4%
Optimization Iteration: 100, Training Accuracy: 96.9%
Optimization Iteration: 199, Training Accuracy: 93.8%
Time usage: 0:00:01
Accuracy on Test-Set: 97.0% (9699 / 10000)
Target-class: 8
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 7.8%
Optimization Iteration: 100, Training Accuracy: 14.1%
Optimization Iteration: 200, Training Accuracy: 12.5%
Optimization Iteration: 300, Training Accuracy: 7.8%
Optimization Iteration: 400, Training Accuracy: 4.7%
Optimization Iteration: 499, Training Accuracy: 9.4%
Time usage: 0:00:02
Accuracy on Test-Set: 96.2% (9625 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 96.9%
Optimization Iteration: 100, Training Accuracy: 98.4%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 97.2% (9720 / 10000)
Target-class: 9
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 23.4%
Optimization Iteration: 200, Training Accuracy: 43.8%
Optimization Iteration: 300, Training Accuracy: 37.5%
Optimization Iteration: 400, Training Accuracy: 45.3%
Optimization Iteration: 499, Training Accuracy: 39.1%
Time usage: 0:00:02
Accuracy on Test-Set: 64.9% (6494 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 67.2%
Optimization Iteration: 100, Training Accuracy: 95.3%
Optimization Iteration: 199, Training Accuracy: 98.4%
Time usage: 0:00:01
Accuracy on Test-Set: 97.5% (9746 / 10000)
Target-class: 9
Finding adversarial noise ...
Optimization Iteration: 0, Training Accuracy: 9.4%
Optimization Iteration: 100, Training Accuracy: 7.8%
Optimization Iteration: 200, Training Accuracy: 10.9%
Optimization Iteration: 300, Training Accuracy: 15.6%
Optimization Iteration: 400, Training Accuracy: 12.5%
Optimization Iteration: 499, Training Accuracy: 4.7%
Time usage: 0:00:02
Accuracy on Test-Set: 97.1% (9709 / 10000)
Making the neural network immune to the noise ...
Optimization Iteration: 0, Training Accuracy: 98.4%
Optimization Iteration: 100, Training Accuracy: 100.0%
Optimization Iteration: 199, Training Accuracy: 95.3%
Time usage: 0:00:01
Accuracy on Test-Set: 97.7% (9768 / 10000)
We have now performed many optimizations of both the neural network and the adversarial noise. Let us see what the adversarial noise looks like.
plot_noise()
Noise:
- Min: -0.35 - Max: 0.35 - Std: 0.270488
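The minimum and maximum values of ±0.35 are no coincidence: after every optimization step, the noise is clipped back into the interval [-noise_limit, noise_limit] (in the tutorial this is done with tf.clip_by_value). A minimal NumPy sketch of that clipping step, assuming noise_limit = 0.35:

```python
import numpy as np

noise_limit = 0.35  # maximum absolute value allowed for each noise pixel

def clip_noise(noise):
    # Saturate all noise values into [-noise_limit, noise_limit],
    # mirroring what tf.clip_by_value does after each update.
    return np.clip(noise, -noise_limit, noise_limit)

# Values outside the limit are saturated at +/- 0.35,
# values inside the limit are left unchanged.
raw = np.array([-0.9, -0.2, 0.0, 0.5])
clipped = clip_noise(raw)
```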
Interestingly, the neural network now has a higher classification accuracy on noisy images than it had on clean images before these optimizations.
print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Accuracy on Test-Set: 97.7% (9768 / 10000)
Example errors:
Confusion Matrix:
[[ 972 0 1 0 0 0 2 1 3 1]
[ 0 1119 4 0 0 2 2 0 8 0]
[ 3 0 1006 9 1 1 1 5 4 2]
[ 1 0 1 997 0 5 0 4 2 0]
[ 0 1 3 0 955 0 3 1 2 17]
[ 1 0 0 9 0 876 3 0 2 1]
[ 6 4 0 0 3 6 934 0 5 0]
[ 2 4 18 3 1 0 0 985 2 13]
[ 4 0 4 3 4 1 1 3 950 4]
[ 6 6 0 7 4 5 0 4 3 974]]
Now reset the adversarial noise to zero and see how the neural network performs on clean images.
init_noise()
The neural network now performs slightly worse on clean images than on noisy images.
print_test_accuracy(show_example_errors=True,
                    show_confusion_matrix=True)
Accuracy on Test-Set: 92.2% (9222 / 10000)
Example errors:
Confusion Matrix:
[[ 970 0 1 0 0 1 8 0 0 0]
[ 0 1121 5 0 0 0 9 0 0 0]
[ 2 1 1028 0 0 0 1 0 0 0]
[ 1 0 27 964 0 13 2 2 1 0]
[ 0 2 3 0 957 0 20 0 0 0]
[ 3 0 2 2 0 875 10 0 0 0]
[ 4 1 0 0 1 1 951 0 0 0]
[ 10 21 61 3 14 3 0 913 3 0]
[ 29 2 91 7 7 26 70 1 741 0]
[ 20 18 10 12 150 65 11 12 9 702]]
We are now done using TensorFlow, so we close the session to release its resources.
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
As seen in the experiments above, it was possible to make the neural network immune to adversarial noise for a single target-class, so that it became impossible to find adversarial noise causing mis-classification as that target-class. However, it was apparently not possible to make the network immune to all target-classes simultaneously. Perhaps this could be achieved in some other way.
One suggestion is to interleave the immunity training for the different target-classes, instead of fully optimizing for one target-class at a time. This should be possible with small modifications to the code above.
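Such an interleaved schedule can be sketched as follows. This is a hypothetical illustration only: make_immune() below is a stub that merely records the call order, standing in for the tutorial's function of the same name, and num_passes is an assumed parameter.

```python
# Hypothetical sketch of interleaved immunity training: instead of
# fully optimizing one target-class before moving on, do a short burst
# of training for each class and cycle through all classes repeatedly.
call_order = []

def make_immune(target_cls):
    # Stub standing in for the tutorial's make_immune();
    # it only records which class was trained, and in what order.
    call_order.append(target_cls)

num_passes = 3  # assumed number of passes over all target-classes

for _ in range(num_passes):
    for i in range(10):
        make_immune(target_cls=i)
```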
Another suggestion is to use two levels of neural networks, 11 networks in total. The network in the first level classifies the input image; this network is not immune to adversarial noise. Depending on its predicted class, one of the second-level networks is then selected, where each second-level network has been made immune to adversarial noise for its own target-class. An adversarial example might fool the first-level network, but the selected second-level network would be immune to noise for that particular target-class.
This might work when the number of classes is small, but it becomes infeasible for a large number of classes; for example, ImageNet has 1000 classes, so we would have to train 1000 neural networks for the second level, which is impractical.
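The routing logic of this two-level scheme can be sketched as a small function. Everything below is hypothetical: the toy "networks" are simple stand-ins, and in practice each entry of second_level would be a network made immune to noise for that target-class.

```python
def classify_two_level(image, first_level, second_level):
    # Level 1: an ordinary network that is NOT immune to noise.
    predicted_cls = first_level(image)
    # Level 2: select the network that has been made immune to
    # adversarial noise for the class predicted at level 1.
    immune_net = second_level[predicted_cls]
    return immune_net(image)

# Toy stand-ins: the "first-level network" always predicts class 3,
# and each "second-level network" just reports which one was chosen.
first = lambda img: 3
second = [lambda img, c=c: ('net', c) for c in range(10)]
result = classify_two_level(image=None, first_level=first, second_level=second)
```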
這篇教程展現瞭如何找到MNIST數據集手寫數字的對抗噪聲。 每一個目標類別都找到了一個單一的噪聲模式,它致使幾乎全部的輸入圖像都被誤分類爲目標類別。
The noise patterns for the MNIST data-set are clearly visible to the human eye. But larger neural networks working on higher-resolution images, e.g. the ImageNet data-set, might be able to find more subtle noise patterns.
This tutorial also experimented with making the neural network immune to adversarial noise. This worked for a single target-class, but the methods tested could not make the network immune to all adversarial target-classes simultaneously.
Below are a few suggested exercises that may help improve your skills with TensorFlow. Hands-on experience is important for learning how to use TensorFlow properly.
You may want to back up this Notebook before making any changes to it.
Try changing noise_limit and noise_l2_weight. How does this affect the adversarial noise and the classification accuracy?
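For intuition on how the two parameters interact: noise_limit is a hard bound enforced by clipping, while noise_l2_weight scales a soft L2 penalty added to the adversarial loss (the tutorial uses tf.nn.l2_loss, which is half the sum of squares). A NumPy sketch of the penalty term, where the weight 0.02 is only an example value:

```python
import numpy as np

noise_limit = 0.35      # hard bound on each noise pixel (enforced by clipping)
noise_l2_weight = 0.02  # example weight for the soft L2 penalty on the noise

def noise_l2_penalty(noise):
    # Same form as noise_l2_weight * tf.nn.l2_loss(noise):
    # half the sum of squared noise values, scaled by the weight.
    # A larger weight pushes the optimization towards smaller noise.
    return noise_l2_weight * 0.5 * np.sum(noise ** 2)

# Uniform noise of 0.1 on a 28x28 MNIST-sized image:
noise = np.full((28, 28), 0.1)
penalty = noise_l2_penalty(noise)
```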