Building Convolutional Networks

Basics

Cross-Correlation and Convolution

In deep learning, the convolution operation is usually implemented as cross-correlation, which avoids having to flip the kernel.

1593424760272.png

In the 2D cross-correlation operation, the convolution window starts at the top-left of the input array and slides over it from left to right and top to bottom. At each position, the input sub-array inside the window is multiplied element-wise with the kernel array and summed, giving the corresponding element of the output array. In Figure 5.1 the output array has height and width 2, and its 4 elements are given by the 2D cross-correlation:

$$0\times0+1\times1+3\times2+4\times3=19,\qquad 1\times0+2\times1+4\times2+5\times3=25,$$
$$3\times0+4\times1+6\times2+7\times3=37,\qquad 4\times0+5\times1+7\times2+8\times3=43.$$

The following function implements this cross-correlation operation:

import torch
from torch import nn

def corr2d(X, K):  # 2D cross-correlation
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))  # output shape
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i: i + h, j: j + w] * K).sum()
            # element-wise product of the window and the kernel, then sum
            # Y[i, j] = X[i: i + h, j: j + w].mul(K).sum()
            # mul is often used for tensor-scalar products with broadcasting; with two tensors of
            # the same shape it multiplies element-wise, so it is equivalent to * above
    return Y

X = torch.tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
K = torch.tensor([[0, 1], [2, 3]])
corr2d(X, K)

Convolutional Layer

Define a convolutional layer by subclassing Module:

import torch.nn as nn

# A convolutional layer defined by subclassing Module
class Conv2D(nn.Module):
    def __init__(self, kernel_size):
        super(Conv2D, self).__init__()
        self.weight = nn.Parameter(torch.randn(kernel_size))
        self.bias = nn.Parameter(torch.randn(1))

    def forward(self, x):
        return corr2d(x, self.weight) + self.bias


# Test with a vertical edge-detection kernel
X = torch.ones(6, 8)
X[:, 2:6] = 0
X

K = torch.tensor([[1, -1]])  # outputs 0 wherever horizontally adjacent values are equal

Y = corr2d(X, K)
Y

Try to learn this detection kernel from X and Y by training:

# Construct a 2D convolutional layer with a kernel of shape (1, 2)
conv2d = Conv2D(kernel_size=(1, 2))

step = 20
lr = 0.01
for i in range(step):
    Y_hat = conv2d(X)  # forward pass
    l = ((Y_hat - Y) ** 2).sum()  # squared-error loss
    l.backward()

    # gradient descent: update the layer's parameters
    conv2d.weight.data -= lr * conv2d.weight.grad
    conv2d.bias.data -= lr * conv2d.bias.grad

    # reset the gradients to 0
    conv2d.weight.grad.fill_(0)
    conv2d.bias.grad.fill_(0)
    if (i + 1) % 5 == 0:
        print('Step %d, loss %.3f' % (i + 1, l.item()))

print("weight: ", conv2d.weight.data)
print("bias: ", conv2d.bias.data)

Output:

Step 5, loss 12.517
Step 10, loss 2.785
Step 15, loss 0.698
Step 20, loss 0.186
weight:  tensor([[ 0.9031, -0.8815]])
bias:  tensor([-0.0121])

The learned weight is close to the kernel [1, -1] we defined earlier.

Padding and Stride

  • Padding

For an $n \times n$ input and a $k \times k$ kernel, the output feature map is $(n - k + 1) \times (n - k + 1)$: each dimension loses $k - 1$ elements, which is an even number when $k$ is odd. Kernels used in deep learning usually have odd sizes, so we can pad $(k - 1) / 2$ values (usually zeros) on each side to keep the output the same size as the input.
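A quick numerical check against the $8 \times 8$ input used in the code below: with $k = 3$ and padding $p = (k - 1)/2 = 1$ on each side, the output size is

$$n - k + 2p + 1 = 8 - 3 + 2 + 1 = 8,$$

so the height and width are preserved.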

import torch
from torch import nn

# A helper that runs a convolutional layer; it adds and then removes the extra dimensions
def comp_conv2d(conv2d, X):
    # (1, 1) means batch size and number of channels are both 1
    # (multiple input/output channels are covered in a later section)
    X = X.view((1, 1) + X.shape)  # tuples can be concatenated with +
    Y = conv2d(X)
    return Y.view(Y.shape[2:])  # drop the first two dimensions (batch and channel) we do not care about

# Note: padding=1 pads one row/column on each side, i.e. 2 rows/columns in total
conv2d = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=1)

X = torch.rand(8, 8)
print(comp_conv2d(conv2d, X).shape)

# A 5x3 kernel, with padding of 2 and 1 on the height and width sides respectively
conv2d = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(5, 3), padding=(2, 1))
print(comp_conv2d(conv2d, X).shape)
  • Stride

For the general case of an $n \times n$ input, a $k \times k$ kernel, padding $p$, and stride $s$, the output shape is

$$\left\lfloor \frac{n - k + p + s}{s} \right\rfloor \times \left\lfloor \frac{n - k + p + s}{s} \right\rfloor.$$

Setting $p$ to the commonly used $k - 1$, this simplifies to

$$\left\lfloor \frac{n + s - 1}{s} \right\rfloor \times \left\lfloor \frac{n + s - 1}{s} \right\rfloor.$$

If, in addition, $n$ is divisible by $s$, the shape is simply

$$\frac{n}{s} \times \frac{n}{s}.$$

The "$-k + s$" in the numerator of the general formula can be read as follows: the first window already covers $k$ input positions and produces one output; adding $s$ to the numerator makes the floored division count one more output for each further stride.

Example:

This satisfies $p = k - 1$ (note that padding=1 pads both sides, so $p = 2 \times \text{padding} = 2$), and $n$ is divisible by $s$, so the output size is $8 / 2 = 4$.

conv2d = nn.Conv2d(1, 1, kernel_size=3, padding=1, stride=2)
comp_conv2d(conv2d, X).shape

Output:

torch.Size([4, 4])

Using the general formula, the first dimension is $\lfloor (n - k + p + s) / s \rfloor = \lfloor (8 - 3 + 0 + 3) / 3 \rfloor = \lfloor 8/3 \rfloor = 2$, and likewise the second dimension is $\lfloor (8 - 5 + 2 + 4) / 4 \rfloor = 2$.

conv2d = nn.Conv2d(1, 1, kernel_size=(3, 5), padding=(0, 1), stride=(3, 4))
comp_conv2d(conv2d, X).shape

Output:

torch.Size([2, 2])

Multiple Channels

  • Multiple input channels

When the input data has multiple channels, we need a kernel whose number of input channels matches that of the data, so that it can cross-correlate with the multi-channel input. A color image, for example, has 3 channels. A multi-input-channel kernel simply cross-correlates each of its channels with the corresponding channel of the data and then adds the per-channel results together.

1593444829228.png

Cross-correlation with multiple input channels:

def corr2d_multi_in(X, K):
    # cross-correlate each channel of X with the matching channel of K (dim 0), then add the results
    res = corr2d(X[0, :, :], K[0, :, :])
    for i in range(1, X.shape[0]):
        res += corr2d(X[i, :, :], K[i, :, :])
    return res

X = torch.tensor([[[0, 1, 2], [3, 4, 5], [6, 7, 8]],
                  [[1, 2, 3], [4, 5, 6], [7, 8, 9]]])
K = torch.tensor([[[0, 1], [2, 3]], [[1, 2], [3, 4]]])

corr2d_multi_in(X, K)

Output:

tensor([[ 56., 72.],
[104., 120.]])
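The top-left entry can be checked by hand: channel 0 contributes $0\times0+1\times1+3\times2+4\times3 = 19$, channel 1 contributes $1\times1+2\times2+4\times3+5\times4 = 37$, and $19 + 37 = 56$.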

  • Multiple output channels

No matter how many input channels there are, a multi-input-channel kernel produces a single output channel. To get an output with $c_o$ channels, we create $c_o$ such kernels and stack their single-channel outputs along the channel dimension.

def corr2d_multi_in_out(X, K):
    # iterate over the 0th dimension of K; each k does a multi-input-channel cross-correlation with X.
    # The results are combined with torch.stack
    return torch.stack([corr2d_multi_in(X, k) for k in K])

# Build 3 different multi-input-channel kernels by simply adding 1 to K each time
K = torch.stack([K, K + 1, K + 2])
K.shape  # torch.Size([3, 2, 2, 2])

corr2d_multi_in_out(X, K)

Output:

tensor([[[ 56.,  72.],
         [104., 120.]],

        [[ 76., 100.],
         [148., 172.]],

        [[ 96., 128.],
         [192., 224.]]])
  • $1 \times 1$ convolutional layer

The $1 \times 1$ convolutional layer is mostly used in the multi-channel setting: it combines the values at the same spatial position across channels with learned weights. If we regard the channels as the feature dimension and each of the $h \times w$ spatial positions as a sample, the $1 \times 1$ convolutional layer is equivalent to a fully connected layer.

Following this view, we can implement it with the matrix multiplication of a fully connected layer, treating the input channels as the feature dimension:

def corr2d_multi_in_out_1x1(X, K):
    c_i, h, w = X.shape
    c_o = K.shape[0]
    X = X.view(c_i, h * w)
    K = K.view(c_o, c_i)
    Y = torch.mm(K, X)  # fully connected matrix multiply: (c_o, c_i) x (c_i, h*w), summing over c_i
    return Y.view(c_o, h, w)
X = torch.rand(3, 3, 3)
K = torch.rand(2, 3, 1, 1)

Y1 = corr2d_multi_in_out_1x1(X, K)
Y2 = corr2d_multi_in_out(X, K)

(Y1 - Y2).norm().item() < 1e-6

The output is True: the fully connected implementation and the $1 \times 1$ convolution give the same result.

A $1\times 1$ convolutional layer can therefore be used as a fully connected layer that keeps the height and width unchanged, and we can adjust the number of channels between layers to control model complexity.
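For example, a minimal sketch (with arbitrary channel counts) of a $1\times 1$ convolution used purely to change the number of channels:

# hypothetical channel counts: map 256 input channels to 64 output channels,
# keeping height and width unchanged
reduce = nn.Conv2d(256, 64, kernel_size=1)
X = torch.rand(1, 256, 14, 14)
print(reduce(X).shape)  # torch.Size([1, 64, 14, 14])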

Pooling Layer

Pooling is computed much like convolution, except that within each window we take the maximum or the mean. Mimicking corr2d, a 2D pooling function:

import torch
from torch import nn

def pool2d(X, pool_size, mode='max'):
    X = X.float()
    p_h, p_w = pool_size
    Y = torch.zeros(X.shape[0] - p_h + 1, X.shape[1] - p_w + 1)
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            if mode == 'max':
                Y[i, j] = X[i: i + p_h, j: j + p_w].max()
            elif mode == 'avg':
                Y[i, j] = X[i: i + p_h, j: j + p_w].mean()
    return Y

X = torch.tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
pool2d(X, (2, 2))

Max pooling with the nn module:

X = torch.arange(16, dtype=torch.float).view((1, 1, 4, 4))
X

pool2d = nn.MaxPool2d(3)  # by default the stride equals the (3, 3) window size
print(pool2d(X))

pool2d = nn.MaxPool2d(3, padding=1, stride=2)
print(pool2d(X))

pool2d = nn.MaxPool2d((2, 4), padding=(1, 2), stride=(2, 3))
print(pool2d(X))
  • Multiple channels

Multi-channel pooling differs from a convolutional layer: the number of output channels equals the number of input channels, and there is no summation across channels:

# multiple channels
X = torch.cat((X, X + 1), dim=1)
X

pool2d = nn.MaxPool2d(3, padding=1, stride=2)
pool2d(X)

The output still has 2 channels:

tensor([[[[ 5.,  7.],
[13., 15.]],

[[ 6., 8.],
[14., 16.]]]])

Convolutional Neural Network (LeNet)

The LeNet architecture:

1593483283502.png

The basic unit of the convolutional part is a convolutional layer followed by a max pooling layer: the convolutional layer recognizes spatial patterns in the image, such as lines and parts of objects, and the subsequent max pooling layer reduces the convolutional layer's sensitivity to location. The convolutional part stacks two of these units. Each convolutional layer uses a 5×5 window and a sigmoid activation on its output. The first convolutional layer has 6 output channels and the second has 16: because the second layer's input is smaller in height and width, increasing its output channels keeps the parameter sizes of the two layers similar. Both max pooling layers use a 2×2 window with stride 2, so the pooling windows never overlap.

The convolutional part outputs a tensor of shape (batch size, channels, height, width). Before it enters the fully connected part, each sample in the mini-batch is flattened: the fully connected input becomes 2-dimensional, with the first dimension indexing the samples and the second being each sample's flattened vector, whose length is channels × height × width. The fully connected part has 3 layers with 120, 84 and 10 outputs respectively, 10 being the number of classes.

import time
import torch
from torch import nn, optim

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 6, 5),   # in_channels, out_channels, kernel_size
            nn.Sigmoid(),         # activation
            nn.MaxPool2d(2, 2),   # kernel_size, stride
            nn.Conv2d(6, 16, 5),
            nn.Sigmoid(),
            nn.MaxPool2d(2, 2)
        )
        self.fc = nn.Sequential(
            nn.Linear(16 * 4 * 4, 120),  # for 28x28 inputs: 28 -> 24 -> 12 -> 8 -> 4, hence 16*4*4 features
            nn.Sigmoid(),
            nn.Linear(120, 84),
            nn.Sigmoid(),
            nn.Linear(84, 10)
        )

    def forward(self, img):
        feature = self.conv(img)
        output = self.fc(feature.view(img.shape[0], -1))
        return output

net = LeNet()
print(net)
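To see where the 16 * 4 * 4 in the first Linear layer comes from, here is a quick shape trace of the convolutional part with a dummy 28x28 input (a minimal sketch using the net defined above):

X = torch.rand(1, 1, 28, 28)  # one Fashion-MNIST-sized image
for layer in net.conv:
    X = layer(X)
    print(type(layer).__name__, X.shape)
# Conv2d    torch.Size([1, 6, 24, 24])
# Sigmoid   torch.Size([1, 6, 24, 24])
# MaxPool2d torch.Size([1, 6, 12, 12])
# Conv2d    torch.Size([1, 16, 8, 8])
# Sigmoid   torch.Size([1, 16, 8, 8])
# MaxPool2d torch.Size([1, 16, 4, 4])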

PS: a note on the difference between the classes under nn and the functions under nn.functional when building a network. In general, the nn classes automatically manage the parameters the model needs to train (weight, bias, etc.), while functional only performs the corresponding computation. functional is therefore more flexible, and the nn classes manage parameters for you. As a rule of thumb, layers with parameters (conv2d, linear, batch_norm) use the nn classes, while parameter-free operations (max pooling, loss functions, activation functions) can use functional. Dropout is best used via nn: although it has no parameters, model.eval() on an nn module switches it off automatically.

This is why, in networks built by subclassing Module, nn layers are usually created in __init__() while functional calls appear in forward(). Networks built with Sequential must consist of Module subclasses, so there only the nn classes can be used.
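A minimal sketch of the two styles (the layer sizes below are illustrative, not part of LeNet):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # layers with parameters: nn classes, so the parameters are registered and trained
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 14 * 14, 10)
        # dropout kept as an nn module so that model.eval() switches it off
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        # parameter-free operations: plain functional calls in forward
        x = F.relu(self.conv(x))
        x = F.max_pool2d(x, 2)
        x = self.dropout(x.view(x.shape[0], -1))
        return self.fc(x)

print(TinyNet()(torch.rand(2, 1, 28, 28)).shape)  # torch.Size([2, 10])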

Data loading, same as before:

import sys
import torchvision
import torchvision.transforms as transforms

def load_data_fashion_mnist(batch_size, resize=None, root='~/Datasets/FashionMNIST'):
    """Download the Fashion-MNIST dataset and load it into memory."""
    trans = []
    if resize:
        trans.append(torchvision.transforms.Resize(size=resize))
    trans.append(torchvision.transforms.ToTensor())

    transform = torchvision.transforms.Compose(trans)
    mnist_train = torchvision.datasets.FashionMNIST(root=root, train=True, download=True, transform=transform)
    mnist_test = torchvision.datasets.FashionMNIST(root=root, train=False, download=True, transform=transform)

    train_iter = torch.utils.data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=4)
    test_iter = torch.utils.data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=4)

    return train_iter, test_iter

batch_size = 256
train_iter, test_iter = load_data_fashion_mnist(batch_size)
  • Model evaluation and training
# Model evaluation
def evaluate_accuracy(data_iter, net, device=None):
    if device is None and isinstance(net, torch.nn.Module):
        # if no device is given, use the device the net lives on
        device = list(net.parameters())[0].device
    acc_sum, n = 0.0, 0
    with torch.no_grad():
        for X, y in data_iter:
            net.eval()  # evaluation mode; this switches off dropout
            acc_sum += (net(X.to(device)).argmax(dim=1) == y.to(device)).float().sum().cpu().item()
            net.train()  # back to training mode
            n += y.shape[0]
    return acc_sum / n


# Model training
def train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs):
    net = net.to(device)  # move the model to the target device
    print("training on ", device)
    loss = torch.nn.CrossEntropyLoss()
    for epoch in range(num_epochs):
        train_l_sum, train_acc_sum, n, batch_count, start = 0.0, 0.0, 0, 0, time.time()
        for X, y in train_iter:
            X = X.to(device)  # move the batch to the target device
            y = y.to(device)
            y_hat = net(X)
            l = loss(y_hat, y)
            optimizer.zero_grad()
            l.backward()
            optimizer.step()
            train_l_sum += l.cpu().item()
            train_acc_sum += (y_hat.argmax(dim=1) == y).sum().cpu().item()
            n += y.shape[0]
            batch_count += 1
        test_acc = evaluate_accuracy(test_iter, net)  # evaluate after each epoch
        print('epoch %d, loss %.4f, train acc %.3f, test acc %.3f, time %.1f sec'
              % (epoch + 1, train_l_sum / batch_count, train_acc_sum / n, test_acc, time.time() - start))


lr, num_epochs = 0.001, 40
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

VGG

1593512753698.png

A VGG block consists of several consecutive convolutional layers with padding 1 and a $3\times 3$ window, followed by a max pooling layer with stride 2 and a $2\times 2$ window. The convolutional layers keep the input's height and width, while the pooling layer halves them. The vgg_block function below implements this basic block; it takes the number of convolutional layers and the input and output channel counts.

def vgg_block(num_convs, in_channels, out_channels):
    blk = []
    for i in range(num_convs):
        if i == 0:
            blk.append(nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
        else:
            blk.append(nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1))
        blk.append(nn.ReLU())
    blk.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*blk)

conv_arch = ((1, 1, 64), (1, 64, 128), (2, 128, 256), (2, 256, 512), (2, 512, 512))
# After 5 vgg_blocks, height and width are halved 5 times: 224 / 32 = 7
fc_features = 512 * 7 * 7  # c * w * h
fc_hidden_units = 4096  # arbitrary

The VGG block is the defining feature of VGG: the full network is just several VGG blocks followed by fully connected layers.

The VGG network:


class FlattenLayer(nn.Module):
    def __init__(self):
        super().__init__()
    def forward(self, x):  # x shape: (batch, *, *, ...)
        return x.view(x.shape[0], -1)

def vgg(conv_arch, fc_features, fc_hidden_units=4096):
    net = nn.Sequential()
    for i, (num_convs, in_channels, out_channels) in enumerate(conv_arch):
        # each vgg_block halves the height and width
        net.add_module("vgg_block_" + str(i + 1), vgg_block(num_convs, in_channels, out_channels))
    # fully connected part
    net.add_module("fc", nn.Sequential(
        FlattenLayer(),
        nn.Linear(fc_features, fc_hidden_units),
        nn.ReLU(),
        nn.Dropout(0.5),
        nn.Linear(fc_hidden_units, fc_hidden_units),
        nn.ReLU(),
        nn.Dropout(0.5),
        nn.Linear(fc_hidden_units, 10)
    ))
    return net

net = vgg(conv_arch, fc_features, fc_hidden_units)
X = torch.rand(1, 1, 224, 224)

# named_children returns the direct children and their names
# (named_modules would return all submodules recursively)
for name, blk in net.named_children():
    X = blk(X)
    print(name, 'output shape: ', X.shape)

net

Output:

vgg_block_1 output shape:  torch.Size([1, 64, 112, 112])
vgg_block_2 output shape:  torch.Size([1, 128, 56, 56])
vgg_block_3 output shape:  torch.Size([1, 256, 28, 28])
vgg_block_4 output shape:  torch.Size([1, 512, 14, 14])
vgg_block_5 output shape:  torch.Size([1, 512, 7, 7])
fc output shape:  torch.Size([1, 10])

Sequential(
  (vgg_block_1): Sequential(
    (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_2): Sequential(
    (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_3): Sequential(
    (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_4): Sequential(
    (0): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (vgg_block_5): Sequential(
    (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (fc): Sequential(
    (0): FlattenLayer()
    (1): Linear(in_features=25088, out_features=4096, bias=True)
    (2): ReLU()
    (3): Dropout(p=0.5, inplace=False)
    (4): Linear(in_features=4096, out_features=4096, bias=True)
    (5): ReLU()
    (6): Dropout(p=0.5, inplace=False)
    (7): Linear(in_features=4096, out_features=10, bias=True)
  )
)

Training is the same as before.

NiN (Network in Network)

A convolutional layer's input and output are usually 4-dimensional arrays (sample, channel, height, width), whereas a fully connected layer's input and output are 2-dimensional (sample, feature). To place a convolutional layer after a fully connected layer, the fully connected output would have to be reshaped back to four dimensions. As discussed earlier, a $1\times 1$ convolutional layer can be viewed as a fully connected layer in which every spatial position (height and width) is a sample and the channels are the features. NiN therefore uses $1\times 1$ convolutional layers in place of fully connected layers, so that spatial information flows naturally to later layers. The figure below contrasts NiN with AlexNet and VGG.

1593513068342.png

Like VGG, NiN is built from a basic block: one convolutional layer followed by two $1\times 1$ convolutional layers acting as fully connected layers. The hyperparameters of the first convolutional layer can be chosen freely, while those of the second and third are usually fixed.

  • NiN block
def nin_block(in_channels, out_channels, kernel_size, stride, padding):
    blk = nn.Sequential(nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
                        nn.ReLU(),
                        nn.Conv2d(out_channels, out_channels, kernel_size=1),  # two 1x1 convolutions follow
                        nn.ReLU(),
                        nn.Conv2d(out_channels, out_channels, kernel_size=1),
                        nn.ReLU())
    return blk
  • NiN network

NiN was proposed shortly after AlexNet, and its convolutional layers are configured similarly: it uses convolution windows of $11\times 11$, $5\times 5$ and $3\times 3$ with the same output channel counts as AlexNet, and each NiN block is followed by a max pooling layer with stride 2 and a $3\times 3$ window.

Beyond the NiN blocks, NiN departs from AlexNet in one significant way: it drops AlexNet's final three fully connected layers. Instead, the last NiN block has as many output channels as there are label classes, and a global average pooling layer averages all elements of each channel to produce the class scores directly (global average pooling is simply average pooling with a window equal to the input's spatial size). This design greatly reduces the number of model parameters and thus helps against overfitting, although it can sometimes increase the training time needed to reach a good model.


import torch.nn.functional as F

class GlobalAvgPool2d(nn.Module):
    # global average pooling: set the pooling window to the input's height and width
    def __init__(self):
        super(GlobalAvgPool2d, self).__init__()
    def forward(self, x):
        return F.avg_pool2d(x, kernel_size=x.size()[2:])  # the last two dimensions are h and w

net = nn.Sequential(
    nin_block(1, 96, kernel_size=11, stride=4, padding=0),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nin_block(96, 256, kernel_size=5, stride=1, padding=2),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nin_block(256, 384, kernel_size=3, stride=1, padding=1),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Dropout(0.5),
    # there are 10 label classes
    nin_block(384, 10, kernel_size=3, stride=1, padding=1),
    GlobalAvgPool2d(),  # global average pooling over the 10 channels for classification
    # turn the 4-dimensional output into a 2-dimensional one of shape (batch size, 10)
    FlattenLayer())

X = torch.rand(1, 1, 224, 224)
for name, blk in net.named_children():
    X = blk(X)
    print(name, 'output shape: ', X.shape)

Output:

0 output shape:  torch.Size([1, 96, 54, 54])
1 output shape: torch.Size([1, 96, 26, 26])
2 output shape: torch.Size([1, 256, 26, 26])
3 output shape: torch.Size([1, 256, 12, 12])
4 output shape: torch.Size([1, 384, 12, 12])
5 output shape: torch.Size([1, 384, 5, 5])
6 output shape: torch.Size([1, 384, 5, 5])
7 output shape: torch.Size([1, 10, 5, 5])
8 output shape: torch.Size([1, 10, 1, 1])
9 output shape: torch.Size([1, 10])
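To make the parameter-saving claim concrete, one can count the parameters of the NiN model above and compare them with a hypothetical AlexNet-style fully connected head (the 256 * 5 * 5 -> 4096 -> 4096 -> 10 sizes below are illustrative):

nin_params = sum(p.numel() for p in net.parameters())
fc_head_params = 256 * 5 * 5 * 4096 + 4096 * 4096 + 4096 * 10  # weights of a hypothetical FC head
print(nin_params, fc_head_params)  # the FC head alone is in the tens of millions of parameters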

Training is the same as above.

GoogLeNet

GoogLeNet borrows NiN's idea of building the network from repeated blocks, but its basic block, the Inception block, uses parallel branches:

1593522526004.png

An Inception block has 4 parallel branches. The first three use convolutional layers with window sizes of $1\times 1$, $3\times 3$ and $5\times 5$ to extract information at different spatial scales; the middle two of these first apply a $1\times 1$ convolution to reduce the number of input channels and hence the model complexity. The fourth branch applies $3\times 3$ max pooling followed by a $1\times 1$ convolution to change the number of channels. All four branches use suitable padding so that input and output have the same height and width. Finally, the outputs of the four branches are concatenated along the channel dimension and passed to the next layer.

The tunable hyperparameters of an Inception block are the output channel counts of its layers, which is how we control model complexity.

import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class Inception(nn.Module):
    # c1 - c4 are the output channel counts of the layers in each branch
    def __init__(self, in_c, c1, c2, c3, c4):
        super(Inception, self).__init__()
        # branch 1: a single 1 x 1 convolution
        self.p1_1 = nn.Conv2d(in_c, c1, kernel_size=1)
        # branch 2: 1 x 1 convolution followed by a 3 x 3 convolution
        self.p2_1 = nn.Conv2d(in_c, c2[0], kernel_size=1)  # the 1x1 convolution changes the channel count
        self.p2_2 = nn.Conv2d(c2[0], c2[1], kernel_size=3, padding=1)
        # branch 3: 1 x 1 convolution followed by a 5 x 5 convolution
        self.p3_1 = nn.Conv2d(in_c, c3[0], kernel_size=1)
        self.p3_2 = nn.Conv2d(c3[0], c3[1], kernel_size=5, padding=2)
        # branch 4: 3 x 3 max pooling followed by a 1 x 1 convolution
        self.p4_1 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
        self.p4_2 = nn.Conv2d(in_c, c4, kernel_size=1)

    def forward(self, x):
        # parallel branches
        p1 = F.relu(self.p1_1(x))
        p2 = F.relu(self.p2_2(F.relu(self.p2_1(x))))
        p3 = F.relu(self.p3_2(F.relu(self.p3_1(x))))
        p4 = F.relu(self.p4_2(self.p4_1(x)))
        return torch.cat((p1, p2, p3, p4), dim=1)  # concatenate the outputs along the channel dimension

The full GoogLeNet consists of 5 stages:

# Stage 1: a 7x7 convolutional layer with 64 output channels
b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
                   nn.ReLU(),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

# Stage 2: two convolutional layers: a 64-channel 1x1 convolution, then a 3x3 convolution
# that triples the channel count. It corresponds to the second branch of an Inception block.
b2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1),
                   nn.Conv2d(64, 192, kernel_size=3, padding=1),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

# Stage 3: two Inception blocks in series
b3 = nn.Sequential(Inception(192, 64, (96, 128), (16, 32), 32),
                   Inception(256, 128, (128, 192), (32, 96), 64),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

b4 = nn.Sequential(Inception(480, 192, (96, 208), (16, 48), 64),
                   Inception(512, 160, (112, 224), (24, 64), 64),
                   Inception(512, 128, (128, 256), (24, 64), 64),
                   Inception(512, 112, (144, 288), (32, 64), 64),
                   Inception(528, 256, (160, 320), (32, 128), 128),
                   nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

b5 = nn.Sequential(Inception(832, 256, (160, 320), (32, 128), 128),
                   Inception(832, 384, (192, 384), (48, 128), 128),
                   GlobalAvgPool2d())

net = nn.Sequential(b1, b2, b3, b4, b5,
                    FlattenLayer(), nn.Linear(1024, 10))
# GoogLeNet is designed for larger inputs; rather than adjust the model, use 96x96 images here
X = torch.rand(1, 1, 96, 96)
for blk in net.children():
    X = blk(X)
    print('output shape: ', X.shape)

Output:

output shape:  torch.Size([1, 64, 24, 24])
output shape: torch.Size([1, 192, 12, 12])
output shape: torch.Size([1, 480, 6, 6])
output shape: torch.Size([1, 832, 3, 3])
output shape: torch.Size([1, 1024, 1, 1])
output shape: torch.Size([1, 1024])
output shape: torch.Size([1, 10])

Batch Normalization

A batch normalization layer is usually placed before the activation function; for a fully connected layer it sits between the affine transformation and the activation. It first standardizes the samples of a mini-batch and then applies two learnable parameters of the same shape as the standardized data: a scale, multiplied element-wise, and a shift, which is added. Standardization differs slightly between fully connected and convolutional layers: in a fully connected layer, the mean and variance are computed per feature over all samples in the batch; in a convolutional layer, the channels play the role of features, so the mean and variance are computed per channel, aggregating over the sample, height and width dimensions.
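For a mini-batch $\mathcal{B}$, the computation described above is

$$\mu_\mathcal{B} = \frac{1}{|\mathcal{B}|}\sum_{x \in \mathcal{B}} x, \qquad
\sigma_\mathcal{B}^2 = \frac{1}{|\mathcal{B}|}\sum_{x \in \mathcal{B}} (x - \mu_\mathcal{B})^2, \qquad
\hat{x} = \frac{x - \mu_\mathcal{B}}{\sqrt{\sigma_\mathcal{B}^2 + \epsilon}}, \qquad
y = \gamma \odot \hat{x} + \beta,$$

where $\gamma$ (scale) and $\beta$ (shift) are the learnable parameters and $\epsilon$ is a small constant for numerical stability.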

Implementing BN by hand

  • The batch_norm function implements the BN computation; the moving_* arguments serve as the global mean and variance used at prediction time
import time
import torch
from torch import nn, optim
import torch.nn.functional as F

import sys

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def batch_norm(is_training, X, gamma, beta, moving_mean, moving_var, eps, momentum):
    # decide whether we are in training or prediction mode
    if not is_training:
        # in prediction mode, use the moving averages of the mean and variance
        X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)
    else:
        assert len(X.shape) in (2, 4)  # 2D: fully connected layer, 4D: convolutional layer
        if len(X.shape) == 2:
            # fully connected case: compute mean and variance along the feature dimension,
            # i.e. for each feature over all samples
            mean = X.mean(dim=0)
            var = ((X - mean) ** 2).mean(dim=0)
        else:
            # 2D convolution case: compute mean and variance per channel (dim=1),
            # keeping X's shape so broadcasting works later
            mean = X.mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
            var = ((X - mean) ** 2).mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)
        # in training mode, standardize with the current mean and variance
        X_hat = (X - mean) / torch.sqrt(var + eps)
        # update the moving averages of the mean and variance
        moving_mean = momentum * moving_mean + (1.0 - momentum) * mean
        moving_var = momentum * moving_var + (1.0 - momentum) * var
    Y = gamma * X_hat + beta  # scale and shift
    return Y, moving_mean, moving_var
  • The BatchNorm module, which defines the learnable parameters
class BatchNorm(nn.Module):
    def __init__(self, num_features, num_dims):  # num_dims is 2 for fully connected, 4 for convolutional
        super(BatchNorm, self).__init__()
        if num_dims == 2:
            shape = (1, num_features)  # BN parameters are per feature / per channel
        else:
            shape = (1, num_features, 1, 1)
        # scale and shift parameters, involved in gradients and updates, initialized to 1 and 0
        self.gamma = nn.Parameter(torch.ones(shape))
        self.beta = nn.Parameter(torch.zeros(shape))
        # variables not involved in gradients or updates, initialized to 0 on the CPU
        self.moving_mean = torch.zeros(shape)
        self.moving_var = torch.zeros(shape)

    def forward(self, X):
        # if X is not on the CPU, copy moving_mean and moving_var to X's device
        if self.moving_mean.device != X.device:
            self.moving_mean = self.moving_mean.to(X.device)
            self.moving_var = self.moving_var.to(X.device)
        # save the updated moving_mean and moving_var; a Module's training attribute is True by default
        # and is set to False by calling .eval()
        Y, self.moving_mean, self.moving_var = batch_norm(self.training,
            X, self.gamma, self.beta, self.moving_mean,
            self.moving_var, eps=1e-5, momentum=0.9)
        return Y

class FlattenLayer(nn.Module):
    def __init__(self):
        super().__init__()
    def forward(self, x):  # x shape: (batch, *, *, ...)
        return x.view(x.shape[0], -1)

# LeNet with BN layers added
net = nn.Sequential(
    nn.Conv2d(1, 6, 5),  # in_channels, out_channels, kernel_size
    BatchNorm(6, num_dims=4),
    nn.Sigmoid(),
    nn.MaxPool2d(2, 2),  # kernel_size, stride
    nn.Conv2d(6, 16, 5),
    BatchNorm(16, num_dims=4),
    nn.Sigmoid(),
    nn.MaxPool2d(2, 2),
    FlattenLayer(),
    nn.Linear(16 * 4 * 4, 120),
    BatchNorm(120, num_dims=2),
    nn.Sigmoid(),
    nn.Linear(120, 84),
    BatchNorm(84, num_dims=2),
    nn.Sigmoid(),
    nn.Linear(84, 10)
)

batch_size = 256
train_iter, test_iter = load_data_fashion_mnist(batch_size=batch_size)

lr, num_epochs = 0.001, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
train_ch5(net, train_iter, test_iter, batch_size, optimizer, device, num_epochs)

net[1].gamma.view((-1,)), net[1].beta.view((-1,))

Output: the scale and shift parameters learned by the first BN layer

(tensor([1.1579, 1.0431, 1.1592, 1.0162, 1.1995, 1.1381], device='cuda:0',
grad_fn=<ViewBackward>),
tensor([ 0.0797, -0.0031, 0.3353, -0.4526, 0.0626, 0.2471], device='cuda:0',
grad_fn=<ViewBackward>))

Concise Implementation

The BatchNorm1d and BatchNorm2d classes in PyTorch's nn module are simpler to use; they are intended for fully connected and convolutional layers respectively, and both require the num_features argument. Below is LeNet with batch normalization implemented using them.

net = nn.Sequential(
    nn.Conv2d(1, 6, 5),  # in_channels, out_channels, kernel_size
    nn.BatchNorm2d(6),
    nn.Sigmoid(),
    nn.MaxPool2d(2, 2),  # kernel_size, stride
    nn.Conv2d(6, 16, 5),
    nn.BatchNorm2d(16),
    nn.Sigmoid(),
    nn.MaxPool2d(2, 2),
    FlattenLayer(),
    nn.Linear(16 * 4 * 4, 120),
    nn.BatchNorm1d(120),
    nn.Sigmoid(),
    nn.Linear(120, 84),
    nn.BatchNorm1d(84),
    nn.Sigmoid(),
    nn.Linear(84, 10)
)

ResNet (Residual Network)

1593535171504.png

class Residual(nn.Module):  # this class is also saved in the d2lzh_pytorch package for later use
    def __init__(self, in_channels, out_channels, use_1x1conv=False, stride=1):
        super(Residual, self).__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=stride)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
        if use_1x1conv:
            self.conv3 = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride)
        else:
            self.conv3 = None
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.bn2 = nn.BatchNorm2d(out_channels)

    def forward(self, X):
        Y = F.relu(self.bn1(self.conv1(X)))  # see the figure above
        Y = self.bn2(self.conv2(Y))
        if self.conv3:
            X = self.conv3(X)  # if Y's shape has changed, the 1x1 convolution makes X match it
        return F.relu(Y + X)

# The block can either keep the input shape or change it via the 1x1 convolution and the stride
blk = Residual(3, 3)
X = torch.rand((4, 3, 6, 6))
blk(X).shape  # torch.Size([4, 3, 6, 6])

blk = Residual(3, 6, use_1x1conv=True, stride=2)
blk(X).shape  # torch.Size([4, 6, 3, 3])

ResNet is assembled much like GoogLeNet: it starts with the usual stem, followed by modules made of residual blocks. The first residual module keeps the number of channels equal to its input, while each subsequent module doubles the channels and halves the height and width. Each residual module here consists of 2 residual blocks.

net = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

def resnet_block(in_channels, out_channels, num_residuals, first_block=False):
    if first_block:
        assert in_channels == out_channels  # the first module keeps the channel count of its input
    blk = []
    for i in range(num_residuals):
        if i == 0 and not first_block:
            blk.append(Residual(in_channels, out_channels, use_1x1conv=True, stride=2))
        else:
            blk.append(Residual(out_channels, out_channels))
    return nn.Sequential(*blk)

net.add_module("resnet_block1", resnet_block(64, 64, 2, first_block=True))
net.add_module("resnet_block2", resnet_block(64, 128, 2))
net.add_module("resnet_block3", resnet_block(128, 256, 2))
net.add_module("resnet_block4", resnet_block(256, 512, 2))

net.add_module("global_avg_pool", GlobalAvgPool2d())  # output of GlobalAvgPool2d: (batch, 512, 1, 1)
net.add_module("fc", nn.Sequential(FlattenLayer(), nn.Linear(512, 10)))

X = torch.rand((1, 1, 224, 224))
for name, layer in net.named_children():
    X = layer(X)
    print(name, ' output shape:\t', X.shape)

Output:

0  output shape:	 torch.Size([1, 64, 112, 112])
1 output shape: torch.Size([1, 64, 112, 112])
2 output shape: torch.Size([1, 64, 112, 112])
3 output shape: torch.Size([1, 64, 56, 56])
resnet_block1 output shape: torch.Size([1, 64, 56, 56])
resnet_block2 output shape: torch.Size([1, 128, 28, 28])
resnet_block3 output shape: torch.Size([1, 256, 14, 14])
resnet_block4 output shape: torch.Size([1, 512, 7, 7])
global_avg_pool output shape: torch.Size([1, 512, 1, 1])
fc output shape: torch.Size([1, 10])

DenseNet

1593588801710.png

The key difference from ResNet is that in DenseNet the output of module $B$ is not added to the output of module $A$; instead the two are concatenated along the channel dimension. The output of module $A$ is thus passed directly into all the layers after module $B$, which is why the connections are called "dense".
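A minimal sketch of the difference (arbitrary shapes):

A = torch.rand(1, 64, 8, 8)
B = torch.rand(1, 64, 8, 8)
print((A + B).shape)                   # ResNet-style addition: still 64 channels
print(torch.cat((A, B), dim=1).shape)  # DenseNet-style concatenation: 128 channels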

DenseNet's main building blocks are the dense block and the transition layer. The former defines how inputs and outputs are concatenated, while the latter keeps the number of channels from growing too large.

  • Dense block
# the basic convolution unit used inside a dense block
def conv_block(in_channels, out_channels):
    blk = nn.Sequential(nn.BatchNorm2d(in_channels),
                        nn.ReLU(),
                        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1))
    return blk

# A dense block consists of several conv blocks; each output is concatenated along the channel dimension
class DenseBlock(nn.Module):
    def __init__(self, num_convs, in_channels, out_channels):
        super(DenseBlock, self).__init__()
        net = []
        for i in range(num_convs):
            in_c = in_channels + i * out_channels
            # each conv block's input is the previous block's input concatenated with its output;
            # every block outputs out_channels channels, hence in_channels + i * out_channels
            net.append(conv_block(in_c, out_channels))
        self.net = nn.ModuleList(net)
        self.out_channels = in_channels + num_convs * out_channels  # number of output channels

    def forward(self, X):
        for blk in self.net:
            Y = blk(X)
            X = torch.cat((X, Y), dim=1)  # concatenate each block's input and output along the channel dimension
        return X


blk = DenseBlock(2, 3, 10)
X = torch.rand(4, 3, 8, 8)
Y = blk(X)
Y.shape  # torch.Size([4, 23, 8, 8])
  • Transition layer
# Dense blocks are chained, so the channel count keeps growing; a transition layer uses a 1x1
# convolution to reduce the number of channels and average pooling with stride 2 to halve height and width
def transition_block(in_channels, out_channels):
    blk = nn.Sequential(
        nn.BatchNorm2d(in_channels),
        nn.ReLU(),
        nn.Conv2d(in_channels, out_channels, kernel_size=1),  # 1x1 convolution
        nn.AvgPool2d(kernel_size=2, stride=2))  # average pooling with stride 2
    return blk

blk = transition_block(23, 10)
blk(Y).shape  # torch.Size([4, 10, 4, 4])
  • Building DenseNet

As before, the network is a composition of these blocks, with the usual input layers at the front and output layers at the back.

import torch.nn as nn

# the usual input convolution
net = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1))

num_channels, growth_rate = 64, 32  # num_channels is the current channel count
num_convs_in_dense_blocks = [4, 4, 4, 4]

# add the dense blocks
for i, num_convs in enumerate(num_convs_in_dense_blocks):
    DB = DenseBlock(num_convs, num_channels, growth_rate)  # number of conv blocks, input channels, growth rate
    net.add_module("DenseBlock_%d" % i, DB)
    # channel count output by the previous dense block
    num_channels = DB.out_channels
    # insert a transition layer that halves the channel count between dense blocks
    if i != len(num_convs_in_dense_blocks) - 1:  # not after the last block
        net.add_module("transition_block_%d" % i, transition_block(num_channels, num_channels // 2))
        num_channels = num_channels // 2

# finish with global pooling and a fully connected layer
net.add_module("BN", nn.BatchNorm2d(num_channels))
net.add_module("relu", nn.ReLU())
net.add_module("global_avg_pool", GlobalAvgPool2d())  # output of GlobalAvgPool2d: (batch, num_channels, 1, 1)
net.add_module("fc", nn.Sequential(FlattenLayer(), nn.Linear(num_channels, 10)))


X = torch.rand((1, 1, 96, 96))
for name, layer in net.named_children():
    X = layer(X)
    print(name, ' output shape:\t', X.shape)

Output:

0 output shape:  torch.Size([1, 64, 48, 48])
1 output shape:  torch.Size([1, 64, 48, 48])
2 output shape:  torch.Size([1, 64, 48, 48])
3 output shape:  torch.Size([1, 64, 24, 24])
DenseBlock_0 output shape:  torch.Size([1, 192, 24, 24])
transition_block_0 output shape:  torch.Size([1, 96, 12, 12])
DenseBlock_1 output shape:  torch.Size([1, 224, 12, 12])
transition_block_1 output shape:  torch.Size([1, 112, 6, 6])
DenseBlock_2 output shape:  torch.Size([1, 240, 6, 6])
transition_block_2 output shape:  torch.Size([1, 120, 3, 3])
DenseBlock_3 output shape:  torch.Size([1, 248, 3, 3])
BN output shape:  torch.Size([1, 248, 3, 3])
relu output shape:  torch.Size([1, 248, 3, 3])
global_avg_pool output shape:  torch.Size([1, 248, 1, 1])
fc output shape:  torch.Size([1, 10])

Comparing with the printed network structure:

Sequential(
  (0): Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
  (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
  (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (DenseBlock_0): DenseBlock(
    (net): ModuleList(
      (0): Sequential(
        (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU()
        (2): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (1): Sequential(
        (0): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU()
        (2): Conv2d(96, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))