
PyTorch CNN in Practice: An MNIST Handwritten Digit Recognition Example

August 20, 2018


Introduction

The convolutional neural network (CNN) is one of the most representative architectures in deep learning. It has been very successful in image processing, and on the standard ImageNet benchmark many successful models are CNN-based.

The structure of a CNN generally consists of the following layers:

  1. Input layer: receives the raw input data
  2. Convolutional layer: applies convolution kernels for feature extraction and feature mapping
  3. Activation layer: since convolution is a linear operation, a non-linear mapping is added on top
  4. Pooling layer: downsamples and sparsifies the feature maps to reduce computation (see the shape walk-through after this list)
  5. Fully connected layer: usually sits at the tail of the CNN to re-fit the features and reduce the loss of feature information
  6. Output layer: produces the final result
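
To make the convolution and pooling bookkeeping concrete: a 5x5 convolution without padding shrinks each spatial dimension by 4, and a 2x2 max pool halves it. Starting from a 28x28 MNIST image, two such conv+pool stages leave a 20-channel 4x4 feature map, which is exactly the 320 input features of the fully connected layer used in the network below. A minimal sketch verifying this with a dummy tensor:

import torch
import torch.nn as nn

x = torch.zeros(1, 1, 28, 28)             # one dummy MNIST-sized image
conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 28 -> 24
conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 12 -> 8
mp = nn.MaxPool2d(2)                      # halves each spatial dimension
out = mp(conv2(mp(conv1(x))))
print(out.shape)    # torch.Size([1, 20, 4, 4])
print(out.numel())  # 320 = 20 * 4 * 4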

PyTorch in Practice

This article reuses the MNIST handwritten-digit dataset from the previous post to put a CNN into practice.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms

# training settings
batch_size = 64

# MNIST dataset
train_dataset = datasets.MNIST(root='./data/',
                train=True,
                transform=transforms.ToTensor(),
                download=True)

test_dataset = datasets.MNIST(root='./data/',
               train=False,
               transform=transforms.ToTensor())

# Data loader (input pipeline)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                      batch_size=batch_size,
                      shuffle=True)

test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                     batch_size=batch_size,
                     shuffle=False)
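
Before defining the network, it helps to confirm what the loaders yield: each training batch is a tensor of 64 single-channel 28x28 images plus 64 integer labels.

images, labels = next(iter(train_loader))
print(images.size())  # torch.Size([64, 1, 28, 28])
print(labels.size())  # torch.Size([64])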


class Net(nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    # 1 input channel, 10 output channels, 5x5 kernel
    self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
    self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
    self.mp = nn.MaxPool2d(2)
    # fully connected: 20 channels * 4 * 4 = 320 features in, 10 classes out
    self.fc = nn.Linear(320, 10)

  def forward(self, x):
    in_size = x.size(0)  # batch size (64 here)
    # x: 64 x 1 x 28 x 28 -> conv1 + pool -> 64 x 10 x 12 x 12
    x = F.relu(self.mp(self.conv1(x)))
    # -> conv2 + pool -> 64 x 20 x 4 x 4
    x = F.relu(self.mp(self.conv2(x)))
    # flatten the tensor: 64 x 320
    x = x.view(in_size, -1)
    # 64 x 10
    x = self.fc(x)
    return F.log_softmax(x, dim=1)


model = Net()

optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
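
For reference, PyTorch's SGD with momentum keeps a running velocity for each parameter, v <- momentum * v + grad, and then steps p <- p - lr * v; momentum=0.5 therefore carries half of the previous update into the current one.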

def train(epoch):
  for batch_idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    if batch_idx % 200 == 0:
      print('train epoch: {} [{}/{} ({:.0f}%)]\tloss: {:.6f}'.format(
        epoch, batch_idx * len(data), len(train_loader.dataset),
        100. * batch_idx / len(train_loader), loss.item()))


def test():
  test_loss = 0
  correct = 0
  with torch.no_grad():  # no gradients needed during evaluation
    for data, target in test_loader:
      output = model(data)
      # sum up batch loss
      test_loss += F.nll_loss(output, target, reduction='sum').item()
      # get the index of the max log-probability
      pred = output.argmax(dim=1, keepdim=True)
      correct += pred.eq(target.view_as(pred)).sum().item()

  test_loss /= len(test_loader.dataset)
  print('\ntest set: average loss: {:.4f}, accuracy: {}/{} ({:.0f}%)\n'.format(
    test_loss, correct, len(test_loader.dataset),
    100. * correct / len(test_loader.dataset)))


for epoch in range(1, 10):
  train(epoch)
  test()
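
A side note on the loss: because the model already ends in F.log_softmax, training with F.nll_loss computes exactly the standard cross-entropy. The pair can be collapsed into a single F.cross_entropy call on raw scores, as this small sketch with made-up values shows:

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)          # made-up scores: 4 samples, 10 classes
target = torch.tensor([1, 0, 4, 9])  # made-up labels
a = F.nll_loss(F.log_softmax(logits, dim=1), target)
b = F.cross_entropy(logits, target)  # log_softmax + nll_loss in one call
print(torch.allclose(a, b))          # True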

Output:

train epoch: 1 [0/60000 (0%)]   loss: 2.315724
train epoch: 1 [12800/60000 (21%)]  loss: 1.931551
train epoch: 1 [25600/60000 (43%)]  loss: 0.733935
train epoch: 1 [38400/60000 (64%)]  loss: 0.165043
train epoch: 1 [51200/60000 (85%)]  loss: 0.235188

test set: average loss: 0.1935, accuracy: 9421/10000 (94%)

train epoch: 2 [0/60000 (0%)]   loss: 0.333513
train epoch: 2 [12800/60000 (21%)]  loss: 0.163156
train epoch: 2 [25600/60000 (43%)]  loss: 0.213840
train epoch: 2 [38400/60000 (64%)]  loss: 0.141114
train epoch: 2 [51200/60000 (85%)]  loss: 0.128191

test set: average loss: 0.1180, accuracy: 9645/10000 (96%)

train epoch: 3 [0/60000 (0%)]   loss: 0.206469
train epoch: 3 [12800/60000 (21%)]  loss: 0.234443
train epoch: 3 [25600/60000 (43%)]  loss: 0.061048
train epoch: 3 [38400/60000 (64%)]  loss: 0.192217
train epoch: 3 [51200/60000 (85%)]  loss: 0.089190

test set: average loss: 0.0938, accuracy: 9723/10000 (97%)

train epoch: 4 [0/60000 (0%)]   loss: 0.086325
train epoch: 4 [12800/60000 (21%)]  loss: 0.117741
train epoch: 4 [25600/60000 (43%)]  loss: 0.188178
train epoch: 4 [38400/60000 (64%)]  loss: 0.049807
train epoch: 4 [51200/60000 (85%)]  loss: 0.174097

test set: average loss: 0.0743, accuracy: 9767/10000 (98%)

train epoch: 5 [0/60000 (0%)]   loss: 0.063171
train epoch: 5 [12800/60000 (21%)]  loss: 0.061265
train epoch: 5 [25600/60000 (43%)]  loss: 0.103549
train epoch: 5 [38400/60000 (64%)]  loss: 0.019137
train epoch: 5 [51200/60000 (85%)]  loss: 0.067103

test set: average loss: 0.0720, accuracy: 9781/10000 (98%)

train epoch: 6 [0/60000 (0%)]   loss: 0.069251
train epoch: 6 [12800/60000 (21%)]  loss: 0.075502
train epoch: 6 [25600/60000 (43%)]  loss: 0.052337
train epoch: 6 [38400/60000 (64%)]  loss: 0.015375
train epoch: 6 [51200/60000 (85%)]  loss: 0.028996

test set: average loss: 0.0694, accuracy: 9783/10000 (98%)

train epoch: 7 [0/60000 (0%)]   loss: 0.171613
train epoch: 7 [12800/60000 (21%)]  loss: 0.078520
train epoch: 7 [25600/60000 (43%)]  loss: 0.149186
train epoch: 7 [38400/60000 (64%)]  loss: 0.026692
train epoch: 7 [51200/60000 (85%)]  loss: 0.108824

test set: average loss: 0.0672, accuracy: 9793/10000 (98%)

train epoch: 8 [0/60000 (0%)]   loss: 0.029188
train epoch: 8 [12800/60000 (21%)]  loss: 0.031202
train epoch: 8 [25600/60000 (43%)]  loss: 0.194858
train epoch: 8 [38400/60000 (64%)]  loss: 0.051497
train epoch: 8 [51200/60000 (85%)]  loss: 0.024832

test set: average loss: 0.0535, accuracy: 9837/10000 (98%)

train epoch: 9 [0/60000 (0%)]   loss: 0.026706
train epoch: 9 [12800/60000 (21%)]  loss: 0.057807
train epoch: 9 [25600/60000 (43%)]  loss: 0.065225
train epoch: 9 [38400/60000 (64%)]  loss: 0.037004
train epoch: 9 [51200/60000 (85%)]  loss: 0.057822

test set: average loss: 0.0538, accuracy: 9829/10000 (98%)

process finished with exit code 0

Reference: https://github.com/hunkim/pytorchzerotoall

That's all for this article; I hope it is helpful for your study.
