1. Related software
Xshell
Xmanager
PyCharm (on Windows)
PyCharm license (crack) server: https://jetlicense.nss.im/
2. Install the software listed above (with the cracks applied).
a> Start Xmanager - Passive; it accepts the X11 connections forwarded over from Linux:
b> Configure Xshell to forward X11 to the Windows machine through an SSH tunnel.
On the server being configured, run echo $DISPLAY; the output looks like this:
c> With that in place, graphical programs on Linux can be forwarded to the Windows machine. For example, running eog 1.jpg at the Linux command line displays 1.jpg on Windows:
You can see that Linux has forwarded X11 over the SSH tunnel to Windows, and the X server doing the rendering behind the scenes is the one provided by Xmanager:
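The same check can be done from Python before involving PyCharm at all. The sketch below is my own addition, assuming matplotlib (with Pillow for JPEG decoding) is installed on the Linux machine and that the 1.jpg from the eog test sits in the current directory:

# quick_x11_check.py -- my own sketch, not part of the original post
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# DISPLAY is set by Xshell's X11 forwarding rule, e.g. localhost:11.0
display = os.environ.get('DISPLAY')
if not display:
    raise SystemExit('DISPLAY is not set; check the Xshell X11 forwarding rule')
print('X11 is being forwarded via', display)

img = mpimg.imread('1.jpg')   # the same image shown with eog above
plt.imshow(img)
plt.axis('off')
plt.show()                    # the window should appear on the Windows side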
3. Configure PyCharm so that it can use the Python environment on the Linux machine remotely over SSH, with remote debugging and code synchronization:
a> Create a new Python project and choose its location:
b> In File->Settings, add the remote Linux server:
Choose Add.., then fill in the remote server's IP address, username, and password:
Under Path mappings, map the Windows directory to the corresponding Linux directory; this makes code synchronization between Windows and Linux straightforward:
On the Settings=>Tools=>Python Scientific page, turn off "Show plots in tool window"; otherwise PyCharm intercepts the plots itself instead of letting the X11 traffic from Linux reach Xmanager:
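As a quick sanity check (my own addition, not part of the original setup), print the active backend from the remote interpreter; with the option off you should see a normal X11-capable backend such as TkAgg or Qt5Agg rather than PyCharm's internal plot viewer (reported as something like module://backend_interagg):

import matplotlib
print(matplotlib.get_backend())  # expect e.g. 'TkAgg', not 'module://backend_interagg'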
c> Set up source code synchronization:
At this point the PyCharm, Xshell, and Xmanager pieces are all wired together. Next, write a test script in PyCharm: PyCharm pushes it to the Linux machine, Linux forwards the drawing over X11 to Windows, and the plots appear on the Windows side. The code, line.py, is as follows:
import matplotlib.pyplot as plt
import numpy as np

# A simple dashed line plot, rendered on Windows through the X11 tunnel.
plt.plot(range(5), linestyle='--', linewidth=3, color='purple')
plt.ylabel('Numbers from 1-5')
plt.xlabel('Love yohanna')
plt.show()
#plt.clf()
#plt.close()

# A scatter plot of 1000 random points, shown in a second window.
N = 1000
x = np.random.randn(N)
y = np.random.randn(N)
z = np.random.randn(N)
plt.scatter(z, y, color='yellow', linewidths=0.05)
plt.scatter(x, y, color='green', linewidths=0.2)
plt.axis([-4, 4, -4, 4])
plt.show()
Set the environment variables in PyCharm's run configuration for line.py:
In the environment-variable list, add a DISPLAY variable with the value localhost:11.0 (obtain the exact value by running echo $DISPLAY in the Linux shell once the Xshell X11 forwarding rule is in place).
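Alternatively, the script can supply a default DISPLAY itself before pyplot is imported. This is a sketch of mine rather than part of the original setup, and localhost:11.0 is only the value echo $DISPLAY happened to report here:

import os
# 'localhost:11.0' is the value echo $DISPLAY reported in this setup;
# substitute whatever your own shell prints.
os.environ.setdefault('DISPLAY', 'localhost:11.0')

import matplotlib.pyplot as plt  # import pyplot only after DISPLAY is set

plt.plot(range(5))
plt.show()  # appears on Windows exactly as with the run-configuration approach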
Click the Run button and the plots show up on Windows:
4. Switch PyCharm to Emacs-style editing: File->Settings->Keymap->Emacs
Set the file line endings to Unix format:
5. Set up the PyTorch+CUDA environment:
a> Install PyTorch: sudo pip install torch (note that the PyPI package is named torch, not pytorch; the MNIST example below also needs torchvision).
b> Download and install CUDA:
https://developer.nvidia.com/cuda-toolkit-archive
By default CUDA installs to /usr/local/cuda-8.0.
If you run into an X display problem during installation, such as:
It appears that an X server is running. Please exit X before installation. If you're sure that X is not running, but are getting this error, please delete any X lock files in /tmp.
then try stopping the display manager with /etc/init.d/lightdm stop
and retry the installation. After it succeeds, the log looks like this:
c> Download and install cuDNN:
https://developer.nvidia.com/rdp/cudnn-download
For libcudnn, install the runtime package first, then the dev package.
d> With CUDA and cuDNN installed, torch now reports CUDA support: torch.cuda.is_available() --> True
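A slightly fuller check than that one-liner (my own sketch, using only standard torch APIs) prints the versions involved and launches a small kernel on the GPU:

import torch

print('torch version :', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('CUDA version  :', torch.version.cuda)
    print('cuDNN version :', torch.backends.cudnn.version())
    print('GPU           :', torch.cuda.get_device_name(0))
    # A tiny matmul on the GPU to make sure kernels actually launch.
    a = torch.randn(1000, 1000, device='cuda')
    b = torch.randn(1000, 1000, device='cuda')
    print('matmul OK, sum =', float((a @ b).sum()))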
6. Now that the PyTorch+CUDA environment is ready, we can run a simple MNIST example. First download the code as torch_minist.py:
# Train on the MNIST dataset using pytorch
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import pdb

# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                    help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()

use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device('cuda' if use_cuda else 'cpu')

kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([transforms.ToTensor(),
                                                 transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([transforms.ToTensor(),
                                                 transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=args.test_batch_size, shuffle=True, **kwargs)

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

def train(epoch):
    #pdb.set_trace()
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

def test():
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            # sum up batch loss (newer PyTorch replaces size_average=False with reduction='sum')
            test_loss += F.nll_loss(output, target, size_average=False).item()
            pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

for epoch in range(1, args.epochs + 1):
    train(epoch)
    test()
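As a quick smoke test, the script can also be launched straight from the Xshell session, e.g. python torch_minist.py --epochs 1 --log-interval 100, or with --no-cuda to compare against the CPU; all of these flags come from the argparse block above.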
Then set the environment variable DISPLAY=localhost:11.0 in the run configuration for torch_minist.py and click Run to watch it train:
And that is the last of the screenshots.