Configuring Theano with GPU Support on Windows

I'm taking the Advanced Machine Learning course on Coursera, and the week 4 assignment uses PyMC3, which apparently runs on a Theano backend. The CPU version is unbearably slow: a Markov chain Monte Carlo run took about 10 hours, which I simply couldn't stand. So I switched to the GPU version.

To avoid breaking my existing setup, I created a new environment in Anaconda. (For background on Anaconda, see the article I translated earlier.)

conda create -n theano-gpu python=3.4

(The GPU build of Theano didn't seem to support the latest Python version, so to be safe I installed an older one.)
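After creating the environment, activate it so that the install below actually goes into it (on the conda of that era the Windows command is activate; newer conda uses conda activate instead):

activate theano-gpu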

conda install theano pygpu

This pulls in quite a few dependencies. conda should sort them out for you; if anything is missing, install it yourself following the official documentation.

As for installing CUDA and cuDNN, see the tutorial I wrote earlier about installing TensorFlow.
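Once CUDA and cuDNN are in place, a quick sanity check from a command prompt (assuming the installer put the CUDA bin directory on PATH) is to print the toolkit version and list the visible GPUs:

nvcc --version
nvidia-smi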

Unlike TensorFlow, Theano doesn't ship separate GPU and CPU packages: which device it uses is decided by a configuration file. This I learned from other people's blog posts. Once the Theano environment is set up, just add a .theanorc.txt file under C:\Users\<your username>.

.theanorc.txt contents:

[global]
openmp = False
device = cuda
floatX = float32
base_compiler = C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin
allow_input_downcast = True

[lib]
cnmem = 0.75

[blas]
ldflags =

[gcc]
cxxflags = -IC:\Users\lyh\Anaconda2\MinGW

[nvcc]
fastmath = True
flags = -LC:\Users\lyh\Anaconda2\libs -arch=sm_30
compiler_bindir = C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\bin

Note that in recent versions, the setting that enables the GPU changed from device=gpu to device=cuda.
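A quick way to confirm that the .theanorc.txt is actually being picked up is to print Theano's effective settings after import:

import theano
print(theano.config.device)   # should print 'cuda' if the config file was found
print(theano.config.floatX)   # should print 'float32'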

Then test whether it worked:

from theano import function, config, shared, tensor
import numpy
import time

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], tensor.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, tensor.Elemwise) and
              ('Gpu' not in type(x.op).__name__)
              for x in f.maker.fgraph.toposort()]):
    print('Used the cpu')
else:
    print('Used the gpu')

Output:

[GpuElemwise{exp,no_inplace}(<GpuArrayType<None>(float32, vector)>), HostFromGpu(gpuarray)(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 0.377000 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
  1.62323296]
Used the gpu

At this point the configuration is done.

Then, inside the assignment, Theano reported that the Quadro card was in use.


But there was still a warning:

WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.

I never figured out how to get rid of this one. As far as I can tell, it just means Theano couldn't link an optimized BLAS library (ldflags under [blas] above is empty) and fell back to NumPy's implementation, which only affects CPU-side BLAS calls.
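If the warning bothers you, one fix people report (untested here, and the path below is illustrative for this particular Anaconda install) is to link the MKL that ships with Anaconda in the [blas] section instead of leaving ldflags empty:

[blas]
ldflags = -LC:\Users\lyh\Anaconda2\Library\bin -lmkl_rt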

Then, further along, running this cell:

import pymc3 as pm  # imported earlier in the notebook; `data` is the assignment's DataFrame

with pm.Model() as logistic_model:
    # Since it is unlikely that the dependency between the age and salary is linear, we include age squared
    # in the features so that we can model a dependency that favors certain ages.
    # Train a Bayesian logistic regression model on the following features: sex, age, age^2, educ, hours.
    # Use pm.sample to run MCMC to train this model.
    # To specify the particular sampler method (Metropolis-Hastings) to pm.sample, use `pm.Metropolis`.
    # Train your model for 400 samples.
    # Save the output of pm.sample to a variable: this is the trace of the sampling procedure and will be used
    # to estimate the statistics of the posterior distribution.

    #### YOUR CODE HERE ####

    pm.glm.GLM.from_formula('income_more_50K ~ sex + age + age_square + educ + hours',
                            data, family=pm.glm.families.Binomial())
    trace = pm.sample(400, step=[pm.Metropolis()])

    ### END OF YOUR CODE ###

raised this error:

GpuArrayException: cuMemcpyDtoHAsync(dst, src->ptr + srcoff, sz, ctx->mem_s): CUDA_ERROR_INVALID_VALUE: invalid argument

This was eventually solved by a helpful expert on GitHub:
So njobs will spawn multiple chains to run in parallel. If the model uses the GPU there will be a conflict. We recently added nchains where you can still run multiple chains. So I think running pm.sample(niter, nchains=4, njobs=1) should give you what you want.
So I took this line:

trace = pm.sample(400, step=[pm.Metropolis()])

and added nchains (plus njobs=1), which fixed it; the conflict evidently came from trying to sample multiple chains in parallel on a single GPU:

trace = pm.sample(400, step=[pm.Metropolis()], nchains=1, njobs=1)  # nchains=1 works for the GPU model

Also,

plot_traces(trace, burnin=200)

raised an error about pm.df_summary; replacing pm.df_summary with pm.summary fixed it (also found by searching GitHub; df_summary was apparently renamed to summary in newer PyMC3 releases).
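For reference, here is a minimal sketch of the patched helper, assuming the course notebook's plot_traces is a thin wrapper around pm.traceplot (the styling details here are guesses; the only real change is the pm.df_summary to pm.summary swap):

import pymc3 as pm

def plot_traces(trace, burnin=200):
    # Drop the burn-in samples, then plot each variable's sampling trace
    # with its posterior mean overlaid as a reference line.
    summary = pm.summary(trace[burnin:])  # was pm.df_summary in older PyMC3
    # The lines dict maps a variable name to the value to mark; this works
    # for scalar parameters (vector parameters get suffixed rows in summary).
    pm.traceplot(trace[burnin:],
                 lines={name: row['mean'] for name, row in summary.iterrows()})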
