Error description:
aita@aita-Alienware-Area-51-R5:~/AITA2/daisida/ssd-github/caffe$ make runtest -j8
.build_release/tools/caffe
caffe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

Flags from tools/caffe.cpp:
  -gpu (Optional; run in GPU mode on given device IDs separated by ','. Use '-gpu all' to run on all available GPUs. The effective training batch size is multiplied by the number of devices.) type: string default: ""
  -iterations (The number of iterations to run.) type: int32 default: 50
  -level (Optional; network level.) type: int32 default: 0
  -model (The model definition protocol buffer text file.) type: string default: ""
  -phase (Optional; network phase (TRAIN or TEST). Only used for 'time'.) type: string default: ""
  -sighup_effect (Optional; action to take when a SIGHUP signal is received: snapshot, stop or none.) type: string default: "snapshot"
  -sigint_effect (Optional; action to take when a SIGINT signal is received: snapshot, stop or none.) type: string default: "stop"
  -snapshot (Optional; the snapshot solver state to resume training.) type: string default: ""
  -solver (The solver definition protocol buffer text file.) type: string default: ""
  -stage (Optional; network stages (not to be confused with phase), separated by ','.) type: string default: ""
  -weights (Optional; the pretrained weights to initialize finetuning, separated by ','. Cannot be set simultaneously with snapshot.) type: string default: ""

.build_release/test/test_all.testbin 0 --gtest_shuffle
Cuda number of devices: 3
Setting to use device 0
Current device id: 0
Current device name: GeForce GTX 1080 Ti
Note: Randomizing tests' orders with a seed of 48866 .
[==========] Running 2361 tests from 309 test cases.
[----------] Global test environment set-up.
[----------] 7 tests from DetectionOutputLayerTest/2, where TypeParam = caffe::GPUDevice<float>
[ RUN      ] DetectionOutputLayerTest/2.TestForwardShareLocationTopK
E0103 00:37:53.042623 19470 common.cpp:113] Cannot create Cublas handle. Cublas won't be available.
[       OK ] DetectionOutputLayerTest/2.TestForwardShareLocationTopK (219 ms)
[ RUN      ] DetectionOutputLayerTest/2.TestForwardNoShareLocationNeg0TopK
[       OK ] DetectionOutputLayerTest/2.TestForwardNoShareLocationNeg0TopK (2 ms)
[ RUN      ] DetectionOutputLayerTest/2.TestSetup
[       OK ] DetectionOutputLayerTest/2.TestSetup (1 ms)
[ RUN      ] DetectionOutputLayerTest/2.TestForwardNoShareLocationNeg0
[       OK ] DetectionOutputLayerTest/2.TestForwardNoShareLocationNeg0 (2 ms)
[ RUN      ] DetectionOutputLayerTest/2.TestForwardNoShareLocation
[       OK ] DetectionOutputLayerTest/2.TestForwardNoShareLocation (2 ms)
[ RUN      ] DetectionOutputLayerTest/2.TestForwardShareLocation
[       OK ] DetectionOutputLayerTest/2.TestForwardShareLocation (1 ms)
[ RUN      ] DetectionOutputLayerTest/2.TestForwardNoShareLocationTopK
[       OK ] DetectionOutputLayerTest/2.TestForwardNoShareLocationTopK (2 ms)
[----------] 7 tests from DetectionOutputLayerTest/2 (229 ms total)
[----------] 2 tests from EuclideanLossLayerTest/2, where TypeParam = caffe::GPUDevice<float>
[ RUN      ] EuclideanLossLayerTest/2.TestGradient
F0103 00:37:53.068140 19470 math_functions.cu:110] Check failed: status == CUBLAS_STATUS_SUCCESS (1 vs. 0)  CUBLAS_STATUS_NOT_INITIALIZED
*** Check failure stack trace: ***
    @     0x7f9210daf5cd  google::LogMessage::Fail()
    @     0x7f9210db1433  google::LogMessage::SendToLog()
    @     0x7f9210daf15b  google::LogMessage::Flush()
    @     0x7f9210db1e1e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f920c7ad43a  caffe::caffe_gpu_dot<>()
    @     0x7f920c7ec7c3  caffe::EuclideanLossLayer<>::Forward_gpu()
    @           0x48ae96  caffe::Layer<>::Forward()
    @           0x48d445  caffe::GradientChecker<>::CheckGradientSingle()
    @           0x4aea53  caffe::GradientChecker<>::CheckGradientExhaustive()
    @           0x848f0c  caffe::EuclideanLossLayerTest_TestGradient_Test<>::TestBody()
    @           0xa17c23  testing::internal::HandleExceptionsInMethodIfSupported<>()
    @           0xa1123a  testing::Test::Run()
    @           0xa11388  testing::TestInfo::Run()
    @           0xa11465  testing::TestCase::Run()
    @           0xa1273f  testing::internal::UnitTestImpl::RunAllTests()
    @           0xa12a63  testing::UnitTest::Run()
    @           0x47a98d  main
    @     0x7f920ba46830  __libc_start_main
    @           0x483b49  _start
    @              (nil)  (unknown)
Makefile:526: recipe for target 'runtest' failed
make: *** [runtest] Aborted (core dumped)
Solution 1:
sudo rm -rf .nv/
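This likely helps because a stale NVIDIA compute (JIT) cache, left behind by a driver or CUDA upgrade, can make cuBLAS initialization fail; deleting the cache lets the driver rebuild it on the next CUDA launch. A minimal sketch, assuming the cache sits at its default location `$HOME/.nv`:

```shell
# Assumption: the NVIDIA compute cache is at its default location, $HOME/.nv.
CACHE_DIR="$HOME/.nv"
if [ -d "$CACHE_DIR" ]; then
    # The driver recreates this directory on the next CUDA program launch.
    rm -rf "$CACHE_DIR"
    echo "removed $CACHE_DIR"
else
    echo "no cache at $CACHE_DIR, nothing to remove"
fi
```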
Solution 2:
I realized that there was an error with my CUDA installation, specifically with the cuBLAS library. You can check if yours has the same problem by running the sample program simpleCUBLAS:
- cd /usr/local/cuda/samples/7_CUDALibraries/simpleCUBLAS # check if your samples are in the same directory
- make
- ./simpleCUBLAS
I was getting an error when I tried to run it, so I reinstalled CUDA 8.0 and it solved the issue.
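If the CUDA samples are not installed, the same check can be rebuilt by hand: simpleCUBLAS essentially just needs cublasCreate to succeed, which is the very call failing in the log above (common.cpp:113, "Cannot create Cublas handle"). A sketch under the assumption that nvcc is on the PATH; the file name check_cublas.cu is my own choice:

```shell
# Write a minimal cuBLAS handle test (the file name check_cublas.cu is arbitrary).
cat > check_cublas.cu <<'EOF'
#include <cstdio>
#include <cublas_v2.h>

int main() {
    cublasHandle_t handle;
    // The same call that fails when Caffe prints "Cannot create Cublas handle".
    cublasStatus_t status = cublasCreate(&handle);
    if (status != CUBLAS_STATUS_SUCCESS) {
        std::printf("cublasCreate failed with status %d\n", (int)status);
        return 1;
    }
    std::printf("cuBLAS handle created OK\n");
    cublasDestroy(handle);
    return 0;
}
EOF
# On the GPU machine, compile and run it:
#   nvcc check_cublas.cu -lcublas -o check_cublas && ./check_cublas
```

A non-zero exit status here points at the CUDA/cuBLAS installation rather than at Caffe itself.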
Tried before that:
CUDA_VISIBLE_DEVICES=2
The reason is that this demo uses all detected CUDA devices by default, while other people in the lab are also using those devices, which causes problems (perhaps conflicts, insufficient resources, or simply not being allowed to occupy that many devices at once).
So the command has to be prefixed with a restriction:
CUDA_VISIBLE_DEVICES=2 ./build/examples/openpose/openpose.bin --net_resolution "160x80" --video examples/media/video.avi
This makes only a single device visible to the program: the one with physical index 2, which the process then sees as device 0.
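The prefix works because CUDA_VISIBLE_DEVICES is read from the environment of the launched process, and an assignment placed before a command applies to that single process only, leaving the rest of the shell session untouched. A small illustration that needs no GPU:

```shell
# The assignment is scoped to the one child process being launched.
CUDA_VISIBLE_DEVICES=2 sh -c 'echo "child sees: $CUDA_VISIBLE_DEVICES"'
# The parent shell's environment is unchanged.
echo "parent sees: ${CUDA_VISIBLE_DEVICES:-<unset>}"
```

Inside the restricted process the surviving device is renumbered from 0, so code that asks for device 0 actually gets physical device 2.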
But then it still would not run. Why?
Because the shell on OS X cannot display remote GUI windows by itself,
so I switched to MobaXterm (under Windows 10) and it worked, since that tool ships with a built-in X server.