Differences between OpenCV, JavaCV and OpenCV4Android

Recently I have been considering whether to change the technical approach of my XFace project. After some research, I arrived at the results below.

This article introduces the differences between OpenCV, JavaCV, and OpenCV for Android (hereafter OpenCV4Android), and uses a face recognition Android application as an example to walk through the practical schemes you can adopt.

OpenCV: http://docs.opencv.org/index.html

OpenCV4Android: OpenCV4Android_SDK.html

JavaCV: https://github.com/bytedeco/javacv

OpenCV, JavaCV, and OpenCV4Android

(1) JavaCV and OpenCV4Android are unrelated

OpenCV is the open-source computer vision library, written in C++. JavaCV is a Java wrapper around OpenCV, and its development team has no connection to the OpenCV team. OpenCV4Android is likewise a wrapper around OpenCV that makes it usable on the Android platform, and its developers are part of the OpenCV team. In other words, OpenCV4Android and JavaCV have no relationship whatsoever!

Reference: https://groups.google.com/forum/#!topic/javacv/qJmBLvpV7cM

android-opencv has no relation to JavaCV, so you should ask somewhere else for questions about it.. The philosophy of android-opencv (and of the OpenCV team as general) is to make OpenCV run on Android, which forces them to use Java, but otherwise they prefer to use C++ or Python. With JavaCV, my hope is to have it run on as many platforms as possible, including Android, since it supports (some sort of) Java, so we can use sane(r) and more efficient languages such as the Java and Scala languages. Take your pick!

(2) Performance: JavaCV vs. OpenCV

Most of the time the two perform about the same. Some OpenCV functions can be parallelized in ways that are not available through JavaCV, but JavaCV also bundles bindings for many other image processing libraries, so it is powerful enough for most needs.

Reference: http://stackoverflow.com/questions/21207755/opencv-javacv-vs-opencv-c-c-interfaces

I'd like to add a couple of things to @ejbs's answer.
First of all, you concerned 2 separate issues:
Java vs. C++ performance
OpenCV vs JavaCV
Java vs. C++ performance is a long, long story. On one hand, C++ programs are compiled to a highly optimized native code. They start quickly and run fast all the time without pausing for garbage collection or other VM duties (as Java do). On other hand, once compiled, program in C++ can't change, no matter on what machine they are run, while Java bytecode is compiled "just-in-time" and is always optimized for processor architecture they run on. In modern world, with so many different devices (and processor architectures) this may be really significant. Moreover, some JVMs (e.g. Oracle Hotspot) can optimize even the code that is already compiled to native code! VM collect data about program execution and from time to time tries to rewrite code in such a way that it is optimized for this specific execution. So in such complicated circumstances the only real way to compare performance of implementations in different programming languages is to just run them and see the result.
OpenCV vs. JavaCV is another story. First you need to understand stack of technologies behind these libraries.
OpenCV was originally created in 1999 in Intel research labs and was written in C. Since that time, it changed the maintainer several times, became open source and reached 3rd version (upcoming release). At the moment, core of the library is written in C++ with popular interface in Python and a number of wrappers in other programming languages.
JavaCV is one of such wrappers. So in most cases when you run program with JavaCV you actually use OpenCV too, just call it via another interface. But JavaCV provides more than just one-to-one wrapper around OpenCV. In fact, it bundles the whole number of image processing libraries, including FFmpeg, OpenKinect and others. (Note, that in C++ you can bind these libraries too).
So, in general it doesn't matter what you are using - OpenCV or JavaCV, you will get just about same performance. It more depends on your main task - is it Java or C++ which is better suited for your needs.
There's one more important point about performance. Using OpenCV (directly or via wrapper) you will sometimes find that OpenCV functions overcome other implementations by several orders. This is because of heavy use of low-level optimizations in its core. For example, OpenCV's filter2D function is SIMD-accelerated and thus can process several sets of data in parallel. And when it comes to computer vision, such optimizations of common functions may easily lead to significant speedup.

(3) A face recognition Android application

Support for the face recognition algorithms

At the time of writing, the latest OpenCV release is 2.4.10, OpenCV4Android is at 2.4.9, and JavaCV is at version 0.9.

OpenCV naturally supports the face recognition algorithms; see here for a detailed tutorial.

OpenCV4Android does not expose them for now, but you can get them by adding a thin wrapper layer; see here for how to build the wrapper (the supplement at the end of this post shows the code I used).

JavaCV already supports the face recognition algorithms; you can find sample code, OpenCVFaceRecognizer.java, among its Samples.

The camera must not be overlooked!

Because this is a mobile application, being able to get the frames that the device's camera returns is the key! And this is precisely an important factor this kind of application has to weigh, because it directly determines which technical scheme your app has to use.

I already covered camera usage in detail in the earlier post Android NDK and OpenCV Development 3; here I quote part of it. If you want to know more, it is worth reading that post first. [The OpenCV library mentioned below is part of the OpenCV4Android SDK.]

[There is actually one more way to get camera data: operate the camera directly in the native layer. The OpenCV4Android SDK Samples include a native-activity example of this. It is strongly discouraged, though: on one hand the code is hard to write and awkward to work with, and on the other hand the relevant APIs reportedly change often, which makes maintenance painful.]

(1) How to do camera development that involves OpenCV

Without the OpenCV library, that is, when we use the Android Camera API directly, the image frames we get back are in YUV format, and we usually have to convert them to RGB(A) before any processing.
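As a rough Java-side sketch of this path (the class, method and library names here are my own, chosen for illustration; the native method must correspond to a JNI function like the ShowPreview shown in the supplement at the end of this post):

// Minimal sketch: receive NV21 preview bytes from the Android Camera API and
// hand them to a native function that does the YUV -> RGBA/BGRA conversion.
import android.graphics.Bitmap;
import android.hardware.Camera;

public class PreviewHandler implements Camera.PreviewCallback {

    static {
        System.loadLibrary("cartoonifier"); // assumed name of your native library
    }

    // Implemented in native code; fills "pixels" with 4-channel color data.
    private native void showPreview(int width, int height, byte[] yuv, int[] pixels);

    private int[] pixels;   // reused output buffer
    private Bitmap bitmap;  // reused bitmap for display

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        if (pixels == null) {
            pixels = new int[size.width * size.height];
            bitmap = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888);
        }
        // "data" is in NV21 ("YUV420sp") format; convert it in native code.
        showPreview(size.width, size.height, data, pixels);
        bitmap.setPixels(pixels, 0, size.width, 0, 0, size.width, size.height);
        // ... draw the bitmap, or run face detection/recognition on it ...
    }
}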

With the OpenCV library, camera development becomes much simpler. See the three tutorials in OpenCV for Android (CameraPreview, MixedProcessing and CameraControl); their source is in the samples directory of the OpenCV-Android SDK. Briefly: the OpenCV library provides two camera views, a Java camera (org.opencv.android.JavaCameraView) and a native camera (org.opencv.android.NativeCameraView). Run the CameraPreview sample to see the difference between the two; in practice they behave almost the same. Both extend the abstract class CameraBridgeViewBase, but JavaCameraView is built on the Camera class from the Android SDK, while NativeCameraView uses OpenCV's VideoCapture.
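To make this concrete, here is a minimal sketch of the Java camera path, modeled on the CameraPreview and MixedProcessing tutorials (the layout and view IDs are assumptions):

import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.core.Mat;

import android.app.Activity;
import android.os.Bundle;

// Minimal sketch of the Java camera path; R.layout.activity_main and
// R.id.camera_view are assumed to exist, with the view declared in the layout
// as org.opencv.android.JavaCameraView.
public class CameraActivity extends Activity implements CvCameraViewListener2 {

    private CameraBridgeViewBase mCameraView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mCameraView = (CameraBridgeViewBase) findViewById(R.id.camera_view);
        mCameraView.setCvCameraViewListener(this);
        // In a real app you call mCameraView.enableView() from a
        // BaseLoaderCallback once the OpenCV library has loaded, as the
        // tutorials do.
    }

    @Override
    public void onCameraViewStarted(int width, int height) { }

    @Override
    public void onCameraViewStopped() { }

    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        // Each preview frame arrives wrapped in a Mat; whatever Mat you return
        // is what gets drawn on screen.
        Mat rgba = inputFrame.rgba();
        // ... process rgba here (e.g. detect/recognize faces) ...
        return rgba;
    }
}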

(2) How to pass the camera preview image data to the native layer

This part is very important! I have tried quite a few approaches; the main ideas are:

① Pass an image file path: this is the worst option. I have used it; it is slow, with poor real-time behavior, and it is mainly useful early in development for testing whether calls between the Java layer and the native layer work at all.

② Pass the preview image's byte array to the native layer, then convert the bytes to RGB or RGBA there [which format depends on whether your image processing functions can handle RGBA; if they can, converting to RGBA is recommended, because the data returned is also RGBA]. Many articles online discuss how to do the conversion: one way is a hand-written color conversion function (such a function is easy to find, for example in the article Camera image->NDK->OpenGL texture); another is to use OpenCV's Mat and the cvtColor function. You then call the image processing function, store the result in an int array (which is effectively RGB or RGBA image data), and finally turn that into a Bitmap via Bitmap's methods and return it. This method is also on the slow side, but quite a bit faster than the first one. For a concrete implementation, see the recommended book Mastering OpenCV with Practical Computer Vision Projects; its first chapter, Cartoonifier and Skin Changer for Android, is a complete Android example.

③ Use OpenCV's camera: either JavaCameraView or NativeCameraView will do. The benefit is that a lot of work is already wrapped for you: the preview frame can be passed to the native layer directly as a Mat, using the Mat's memory address (a long). The native layer only has to wrap that address back into a Mat and can process it immediately; moreover, the callback's return value is also a Mat, which is very convenient! This approach is fast. For the details, see Tutorial2-MixedProcessing in the OpenCV-Android SDK samples; a minimal sketch follows this list.
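Here is that sketch, modeled on Tutorial2-MixedProcessing (the mixed_sample library name and the FindFeatures method mirror that sample; call processFrame from the onCameraFrame callback shown earlier):

import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.core.Mat;

// Sketch of passing preview Mats to native code by memory address, in the
// spirit of Tutorial2-MixedProcessing.
public class NativeBridge {

    static {
        System.loadLibrary("mixed_sample"); // the native library from the tutorial
    }

    // On the C++ side the two longs are simply cast back:
    //   Mat& gray = *(Mat*) matAddrGray;  Mat& rgba = *(Mat*) matAddrRgba;
    public static native void FindFeatures(long matAddrGray, long matAddrRgba);

    public static Mat processFrame(CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        Mat gray = inputFrame.gray();
        // Pass the Mats' native addresses (long) across JNI; no pixel data is
        // copied, the native side works on the same buffers.
        FindFeatures(gray.getNativeObjAddr(), rgba.getNativeObjAddr());
        return rgba;
    }
}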

Which options are available?

To sum up, if you want to build a face recognition Android application, roughly which technical schemes are available?

(1) Use the plain Android Camera API for the camera, pass the YUV data to the native layer, convert it there to RGB(A), run the OpenCV face recognition algorithm, and return the resulting RGB(A) data to the Java layer. The advantages are minimal dependencies and great flexibility; you can even modify the algorithm internals. The obvious drawback is that it demands strong skills: you must be comfortable with both OpenCV and Android NDK development. Tested on a Samsung Galaxy I9000 it was rather slow, with noticeable stutter and delay.

For this option, see the implementation in chapter 1, Cartoonifier and Skin Changer for Android, of Mastering OpenCV with Practical Computer Vision Projects. >>> Download the source I tested successfully

(2) Use the plain Android Camera API for the camera, convert the YUV data to RGB(A) directly in the Java layer, feed it straight to the JavaCV face recognition algorithm, and return the recognition result. The advantage is that the only dependency is JavaCV; the drawback is that porting the algorithm from the OpenCV implementation to JavaCV takes some effort.

I have not tried this option; for the conversion you can refer to here. [I will try it as soon as possible, and if it works I will publish the code.] A rough sketch of one possible Java-side conversion follows.
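For illustration only (I have not benchmarked it): one simple, if not particularly fast, way to turn NV21 preview bytes into an RGB Bitmap in pure Java is to go through android.graphics.YuvImage and JPEG:

import java.io.ByteArrayOutputStream;

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;

// Illustrative helper only: converts an NV21 preview frame to a Bitmap by
// compressing it to JPEG and decoding it again. Simple, but too slow for a
// smooth real-time preview.
public final class YuvUtils {

    public static Bitmap nv21ToBitmap(byte[] nv21, int width, int height) {
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        byte[] jpeg = out.toByteArray();
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    }
}

A hand-written YUV-to-RGB routine (like the one in the article linked in point ② above) will be considerably faster than this JPEG round-trip.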

(3) Use the OpenCV4Android Library camera, pass the memory address of the captured Mat to the native layer, reconstruct the Mat from that address there, run the OpenCV face recognition algorithm, and return the resulting RGB(A) data to the Java layer. The advantage is flexibility; the drawback is that it depends on both the OpenCV4Android Library and OpenCV, so you still need OpenCV and Android NDK skills. On a Samsung Galaxy I9000 this was acceptable; when the algorithm itself is slow, it can take roughly 1-3 seconds before a result comes back.

For this option, see Tutorial2-MixedProcessing in the OpenCV-Android SDK samples. [My open-source project XFace uses exactly this scheme.]

(4) Use the OpenCV4Android Library camera, write a thin native wrapper around OpenCV's face recognizer class, pass the camera's Mat straight into that wrapped recognizer, and return the recognition result. The advantage is that the dependencies stay modest and the amount of native code you have to write is small.

I have tried this option, using the wrapping method mentioned earlier; see here for details. Note that, as in the example from that answer, you must load the opencv_java library before loading the facerec library! >>> Download the source I tested successfully

(5) Use the OpenCV4Android Library camera, pass the camera's Mat directly to the JavaCV face recognition algorithm, and return the recognition result. The appeal is that it looks like a very clean scheme: you only write Java code, and on the native side you may only need to drop a few *.so files into the jniLibs directory. The drawback is that the dependencies really pile up!

For this option, see this project on GitHub. >>> Download the source I tested successfully

各類方案各有利弊,一方面要考慮技術方案是否可行,另外一方面還要考慮該技術方案是否便於開發!哎,碼農真是傷不起啊!

Supplementary material

The following assumes you created the project the way described in my previous article, Android NDK and OpenCV Development With Android Studio.

(1) Partial code for option 1

Native-layer code that converts the YUV (NV21) preview data into 4-channel color data (BGRA byte order, which maps onto the int pixels of an Android ARGB_8888 bitmap):

#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

extern "C" {

// Just show the plain camera image without modifying it.
JNIEXPORT void JNICALL Java_com_Cartoonifier_CartoonifierView_ShowPreview(JNIEnv* env, jobject,
        jint width, jint height, jbyteArray yuv, jintArray bgra)
{
    // Get native access to the given Java arrays.
    jbyte* _yuv  = env->GetByteArrayElements(yuv, 0);
    jint*  _bgra = env->GetIntArrayElements(bgra, 0);

    // Prepare a cv::Mat that points to the YUV420sp data.
    Mat myuv(height + height/2, width, CV_8UC1, (uchar *)_yuv);
    // Prepare a cv::Mat that points to the BGRA output data.
    Mat mbgra(height, width, CV_8UC4, (uchar *)_bgra);

    // Convert the color format from the camera's
    // NV21 "YUV420sp" format to an Android BGRA color image.
    cvtColor(myuv, mbgra, CV_YUV420sp2BGRA);

    // OpenCV can now access/modify the BGRA image if we want ...

    // Release the native lock we placed on the Java arrays.
    env->ReleaseIntArrayElements(bgra, _bgra, 0);
    env->ReleaseByteArrayElements(yuv, _yuv, 0);
}

} // extern "C"

(2) Partial code for option 4

The Android.mk file

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

#opencv
OPENCVROOT:= /Volumes/hujiawei/Users/hujiawei/Android/opencv_sdk
OPENCV_CAMERA_MODULES:=on
OPENCV_INSTALL_MODULES:=on
OPENCV_LIB_TYPE:=SHARED
include ${OPENCVROOT}/sdk/native/jni/OpenCV.mk

LOCAL_SRC_FILES := facerec.cpp

LOCAL_LDLIBS += -llog
LOCAL_MODULE := facerec

include $(BUILD_SHARED_LIBRARY)

The Application.mk file

APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_ABI := armeabi
APP_PLATFORM := android-16

The FisherFaceRecognizer class

package com.android.hacks.ndkdemo;

import org.opencv.contrib.FaceRecognizer;

public class FisherFaceRecognizer extends FaceRecognizer {

    static {
        System.loadLibrary("opencv_java");//
        System.loadLibrary("facerec");//
    }

    private static native long createFisherFaceRecognizer0();

    private static native long createFisherFaceRecognizer1(int num_components);

    private static native long createFisherFaceRecognizer2(int num_components, double threshold);

    public FisherFaceRecognizer() {
        super(createFisherFaceRecognizer0());
    }

    public FisherFaceRecognizer(int num_components) {
        super(createFisherFaceRecognizer1(num_components));
    }

    public FisherFaceRecognizer(int num_components, double threshold) {
        super(createFisherFaceRecognizer2(num_components, threshold));
    }
}

After that you can test it; of course, you can also build a complete example to verify that the algorithm actually works:

FisherFaceRecognizer facerec = new FisherFaceRecognizer();
textView.setText(String.valueOf(facerec.getDouble("threshold"))); // prints 1.7976xxx... (Double.MAX_VALUE, the default threshold)
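Going a step further, a hypothetical smoke test of the wrapped recognizer might look roughly like this. It assumes the wrapper from the answer linked above correctly forwards train/predict to the native FisherFaceRecognizer, that the standard org.opencv.contrib.FaceRecognizer API from OpenCV 2.4 is available, and that the image paths (placeholders here) point to grayscale face images of identical size:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;

// Hypothetical smoke test for the wrapped recognizer; paths are placeholders.
public class FaceRecTest {

    public static void run() {
        FisherFaceRecognizer facerec = new FisherFaceRecognizer();

        // Fisherfaces needs at least two different people and, in practice,
        // more than one image per person.
        List<Mat> images = new ArrayList<Mat>();
        images.add(Highgui.imread("/sdcard/faces/person0_1.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE));
        images.add(Highgui.imread("/sdcard/faces/person0_2.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE));
        images.add(Highgui.imread("/sdcard/faces/person1_1.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE));
        images.add(Highgui.imread("/sdcard/faces/person1_2.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE));

        Mat labels = new Mat(4, 1, CvType.CV_32SC1);
        labels.put(0, 0, new int[]{0, 0, 1, 1}); // person ids for the four images

        facerec.train(images, labels);

        // Predict the person id of a new face image.
        Mat test = Highgui.imread("/sdcard/faces/person0_3.png", Highgui.CV_LOAD_IMAGE_GRAYSCALE);
        int[] label = new int[1];
        double[] confidence = new double[1];
        facerec.predict(test, label, confidence);
        // label[0] is the predicted person id, confidence[0] the distance.
    }
}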