Deploying a PyTorch Model on macOS or iOS

How do you deploy a .pth model trained with PyTorch on macOS or iOS? That is the question.

Fortunately, we have ONNX, and we also have Core ML.

 

ONNX:

ONNX is an open file format designed for machine learning, used to store trained models and to convert models between different frameworks.

 

Core ML:

Apple introduced Core ML 1.0 in 2017, for macOS 10.13 and iOS 11+. Official documentation: https://developer.apple.com/documentation/coreml .

In 2018 came Core ML 2.0, for macOS 10.14 and iOS 12: https://www.appcoda.com/coreml2/

The Core ML framework makes it easy to deploy a deep learning model and run predictions with it, letting deep learning shine on Apple's mobile devices. All the developer needs to do is drag model.mlmodel into an Xcode project; Xcode automatically generates an Objective-C class named after the model, along with the various class interfaces needed for prediction.

 

The PyTorch -- ONNX -- Core ML pipeline

Yes, that is the pipeline. We take the trained .pth model, convert it to .onnx with torch.onnx.export(), then use onnx_coreml.convert() to turn the .onnx into an .mlmodel. Drag the .mlmodel into an Xcode project, write the prediction code, and you are done.

 

1. PyTorch -- ONNX

First read the ONNX module documentation on the PyTorch site: https://pytorch.org/docs/stable/onnx.html . The main code is this one API; see the documentation for what each argument means.

torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)

The conversion code looks like this:
 import torch

 batch_size = 1
 onnx_model_path = "onnx_model.onnx"
 # Dummy input with the shape the model expects: (N, C, H, W)
 dummy_input = torch.randn(batch_size, 3, 224, 224, requires_grad=True)
 torch.onnx.export(pytorch_model, dummy_input, onnx_model_path, verbose=True,
                   input_names=['image'], output_names=['outTensor'], export_params=True, training=False)

 

One thing to watch out for here is the input_names and output_names settings. If you leave them unset, PyTorch automatically assigns numeric names to the input and output layers, as in the figure below (viewed with netron, a really handy tool: https://pypi.org/project/netron/). Here the automatically assigned input and output names are 0 and 199. After converting such a model to Core ML and loading it into Xcode, you get a compile error like "initwith0": the generated class cannot properly handle an input named 0 at model initialization. So it is best to give the inputs and outputs proper names at export time.

  

 

After the fix the model looks like this; you can see that both the input and output names have changed:

 

2. ONNX -- mlmodel

This part requires installing onnx (GitHub: https://github.com/onnx/onnx) and the converter onnx-coreml (GitHub: https://github.com/onnx/onnx-coreml). The latter uses coremltools (https://pypi.org/project/coremltools/), which currently only works under Python 2.7.

Once installed, import onnx and import onnx_coreml and you are ready to go. The conversion code is as follows:

import onnx
import onnx_coreml

onnx_model = onnx.load("onnx_model.onnx")
cml_model = onnx_coreml.convert(onnx_model)
cml_model.save("coreML_model.mlmodel")

 

Of course, onnx_coreml.convert takes many more arguments, for preprocessing, setting BGR channel order, and so on; see the GitHub documentation for details.

Now drag coreML_model.mlmodel into the Xcode project. Xcode automatically generates a coreML_model class with APIs for model initialization, input, prediction, and output; all that is left is to write the prediction code.

 

3. The new Core ML 2.0 supports model quantization. Core ML 1.0 handled models at 32 bits, while Core ML 2.0 can quantize a model to 16-bit, 8-bit, 4-bit, or even 2-bit weights, and lets you choose the quantization method. See the Apple WWDC video and slides for the details.

Quantization again uses coremltools 2.0. For specifics see this blog post, which is very thorough: https://www.jianshu.com/p/b6e3cb7338bf. Two lines of code complete the quantized conversion:

import coremltools
from coremltools.models.neural_network.quantization_utils import quantize_weights

model = coremltools.models.MLModel('Model.mlmodel')
# Quantize the 32-bit weights to 8 bits using linear quantization
lin_quant_model = quantize_weights(model, 8, "linear")
lin_quant_model.save('Model_8bit.mlmodel')

 

Written in haste, so this is rough; I will update it later.

——————————— Update ————————————————

1. After dragging the model into the Xcode project, click it and the pane on the right shows information such as the model's name, size, inputs, and outputs, along with a note that the Objective-C model class file has been generated automatically:

 

Clicking the arrow next to Model jumps to the model class header, Model.h, which contains the initialization and prediction interfaces. As follows:

//
// Model.h
//
// This file was automatically generated and should not be edited.
//

#import <Foundation/Foundation.h>
#import <CoreML/CoreML.h>
#include <stdint.h>

NS_ASSUME_NONNULL_BEGIN


/// Model Prediction Input Type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0)) __attribute__((visibility("hidden")))
@interface ModelInput : NSObject<MLFeatureProvider>

/// image as color (kCVPixelFormatType_32BGRA) image buffer, 224 pixels wide by 224 pixels high
@property (readwrite, nonatomic) CVPixelBufferRef image;
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithImage:(CVPixelBufferRef)image;
@end


/// Model Prediction Output Type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0)) __attribute__((visibility("hidden")))
@interface ModelOutput : NSObject<MLFeatureProvider>

/// MultiArray of shape (1, 1, 10, 1, 1). The first and second dimensions correspond to sequence and batch size, respectively as multidimensional array of floats
@property (readwrite, nonatomic, strong) MLMultiArray * outTensor;
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithOutTensor:(MLMultiArray *)outTensor;
@end


/// Class for model loading and prediction
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0)) __attribute__((visibility("hidden")))
@interface Model : NSObject
@property (readonly, nonatomic, nullable) MLModel * model;
- (nullable instancetype)init;
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url error:(NSError * _Nullable * _Nullable)error;
- (nullable instancetype)initWithConfiguration:(MLModelConfiguration *)configuration error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url configuration:(MLModelConfiguration *)configuration error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));

/**
    Make a prediction using the standard interface
    @param input an instance of ModelInput to predict from
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as ModelOutput
*/
- (nullable ModelOutput *)predictionFromFeatures:(ModelInput *)input error:(NSError * _Nullable * _Nullable)error;

/**
    Make a prediction using the standard interface
    @param input an instance of ModelInput to predict from
    @param options prediction options
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as ModelOutput
*/
- (nullable ModelOutput *)predictionFromFeatures:(ModelInput *)input options:(MLPredictionOptions *)options error:(NSError * _Nullable * _Nullable)error;

/**
    Make a prediction using the convenience interface
    @param image as color (kCVPixelFormatType_32BGRA) image buffer, 224 pixels wide by 224 pixels high:
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as ModelOutput
*/
- (nullable ModelOutput *)predictionFromImage:(CVPixelBufferRef)image error:(NSError * _Nullable * _Nullable)error;

/**
    Batch prediction
    @param inputArray array of ModelInput instances to obtain predictions from
    @param options prediction options
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the predictions as NSArray<ModelOutput *>
*/
- (nullable NSArray<ModelOutput *> *)predictionsFromInputs:(NSArray<ModelInput*> *)inputArray options:(MLPredictionOptions *)options error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));
@end

NS_ASSUME_NONNULL_END

Interfaces carrying this marker are Core ML 1.0 APIs, together with the corresponding system versions:

API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0)) 

and this marker denotes interfaces added in Core ML 2.0 and their system versions:

API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0))

2. Notes on a few commonly used APIs:

① First, model initialization and loading. The initializers are the following:

- (nullable instancetype)init;
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url error:(NSError * _Nullable * _Nullable)error;
- (nullable instancetype)initWithConfiguration:(MLModelConfiguration *)configuration error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));
- (nullable instancetype)initWithContentsOfURL:(NSURL *)url configuration:(MLModelConfiguration *)configuration error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));

 

The simplest way to initialize is, of course, the first one:

model = [[Model alloc] init];

 

If you need to control which compute units the device uses, call the initWithConfiguration interface added in Core ML 2.0; its two extra parameters are (MLModelConfiguration *)configuration and NSError *error. It is called like this:

MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUOnly;  // restrict inference to the CPU

NSError *error;
model = [[Model alloc] initWithConfiguration:config error:&error];

 

The config value can be one of three compute-unit options:

MLComputeUnitsCPUOnly   -- use only the CPU;
MLComputeUnitsCPUAndGPU -- use the CPU and GPU;
MLComputeUnitsAll       -- use all compute units (chiefly the neural engine in the A11 and A12 Bionic chips)

For details, see the MLModelConfiguration.h file below:
#import <Foundation/Foundation.h>
#import <CoreML/MLExport.h>

NS_ASSUME_NONNULL_BEGIN

typedef NS_ENUM(NSInteger, MLComputeUnits) {
    MLComputeUnitsCPUOnly = 0,
    MLComputeUnitsCPUAndGPU = 1,
    MLComputeUnitsAll = 2
} API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0));

/*!
 * An object to hold options for loading a model.
 */
API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0))
ML_EXPORT
@interface MLModelConfiguration : NSObject <NSCopying>

@property (readwrite) MLComputeUnits computeUnits;

@end

NS_ASSUME_NONNULL_END

The other two initializers, the initWithContentsOfURL: variants, see less use, so I will set them aside for now.

 

② The model input API.

/// Model Prediction Input Type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0)) __attribute__((visibility("hidden")))
@interface ModelInput : NSObject<MLFeatureProvider>

/// image as color (kCVPixelFormatType_32BGRA) image buffer, 224 pixels wide by 224 pixels high
@property (readwrite, nonatomic) CVPixelBufferRef image;
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithImage:(CVPixelBufferRef)image;
@end

 

The ModelInput class API is simple: initWithImage is all you need. The one extra step is converting the UIImage, or the image loaded with OpenCV, into a CVPixelBuffer, and then creating the ModelInput with the API:

ModelInput *input = [[ModelInput alloc] initWithImage:buffer];
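
The UIImage-to-CVPixelBuffer conversion itself is not shown above, so here is a minimal sketch of one common way to do it, assuming the 224 x 224 BGRA input declared in the generated header; the helper name pixelBufferFromImage is my own:

#import <UIKit/UIKit.h>
#import <CoreVideo/CoreVideo.h>

// Render a UIImage into a 224 x 224 kCVPixelFormatType_32BGRA pixel buffer.
static CVPixelBufferRef pixelBufferFromImage(UIImage *image) {
    const size_t width = 224, height = 224;
    NSDictionary *attrs = @{ (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
                             (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
    CVPixelBufferRef buffer = NULL;
    if (CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                            (__bridge CFDictionaryRef)attrs, &buffer) != kCVReturnSuccess) {
        return NULL;
    }
    CVPixelBufferLockBaseAddress(buffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // kCGBitmapByteOrder32Little plus premultiplied-first alpha gives BGRA byte order.
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer),
                                                 width, height, 8,
                                                 CVPixelBufferGetBytesPerRow(buffer), colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(buffer, 0);
    return buffer;  // the caller releases it with CVPixelBufferRelease
}

The buffer passed to initWithImage above would then come from buffer = pixelBufferFromImage(image).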

 

③ The model prediction APIs.

/**
    Make a prediction using the standard interface
    @param input an instance of ModelInput to predict from
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as ModelOutput
*/
- (nullable ModelOutput *)predictionFromFeatures:(ModelInput *)input error:(NSError * _Nullable * _Nullable)error;

/**
    Make a prediction using the standard interface
    @param input an instance of ModelInput to predict from
    @param options prediction options
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as ModelOutput
*/
- (nullable ModelOutput *)predictionFromFeatures:(ModelInput *)input options:(MLPredictionOptions *)options error:(NSError * _Nullable * _Nullable)error;

/**
    Make a prediction using the convenience interface
    @param image as color (kCVPixelFormatType_32BGRA) image buffer, 224 pixels wide by 224 pixels high:
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the prediction as ModelOutput
*/
- (nullable ModelOutput *)predictionFromImage:(CVPixelBufferRef)image error:(NSError * _Nullable * _Nullable)error;

/**
    Batch prediction
    @param inputArray array of ModelInput instances to obtain predictions from
    @param options prediction options
    @param error If an error occurs, upon return contains an NSError object that describes the problem. If you are not interested in possible errors, pass in NULL.
    @return the predictions as NSArray<ModelOutput *>
*/
- (nullable NSArray<ModelOutput *> *)predictionsFromInputs:(NSArray<ModelInput*> *)inputArray options:(MLPredictionOptions *)options error:(NSError * _Nullable * _Nullable)error API_AVAILABLE(macos(10.14), ios(12.0), watchos(5.0), tvos(12.0)) __attribute__((visibility("hidden")));
@end

 

The first two are the standard APIs. The difference between them is the (MLPredictionOptions *)options parameter: the second API lets you set prediction options, which have existed since Core ML 1.0. See the MLPredictionOptions.h file below.

//
//  MLPredictionOptions.h
//  CoreML
//
//  Copyright © 2017 Apple Inc. All rights reserved.
//
#import <Foundation/Foundation.h>
#import <CoreML/MLExport.h>

NS_ASSUME_NONNULL_BEGIN

/*!
 * MLPredictionOptions
 *
 * An object to hold options / controls / parameters of how
 * model prediction is performed
 */
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0))
ML_EXPORT
@interface MLPredictionOptions : NSObject

// Set to YES to force computation to be on the CPU only
@property (readwrite, nonatomic) BOOL usesCPUOnly;

@end

NS_ASSUME_NONNULL_END

 

If the option is left unset it defaults to NO, which means the GPU is used by default. To compute on the CPU only, set it to YES when calling:

MLPredictionOptions *option = [[MLPredictionOptions alloc] init];
option.usesCPUOnly = YES;

The third API has no options parameter, but it takes the CVPixelBuffer directly as input, so there is no need to create a ModelInput; it is the more convenient interface.
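
For example, assuming buffer is the CVPixelBuffer produced in step ②:

ModelOutput *output = [model predictionFromImage:buffer error:&error];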

The fourth API is the batch prediction interface added in Core ML 2.0. The first three predict from one image at a time, while this one packs several inputs into an NSArray and passes them in as a single batch, returning the outputs as an NSArray as well. It is faster, and there is no for loop to write.
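
A minimal sketch of a batch call, where input1 and input2 stand in for ModelInput instances built as in step ②:

NSArray<ModelInput *> *batch = @[ input1, input2 ];
MLPredictionOptions *opts = [[MLPredictionOptions alloc] init];
NSError *batchError = nil;
NSArray<ModelOutput *> *outputs = [model predictionsFromInputs:batch options:opts error:&batchError];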

Taking the second, standard-form API as an example, the call looks like this:

ModelOutput *output = [model predictionFromFeatures:input options:option error:&error];

 

④ The model output API:

/// Model Prediction Output Type
API_AVAILABLE(macos(10.13), ios(11.0), watchos(4.0), tvos(11.0)) __attribute__((visibility("hidden")))
@interface ModelOutput : NSObject<MLFeatureProvider>

/// MultiArray of shape (1, 1, 10, 1, 1). The first and second dimensions correspond to sequence and batch size, respectively as multidimensional array of floats
@property (readwrite, nonatomic, strong) MLMultiArray * outTensor;
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithOutTensor:(MLMultiArray *)outTensor;
@end

 

The ModelOutput fields are MLMultiArray by default. When a model has multiple outputs, fetch them from the output object one by one (output.outTensor, output.feature, and so on). The example model has a single output, so there is only outTensor.
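
To read the raw values back out of the MLMultiArray, you can use its NSNumber subscripting; a short sketch (the 10-element shape comes from the generated header above):

MLMultiArray *tensor = output.outTensor;
for (NSInteger i = 0; i < tensor.count; i++) {
    double value = [tensor[i] doubleValue];  // indexed subscripting returns an NSNumber
    NSLog(@"out[%ld] = %f", (long)i, value);
}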

 

It's late at night, so I'll stop here.
