Abstract: This article shares the workflow for converting models from Pytorch to Caffe to om.
Standard Networks
Baseline: PytorchToCaffe

The main functional code is located in:
```
PytorchToCaffe
+-- Caffe
|   +-- caffe.proto
|   +-- layer_param.py
+-- example
|   +-- resnet_pytorch_2_caffe.py
+-- pytorch_to_caffe.py
```
For direct use, refer to resnet_pytorch_2_caffe.py. If every operation in your network is already implemented in the Baseline, you can convert straight to a Caffe model.
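A minimal sketch of that direct-use flow, following the pattern of resnet_pytorch_2_caffe.py (trans_net, save_prototxt, and save_caffemodel are the Baseline's entry points; check them against the repo version you use):

```python
import torch
from torchvision.models import resnet
import pytorch_to_caffe

name = 'resnet18'
net = resnet.resnet18()
net.eval()  # conversion traces a forward pass, so use eval mode
input = torch.ones([1, 3, 224, 224])
pytorch_to_caffe.trans_net(net, input, name)  # trace the network and build the Caffe net
pytorch_to_caffe.save_prototxt('{}.prototxt'.format(name))
pytorch_to_caffe.save_caffemodel('{}.caffemodel'.format(name))
```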
Adding Custom Operations
If you encounter an operation that has not been implemented, there are two cases to consider.
Caffe Has a Corresponding Operation
Taking arg_max as an example, here is how to add an operation.
First, check the parameters of the corresponding layer in Caffe. caffe.proto holds the layer and parameter definitions for the matching Caffe version; you can see that ArgMax defines three parameters, out_max_val, top_k, and axis:
```protobuf
message ArgMaxParameter {
  // If true produce pairs (argmax, maxval)
  optional bool out_max_val = 1 [default = false];
  optional uint32 top_k = 2 [default = 1];
  // The axis along which to maximise -- may be negative to index from the
  // end (e.g., -1 for the last axis).
  // By default ArgMaxLayer maximizes over the flattened trailing dimensions
  // for each index of the first / num dimension.
  optional int32 axis = 3;
}
```
These are consistent with the parameters in the Caffe operator boundary documentation.
layer_param.py builds the concrete parameter-class instances used during conversion and implements the passing of operation parameters from Pytorch to Caffe:
```python
def argmax_param(self, out_max_val=None, top_k=None, dim=1):
    argmax_param = pb.ArgMaxParameter()
    if out_max_val is not None:
        argmax_param.out_max_val = out_max_val
    if top_k is not None:
        argmax_param.top_k = top_k
    if dim is not None:
        argmax_param.axis = dim
    self.param.argmax_param.CopyFrom(argmax_param)
```
pytorch_to_caffe.py defines the Rp class, which implements the transformation from a Pytorch operation to a Caffe operation:
```python
class Rp(object):
    def __init__(self, raw, replace, **kwargs):
        self.obj = replace
        self.raw = raw

    def __call__(self, *args, **kwargs):
        if not NET_INITTED:
            return self.raw(*args, **kwargs)
        # walk up the Python stack to find the module that issued this call,
        # so the generated Caffe layer can reuse its name
        for stack in traceback.walk_stack(None):
            if 'self' in stack[0].f_locals:
                layer = stack[0].f_locals['self']
                if layer in layer_names:
                    log.pytorch_layer_name = layer_names[layer]
                    print('984', layer_names[layer])
                    break
        out = self.obj(self.raw, *args, **kwargs)
        return out
```
When adding an operation, use the Rp class to replace it:
```python
torch.argmax = Rp(torch.argmax, torch_argmax)
```
Next, implement the operation itself:
```python
def torch_argmax(raw, input, dim=1):
    x = raw(input, dim=dim)  # run the original Pytorch operation
    layer_name = log.add_layer(name='argmax')
    top_blobs = log.add_blobs([x], name='argmax_blob')
    layer = caffe_net.Layer_param(name=layer_name, type='ArgMax',
                                  bottom=[log.blobs(input)], top=top_blobs)
    layer.argmax_param(dim=dim)  # pass the parameters via layer_param.py
    log.cnet.add_layer(layer)
    return x
```
This completes the Pytorch-to-Caffe conversion of the argmax operation.
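As a hedged end-to-end illustration, a toy module using torch.argmax can then go through the same conversion flow as a standard network (ArgMaxNet and the file names here are illustrative, not part of the Baseline):

```python
import torch
import torch.nn as nn
import pytorch_to_caffe

class ArgMaxNet(nn.Module):
    def __init__(self):
        super(ArgMaxNet, self).__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

    def forward(self, x):
        # torch.argmax is now the Rp wrapper, so this call is also
        # logged as an ArgMax layer in the generated Caffe net
        return torch.argmax(self.conv(x), dim=1)

net = ArgMaxNet().eval()
input = torch.ones([1, 3, 32, 32])
pytorch_to_caffe.trans_net(net, input, 'argmax_net')
pytorch_to_caffe.save_prototxt('argmax_net.prototxt')
pytorch_to_caffe.save_caffemodel('argmax_net.caffemodel')
```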
Caffe Has No Directly Corresponding Operation
If the operation to be converted has no directly corresponding layer implementation in Caffe, there are two main solutions:
1) Decompose the unsupported operation into supported operations on the Pytorch side:
Take nn.InstanceNorm2d as an example. Instance normalization is converted using BatchNorm, which supports neither affine=True nor track_running_stats=True, and defaults to use_global_stats: false; but om conversion requires use_global_stats to be true. So the model converts to Caffe, yet remains unfriendly for the further conversion to om.
InstanceNorm normalizes over each channel of the feature map, so nn.InstanceNorm2d can be implemented as:
```python
import torch
import torch.nn as nn

class InstanceNormalization(nn.Module):
    def __init__(self, dim, eps=1e-5):
        super(InstanceNormalization, self).__init__()
        self.gamma = nn.Parameter(torch.FloatTensor(dim))
        self.beta = nn.Parameter(torch.FloatTensor(dim))
        self.eps = eps
        self._reset_parameters()

    def _reset_parameters(self):
        self.gamma.data.uniform_()
        self.beta.data.zero_()

    def __call__(self, x):
        n = x.size(2) * x.size(3)
        t = x.view(x.size(0), x.size(1), n)
        # per-channel statistics over the spatial dimensions
        mean = torch.mean(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x)
        var = torch.var(t, 2).unsqueeze(2).unsqueeze(3).expand_as(x)
        gamma_broadcast = self.gamma.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        beta_broadcast = self.beta.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        out = (x - mean) / torch.sqrt(var + self.eps)
        out = out * gamma_broadcast + beta_broadcast
        return out
```
However, validation against the HiLens Caffe operator boundary showed that om model conversion does not support summing or averaging over dimensions other than Channel. To work around this, we can re-implement nn.InstanceNorm2d using supported operators:
```python
class InstanceNormalization(nn.Module):
    def __init__(self, dim, eps=1e-5):
        super(InstanceNormalization, self).__init__()
        self.gamma = torch.FloatTensor(dim)
        self.beta = torch.FloatTensor(dim)
        self.eps = eps
        self.adavg = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        n, c, h, w = x.shape
        # global average pooling yields the per-channel mean as a 1x1 map;
        # nearest Upsample replicates it back to h x w (assumes h == w)
        mean = nn.Upsample(scale_factor=h)(self.adavg(x))
        var = nn.Upsample(scale_factor=h)(self.adavg((x - mean).pow(2)))
        gamma_broadcast = self.gamma.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        beta_broadcast = self.beta.unsqueeze(1).unsqueeze(1).unsqueeze(0).expand_as(x)
        out = (x - mean) / torch.sqrt(var + self.eps)
        out = out * gamma_broadcast + beta_broadcast
        return out
```
This version was verified to be equivalent to the original operation and can be converted to a Caffe model.
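A quick numerical sanity check along these lines can be used (a sketch, assuming square inputs, since the Upsample trick above expands the 1x1 pooled map with scale_factor=h; the tolerance value is an arbitrary choice):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 8, 16, 16)  # square feature map, h == w
ref = nn.InstanceNorm2d(8, affine=True, eps=1e-5)
custom = InstanceNormalization(8, eps=1e-5)
# copy the reference affine parameters, since the re-implementation's
# gamma/beta start out uninitialized
custom.gamma = ref.weight.detach().clone()
custom.beta = ref.bias.detach().clone()
print(torch.allclose(ref(x), custom(x), atol=1e-4))  # expected: True
```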
2) Implement the operation in Caffe by combining existing operations:
During Pytorch-to-Caffe conversion we found that an operation involving a constant, such as featuremap + 6, hits a missing-blob problem during conversion. First, look at how the add operation is converted in pytorch_to_caffe.py:
```python
def _add(input, *args):
    x = raw__add__(input, *args)
    if not NET_INITTED:
        return x
    layer_name = log.add_layer(name='add')
    top_blobs = log.add_blobs([x], name='add_blob')
    if log.blobs(args[0]) == None:
        log.add_blobs([args[0]], name='extra_blob')
    else:
        layer = caffe_net.Layer_param(name=layer_name, type='Eltwise',
                                      bottom=[log.blobs(input), log.blobs(args[0])],
                                      top=top_blobs)
        layer.param.eltwise_param.operation = 1  # sum is 1
        log.cnet.add_layer(layer)
    return x
```
You can see that the case of a missing blob is already detected; we only need to modify the branch under the log.blobs(args[0]) == None condition. A natural idea is to implement the add with a Scale layer:
```python
def _add(input, *args):
    x = raw__add__(input, *args)
    if not NET_INITTED:
        return x
    layer_name = log.add_layer(name='add')
    top_blobs = log.add_blobs([x], name='add_blob')
    if log.blobs(args[0]) == None:
        # constant operand: emit a Scale layer whose per-channel weight is 1
        # and whose bias is the constant, so the layer computes x + c
        layer = caffe_net.Layer_param(name=layer_name, type='Scale',
                                      bottom=[log.blobs(input)], top=top_blobs)
        layer.param.scale_param.bias_term = True
        weight = torch.ones((input.shape[1]))
        bias = torch.tensor(args[0]).squeeze().expand_as(weight)
        layer.add_data(weight.cpu().data.numpy(), bias.cpu().data.numpy())
        log.cnet.add_layer(layer)
    else:
        layer = caffe_net.Layer_param(name=layer_name, type='Eltwise',
                                      bottom=[log.blobs(input), log.blobs(args[0])],
                                      top=top_blobs)
        layer.param.eltwise_param.operation = 1  # sum is 1
        log.cnet.add_layer(layer)
    return x
```
Similarly, a simple multiplication such as featuremap * 6 can be handled the same way, as the sketch below shows.
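A hedged sketch of that multiplication case, mirroring _add above (raw__mul__ and the helper conventions are assumed to match pytorch_to_caffe.py; this is an illustration rather than the repo's exact code):

```python
def _mul(input, *args):
    x = raw__mul__(input, *args)
    if not NET_INITTED:
        return x
    layer_name = log.add_layer(name='mul')
    top_blobs = log.add_blobs([x], name='mul_blob')
    if log.blobs(args[0]) == None:
        # constant multiplier: a Scale layer with per-channel weight equal
        # to the constant and no bias term computes x * c
        layer = caffe_net.Layer_param(name=layer_name, type='Scale',
                                      bottom=[log.blobs(input)], top=top_blobs)
        layer.param.scale_param.bias_term = False
        weight = torch.ones((input.shape[1])) * args[0]
        layer.add_data(weight.cpu().data.numpy())
        log.cnet.add_layer(layer)
    else:
        # two-blob product maps to Eltwise with the PROD operation
        layer = caffe_net.Layer_param(name=layer_name, type='Eltwise',
                                      bottom=[log.blobs(input), log.blobs(args[0])],
                                      top=top_blobs)
        layer.param.eltwise_param.operation = 0  # prod is 0
        log.cnet.add_layer(layer)
    return x
```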
Pitfalls Encountered
- Pooling: Pytorch defaults to ceil_mode=False while Caffe defaults to ceil_mode=true, which can change output dimensions; if size mismatches appear, check whether the Pooling parameters are correct. Also, although the documentation does not mention it, a model with kernel_size > 32 converts successfully but fails at inference time; in that case the pooling can be split across two layers (see the sketch after this list).
- Upsample: in the om operator boundary, the Upsample layer's scale_factor parameter must be an int, not a size. If the existing model uses size, the Pytorch-to-Caffe flow still completes normally, but the resulting Upsample parameters are empty. For the size case, consider switching to scale_factor or implementing the layer with Deconvolution.
- ConvTranspose2d: in Pytorch the output_padding parameter is added to the output size, but Caffe does not support this, so the converted deconvolution's feature map comes out slightly larger; it can be trimmed with a Crop layer to match the size of the corresponding Pytorch layer. Also, deconvolution inference in om is slow and is best avoided; it can be replaced with Upsample + Convolution (a replacement sketch follows the list).
- Pad: Pytorch offers many pad variants, but Caffe only supports symmetric pad on the H and W dimensions. If the Pytorch network contains an asymmetric pad such as h = F.pad(x, (1, 2, 1, 2), "constant", 0), the options are:
  - If the asymmetric pad causes no downstream dimension mismatch, first assess its effect on the results; some tasks are barely affected by the pad, in which case no change is needed.
  - If there is a dimension mismatch, consider padding fully with the larger parameter and then cropping, or merging consecutive pads such as (0, 0, 1, 1) and (1, 1, 0, 0) into a single (1, 1, 1, 1); which works depends on the specific network structure.
  - For a pad on the Channel dimension, such as F.pad(x, (0, 0, 0, 0, 0, channel_pad), "constant", 0), consider passing the input through a zero convolution and concatenating the result onto the featuremap:
```python
# a zero-initialized convolution with no bias always outputs zeros,
# which provides the constant pad for the extra channels
zero = nn.Conv2d(in_channels, channel_pad, kernel_size=3, padding=1, bias=False)
nn.init.constant_(zero.weight, 0)
pad_tensor = zero(x)
x = torch.cat([x, pad_tensor], dim=1)
```
- Some operations convert to Caffe, but om does not support every operation in standard Caffe; to convert onward to om, check the operations against the operator boundary documentation.
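Two of the pitfalls above lend themselves to short sketches. First, splitting an oversized pooling into two stages, as referenced in the Pooling item (the 64x64 input size is an assumption chosen so the stages tile evenly; for average pooling the composition is then exactly equivalent):

```python
import torch.nn as nn

# converts, but kernel_size > 32 fails at om inference time
pool_big = nn.AvgPool2d(kernel_size=64)

# equivalent two-stage pooling: 64x64 -> 8x8 -> 1x1
pool_split = nn.Sequential(
    nn.AvgPool2d(kernel_size=8),
    nn.AvgPool2d(kernel_size=8),
)
```

Second, the Upsample + Convolution replacement for a slow deconvolution, as referenced in the ConvTranspose2d item (channel counts are illustrative; the replacement is not numerically identical to the deconvolution, so it normally needs retraining or fine-tuning):

```python
# 2x deconvolution, slow at om inference time
deconv = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)

# replacement: resize first, then convolve
replacement = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(64, 32, kernel_size=3, padding=1),
)
```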
This article is shared from the Huawei Cloud community post "Pytorch->Caffe Model Conversion"; original author: 杜甫蓋房子.