[MetalKit] 37 - Using ARKit with Metal

This series is a full translation and study of the MetalKit content at metalkit.org.

MetalKit series table of contents


Augmented Reality provides a way to overlay virtual content on top of the real-world view captured by the camera. Last month at WWDC 2017 we were all excited to see Apple's new ARKit framework, a high-level API that runs on A9-or-newer devices with iOS 11. Some of the ARKit experiments out there are truly outstanding, such as this one:

ARKit.gif

There are three distinct layers in an ARKit application:

  • Tracking - world tracking via visual-inertial odometry, with no external setup required.
  • Scene Understanding - the ability to determine scene attributes using plane detection, hit-testing, and light estimation.
  • Rendering - easy to integrate, since AR view templates are provided for SpriteKit and SceneKit, and it can be customized with Metal. All pre-rendering is handled by ARKit, which also takes care of image capture using AVFoundation and CoreMotion.

In this first part of the series we will focus on rendering with Metal; the other two layers will be discussed in the next part. In an AR app, Tracking and Scene Understanding are handled entirely by the ARKit framework, while rendering can be done with SpriteKit, SceneKit, or Metal:

ARKit1.png

To get started, we need an ARSession instance, which is created with an ARSessionConfiguration object. We then call the session's run() function with that configuration. The session manages an AVCaptureSession and a CMMotionManager running simultaneously, to get the image and motion data used for tracking. Finally, the session outputs the current frame as an ARFrame object:

ARKit2.png

The ARSessionConfiguration object holds information about the type of tracking used. The base ARSessionConfiguration class provides three degrees of freedom, while its subclass, ARWorldTrackingSessionConfiguration, provides six degrees of freedom (device position and rotation).

ARKit4.png

When a device does not support world tracking, fall back to the base configuration:

if ARWorldTrackingSessionConfiguration.isSupported { 
    configuration = ARWorldTrackingSessionConfiguration()
} else {
    configuration = ARSessionConfiguration() 
}

An ARFrame contains the captured image, tracking information, and scene information, the latter exposed through ARAnchor objects that hold real-world position and rotation and can easily be added to, updated in, or removed from the session. Tracking is the ability to determine physical location in real time. World Tracking determines both position and orientation; it uses physical distances, is relative to the starting position, and provides 3D feature points.

The last component of ARFrame is the ARCamera object, which handles transforms (translation, rotation, scaling) and carries the tracking state and camera intrinsics. Tracking quality depends heavily on uninterrupted sensor data and a stable scene, and it is more accurate when the scene contains plenty of texture complexity. The tracking state has three values: Not Available (the camera has only the identity matrix), Limited (the scene has insufficient features or is not stable enough), and Normal (camera data is fine). Session interruptions occur when camera input is unavailable or when tracking stops:

func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) { 
    if case .limited(let reason) = camera.trackingState {
        // Notify user of limited tracking state
    } 
}
func sessionWasInterrupted(_ session: ARSession) { 
    showOverlay()
}
func sessionInterruptionEnded(_ session: ARSession) { 
    hideOverlay()
    // Optionally restart experience
}

Rendering can be done in SceneKit, using the ARSCNView delegate to add, update, or remove nodes. Similarly, rendering can be done in SpriteKit, using the ARSKView delegate to map SKNodes to ARAnchor objects. Because SpriteKit is 2D and cannot use the real-world camera position, it projects the anchor position into the ARSKView and renders the sprite as a billboard (a plane) at that projected position, so the sprite always faces the camera. For Metal there is no custom AR view, so that responsibility falls to the programmer. To process the rendered image we need to:

  • Draw the background camera image (generate a texture from the pixel buffer)
  • Update the virtual camera
  • Update the lighting
  • Update the transforms of the geometry

All of this information lives in the ARFrame object. There are two ways to access the frame: polling or using a delegate. We will use the latter. I took the ARKit template for Metal and stripped it down to the bare minimum so I could better understand how it works. The first thing I did was remove all the C dependencies, so bridging is no longer needed. Keeping those types and enum constants could be useful later, for sharing them between the API code and the shaders, but it is not needed for this article.

Next, on to ViewController, which will act as the delegate for both our MTKView and the ARSession. We create a Renderer instance that works with the delegates to update the app in real time:

var session: ARSession!
var renderer: Renderer!

override func viewDidLoad() {
    super.viewDidLoad()
    session = ARSession()
    session.delegate = self
    if let view = self.view as? MTKView {
        view.device = MTLCreateSystemDefaultDevice()
        view.delegate = self
        renderer = Renderer(session: session, metalDevice: view.device!, renderDestination: view)
        renderer.drawRectResized(size: view.bounds.size)
    }
    let tapGesture = UITapGestureRecognizer(target: self, action: #selector(self.handleTap(gestureRecognize:)))
    view.addGestureRecognizer(tapGesture)
}

As you can see, we also add a gesture recognizer, which we use to add virtual content to the view. We first grab the session's current frame, then create a transform that places our object in front of the camera (0.3 meters in this case), and finally add a new anchor to the session using that transform:

func handleTap(gestureRecognize: UITapGestureRecognizer) {
    if let currentFrame = session.currentFrame {
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -0.3
        let transform = simd_mul(currentFrame.camera.transform, translation)
        let anchor = ARAnchor(transform: transform)
        session.add(anchor: anchor)
    }
}
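To see why setting `columns.3.z = -0.3` places the anchor 0.3 meters in front of the camera, here is a minimal pure-Swift sketch of the same column-major 4×4 math. The `Matrix4` type is invented for this illustration (the real code uses simd_float4x4); with an identity camera transform, the product puts the anchor at (0, 0, -0.3):

```swift
// Column-major 4x4 matrix, mirroring simd_float4x4's layout.
struct Matrix4 {
    var columns: [[Double]]  // four columns of four elements each

    static var identity: Matrix4 {
        Matrix4(columns: (0..<4).map { c in (0..<4).map { r in c == r ? 1.0 : 0.0 } })
    }

    // Matrix product (column-major convention, like simd_mul).
    static func * (a: Matrix4, b: Matrix4) -> Matrix4 {
        var m = Matrix4(columns: Array(repeating: Array(repeating: 0.0, count: 4), count: 4))
        for c in 0..<4 {
            for r in 0..<4 {
                m.columns[c][r] = (0..<4).reduce(0.0) { $0 + a.columns[$1][r] * b.columns[c][$1] }
            }
        }
        return m
    }
}

// Same construction as handleTap: translate 0.3 m along -z in camera space.
var translation = Matrix4.identity
translation.columns[3][2] = -0.3

// With an identity camera transform, the anchor lands at (0, 0, -0.3).
let cameraTransform = Matrix4.identity
let anchorTransform = cameraTransform * translation
print(anchorTransform.columns[3])  // [0.0, 0.0, -0.3, 1.0]
```

With a real camera transform, the same multiplication keeps the object 0.3 m ahead of wherever the camera currently points.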

We use the **viewWillAppear()** and **viewWillDisappear()** methods to start and pause the session:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    let configuration = ARWorldTrackingSessionConfiguration()
    session.run(configuration)
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    session.pause()
}

All that remains are the delegate methods that respond to view updates or to session errors and interruptions:

func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    renderer.drawRectResized(size: size)
}

func draw(in view: MTKView) {
    renderer.update()
}

func session(_ session: ARSession, didFailWithError error: Error) {}

func sessionWasInterrupted(_ session: ARSession) {}

func sessionInterruptionEnded(_ session: ARSession) {}

Let's now move on to the Renderer.swift file. The first thing to notice is a very useful protocol that gives us access to all the MTKView properties we will need later in our draw calls:

protocol RenderDestinationProvider {
    var currentRenderPassDescriptor: MTLRenderPassDescriptor? { get }
    var currentDrawable: CAMetalDrawable? { get }
    var colorPixelFormat: MTLPixelFormat { get set }
    var depthStencilPixelFormat: MTLPixelFormat { get set }
    var sampleCount: Int { get set }
}

Now you can make the MTKView class conform to this protocol with a simple extension (in ViewController):

extension MTKView : RenderDestinationProvider {}

For a high-level view of the Renderer class, here it is in pseudocode:

init() {
    setupPipeline()
    setupAssets()
}
    
func update() {
    updateBufferStates()
    updateSharedUniforms()
    updateAnchors()
    updateCapturedImageTextures()
    updateImagePlane()
    drawCapturedImage()
    drawAnchorGeometry()
}

As before, we first create the pipeline, here in the setupPipeline() function. Then, in setupAssets(), we create the model that gets loaded whenever our tap gesture is recognized. The MTKView delegate calls the update() function on each draw call, whenever an update is needed. Let's look at each step in turn. First is updateBufferStates(), which updates the location within our buffers to write to for the current frame (in this case, a ring buffer with three slots):

func updateBufferStates() {
    uniformBufferIndex = (uniformBufferIndex + 1) % maxBuffersInFlight
    sharedUniformBufferOffset = alignedSharedUniformSize * uniformBufferIndex
    anchorUniformBufferOffset = alignedInstanceUniformSize * uniformBufferIndex
    sharedUniformBufferAddress = sharedUniformBuffer.contents().advanced(by: sharedUniformBufferOffset)
    anchorUniformBufferAddress = anchorUniformBuffer.contents().advanced(by: anchorUniformBufferOffset)
}
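The ring-buffer arithmetic above can be checked in isolation. This is a small pure-Swift sketch of the triple-buffering index math; the 256-byte aligned size is an assumption chosen for illustration (the template derives its own aligned sizes from the uniform structs):

```swift
let maxBuffersInFlight = 3
// Hypothetical aligned size; Metal constant buffer offsets are typically 256-byte aligned.
let alignedSharedUniformSize = 256

var uniformBufferIndex = 0
var offsets: [Int] = []
// Six frames: the ring advances 256, 512, then wraps back to 0.
for _ in 0..<6 {
    uniformBufferIndex = (uniformBufferIndex + 1) % maxBuffersInFlight
    offsets.append(alignedSharedUniformSize * uniformBufferIndex)
}
print(offsets)  // [256, 512, 0, 256, 512, 0]
```

Three slots let the CPU write a new frame's uniforms while the GPU is still reading the two previous frames, without the two ever touching the same region.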

Next, in updateSharedUniforms() we update the frame's shared uniforms and set up the lighting for the scene:

func updateSharedUniforms(frame: ARFrame) {
    let uniforms = sharedUniformBufferAddress.assumingMemoryBound(to: SharedUniforms.self)
    uniforms.pointee.viewMatrix = simd_inverse(frame.camera.transform)
    uniforms.pointee.projectionMatrix = frame.camera.projectionMatrix(withViewportSize: viewportSize, orientation: .landscapeRight, zNear: 0.001, zFar: 1000)
    var ambientIntensity: Float = 1.0
    if let lightEstimate = frame.lightEstimate {
        ambientIntensity = Float(lightEstimate.ambientIntensity) / 1000.0
    }
    let ambientLightColor: vector_float3 = vector3(0.5, 0.5, 0.5)
    uniforms.pointee.ambientLightColor = ambientLightColor * ambientIntensity
    var directionalLightDirection : vector_float3 = vector3(0.0, 0.0, -1.0)
    directionalLightDirection = simd_normalize(directionalLightDirection)
    uniforms.pointee.directionalLightDirection = directionalLightDirection
    let directionalLightColor: vector_float3 = vector3(0.6, 0.6, 0.6)
    uniforms.pointee.directionalLightColor = directionalLightColor * ambientIntensity
    uniforms.pointee.materialShininess = 30
}

Next, in updateAnchors() we update the anchor uniform buffer with the transforms of the current frame's anchors:

func updateAnchors(frame: ARFrame) {
    anchorInstanceCount = min(frame.anchors.count, maxAnchorInstanceCount)
    var anchorOffset: Int = 0
    if anchorInstanceCount == maxAnchorInstanceCount {
        anchorOffset = max(frame.anchors.count - maxAnchorInstanceCount, 0)
    }
    for index in 0..<anchorInstanceCount {
        let anchor = frame.anchors[index + anchorOffset]
        var coordinateSpaceTransform = matrix_identity_float4x4
        coordinateSpaceTransform.columns.2.z = -1.0
        let modelMatrix = simd_mul(anchor.transform, coordinateSpaceTransform)
        let anchorUniforms = anchorUniformBufferAddress.assumingMemoryBound(to: InstanceUniforms.self).advanced(by: index)
        anchorUniforms.pointee.modelMatrix = modelMatrix
    }
}

Next, in updateCapturedImageTextures() we create two textures from the captured image of the provided frame:

func updateCapturedImageTextures(frame: ARFrame) {
    let pixelBuffer = frame.capturedImage
    if (CVPixelBufferGetPlaneCount(pixelBuffer) < 2) { return }
    capturedImageTextureY = createTexture(fromPixelBuffer: pixelBuffer, pixelFormat:.r8Unorm, planeIndex:0)!
    capturedImageTextureCbCr = createTexture(fromPixelBuffer: pixelBuffer, pixelFormat:.rg8Unorm, planeIndex:1)!
}

Next, in updateImagePlane() we update the image plane's texture coordinates to fit the viewport:

func updateImagePlane(frame: ARFrame) {
    let displayToCameraTransform = frame.displayTransform(withViewportSize: viewportSize, orientation: .landscapeRight).inverted()
    let vertexData = imagePlaneVertexBuffer.contents().assumingMemoryBound(to: Float.self)
    for index in 0...3 {
        let textureCoordIndex = 4 * index + 2
        let textureCoord = CGPoint(x: CGFloat(planeVertexData[textureCoordIndex]), y: CGFloat(planeVertexData[textureCoordIndex + 1]))
        let transformedCoord = textureCoord.applying(displayToCameraTransform)
        vertexData[textureCoordIndex] = Float(transformedCoord.x)
        vertexData[textureCoordIndex + 1] = Float(transformedCoord.y)
    }
}
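ARKit's displayTransform bundles the rotation, scale, and flip needed for the current orientation into one affine transform. As a rough, hypothetical illustration of the kind of remapping applied above, here is a vertical flip of normalized texture coordinates expressed with a hand-rolled `Affine` type (invented for this sketch; the real code uses CGAffineTransform):

```swift
// Minimal 2D affine transform, mirroring CGAffineTransform's field layout:
// (x, y) -> (a*x + c*y + tx, b*x + d*y + ty)
struct Affine {
    var a = 1.0, b = 0.0, c = 0.0, d = 1.0, tx = 0.0, ty = 0.0
    func apply(x: Double, y: Double) -> (x: Double, y: Double) {
        (a * x + c * y + tx, b * x + d * y + ty)
    }
}

// A vertical flip in normalized [0, 1] texture space: y -> 1 - y.
let flipY = Affine(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: 1)
let corners: [(Double, Double)] = [(0, 0), (1, 0), (0, 1), (1, 1)]
// (0,0) -> (0,1), (1,0) -> (1,1), (0,1) -> (0,0), (1,1) -> (1,0)
let mapped = corners.map { flipY.apply(x: $0.0, y: $0.1) }
print(mapped)
```

updateImagePlane() does exactly this kind of per-corner remapping, but with the transform ARKit computes for the actual viewport and orientation.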

Next, in drawCapturedImage() we draw the camera feed in the scene:

func drawCapturedImage(renderEncoder: MTLRenderCommandEncoder) {
    guard capturedImageTextureY != nil && capturedImageTextureCbCr != nil else { return }
    renderEncoder.pushDebugGroup("DrawCapturedImage")
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(capturedImagePipelineState)
    renderEncoder.setDepthStencilState(capturedImageDepthState)
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setFragmentTexture(capturedImageTextureY, index: 1)
    renderEncoder.setFragmentTexture(capturedImageTextureCbCr, index: 2)
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}

Finally, in drawAnchorGeometry() we draw the anchors for the virtual content we created:

func drawAnchorGeometry(renderEncoder: MTLRenderCommandEncoder) {
    guard anchorInstanceCount > 0 else { return }
    renderEncoder.pushDebugGroup("DrawAnchors")
    renderEncoder.setCullMode(.back)
    renderEncoder.setRenderPipelineState(anchorPipelineState)
    renderEncoder.setDepthStencilState(anchorDepthState)
    renderEncoder.setVertexBuffer(anchorUniformBuffer, offset: anchorUniformBufferOffset, index: 2)
    renderEncoder.setVertexBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: 3)
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: 3)
    for bufferIndex in 0..<mesh.vertexBuffers.count {
        let vertexBuffer = mesh.vertexBuffers[bufferIndex]
        renderEncoder.setVertexBuffer(vertexBuffer.buffer, offset: vertexBuffer.offset, index:bufferIndex)
    }
    for submesh in mesh.submeshes {
        renderEncoder.drawIndexedPrimitives(type: submesh.primitiveType, indexCount: submesh.indexCount, indexType: submesh.indexType, indexBuffer: submesh.indexBuffer.buffer, indexBufferOffset: submesh.indexBuffer.offset, instanceCount: anchorInstanceCount)
    }
    renderEncoder.popDebugGroup()
}

Back to the setupPipeline() function mentioned earlier. It creates two render pipeline state objects: one for the captured image (the camera feed), and one for the anchors created when we place virtual objects in the scene. As expected, each state object has its own pair of vertex and fragment functions, which brings us to the last file we need to look at: Shaders.metal. In the first pair of shaders, for the captured image, the vertex shader passes through the image's vertex positions and texture coordinates:

vertex ImageColorInOut capturedImageVertexTransform(ImageVertex in [[stage_in]]) {
    ImageColorInOut out;
    out.position = float4(in.position, 0.0, 1.0);
    out.texCoord = in.texCoord;
    return out;
}

In the fragment shader, we sample both textures to get the color at the given texture coordinate, and then return the converted RGB color:

fragment float4 capturedImageFragmentShader(ImageColorInOut in [[stage_in]],
                                            texture2d<float, access::sample> textureY [[ texture(1) ]],
                                            texture2d<float, access::sample> textureCbCr [[ texture(2) ]]) {
    constexpr sampler colorSampler(mip_filter::linear, mag_filter::linear, min_filter::linear);
    const float4x4 ycbcrToRGBTransform = float4x4(float4(+1.0000f, +1.0000f, +1.0000f, +0.0000f),
                                                  float4(+0.0000f, -0.3441f, +1.7720f, +0.0000f),
                                                  float4(+1.4020f, -0.7141f, +0.0000f, +0.0000f),
                                                  float4(-0.7010f, +0.5291f, -0.8860f, +1.0000f));
    float4 ycbcr = float4(textureY.sample(colorSampler, in.texCoord).r, textureCbCr.sample(colorSampler, in.texCoord).rg, 1.0);
    return ycbcrToRGBTransform * ycbcr;
}
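The conversion matrix above can be sanity-checked on the CPU. Here is a pure-Swift sketch applying the same column-major matrix, under the assumption that Y, Cb, and Cr are full-range values in [0, 1] with 0.5 as the neutral chroma value (so neutral chroma maps luma straight to gray):

```swift
// Columns of the shader's ycbcrToRGBTransform (column-major, as in MSL).
let m: [[Double]] = [
    [ 1.0000,  1.0000,  1.0000, 0.0],
    [ 0.0000, -0.3441,  1.7720, 0.0],
    [ 1.4020, -0.7141,  0.0000, 0.0],
    [-0.7010,  0.5291, -0.8860, 1.0],
]

// rgba = m * [y, cb, cr, 1] — the same product the fragment shader computes.
func ycbcrToRGB(y: Double, cb: Double, cr: Double) -> [Double] {
    let v = [y, cb, cr, 1.0]
    return (0..<4).map { r in (0..<4).reduce(0.0) { $0 + m[$1][r] * v[$1] } }
}

// Full luma with neutral chroma: white stays white.
let white = ycbcrToRGB(y: 1.0, cb: 0.5, cr: 0.5)
print(white)  // ≈ [1.0, 1.0, 1.0, 1.0]
```

The fourth column is what folds the 0.5 chroma bias (and the implied alpha of 1) into a single matrix multiply, so the shader needs no separate subtraction step.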

In the second pair of shaders, for the anchor geometry, the vertex shader computes the position of the vertex in clip space and outputs it for clipping and rasterization, colors each face differently, computes the position of our vertex in eye space, and finally rotates the normal into world coordinates:

vertex ColorInOut anchorGeometryVertexTransform(Vertex in [[stage_in]],
                                                constant SharedUniforms &sharedUniforms [[ buffer(3) ]],
                                                constant InstanceUniforms *instanceUniforms [[ buffer(2) ]],
                                                ushort vid [[vertex_id]],
                                                ushort iid [[instance_id]]) {
    ColorInOut out;
    float4 position = float4(in.position, 1.0);
    float4x4 modelMatrix = instanceUniforms[iid].modelMatrix;
    float4x4 modelViewMatrix = sharedUniforms.viewMatrix * modelMatrix;
    out.position = sharedUniforms.projectionMatrix * modelViewMatrix * position;
    ushort colorID = vid / 4 % 6;
    out.color = colorID == 0 ? float4(0.0, 1.0, 0.0, 1.0)  // Right face
              : colorID == 1 ? float4(1.0, 0.0, 0.0, 1.0)  // Left face
              : colorID == 2 ? float4(0.0, 0.0, 1.0, 1.0)  // Top face
              : colorID == 3 ? float4(1.0, 0.5, 0.0, 1.0)  // Bottom face
              : colorID == 4 ? float4(1.0, 1.0, 0.0, 1.0)  // Back face
              :                float4(1.0, 1.0, 1.0, 1.0); // Front face
    out.eyePosition = half3((modelViewMatrix * position).xyz);
    float4 normal = modelMatrix * float4(in.normal.x, in.normal.y, in.normal.z, 0.0f);
    out.normal = normalize(half3(normal.xyz));
    return out;
}

In the fragment shader, we compute the directional light's contribution as the sum of the diffuse and specular terms, then compute the final color by multiplying the interpolated face color by the fragment's lighting value, and finally return that color together with the face color's alpha channel as the fragment's alpha value:

fragment float4 anchorGeometryFragmentLighting(ColorInOut in [[stage_in]],
                                               constant SharedUniforms &uniforms [[ buffer(3) ]]) {
    float3 normal = float3(in.normal);
    float3 directionalContribution = float3(0);
    {
        float nDotL = saturate(dot(normal, -uniforms.directionalLightDirection));
        float3 diffuseTerm = uniforms.directionalLightColor * nDotL;
        float3 halfwayVector = normalize(-uniforms.directionalLightDirection - float3(in.eyePosition));
        float reflectionAngle = saturate(dot(normal, halfwayVector));
        float specularIntensity = saturate(powr(reflectionAngle, uniforms.materialShininess));
        float3 specularTerm = uniforms.directionalLightColor * specularIntensity;
        directionalContribution = diffuseTerm + specularTerm;
    }
    float3 ambientContribution = uniforms.ambientLightColor;
    float3 lightContributions = ambientContribution + directionalContribution;
    float3 color = in.color.rgb * lightContributions;
    return float4(color, in.color.w);
}
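The lighting above is plain Blinn-Phong. As a CPU-side sanity check, here is the same diffuse + specular sum in pure Swift, mirroring the shader term by term; the vectors and shininess value below are illustrative, not taken from the app:

```swift
import Foundation

func dot3(_ a: [Double], _ b: [Double]) -> Double { zip(a, b).map { $0 * $1 }.reduce(0, +) }
func normalize3(_ v: [Double]) -> [Double] {
    let len = dot3(v, v).squareRoot()
    return v.map { $0 / len }
}
func saturate(_ x: Double) -> Double { min(max(x, 0.0), 1.0) }

// Diffuse + specular contribution of the directional light, as in the shader.
func directionalContribution(normal: [Double], lightDir: [Double],
                             lightColor: [Double], eyePos: [Double],
                             shininess: Double) -> [Double] {
    let negLightDir = lightDir.map { -$0 }
    let nDotL = saturate(dot3(normal, negLightDir))                     // diffuse term
    let halfway = normalize3(zip(negLightDir, eyePos).map { $0 - $1 })  // Blinn halfway vector
    let specular = saturate(pow(saturate(dot3(normal, halfway)), shininess))
    return lightColor.map { $0 * (nDotL + specular) }
}

// A fragment facing the light head-on gets full diffuse plus full specular.
let contribution = directionalContribution(normal: [0, 0, 1], lightDir: [0, 0, -1],
                                           lightColor: [0.6, 0.6, 0.6],
                                           eyePos: [0, 0, -1], shininess: 30)
print(contribution)  // [1.2, 1.2, 1.2]
```

Note the shader then adds the ambient color on top of this directional contribution before multiplying by the face color.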

If you run the app, you will be able to add cubes to the camera view by tapping the screen, move around, get closer, or circle the cubes to see the different color of each face, like this:

ARKit5.gif

In the next part of this series we will take a deeper look at Tracking and Scene Understanding, and see how plane detection, hit-testing, collisions, and physics can make our experience even better. The source code is posted on Github.

Until next time!
