ARKit series: table of contents
This post is my reading notes on Ray Wenderlich's ARKit by Tutorials, mainly a summary of the content plus my impressions.
That's right: this post covers the AR effects of the iPhone X's front-facing TrueDepth camera!
Launching face-based AR works much like any other AR session; you only need to swap the configuration for ARFaceTrackingConfiguration(). Error handling is similar, too:
func resetTracking() {
    // 1: Face tracking requires a device with a TrueDepth camera.
    guard ARFaceTrackingConfiguration.isSupported else {
        updateMessage(text: "Face Tracking Not Supported.")
        return
    }
    // 2: Let the user know what to do.
    updateMessage(text: "Looking for a face.")
    // 3: Create the face-tracking configuration.
    let configuration = ARFaceTrackingConfiguration()
    configuration.isLightEstimationEnabled = true /* default setting */
    configuration.providesAudioData = false /* default setting */
    // 4: Run the session, resetting tracking and removing all anchors.
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
func session(_ session: ARSession, didFailWithError error: Error) {
    print("** didFailWithError")
    updateMessage(text: "Session failed.")
}

func sessionWasInterrupted(_ session: ARSession) {
    print("** sessionWasInterrupted")
    updateMessage(text: "Session interrupted.")
}

func sessionInterruptionEnded(_ session: ARSession) {
    print("** sessionInterruptionEnded")
    updateMessage(text: "Session interruption ended.")
}
When ARKit detects a face, it adds an ARFaceAnchor to the scene, and we can use that anchor for positioning and tracking. If two faces are in view, ARKit tracks only the largest, most recognizable one, and it attaches a face coordinate system to the face. We then create an ARSCNFaceGeometry and a mask node (an SCNNode) to hold the geometry:
func createFaceGeometry() {
    updateMessage(text: "Creating face geometry.")
    // The Metal device backs the face geometry; the sample force-unwraps both.
    let device = sceneView.device!
    let maskGeometry = ARSCNFaceGeometry(device: device)!
    mask = Mask(geometry: maskGeometry)
}
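The Mask type here is the book's custom SCNNode subclass. Below is a minimal sketch of what it might look like; the purple fill material is my assumption, not necessarily the book's styling:

import ARKit
import SceneKit

// Minimal sketch: an SCNNode that owns the face geometry
// and gives it a simple colored material.
class Mask: SCNNode {
    init(geometry: ARSCNFaceGeometry) {
        geometry.firstMaterial?.diffuse.contents = UIColor.purple
        super.init()
        self.geometry = geometry
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}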
This method can be called directly from viewDidLoad; there is no need to wait for a face to appear before creating the geometry. Finally, add the mask to the view so it shows up:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    anchorNode = node
    // Show the mask
    setupFaceNodeContent()
}

func setupFaceNodeContent() {
    guard let node = anchorNode else { return }
    node.childNodes.forEach { $0.removeFromParentNode() }
    if let content = mask {
        node.addChildNode(content)
    }
}
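For context, here is a minimal sketch of how these pieces could be wired together, assuming (as in the book's starter project) that session is a shortcut for sceneView.session:

override func viewDidLoad() {
    super.viewDidLoad()
    sceneView.delegate = self
    // Safe to call before any face is detected.
    createFaceGeometry()
}

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    // Start (or restart) the face-tracking session.
    resetTracking()
}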
The ARSCNFaceGeometry created earlier may still be empty, since no face has been recognized yet, so we keep it up to date in the update callback:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    updateMessage(text: "Tracking your face.")
    // Update the geometry from the anchor
    mask?.update(withFaceAnchor: faceAnchor)
}

// Tag: ARFaceAnchor Update
func update(withFaceAnchor anchor: ARFaceAnchor) {
    let faceGeometry = geometry as! ARSCNFaceGeometry
    faceGeometry.update(from: anchor.geometry)
}
By default, the lighting settings look like this:
/* default settings */
sceneView.automaticallyUpdatesLighting = true
sceneView.autoenablesDefaultLighting = false
sceneView.scene.lightingEnvironment.intensity = 1.0
But we can also adjust them to match the environment and achieve different effects:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // 1: Grab the current frame's light estimate.
    guard let estimate = session.currentFrame?.lightEstimate else {
        return
    }
    // 2: In ARKit, 1000 represents neutral (medium) light intensity.
    let intensity = estimate.ambientIntensity / 1000.0
    sceneView.scene.lightingEnvironment.intensity = intensity
    // 3: Format both values for logging.
    let intensityStr = String(format: "%.2f", intensity)
    let sceneLighting = String(format: "%.2f",
                               sceneView.scene.lightingEnvironment.intensity)
    // 4
    print("Intensity: \(intensityStr) - \(sceneLighting)")
}
How do we implement the facial animation seen in Animoji? This is where blend shapes come in. A blend shape is essentially a dictionary: the keys are ARFaceAnchor.BlendShapeLocation constants, and the values are floats ranging from 0.0 (neutral) to 1.0 (maximum movement).
For example, to add a blinking effect, read out eyeBlinkLeft:
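Here is a minimal sketch of the idea; the eyeLeftNode name and the 0.9 squash factor are my assumptions, not the book's code:

func updateBlink(for anchor: ARFaceAnchor, eyeLeftNode: SCNNode) {
    // 0.0 means the eye is open, 1.0 means the lid is fully closed.
    guard let eyeBlinkLeft = anchor.blendShapes[.eyeBlinkLeft] as? Float else { return }
    // Squash the eye node vertically as the lid closes.
    eyeLeftNode.scale.y = 1.0 - 0.9 * eyeBlinkLeft
}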
This dictionary is updated whenever the ARFaceAnchor updates:
// - Tag: ARFaceAnchor Update
func update(withFaceAnchor anchor: ARFaceAnchor) {
    blendShapes = anchor.blendShapes
}

// - Tag: BlendShapeAnimation
var blendShapes: [ARFaceAnchor.BlendShapeLocation: Any] = [:] {
    didSet {
        guard
            // Brow
            let browInnerUp = blendShapes[.browInnerUp] as? Float,
            // Right eye
            let eyeLookInRight = blendShapes[.eyeLookInRight] as? Float,
            let eyeLookOutRight = blendShapes[.eyeLookOutRight] as? Float,
            let eyeLookUpRight = blendShapes[.eyeLookUpRight] as? Float,
            let eyeLookDownRight = blendShapes[.eyeLookDownRight] as? Float,
            let eyeBlinkRight = blendShapes[.eyeBlinkRight] as? Float
            else { return }
        // Animate here
    }
}
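The book goes on to drive the mask from these values where the "Animate here" comment sits. The sketch below shows the general idea only; the eyeRight node name and the 0.25 angle scale are my assumptions, not the book's code:

// Sketch only: would replace the "Animate here" comment inside didSet.
// Blink squashes the eye vertically; the look values combine into a gaze angle.
if let eyeRight = childNode(withName: "eyeRight", recursively: true) {
    eyeRight.scale = SCNVector3(1, 1 - eyeBlinkRight, 1)
    let pitch = (eyeLookDownRight - eyeLookUpRight) * 0.25
    let yaw = (eyeLookInRight - eyeLookOutRight) * 0.25
    eyeRight.eulerAngles = SCNVector3(pitch, yaw, 0)
}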
The finished effect: the mask follows your facial expressions in real time.
ReplayKit, introduced in iOS 9, records the screen's audio and video along with the microphone. It revolves around two main classes, both used below: RPScreenRecorder and RPPreviewViewController.
The key code:
let sharedRecorder = RPScreenRecorder.shared()
private var isRecording = false
// Private functions
private func startRecording() {
    // 1: Record the microphone as well as the screen.
    self.sharedRecorder.isMicrophoneEnabled = true
    // 2: Start recording, bailing out on error.
    sharedRecorder.startRecording(handler: { error in
        guard error == nil else {
            print("There was an error starting the recording: \(String(describing: error?.localizedDescription))")
            return
        }
        // 3
        print("Started Recording Successfully")
        self.isRecording = true
        // 4: Update the UI on the main queue.
        DispatchQueue.main.async {
            self.recordButton.setTitle("[ STOP RECORDING ]", for: .normal)
            self.recordButton.backgroundColor = UIColor.red
        }
    })
}
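One thing to note: the first time startRecording is called, ReplayKit presents a system consent dialog to the user, and recording only begins once it is approved, so the handler's error path above can fire if the user declines.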
The key code for stopping the recording:
func stopRecording() {
    // 1: Stop capturing the microphone.
    self.sharedRecorder.isMicrophoneEnabled = false
    // 2: Stop recording, bailing out on error.
    sharedRecorder.stopRecording(handler: { previewViewController, error in
        guard error == nil else {
            print("There was an error stopping the recording: \(String(describing: error?.localizedDescription))")
            return
        }
        // 3: Present the preview so the user can save or share the video.
        if let unwrappedPreview = previewViewController {
            unwrappedPreview.previewControllerDelegate = self
            self.present(unwrappedPreview, animated: true, completion: {})
        }
    })
    // 4: Reset the UI state.
    self.isRecording = false
    DispatchQueue.main.async {
        self.recordButton.setTitle("[ RECORD ]", for: .normal)
        self.recordButton.backgroundColor = UIColor(red: 0.0039,
            green: 0.5882, blue: 1, alpha: 1.0) /* #0196ff */
    }
}
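A minimal sketch of tying the two together from the record button; the action name is my assumption, and the book wires this up in its own way:

@IBAction func recordButtonTapped(_ sender: UIButton) {
    // Flip between starting and stopping based on the current state.
    if isRecording {
        stopRecording()
    } else {
        startRecording()
    }
}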
The result in action:
In addition, we can implement sharedRecorder's delegate methods to be notified of errors and availability changes:
// RPScreenRecorderDelegate methods
func screenRecorder(_ screenRecorder: RPScreenRecorder, didStopRecordingWith previewViewController: RPPreviewViewController?, error: Error?) {
    guard error == nil else {
        print("There was an error recording: \(String(describing: error?.localizedDescription))")
        self.isRecording = false
        return
    }
}

func screenRecorderDidChangeAvailability(_ screenRecorder: RPScreenRecorder) {
    recordButton.isEnabled = sharedRecorder.isAvailable
    if !recordButton.isEnabled {
        self.isRecording = false
    }
}
None of this is especially hard to use, so I won't expand on it further here; for the full details, please buy and read the official book.
That wraps up the reading notes for Part 4!