Face Tracking with RealityKit
Learn how to leverage RealityKit and Reality Composer to build engaging AR experiences focused on tracking facial movement and expression. Add props and behaviors to face anchors in Reality Composer, and drive animations for 3D content in augmented reality with your facial expressions. By Catie Catterwaul.
Who is this for?
This course is for experienced iOS developers who are comfortable with Swift and have some familiarity with AR technology.
Covered concepts
- RealityKit
- Reality Composer
- ARSession & ARSessionDelegate
- Face Anchors
- Blend Shapes
- Reality Composer Behaviors
Part 1: Face Tracking with RealityKit
Find out what RealityKit has to offer and how to set up a RealityKit project with the required permissions.
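A minimal setup sketch, assuming a view controller named FaceTrackingViewController with an ARView outlet (both hypothetical names); the camera permission itself comes from the NSCameraUsageDescription entry in Info.plist:

```swift
import ARKit
import RealityKit
import UIKit

final class FaceTrackingViewController: UIViewController {
  @IBOutlet var arView: ARView!

  override func viewDidLoad() {
    super.viewDidLoad()
    // Face tracking needs a TrueDepth camera or an A12 (or later) chip,
    // so check for support before doing anything else.
    guard ARFaceTrackingConfiguration.isSupported else {
      print("Face tracking is not supported on this device.")
      return
    }
    // The system's camera permission prompt shows the NSCameraUsageDescription
    // string from Info.plist the first time the session starts.
  }
}
```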
Learn how to get around in Reality Composer, set up your first face anchor, and add props from Xcode’s built-in library.
Start an ARSession and configure it for face tracking with help from ARKit. Access objects from Reality Composer with Swift.
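A sketch of that configuration step; Experience and its Glasses scene are hypothetical names generated from a Reality Composer .rcproject file:

```swift
import ARKit
import RealityKit

func startFaceTracking(in arView: ARView) {
  // Configure and start the session for face tracking.
  let configuration = ARFaceTrackingConfiguration()
  arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

  // Reality Composer generates a Swift type for each scene in the project,
  // with a throwing loader per scene.
  if let glasses = try? Experience.loadGlasses() {
    arView.scene.addAnchor(glasses)
  }
}
```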
Set up the app to handle switching between different props by adding and removing anchors from the ARView.
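One possible shape for that switching logic, again using hypothetical scene names generated from an Experience.rcproject:

```swift
import RealityKit

func switchProp(to index: Int, in arView: ARView) {
  // Remove whatever prop is currently attached to the face anchor.
  arView.scene.anchors.removeAll()

  // Load and add the newly selected prop.
  switch index {
  case 0:
    if let glasses = try? Experience.loadGlasses() {
      arView.scene.addAnchor(glasses)
    }
  default:
    if let robot = try? Experience.loadRobot() {
      arView.scene.addAnchor(robot)
    }
  }
}
```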
Set up an ARSessionDelegate to handle live updates to face anchors and drive animation based on where a user is looking.
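A sketch of the delegate side, extending the hypothetical FaceTrackingViewController from earlier (remember to set arView.session.delegate = self):

```swift
import ARKit

extension FaceTrackingViewController: ARSessionDelegate {
  func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
    // Face tracking sessions deliver an ARFaceAnchor with every update.
    guard let faceAnchor = anchors.compactMap({ $0 as? ARFaceAnchor }).first else {
      return
    }
    // lookAtPoint estimates where the user is looking, in face-anchor space.
    let lookAt = faceAnchor.lookAtPoint
    print("Looking at: \(lookAt)")
  }
}
```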
Learn how to access a ton of information about a user’s facial movement via the face anchor’s blend shapes.
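For example, a few of the coefficients ARKit publishes on every frame, each ranging from 0.0 (neutral) to 1.0 (fully expressed):

```swift
import ARKit

func readExpressions(from faceAnchor: ARFaceAnchor) {
  let blendShapes = faceAnchor.blendShapes
  // Each value is an NSNumber between 0 and 1.
  let jawOpen = blendShapes[.jawOpen]?.floatValue ?? 0
  let eyeBlinkLeft = blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
  let browInnerUp = blendShapes[.browInnerUp]?.floatValue ?? 0
  print("jawOpen: \(jawOpen), eyeBlinkLeft: \(eyeBlinkLeft), browInnerUp: \(browInnerUp)")
}
```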
Use the movement of a user’s jaw to drive the jaw animation of a 3D robot head! Learn a bit about quaternions.
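A rough sketch of the idea, assuming a jawEntity pulled out of the Reality Composer scene and its rest orientation captured at load time (the names and the 30-degree range are illustrative):

```swift
import ARKit
import RealityKit

func updateJaw(_ jawEntity: Entity, restOrientation: simd_quatf, faceAnchor: ARFaceAnchor) {
  let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
  // Map the 0...1 coefficient to a rotation of up to ~30 degrees about the x-axis.
  let jawRotation = simd_quatf(angle: jawOpen * (.pi / 6), axis: [1, 0, 0])
  // Multiplying quaternions composes the open/close rotation with the rest pose.
  jawEntity.orientation = restOrientation * jawRotation
}
```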
Use the movement of a user’s eyes and eyebrows to drive the eyelid animation of a 3D robot head! Apply multiple rotations to a single object.
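A similar sketch for the eyelids, composing two rotations on one entity; the entity name, axes and angle ranges are illustrative:

```swift
import ARKit
import RealityKit

func updateEyelid(_ eyelid: Entity, restOrientation: simd_quatf, faceAnchor: ARFaceAnchor) {
  let blink = faceAnchor.blendShapes[.eyeBlinkLeft]?.floatValue ?? 0
  let browUp = faceAnchor.blendShapes[.browInnerUp]?.floatValue ?? 0

  // One rotation closes the lid; a second, opposite rotation lifts it with the brow.
  let blinkRotation = simd_quatf(angle: blink * (.pi / 4), axis: [1, 0, 0])
  let browRotation = simd_quatf(angle: -browUp * (.pi / 8), axis: [1, 0, 0])

  // Apply both rotations by multiplying the quaternions together.
  eyelid.orientation = restOrientation * blinkRotation * browRotation
}
```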
Try out Reality Composer’s behavior system to add animation and sound effects to your robot experience.
Learn how to trigger Reality Composer behaviors from your Swift code. Add some lights!
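Behaviors that use a notification trigger get a matching property on the generated scene's notifications object; here Robot and showLights are hypothetical names:

```swift
import RealityKit

func turnOnLights(in robotScene: Experience.Robot) {
  // Posting the notification fires the behavior defined in Reality Composer.
  robotScene.notifications.showLights.post()
}
```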