Using Face Topology for Better Facial Expressions

The geometry property provides an ARFaceGeometry object representing detailed topology for the face. ARKit conforms a generic face model to match the dimensions, shape, and current expression of the detected face.

You can use this model as the basis for overlaying content that follows the shape of the user’s face—for example, to apply virtual makeup or tattoos. You can also use this model to create occlusion geometry—a 3D model that doesn’t render any visible content (allowing the camera image to show through), but that obstructs the camera’s view of other virtual content in the scene.
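The occlusion-geometry technique described above can be sketched as follows, assuming a SceneKit-based app with a Metal device available. ARSCNFaceGeometry builds a SceneKit mesh from the face model; setting an empty colorBufferWriteMask makes the mesh invisible (the camera image shows through) while it still writes depth, so it hides virtual content positioned behind the real face. The function name `makeOcclusionNode` is illustrative, not an ARKit API.

```swift
import ARKit
import SceneKit

// A minimal sketch: build an invisible face mesh that occludes
// virtual content behind the user's face.
func makeOcclusionNode(device: MTLDevice) -> SCNNode? {
    guard let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
    // Write no color, only depth, so the camera image shows through.
    faceGeometry.firstMaterial?.colorBufferWriteMask = []
    let node = SCNNode(geometry: faceGeometry)
    // Render before other content so its depth is in place first.
    node.renderingOrder = -1
    return node
}
```

For virtual makeup or tattoos, you would instead keep the material visible and assign it a texture mapped to the face mesh's UV coordinates.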

Using Face Geometry

var geometry: ARFaceGeometry

A coarse triangle mesh representing the topology of the detected face.
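Because the mesh conforms to the user's current expression, it changes every frame. A sketch of keeping a SceneKit overlay in sync, assuming the node's geometry is an ARSCNFaceGeometry (the delegate class name here is illustrative):

```swift
import ARKit
import SceneKit

// A minimal sketch: refresh the SceneKit face mesh each time ARKit
// updates the detected face geometry.
class FaceMeshDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer,
                  didUpdate node: SCNNode,
                  for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
        // Conform the rendered mesh to the face's current expression.
        faceGeometry.update(from: faceAnchor.geometry)
    }
}
```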

Using Blend Shapes

var blendShapes: [ARFaceAnchor.BlendShapeLocation : NSNumber]

A dictionary of named coefficients representing the detected facial expression in terms of the movement of specific facial features.

struct ARFaceAnchor.BlendShapeLocation

Identifiers for specific facial features, for use with coefficients describing the relative movements of those features.
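Each coefficient is an NSNumber in the nominal range 0.0 (neutral) to 1.0 (maximum movement of the feature). A sketch of reading one coefficient, with a hypothetical `clampedCoefficient` helper (not an ARKit API) to guard against slight overshoot before driving your own morph targets:

```swift
import ARKit

// Hypothetical helper: keep a coefficient inside its nominal 0...1 range.
func clampedCoefficient(_ value: Double) -> Double {
    min(max(value, 0.0), 1.0)
}

// A minimal sketch: read how far the user's jaw is open
// from the anchor's blend-shape dictionary.
func jawOpenAmount(from anchor: ARFaceAnchor) -> Double {
    let raw = anchor.blendShapes[.jawOpen]?.doubleValue ?? 0.0
    return clampedCoefficient(raw)
}
```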

Tracking Eye Movement

var leftEyeTransform: simd_float4x4

A transform matrix indicating the position and orientation of the face’s left eye.

var rightEyeTransform: simd_float4x4

A transform matrix indicating the position and orientation of the face’s right eye.

var lookAtPoint: simd_float3

A position in face coordinate space estimating the direction of the face’s gaze.
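lookAtPoint is a simd_float3 (a SIMD3&lt;Float&gt;) in face coordinate space, where positive z points outward from the face. A sketch of turning it into yaw and pitch angles for, say, a gaze cursor; `gazeAngles` is a hypothetical helper, not part of ARKit:

```swift
import Foundation

// Hypothetical helper: convert a gaze point in face coordinate space
// into yaw (rotation about the vertical axis) and pitch (about the
// horizontal axis), both in radians.
func gazeAngles(lookAtPoint: SIMD3<Float>) -> (yaw: Float, pitch: Float) {
    let yaw = atan2f(lookAtPoint.x, lookAtPoint.z)
    let pitch = atan2f(lookAtPoint.y, lookAtPoint.z)
    return (yaw, pitch)
}
```

A gaze straight ahead, such as `SIMD3<Float>(0, 0, 1)`, yields zero yaw and zero pitch.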

ARKit can also analyze the scene captured by the device's camera and detect flat surfaces such as floors and tables. This lets developers place virtual objects realistically within the environment, ensuring they interact appropriately with real-world surfaces.
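Plane detection is enabled through the session configuration; detected surfaces then arrive as ARPlaneAnchor objects via the session delegate. A minimal sketch (the function name is illustrative):

```swift
import ARKit

// A minimal sketch: run world tracking with plane detection enabled.
func startPlaneDetection(in session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    // Detect both horizontal (floors, tables) and vertical (walls) planes.
    configuration.planeDetection = [.horizontal, .vertical]
    session.run(configuration)
}
```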
