Use XRView.eye when creating cameras for an XR device’s views – as well as adding support for cameras beyond the first two.
Remove the explicit cameraL and cameraR variables, as well as the separate cameras array, in favour of directly using cameraXR.cameras and relying entirely on the device’s reported XRViewerPose. The knock-on effect is that all of the cameras are created on receiving the device’s first XRViewerPose, rather than at the instantiation of WebXRManager.
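A minimal sketch of that flow, with the XR objects mocked as plain data (the function name and shapes here are illustrative, not the actual WebXRManager code):

```javascript
// Hypothetical sketch: build one camera per XRView on the first
// XRViewerPose, keyed by view.eye, instead of pre-creating cameraL/cameraR.
function createCamerasFromViewerPose(viewerPose) {
  // viewerPose.views is the device's authoritative list of XRViews.
  return viewerPose.views.map((view) => ({
    // 'left' | 'right' | 'none', per the WebXR spec's XREye enum.
    eye: view.eye,
    // Layer 1 = left-eye content, layer 2 = right-eye content
    // (three.js convention); 'none' views fall back to the left layer.
    layer: view.eye === 'right' ? 2 : 1,
  }));
}

// Example with a mocked three-view pose:
const pose = { views: [{ eye: 'left' }, { eye: 'right' }, { eye: 'none' }] };
const cameras = createCamerasFromViewerPose(pose);
// cameras[2].layer === 1, i.e. the 'none' view renders left-eye content.
```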
This leads to two additional assumptions:
- That WebXR devices provide the correct number of views, with the correct eye set on their XRViews.
- That it’s okay for views with eye set to "none" to have the ‘left’ content rendered, which I think is reasonable based on this from immersive-web:
The combination of these two things means that the main regression I’ve been able to envision is that, for a device which doesn’t report its eyes correctly, the right eye would be a duplicate of the left eye (whereas currently we assume that the second view is always the right eye).
The upside is that for devices with more than two views (even if those views are all "none"-eyed), pre-rendered stereo content (e.g. webxr_vr_video) will be rendered correctly.
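For context, pre-rendered stereo content in three.js is split across layers 1 (left) and 2 (right), so whether a camera sees the correct half comes down to a layer-mask intersection. A mocked sketch of that check (mirroring the behaviour of THREE.Layers.test(); the names here are illustrative):

```javascript
// Illustrative layer-mask check: a camera "sees" an object when their
// layer bitmasks intersect.
function layersIntersect(cameraMask, objectMask) {
  return (cameraMask & objectMask) !== 0;
}

const LEFT = 1 << 1;  // layer 1: left-eye half of the stereo video
const RIGHT = 1 << 2; // layer 2: right-eye half

// With eye-derived layers, a third 'none'-eyed view gets the left mask,
// so it still renders exactly one half of the stereo content.
const cameraMasksByEye = { left: LEFT, right: RIGHT, none: LEFT };
// layersIntersect(cameraMasksByEye.none, LEFT)  === true
// layersIntersect(cameraMasksByEye.none, RIGHT) === false
```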
If these assumptions are sound, I think that moving the camera creation to after frame.getViewerPose() is reasonable, as it allows the WebXRManager to rely on the spec for eye differentiation – but I’m happy to scale this back to just ensuring that all the child cameras’ layers are kept in sync, if this seems too sweeping.
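The scaled-back alternative could look roughly like this (API shapes mocked as plain objects; syncLayers is a hypothetical helper, not existing WebXRManager code):

```javascript
// Hypothetical sketch: propagate the parent array camera's layer mask
// to each child camera, while preserving the per-eye bit
// (layer 1 = left, layer 2 = right, matching the three.js convention).
function syncLayers(parentMask, cameras) {
  for (const camera of cameras) {
    const eyeBit = camera.eye === 'right' ? 1 << 2 : 1 << 1;
    // Keep whatever the user enabled on the parent, plus the eye layer.
    camera.layerMask = parentMask | eyeBit;
  }
}

const children = [{ eye: 'left', layerMask: 0 }, { eye: 'right', layerMask: 0 }];
syncLayers(1, children); // parent has only layer 0 (mask 0b001) enabled
// children[0].layerMask === 0b011 (layers 0 and 1)
// children[1].layerMask === 0b101 (layers 0 and 2)
```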