The simplest and possibly most common is the Google Cardboard style of VR, which is basically a phone put into a $5–$50 face mask. This kind of VR has no controller, so people have to come up with creative solutions for allowing user input.
The most common solution is “look to select”: if the user points their head at something for a moment, it gets selected.
Let’s implement “look to select”! We’ll start with an example from the previous article and add the PickHelper we made in the article on picking. Here it is.
class PickHelper {
  constructor() {
    this.raycaster = new THREE.Raycaster();
    this.pickedObject = null;
    this.pickedObjectSavedColor = 0;
  }
  pick(normalizedPosition, scene, camera, time) {
    // restore the color if there is a picked object
    if (this.pickedObject) {
      this.pickedObject.material.emissive.setHex(this.pickedObjectSavedColor);
      this.pickedObject = undefined;
    }
    // cast a ray through the frustum
    this.raycaster.setFromCamera(normalizedPosition, camera);
    // get the list of objects the ray intersected
    const intersectedObjects = this.raycaster.intersectObjects(scene.children);
    if (intersectedObjects.length) {
      // pick the first object. It's the closest one
      this.pickedObject = intersectedObjects[0].object;
      // save its color
      this.pickedObjectSavedColor = this.pickedObject.material.emissive.getHex();
      // set its emissive color to flashing red/yellow
      this.pickedObject.material.emissive.setHex((time * 8) % 2 > 1 ? 0xFFFF00 : 0xFF0000);
    }
  }
}
To use it we just need to create an instance and call it in our render loop.

const pickHelper = new PickHelper();

...

function render(time) {
  time *= 0.001;

  ...

  // 0, 0 is the center of the view in normalized coordinates.
  pickHelper.pick({x: 0, y: 0}, scene, camera, time);
In this case we will always pick where the camera is facing, which is the center of the screen, so we pass in 0 for both x and y, the center in normalized coordinates.

And with that, objects will flash when we look at them.
Typically we don’t want selection to be immediate. Instead, we require the user to keep the camera on the thing they want to select for a few moments so they don’t select something by accident.
To do that we need some kind of meter or gauge to convey that the user must keep looking, and for how long.
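The timing side of that can be sketched as a small helper that tracks how long the gaze has stayed on the same picked object and reports progress from 0 to 1. The GazeTimer name and API below are made up for illustration; they are not from the article’s code.

```javascript
// Hypothetical dwell-timer sketch: call update() every frame with the
// currently picked object (or null) and the time in seconds.
class GazeTimer {
  constructor(selectDurationSeconds) {
    this.selectDuration = selectDurationSeconds;
    this.lastObject = null;
    this.startTime = 0;
  }
  // Returns progress in [0, 1]; reaching 1 means "select this object".
  update(pickedObject, time) {
    if (pickedObject !== this.lastObject) {
      // gaze moved to a different object (or to nothing): restart the timer
      this.lastObject = pickedObject;
      this.startTime = time;
    }
    if (!pickedObject) {
      return 0;
    }
    return Math.min((time - this.startTime) / this.selectDuration, 1);
  }
}
```

The progress value is exactly what a gauge needs: feed it into whatever visual we choose, and fire the selection when it reaches 1.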
One easy way to do that is to make a 2-color texture and use a texture offset to slide the texture across a model.
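To make the idea concrete, here is the texture data and the offset math sketched in plain JavaScript. In three.js the pixel array would be handed to something like THREE.DataTexture; the helper names and color choices here are made up for illustration.

```javascript
// A 2x1 RGBA texture: left pixel is the "filled" color, right pixel the
// "empty" color. Colors are given as 0xRRGGBB numbers.
function makeGaugePixels(filled, empty) {
  return new Uint8Array([
    (filled >> 16) & 0xFF, (filled >> 8) & 0xFF, filled & 0xFF, 255,
    (empty  >> 16) & 0xFF, (empty  >> 8) & 0xFF, empty  & 0xFF, 255,
  ]);
}

// With texture.repeat.x = 0.5 only one of the two pixels covers the model at
// a time. Sliding texture.offset.x from 0.5 down to 0 moves the boundary so
// the model goes from all-empty to all-filled as progress goes 0 -> 1.
function gaugeOffset(progress) {
  return 0.5 - progress * 0.5;
}
```

Each frame we would set the gauge material’s texture offset from the dwell progress, so the fill visibly creeps across the model while the user keeps looking.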
Let’s do this by itself to see it work before we add it to the VR example.
First we make an OrthographicCamera
const left = -2;    // Use values for left
const right = 2;    // right, top and bottom
const top = 1;      // that match the default
const bottom = -1;  // canvas size.
const near = -1;
const far = 1;
const camera = new THREE.OrthographicCamera(left, right, top, bottom, near, far);
And of course update it if the canvas changes size
function render(time) {
  time *= 0.001;

  if (resizeRendererToDisplaySize(renderer)) {
    const canvas = renderer.domElement;
    const aspect = canvas.clientWidth / canvas.clientHeight;
    camera.left = -aspect;
    camera.right = aspect;
    camera.updateProjectionMatrix();
  }