What is virtual reality?
The computer science professor Fred Brooks defined virtual reality as "an experience in which the user is effectively immersed in a responsive virtual world" (Brooks 1999, online source). This implies dynamic user control of a viewpoint. To describe such an experience, we use terms such as immersion and interactivity. Immersion refers to the state in which spectators (such as viewers or gamers) pay so much attention to a particular medium, and become so involved in it, that they are no longer aware of what is happening around them (Heirman 2018). From a 3D vision technology perspective, user control of the viewpoint and a surrounding experience make virtual reality more immersive than other types of media. A virtual reality device, such as an HTC Vive or Oculus Quest, has two displays, one in front of each eye, and each eye's image is slightly different from the other's, which produces a convincing 3D view.
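The stereo effect described above comes down to a small calculation: each eye's camera sits half the interpupillary distance (IPD) to either side of the head position. The sketch below is illustrative only; the function name and the ~63 mm IPD are assumptions, not any headset's actual API.

```python
import numpy as np

# Average human interpupillary distance (~63 mm); an assumed constant here.
IPD = 0.063  # metres

def eye_positions(head_pos, right_axis, ipd=IPD):
    """Place the left/right eye cameras half an IPD to either side
    of the head position, along the head's local 'right' axis."""
    right_axis = right_axis / np.linalg.norm(right_axis)
    offset = 0.5 * ipd * right_axis
    return head_pos - offset, head_pos + offset

# Head 1.7 m above the floor, with its local 'right' axis along +x.
left_eye, right_eye = eye_positions(np.array([0.0, 1.7, 0.0]),
                                    np.array([1.0, 0.0, 0.0]))
```

Rendering the scene once from each of these two positions yields the slightly different per-eye images that the brain fuses into depth.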
With a virtual reality device, it is possible to move around in a predefined space and interact with the virtual environment. The device also lets us control our viewpoint, which makes the experience entirely one's own: the device updates the 3D scene according to the direction the user is looking. This is made possible by built-in sensors that track the user's head position and rotation and update the 3D environment accordingly. Because the latency of these updates is minimal, virtual reality is a highly immersive medium.
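To illustrate how the tracked head pose drives what the user sees, the sketch below transforms a world-space point into the user's view space from a head position and rotation. It is a minimal illustration assuming a yaw-only (left/right) rotation; real systems use a full 3D orientation.

```python
import numpy as np

def rotation_y(yaw):
    """Rotation matrix for turning the head left/right by `yaw` radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[  c, 0.0,   s],
                     [0.0, 1.0, 0.0],
                     [ -s, 0.0,   c]])

def world_to_view(point, head_pos, yaw):
    """Express a world-space point in the head's coordinate frame:
    subtract the tracked position, then undo the tracked rotation."""
    R = rotation_y(yaw)
    return R.T @ (np.asarray(point) - np.asarray(head_pos))

# The same world point lands in a different place in view space once
# the head turns 90 degrees, which is why the scene appears to rotate.
p = world_to_view([1.0, 0.0, 0.0], head_pos=[0.0, 0.0, 0.0], yaw=np.pi / 2)
```

Every frame, the renderer applies this kind of transform (with the latest tracked pose) before drawing, which is what keeps the 3D world locked in place as the head moves.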
Virtual Reality hardware
Head tracking
We track the user's head in order to update the displays according to the user's viewpoint. It is important that this happens very quickly to maintain immersion. Most VR devices on the market keep this tracking latency at the level of single-digit milliseconds, so it is not noticeable in most cases. We need to track the user's viewpoint in 3D space, which means tracking both head rotation and head position. The built-in tracking system tracks the user's head rotation using an accelerometer, a gyroscope, or both; the same methods are used in most smartphones to support features such as autorotation when you flip your phone. For position tracking, external optical tracking devices are normally used. These could be several ceiling-mounted infrared cameras pointing at the user, which calculate the user's position in 3D space and feed the data to the machine, which then updates the displays accordingly. It is important to mention that not all devices come with position tracking; some have rotation tracking only. This means that you can still look around, but you are stuck where you are: you cannot move around in VR with your body to get closer to certain objects or observe them from a different angle, as you can in real life. You can still navigate the 3D environment with a controller, a touchpad, or a joystick, but it will not be as natural, and without position tracking you will not have a good sense of scale in VR. Integrating position tracking and updating the graphics in real time can consume a lot of computational power.
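One common way to combine the two rotation sensors mentioned above is a complementary filter: integrate the fast-but-drifting gyroscope, and gently pull the estimate toward the accelerometer's gravity-based tilt reading. The single-axis sketch below is a generic illustration of the idea, not the filter any particular headset actually uses.

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """One fusion step for the head's pitch angle (radians).
    gyro_rate: angular velocity from the gyroscope (rad/s).
    accel_pitch: tilt estimated from the accelerometer's gravity vector.
    alpha close to 1 trusts the gyro short-term, while the small
    (1 - alpha) term slowly corrects the gyro's drift."""
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

# A stationary headset whose estimate starts 0.5 rad off: with the gyro
# reading zero, the accelerometer term pulls the estimate back toward 0.
pitch = 0.5
for _ in range(300):
    pitch = complementary_filter(pitch, gyro_rate=0.0,
                                 accel_pitch=0.0, dt=0.005)
```

The gyroscope alone would accumulate error over time; the accelerometer alone is noisy but knows which way gravity points. Blending them gives a rotation estimate that is both responsive and stable.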
Controllers
In VR, you interact with objects using VR controllers. To pick up an object in 3D space, the system needs to know where your hands are, so it must be able to track the position of the VR controllers. In the real world, if I want to check whether I still have tea in my mug, I have to tilt the mug slightly. In VR I should be able to do the same, which requires rotation tracking of the controller too.
The controllers have a built-in system to track rotation, and external optical sensors track their position. Besides interacting with objects and gesturing to friends, the controllers are often used for something we do not normally do with our hands in real life: navigation. We can use the controllers to walk around the 3D world without moving the actual position of our body. This enables users to explore a virtual world that is bigger than the physical space they are actually in.
To look and move around in a video game, we often use a touchpad, mouse, keyboard, or joystick. A similar method has been adapted for navigating in VR: in the middle of a VR controller there is a small joystick which you can use to move through the 3D environment. Moving around in VR with the joystick can be called virtual navigation, as opposed to actually moving around in the physical world.
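Virtual navigation with the joystick boils down to turning the stick's two axes into a displacement relative to the direction the user is facing. The sketch below assumes a flat ground plane and hypothetical names; it is one simple scheme, not how every VR runtime implements it.

```python
import math

def navigate(pos, yaw, stick_x, stick_y, speed, dt):
    """Move `pos` (x, z on the ground plane) by one frame's worth of
    joystick input. stick_y pushes along the facing direction (yaw),
    stick_x strafes sideways; speed is in metres per second."""
    fwd_x, fwd_z = math.sin(yaw), math.cos(yaw)       # facing direction
    right_x, right_z = math.cos(yaw), -math.sin(yaw)  # 90 degrees to the right
    dx = (stick_y * fwd_x + stick_x * right_x) * speed * dt
    dz = (stick_y * fwd_z + stick_x * right_z) * speed * dt
    return (pos[0] + dx, pos[1] + dz)

# Pushing the stick fully forward for half a second at 2 m/s while
# facing yaw = 0 moves the user 1 m along +z.
new_pos = navigate((0.0, 0.0), yaw=0.0, stick_x=0.0, stick_y=1.0,
                   speed=2.0, dt=0.5)
```

Because the displacement is computed relative to `yaw`, "forward" on the stick always means "the way I am looking", which is what makes this feel natural despite the body not moving.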
Furthermore, most VR controllers nowadays can provide haptic feedback through vibration. What does this mean? In real life, when I put a mug on the table, I feel a small jolt the moment the mug touches the table. The controller can reproduce this in VR. Because the controller's position is tracked, the system knows where the mug is moving in 3D space, and it knows the 3D position of the table. When the two overlap in 3D space, at the moment the mug touches the table, the system detects the contact and makes the controller generate a small vibration that matches what I would expect to feel in real life. This gives a very strong sense of immersion.
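The contact detection behind this can be sketched as a simple overlap test between the tracked objects' bounding boxes, firing a single haptic pulse on the frame the contact begins. The axis-aligned boxes and function names below are assumptions for illustration; real engines use richer collision shapes.

```python
def aabb_overlap(a_min, a_max, b_min, b_max):
    """Two axis-aligned boxes overlap iff they overlap on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))

def haptic_step(mug_min, mug_max, table_min, table_max, was_touching):
    """Return (touching, fire_pulse). The pulse fires only on the frame
    contact starts, so the controller buzzes once when the mug lands."""
    touching = aabb_overlap(mug_min, mug_max, table_min, table_max)
    return touching, (touching and not was_touching)

table = ((0.0, 0.0, 0.0), (1.0, 0.8, 1.0))   # table top at y = 0.8
touching = False
events = []
# Lower the mug toward the table frame by frame and record pulses.
for y in (1.0, 0.9, 0.8, 0.75):
    mug = ((0.4, y, 0.4), (0.5, y + 0.1, 0.5))
    touching, pulse = haptic_step(mug[0], mug[1], table[0], table[1],
                                  touching)
    events.append(pulse)
```

Only one `True` appears in `events`: the frame where the mug first reaches the table top. That single event is what the system would turn into a short vibration command to the controller.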