Virtual reality is the next hot technology. The Oculus Rift, a virtual reality headset that lets you see 3D worlds, is almost ready for consumers, and developers are racing to come up with novel uses for it. Most of the ideas lean toward entertainment: video games have been fertile ground for experimentation in virtual reality, and adapting existing games to VR is a great first step toward understanding this technology and what it can offer. But the technology could theoretically offer a lot more.
VR promises the ability to put humans into worlds and situations that would otherwise be impossible or too dangerous to experience. It does this by completely covering the user’s vision with a video headset. This headset tracks their head movement, so that the user can look around naturally, adding to the immersion. Noise-canceling headphones are often used to immerse the player’s hearing as well. The only thing the user can see or hear is what the designer wants them to see and hear. This can be incredibly powerful in the hands of a creative and talented designer.
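To make the head-tracking idea concrete, here is a minimal sketch (in Python, with made-up names; real headsets expose this through their own SDKs) of how a tracked head orientation becomes a camera look direction each frame:

```python
import math

def look_direction(yaw_deg, pitch_deg):
    """Convert tracked head yaw and pitch (in degrees) into a unit
    look-direction vector, so the virtual camera turns as the head does."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),  # x: left/right
            math.sin(pitch),                  # y: up/down
            math.cos(pitch) * math.cos(yaw))  # z: forward

# Head level and facing forward: the camera looks straight down +z.
x, y, z = look_direction(0, 0)
```

The immersion the article describes comes from running this kind of update at the headset's full refresh rate, with as little delay between head motion and camera motion as possible.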
This is different from Augmented Reality in an important detail. While Virtual Reality completely blocks out the outside world, Augmented Reality creates an overlay on the existing world. This could be used to keep meta-information about the environment available, such as directions to a location, or weather information. Recently Microsoft has announced the HoloLens, which will allow players to turn any room into a gaming space. At E3 2015, Microsoft demoed their HoloLens with Minecraft, showing how players could play in multiplayer worlds, and even change aspects of the world with voice commands.
But AR tech is still in its infancy. The HoloLens is a huge leap forward, but it’s clear that most developer resources are currently going to VR, so this article will focus mostly on VR. AR isn’t going away, though, and it’s worth asking what both technologies could look like in 5-10 years.
In this article, I want to look at how current experiences are being adapted to VR, how successful those experiences are, and specific deficiencies that designers will need to improve before the technology can reach its full potential.
Augmented Reality Theater
While working on my Master’s degree, my team (Corrie Colombero, Pui Mo, Monet Rouse, and I) looked at the world of virtual and augmented reality and came up with a design that we thought might fill some gaps in the current experience. The result was the Augmented Reality Theater, or the ART.
The ART was designed for 5-10 years into the future. It used augmented reality glasses and tactile experiences to enhance the movie theater experience.
This design was eventually presented at SIGGRAPH 2014 and we learned a lot about the world of VR and AR. Working on this project gave me a sense of how advanced the technology in these worlds was, but from the perspective of a designer, it also showed me how much work needed to be done to improve these experiences.
The point of the ART was to create a novel movie theater experience. However, as we learned more about the world of AR and VR, it became clear that there were some obvious deficiencies in the current experience.
Examining Virtual Reality
Because VR offers to replace at least two of the user’s senses completely, we can judge how successful a VR experience is by how well it immerses the user in its new world. A user’s vision and hearing may be fully immersed, but as soon as they attempt to stand up and move around, speak, or touch something in the real world that doesn’t match what they are seeing and hearing, the illusion is broken.
Designers have been attempting to solve these problems to create full-body immersion: letting the user not just look around the world, but have their full body movements replicated in the virtual world, feel and manipulate the objects in that world, and have its characters react to their speech and movements.
The ideal experience looks something like the Holodeck from Star Trek:
In the show, simulations are so advanced and realistic as to be dangerous, even deadly. We’re a long way from that experience, assuming it’s even possible, but there are a few problems with current VR that, if solved, would greatly enhance the user experience. In adapting existing and familiar experiences into VR, the three biggest problems I see are gestural controls, free movement, and tactile experiences.
Gesture controls are a significant challenge in all computing right now. Current control mechanisms like the mouse and keyboard have been in use since the dawn of desktop computing, but when the user isn’t chained to a screen, these control mechanisms don’t make sense any more. While it is possible to control VR experiences with a mouse and keyboard, for a fully immersive experience, designers are looking for new ways to give the user full control.
Let’s take a common example that many VR experiences are trying to adapt: the First Person Shooter (FPS). FPS games allow the player to use a variety of guns to fight their way through a scenario, shooting enemies throughout the game. The genre is so named because the player sees the environment from a first-person perspective, with only the gun in their field of view.
PrioVR is a technology that pairs an Oculus Rift headset with a full-body suit to track the player’s movements. The suit translates those movements into movements of their avatar in the game.
Their video demonstration shows the player picking up, aiming and firing weapons, leaning around corners, and even performing melee attacks, both punches and kicks. However, looking at the video, it doesn’t look entirely accurate or natural. And a player using this system in their home would need a lot of space to perform the movements that the player in the video is performing. With their vision completely covered, an errant movement could easily destroy equipment in the home, or injure the player.
Another control device called Sixense uses a modular system that involves handheld controllers with additional tracking points for the limbs and torso. These additional points appear to be optional. The handheld controllers track very precise movement of the hands, and their video suggests that players will be able to naturally pick up objects in the game world.
This controller appears to track gestural movements very precisely, and helps translate those movements into in-game actions. One of the demonstrations involves the player holding lightsabers and deflecting laser blasts from a drone.
This video also shows the Sixense’s attempt to solve the problem of using the device in a smaller space. The player can set up sensors around their play area, and if they get too close to the edge, the game world will show them a red laser grid, warning them that they are getting too close to the edge of the play area. I see this as a good compromise between the limited space that most home players would have to use this system in, and giving players the confidence that they would need to play this type of game.
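A boundary warning like the one in Sixense’s demo can be reduced to a simple distance check. This is a hypothetical sketch, not their actual implementation: track the player’s position on the floor, and fade in the warning grid whenever they come within some margin of the play-area edge.

```python
def near_boundary(pos, play_area, margin=0.5):
    """Return True when the player is within `margin` metres of the edge
    of a rectangular play area, signalling the game to show its grid."""
    x, z = pos
    min_x, min_z, max_x, max_z = play_area
    distance_to_edge = min(x - min_x, max_x - x, z - min_z, max_z - z)
    return distance_to_edge < margin

room = (0.0, 0.0, 3.0, 3.0)      # a 3 m x 3 m play area
near_boundary((1.5, 1.5), room)  # centre of the room: no warning
near_boundary((2.8, 1.5), room)  # 0.2 m from a wall: show the grid
```

The appeal of this design is that the warning only appears when needed, so the illusion stays intact for as long as the player stays safely inside the tracked space.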
Regardless of whether it works perfectly at this point, it is clear that we have the technology that will be required to track player motions and translate them into an experience. While I wouldn’t call it a solved problem, there is definitely thinking being done, and with more development, this part of the experience will be much improved.
A much bigger problem is free movement.
When a virtual world immerses a user’s sight and hearing and tracks their head movements precisely enough that they can look around naturally, it’s not surprising that the user expects to be able to walk around in it and interact with objects more fully. However, that’s not the experience VR normally offers.
The reasons for this are pretty simple. The virtual world doesn’t need to, and shouldn’t, translate directly to the real world. The entire point of VR is to put users into a world they couldn’t normally visit. Recreating a player’s living room while they sit in their living room is probably not an experience most people would be excited about.
Several companies are attempting to solve this problem in various ways. Cyberith and Virtuix are both developing devices that hold the user in place, while allowing their feet free movement through the use of a low-friction surface. This allows the user to use their legs freely, while not allowing them to move around and run into walls or equipment nearby.
When combined with a VR headset, along with some kind of controller (in their promotional material, both companies use a gun-shaped controller for FPS games), these technologies could successfully simulate free movement in a virtual world.
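Under the hood, a treadmill of this kind has a simple job, sketched here with illustrative names (neither company publishes its API): measure how fast the feet slide in place, then move the avatar at that speed in whatever direction the player’s body is facing.

```python
import math

def avatar_velocity(stride_speed, heading_deg):
    """Map in-place walking speed (m/s, measured from the sliding feet)
    onto avatar motion in the direction the player is facing."""
    heading = math.radians(heading_deg)
    return (stride_speed * math.sin(heading),  # x
            stride_speed * math.cos(heading))  # z

# Walking at 1.4 m/s while facing 0 degrees moves the avatar straight
# ahead, while the player's real-world position never changes.
vx, vz = avatar_velocity(1.4, 0)
```

This decoupling is the whole trick: the player's legs supply natural input, but only the avatar actually travels, so walls and furniture are never a concern.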
However, these systems both require a lot of extra equipment and space. It’s not clear if I could strap myself into one of these systems, or if I’d need assistance.
And any tech this new is sure to be expensive. The Cyberith system is available for pre-order for $1249 as of this writing. For a new laptop, that’s a great deal, but for a gaming peripheral, it’s quite expensive. The Virtuix Omni is available for pre-order for $699 as of this writing, which is much cheaper comparably, but still incredibly expensive.
Both of these systems also include custom-sized components, like harnesses, and the Virtuix Omni requires special shoes, which means differently sized people would have to purchase extra equipment in order to share the system.
But these are not entirely out of the reach of someone who has the means to purchase a gaming PC and other entertainment devices. And they both appear to work with a host of popular games, and there’s no reason to think that future games would ignore their technologies as they become more common.
Another company called Zero Latency is taking a different approach. Rather than hold the player in place, they are simply giving them a much larger area to roam in.
This company puts players into a large warehouse space, and has created a custom game to be played with VR headsets and gun controllers. Patrons can play 6 player co-op to fight off zombie hordes with completely free movement.
Giving players open space to move around in adds greatly to the immersion, but creates some safety concerns. The company claims that they have in-game safety features that will keep you from walking into walls, and offer an hour of time in the game for $88 per person.
However, this experience only exists in one place (Melbourne, Australia), and there is only one experience to be had. It’s hard to fault them for either of these issues though, as the technology is so new, and their solution to some of the problems I’ve considered is quite novel.
While other companies are trying to make these experiences smaller and more compact, Zero Latency has recognized that one of the big advantages of the VR experience is its scale. Rather than hold people back, they’ve given them more space to roam. It will be interesting to see if they can expand to other locations, and other experiences. As it stands now, it looks like they’ve got a really great start.
But all of these companies have only partially solved the last problem I want to talk about: tactile experiences.
As humans move through the world, all of their senses are constantly engaged. They can see the world around them, hear the sounds of the birds chirping and people talking, and they can feel the breeze blowing past them, a drop of rain on the top of their head, a hand on their shoulder, the transition between the sidewalk and a dirt path under their feet as they walk.
Sight and sound are powerful experiences, but recreating the sense of touch in a virtual world is a very complex problem. Our entire bodies are capable of pulling in information through the skin. Temperature, moisture and pressure are all types of information that we can understand without seeing or hearing anything.
This has largely been ignored in the virtual world. I don’t believe it’s been ignored because developers and designers feel it’s unimportant, but more likely because it’s a difficult problem without a clear solution. Some gaming systems we’ve already mentioned give players a gun-shaped controller, which is perfectly usable during FPS games.
This solution works great for FPS games, and since many companies are using these games as an introduction for their technology, it’s a great first step to enhance immersion, and a quick win.
But as VR experiences expand, finding a universal solution for tactile experiences will become more and more challenging. Every physical prop added to replicate something in the virtual world reduces the flexibility of that scenario.
Imagine a virtual car prototype. A designer shows the user the car they are designing via a VR headset to get reactions. The user reaches out to touch the steering wheel but there’s nothing there. So the designer places an analog for the steering wheel in place to increase engagement in the prototype. The user now has a steering wheel to touch, but now reaches out to touch the radio, which isn’t there, and the turn signal knob, and the door release.
If the designer continues adding physical pieces to this experience, eventually they’ve just built a physical model, which is what they were trying to avoid by building the virtual prototype. But without a tactile experience, they lose one dimension of the experience they’re testing.
When researching the Augmented Reality Theater discussed above, my team discovered all sorts of interesting tactile technology in development, such as REVEL and Aireal from Disney Research. REVEL is a wearable technology that changes the way users experience surfaces. For instance, if the user reached out to touch a smooth surface while the VR experience was showing a stone surface, this technology could give the user the sensation of touching stone.
Aireal uses small puffs of air to simulate impacts on the user. Their demonstration uses soccer balls bouncing off the user’s hands, but any kind of impact could theoretically be simulated.
We don’t live in a Star Trek world where we can use force fields and other high tech tricks to simulate objects. So the most successful experiences that involve tactile experiences are carefully designed to avoid the user having to touch anything.
This is one of the reasons that Birdly is so successful.
Birdly is a full body VR experience that allows the user to experience flying. The system has the user lie face down in a rig that allows them to flap their arms like wings. They can also lean left, right, forward and back in order to control the experience. They are strapped into the system and it requires outside assistance to get into and out of. In addition, a fan is placed near their head to simulate the wind blowing against them. I got to see Birdly at SIGGRAPH 2014, and while I didn’t get a chance to experience it myself, everyone I talked to thought it was very successful.
To me, its success is based largely on embracing the constraints of the medium. Rather than trying to fit an existing experience into the VR headset, the creators of Birdly created their own experience that matched what they could accomplish with the headset, and worked from there.
I see Birdly as the most successful VR experience that currently exists. Rather than try to recreate reality, it creates its own reality. In doing so, it only has to follow the rules it creates for itself. This allows for a much richer experience. The user isn’t spending time wondering why they can’t reach out and touch something, they are totally immersed in the experience.
Virtual Reality is only going to become bigger and better over the next 5-10 years as Augmented Reality and other related technologies develop alongside it. The thinking on these topics will only expand. However, I’d like to see less thinking about how the technology works, and more about which gestures humans will find comfortable and how the experience will actually feel in use.
One of the biggest issues I see with VR right now is that most of the effort is going into improving the technology itself, and not enough into understanding the user experience. I come from a design background, and I think VR tech is at the point where the headset and audio are good enough for us to start thinking about how humans actually want to use it. Most of the experiences I’ve discussed in this article are in their infancy. They’re demonstrations that exist to show users the possibilities of the technology.
Like any new medium, VR begins by adapting existing material. When film first arrived, movies looked much like plays, with a still shot viewing a scene that actors performed in. When television arrived, it borrowed many of its programming ideas from radio and film. So it’s no surprise that VR experiences are adapting video games as the technology becomes viable.
But an experience like Birdly gives us a peek into what is possible with a VR system. If we want this tech to be successful, we have to start embracing the constraints of the medium and come up with new experiences, rather than simply adapt existing experiences into this new technology.
I look forward to the future.