Background information about immersive technology


Immersive technology refers to all forms of perceptual (input to the user) and interactive (output from the user) hardware technologies that blur the line between the physical world and the simulated or digital world. To create a realistic environment, development and recording tools are required that blend the various technologies together for use within different types of immersive applications and media. Since VR is about driving the perceptual system, the place to start is with the senses: vision, audio, haptics, the vestibular sense, smell and taste. In our opinion, VR isn't going to implement the last two in the foreseeable future, but vision, audio, the vestibular sense and haptics work to varying extents today, and there are potential paths forward for all of them.

In addition to getting virtual information into the perceptual system, VR also needs machine perception: the ability to sense, reconstruct, and understand the real world. That will let us move around safely and bring real-world objects like desks, keyboards and furniture into the virtual world, potentially reskinning them. It would be even more valuable to bring real humans into the virtual world. That would enable true telepresence, where you could meet, work, play games and do virtually anything with people anywhere in the world.


VRmed Ltd. is focused on solutions based on new, cutting-edge underlying technologies that create a fully immersive experience for the various verticals of this new market. We will support you in getting the newest immersive technologies in place for your projects.

Feel free to drop us an e-mail or send us a short request.

Start with a workshop. We will explain how you can use immersive technologies for your business.

In our role as advisors to companies on the strategic benefits of immersive experiences, we often find that there are gaps in managers' knowledge and understanding of the technology they're looking at implementing. They may have the theory of what it could do but have only ever seen it in a few slides of a presentation; they don't have first-hand experience of putting on a VR headset, for example. The danger here is that people start making assumptions, ones that might be completely wrong, which could mean wasted time and money. We need to close that gap by putting ourselves and our clients into those experiences first-hand and actually trying out these different technologies. It's about achieving a better, more balanced understanding of the exciting opportunities available.

For more information about the future market of immersive technology, please click here.

VISION

HMD - Head-Mounted Displays

For vision, we need to increase the field of view to the full human range, increase resolution and clarity to the retinal limit, increase dynamic range to real-world levels, and implement proper depth of focus. The visuals come first, as this is the most critical area for near-term improvement. Current high-end headsets like the Rift and Vive, with their roughly 100-degree field of view and 1080×1200 display panels, equate to around 15 pixels per degree. Humans are capable of seeing at least a 220-degree field of view at around 120 pixels per degree (assuming 20/20 vision), says Michael Abrash, Chief Scientist at Oculus Inc., and display and optics technologies are far from achieving this (forget 4K or 8K; this is beyond 24K per eye). In five years, he predicts a doubling of the current pixels per degree to 30, with a widening of the field of view to 140 degrees, using a resolution of around 4000×4000 per eye. In addition, the fixed depth of focus of current headsets should become variable. Widening the field of view beyond 100 degrees and achieving variable focus both require new advances in displays and optics, but he believes this will be solved within five years.

Rendering 4000×4000 per eye at 90 Hz is an order of magnitude more demanding than the current spec, so for this to be achievable in the next five years, foveated rendering is essential, Abrash says. This is a technique where only the tiny portion of the image that lands on the fovea (the only part of the retina that can see significant detail) is rendered at full quality, with the rest blending to a much lower fidelity, massively reducing rendering requirements. Estimating the position of the fovea requires "virtually perfect" eye tracking, which Abrash describes as "not a solved problem at all" due to the variability of pupils, eyelids, and the complexities of building a system that works across the full range of eye motion for a broad user base. But as it is so critical, Abrash believes it will be tackled within five years, though he admits it has the highest risk factor among his predictions.

Most significantly, at the high end, VR headsets will become much cheaper, smaller and wireless. We have heard many times that existing wireless solutions are simply not up to the task of meeting even the current bandwidth and latency requirements of VR. We believe it can be achieved in five years, assuming foveated rendering is part of the equation.
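As a rough sanity check on those figures, pixels per degree is just panel resolution divided by field of view. A minimal Python sketch using the approximate numbers quoted above (real headsets' optics and binocular overlap shift these somewhat):

```python
def pixels_per_degree(panel_pixels: int, fov_degrees: float) -> float:
    """Rough angular resolution, ignoring lens distortion and binocular overlap."""
    return panel_pixels / fov_degrees

# Current generation: 1080 horizontal pixels over a ~100-degree field of view
print(pixels_per_degree(1080, 100))   # ~10.8; the ~15 quoted above reflects optics and overlap
# Retinal limit: ~120 px/deg across ~220 degrees implies an enormous panel
print(120 * 220)                      # 26400 horizontal pixels -- "beyond 24K per eye"
# Five-year prediction: 4000x4000 panels over a 140-degree field of view
print(pixels_per_degree(4000, 140))   # ~28.6, close to the predicted 30 px/deg
```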

Eye Tracking

One of the most noticeable problems with virtual reality right now is focus. While so many of the virtual worlds we've explored have been rich in detail and character, current VR headsets can't account for the way the human eye changes shape to focus at different distances; the result is usually blur in VR. As resolution increases in headsets, blurring will become more evident. But the problem is fixable, even if it's not perfected by 2021. One answer is a technique called foveated rendering, which mimics human vision by rendering at full detail only the small part of the image the eye is looking at. In VR, this would mean rendering full-quality pixels only in that spot, perhaps a tenth of what is currently rendered. We predict that despite the complexities of eye tracking and wider-FoV displays and optics, headsets will be lighter in five years, with better weight distribution.

Eye tracking data is collected using either a remote or head-mounted 'eye tracker' connected to a computer. While there are many different types of non-intrusive eye trackers, they generally include two common components: a light source and a camera. The light source (usually infrared) is directed toward the eye. The camera tracks the reflection of the light source along with visible ocular features such as the pupil. This data is used to extrapolate the rotation of the eye and ultimately the direction of gaze. Additional information such as blink frequency and changes in pupil diameter is also detected by the eye tracker.
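To make that pipeline concrete, here is a deliberately simplified sketch of the pupil-plus-reflection idea described above. Real trackers fit a per-user mapping during calibration and use 3D eye models; the fixed gain below is a made-up illustrative value:

```python
def estimate_gaze(pupil_center, glint_center, gain_deg_per_px=(9.0, 9.0)):
    """Toy gaze estimate from the pupil-glint offset in camera pixel coordinates.

    gain_deg_per_px is illustrative only; real systems learn this mapping
    per user during a calibration procedure rather than using a constant.
    """
    dx = pupil_center[0] - glint_center[0]
    dy = pupil_center[1] - glint_center[1]
    yaw_deg = dx * gain_deg_per_px[0]     # horizontal eye rotation
    pitch_deg = -dy * gain_deg_per_px[1]  # negated: image y grows downward
    return yaw_deg, pitch_deg

# Pupil slightly right of, and above, the infrared glint in the camera image
print(estimate_gaze(pupil_center=(101.2, 99.6), glint_center=(100.0, 100.0)))
# -> (10.8, 3.6): gaze roughly 11 degrees right and 4 degrees up
```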

HEARING

Spatial Audio

Audio needs proper spatialization (your sense of where sound is coming from), full spatial propagation (how sound moves around a virtual space), and synthesis (generating sounds from models of physical motions and collisions).

Personalised head-related transfer functions (HRTFs) will enhance the realism of positional audio. Current HMD 3D audio solutions generate a real-time HRTF based on head tracking, but this is generic across all users. HRTFs vary by individual due to the size of the torso and head and the shape of the ears; creating personalised HRTFs should significantly improve the audio experience for everyone, Abrash believes. While he didn't go into detail about how this would be achieved (it typically requires an anechoic chamber), he suggested it could be "quick and easy" to generate one in your own home within the next five years. In addition, he expects advances in the modelling of reflection, diffraction and interference patterns to improve sound propagation to a more realistic level. Accurate audio is arguably even more complex than visual improvement due to the discernible effect of the speed of sound; despite these impressive advances, real-time virtual sound propagation will likely remain simplified well beyond five years because it is so computationally expensive, Abrash says.
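In practice, positional audio is typically produced by convolving a mono source with the left- and right-ear head-related impulse responses (the time-domain form of an HRTF) for the source's direction. A minimal numpy sketch, assuming you already have measured or personalised impulse responses; the data below is a random placeholder:

```python
import numpy as np

def spatialise(mono_signal, hrir_left, hrir_right):
    """Convolve a mono source with left/right head-related impulse responses.

    hrir_left and hrir_right would come from a measured (or personalised)
    HRTF set for the desired source direction; the data below is placeholder.
    """
    left = np.convolve(mono_signal, hrir_left)
    right = np.convolve(mono_signal, hrir_right)
    return np.stack([left, right], axis=-1)  # stereo buffer for headphones

# Placeholder input: a click, plus dummy 128-tap impulse responses
click = np.zeros(256)
click[0] = 1.0
rng = np.random.default_rng(0)
stereo = spatialise(click, rng.standard_normal(128) * 0.01,
                    rng.standard_normal(128) * 0.01)
print(stereo.shape)  # (383, 2): signal length + filter length - 1
```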


How Spatial Audio works

Spatial Audio is a powerful tool that you can use to control user attention. You can present sounds from any direction to draw a listener's attention and give them cues on where to look next. But most importantly, Spatial Audio is essential for providing a believable VR experience. When VR users detect a mismatch between their senses, the illusion of being in another world breaks down.


Interaural time differences: When a sound wave hits a person's head, it takes a different amount of time to reach the listener's left and right ears. This time difference varies depending on where the sound source is in relation to the listener's head. The farther to the left or right side of the head the object is located, the larger this time difference is.
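A standard closed-form approximation of this effect is Woodworth's spherical-head model; the head radius and azimuth convention below are typical textbook values, not figures from this article:

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of ITD, in seconds.

    azimuth_deg: source angle from straight ahead (0 deg), positive to the right.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

print(interaural_time_difference(0))   # 0.0 -- source dead ahead, no delay
print(interaural_time_difference(90))  # ~0.00066 s -- the maximum, source fully to one side
```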

Closely related to the effect of occlusion is a sound object's directivity pattern. A directivity pattern is a shape or pattern that describes the way in which sound emanates from a source in different directions. For example, if you walk in a circle around someone playing a guitar, it sounds much louder from the front (where the strings and sound hole are) than from behind. When you are behind, the body of the guitar and the person holding it occlude the sound coming from the strings.
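A first-order gain model is enough to capture the guitar example; the front_bias value below is illustrative, not measured from a real instrument:

```python
import math

def directivity_gain(angle_deg, front_bias=0.7):
    """First-order directivity: gain as a function of angle from the source's front.

    front_bias=0 gives an omnidirectional source and 1 a full cardioid
    (silent directly behind); 0.7 is an arbitrary illustrative value.
    """
    theta = math.radians(angle_deg)
    return (1.0 - front_bias) + front_bias * (1.0 + math.cos(theta)) / 2.0

print(directivity_gain(0))    # 1.0 -- loudest in front of the guitar
print(directivity_gain(180))  # 0.3 -- quieter behind the body and player
```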

HAPTICS & CONTROLLERS (UI)

Controller - the mouse of VR

Haptics is particularly challenging. The haptics that would matter most would be for the hands, which are our primary means of interaction with the world and which rely on haptics for their feedback loop. All we can do right now is produce crude vibrations and forms of resistance. Someday, perhaps, there will be some sort of glove or exoskeleton that lets us interact naturally with virtual objects, but that is a true research problem.

Hand-held motion devices like Oculus Touch could remain the default interaction technology "40 years from now". Ergonomics, functionality and accuracy will no doubt improve over that period, but this style of controller could well become "the mouse of VR." Interestingly, we suggest hand tracking (without the use of any controller or gloves) will become standard within five years, accurate enough to represent precise hand movements in VR, and particularly useful for expressive avatars and for simple interactions that don't require holding a Touch-like controller, such as web browsing or launching a movie. We think there are parallels with smartphones compared to consoles and PCs here; touchscreens are great for casual interaction, but nothing beats physical buttons for typing or intense gaming. It makes sense that no matter how good hand tracking becomes, you'll still want to be holding something with physical resistance in many situations.

VESTIBULAR

Galvanic Vestibular Stimulation (GVS)

Virtual reality, as it exists now, works because humans trust their eyes above all else. And in a VR headset, the possibilities of what you can see are pretty much infinite. What you can feel, on the other hand, is not: you'll pretty much feel like you're sitting on your couch. Forget zooming through space, or rocking on a boat in stormy seas. But what if virtual reality, as it might exist in the future, could also fool the inner ear that keeps track of motion? Motion sickness happens because our vestibular system, a complicated sensory system in our inner ear that provides balance and spatial orientation, is out of sync with our vision. When we walk a character through a room in a VR game without walking ourselves, a mismatch occurs because we don't feel that motion represented in 3D space. Our brain instantly notices the discrepancy between what we're seeing and what we're feeling.

That's where galvanic vestibular stimulation comes in: a fancy name for a simple procedure. The vestibular system keeps you situated in space by relying on the subtle movements of fluid and tiny hair cells in your inner ears. Put an electrode behind each ear, hook up a 9-volt battery, and you can stimulate the nerves that run from your inner ears to the brain. Zap with GVS and your head suddenly feels like it's rolling to the right; reverse the electrodes and you feel your head roll to the left.

Apparently the Mayo Clinic has been hunkered down for 11 years trying to solve this very problem, and today they're announcing the commercial availability of the technology they've been developing. It's a bit difficult to wrap your head around, but it's incredibly exciting.

vMocion has secured the exclusive global license to use Mayo Clinic's GVS technology in commercial products, and that's where things get real. The platform they've developed can be integrated into existing operating systems, devices like VR or AR glasses, smartphones, and TVs. vMocion's platform can use any existing game to create that sense of motion. Game developers technically wouldn't need to add any additional code to their games, provided the platform they're developing for supports vMocion's technology. It would automatically sync movement seen onscreen to four stimulation points, thus delivering that believable sensation of movement to the inner ear.
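vMocion has not published implementation details, so the following is purely speculative: a sketch of how onscreen rotation rates might be mapped to currents at four stimulation points. All names, the electrode layout, and the linear mapping are our assumptions, not vMocion's API:

```python
def gvs_currents(roll_rate, pitch_rate, max_current_ma=1.0):
    """Speculative mapping from virtual rotation rates to electrode currents.

    Assumes four stimulation points (one behind each ear, one front, one back,
    as a guess at the 'four stimulation points' in the text); the sign of the
    current sets which way the perceived tilt goes. Values are illustrative only.
    """
    clamp = lambda x: max(-max_current_ma, min(max_current_ma, x))
    return {
        "left_ear":  clamp(-roll_rate * max_current_ma),
        "right_ear": clamp(+roll_rate * max_current_ma),
        "front":     clamp(+pitch_rate * max_current_ma),
        "back":      clamp(-pitch_rate * max_current_ma),
    }

# Virtual camera rolling right at half of the maximum rate
print(gvs_currents(roll_rate=0.5, pitch_rate=0.0))
```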

TASTE

under construction

SMELL

under construction