Road to the Holodeck: A Conversation with Visby

In all the VR excitement, we often forget about today's technical challenges and limitations. We're talking about things like connection speed, battery life, and graphics processing -- the factors that make first-generation high-end VR rigs stationary, tethered experiences.

But one of the biggest technical hurdles is file size. For HD resolution in 360 degrees, the sheer weight of bits starts to stack up. And don't forget that total is then doubled for VR's stereoscopic experience (these challenges were outlined in a Google I/O session).
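To make the scale of the problem concrete, here is a back-of-envelope estimate of the raw (uncompressed) data rate for a stereoscopic 360 video stream. The resolution, frame rate, and bit depth below are illustrative assumptions for a 4K-per-eye feed, not figures from Visby or Google:

```python
# Back-of-envelope estimate of raw 360 video data rates.
# All input figures are illustrative assumptions.

def raw_bitrate_gbps(width, height, fps=60, bits_per_pixel=24, stereo=True):
    """Raw bits per second (in Gbps) for an uncompressed frame stream."""
    pixels = width * height
    eyes = 2 if stereo else 1  # stereoscopic VR doubles the payload
    return pixels * bits_per_pixel * fps * eyes / 1e9

# A 4K (3840 x 2160) frame per eye at 60 fps, 24-bit color:
print(round(raw_bitrate_gbps(3840, 2160), 1))  # -> 23.9 (Gbps, uncompressed)
```

Even with conventional video compression, numbers like these show why a codec purpose-built for VR capture and playback matters.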

This is the challenge that Visby is trying to solve. We got the chance to catch up with the stealth-mode company in San Francisco last week. Its core technology is a codec for VR content capture and playback that will achieve meaningful lossless compression. 

Where this really comes into play is lightfields. For those unfamiliar, these are essentially VR experiences you can walk around in. The key is photorealistic rendering of 3D objects from every possible angle, including things like accurate light reflections -- a massive data payload.

Visby achieves this by capturing multiple perspectives in a given lightfield. It then uses that data to extrapolate and simulate the remaining vantage points, thus shifting the load from storage to processing. The result is a data-efficient, yet still dimensionally accurate, lightfield. 
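Visby's actual codec is not public, but the storage-versus-compute trade-off described above can be sketched with a toy example: store views from a sparse set of capture positions and synthesize the in-between vantage points at playback time. The linear blend below is a deliberately crude stand-in for that synthesis step:

```python
# Toy illustration of trading storage for processing: keep a few captured
# views and compute intermediate vantage points on the fly. A real
# lightfield codec is far more sophisticated than this linear blend.

def interpolate_view(view_a, view_b, t):
    """Approximate the view at fraction t between two captured positions."""
    return [(1 - t) * a + t * b for a, b in zip(view_a, view_b)]

left  = [0, 50, 100]     # stored pixel samples at camera position A
right = [200, 150, 250]  # stored pixel samples at camera position B

# Synthesize the halfway vantage point instead of storing it:
mid = interpolate_view(left, right, 0.5)
print(mid)  # -> [100.0, 100.0, 175.0]
```

The payoff is that only the captured views need to be stored or transmitted; every other viewpoint is reconstructed by the player, shifting the cost from bits to compute.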

This has been one of the biggest gating factors in reaching VR's true potential. Visby co-founder Ryan Damm explains that we have fully immersive graphical VR in games, including positional tracking. And we have photorealistic VR without full immersion and tracking.

But the true promise of lightfields is the best of both worlds -- photorealistic, immersive 3D spaces you can walk around in, with positional tracking. And that's when we start to get to VR's holy grail, the fabled holodeck (though haptics remains a technical hurdle).

Thinking further into the future, Damm and BD lead Scott Hill aspire to support lightfield applications in computer vision. That would give systems like AI and autonomous vehicles a richer, more multidimensional visual input.

But in the nearer term, they believe that lightfields will follow VR's overall path, applying first to entertainment like gaming and cinematic applications. Then it will move into all the VR-ripe verticals we continue to examine -- everything from enterprise to education and design.

“All of the areas where VR is being discussed will benefit from lightfield technology,” said Damm.

Mike Boland

Michael Boland is Chief Analyst and VP of Content for BIA/Kelsey, covering online and mobile media. Mike is a frequent speaker at top industry conferences such as BIA/Kelsey events, Search Engine Strategies, ad:tech, and WHERE 2.0. He has authored in-depth reports on the changing local media landscape including online video, social networking and mobile. He contributes regularly to highly read online news sources such as Business Insider and the Huffington Post. A trusted source for reporters covering the interactive media space, his comments have appeared in major news and trade media, including the Wall Street Journal, Fortune and Forbes. Previously he was a San Francisco-based freelance writer for business and technology magazines, such as Red Herring, Business 2.0, and Mobile Magazine. Mike began his career in business analysis and journalism as a staff reporter for Forbes magazine, where he covered tech & media.