
Desktop VR has a mature ecosystem, but it tethers the user to a PC because of the high bandwidth needed to carry video from the desktop to the HMD. Standalone mobile VR, on the other hand, suffers from serious processing and battery-life limitations. Cloud VR marries the strengths of the two, and this white paper by Tirias Research highlights the current industry challenges and the solution Cloud VR offers.

The Cloud VR framework, while practical, is still in its infancy and has problems that must be solved before it can be deployed. Fortunately, the technical solutions to these problems exist today, and with effort and collaboration among industry leaders they are deployable in the near term.

So, what are the five key blockers that prevent an immersive Cloud VR experience today?

  • Low Latency Video Encode: The Cloud VR architecture leverages the full processing capability of desktop-class VR, compresses the video roughly 300:1, and streams it over the network. Because frames are rendered in response to the user's head movements, the compression engine must deliver very low end-to-end latency while preserving the fidelity of the rendered graphics (a rough bitrate estimate follows the list below). NGCodec's hardware-accelerated reality codec has ultra-low, sub-frame encode latency and solves this problem; NGCodec demonstrated the quality and latency of this encoder at CES 2018.
  • Low Latency Network: Seamless VR applications typically need a motion-to-photon latency of ~20ms. When video is rendered, compressed, and delivered over a network, network providers must keep network latency to a bare minimum; 5G infrastructure and fiber optics plus WiFi can deliver ~2ms latency, ~0.1ms jitter, and very low packet loss (see the latency-budget sketch after this list).
  • Proximity to the Edge: The cloud servers performing the heavy-duty rendering and encoding must be deployed at high-density 'edge locations' to ensure maximum coverage, e.g. at an epicenter of a metropolitan area like NYC covering up to a 200-mile radius. Because of the speed of light plus roughly 50% overhead through intervening equipment, every 60 miles adds about 1ms of round-trip latency on a fiber-optic network (the arithmetic is checked after this list).
  • Low Latency Video Decode: Adding a compression layer on the cloud side naturally requires a decompressor at the consuming headset. For a seamless experience, the HMD has to incorporate, in addition to the modem, a low-latency video decoder. A standards-compliant decoder with a low-latency decode mode should be integrated into the chipset inside the HMD, and it must be coupled with a low-latency display controller.
  • Inside-Out 6DoF Tracking: The 3DoF tracking currently used in most mobile HMDs has limitations that 6DoF tracking is expected to alleviate, and most leading HMD makers are working to build 6DoF tracking into their next-generation HMDs. Furthermore, the outside-in tracking used in current desktop VR also needs to be replaced with an inside-out architecture so that users no longer have to install external tracking infrastructure.
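
To put the 300:1 compression figure in context, here is a rough back-of-the-envelope bitrate estimate. The per-eye resolution, refresh rate, and bit depth are illustrative assumptions, not figures from the white paper:

    # Back-of-the-envelope bitrate estimate for a Cloud VR stream.
    # All parameters below are illustrative assumptions.
    width, height = 2160, 2160      # assumed per-eye resolution (pixels)
    eyes = 2
    fps = 90                        # assumed refresh rate (frames/second)
    bits_per_pixel = 24             # assumed raw RGB bit depth

    raw_bps = width * height * eyes * fps * bits_per_pixel
    compressed_bps = raw_bps / 300  # the 300:1 ratio cited above

    print(f"raw:        {raw_bps / 1e9:.1f} Gbit/s")        # ~20.2 Gbit/s
    print(f"compressed: {compressed_bps / 1e6:.0f} Mbit/s")  # ~67 Mbit/s

Even under these modest assumptions, the uncompressed stream is far beyond what any access network can carry, while the compressed stream fits comfortably on a 5G or fiber-plus-WiFi link.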
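
The latency figures quoted in the list only make sense as parts of a single motion-to-photon budget. The split below is a hypothetical illustration of how tight a ~20ms target becomes once rendering, encode, network, decode, and display are all accounted for; the per-stage values are assumptions, not measurements:

    # Hypothetical motion-to-photon budget for Cloud VR (all values assumed).
    budget_ms = 20.0
    stages_ms = {
        "tracking + pose transmission": 2.0,
        "cloud rendering":              7.0,
        "video encode (sub-frame)":     3.0,
        "network (5G / fiber + WiFi)":  2.0,
        "video decode":                 3.0,
        "display scan-out":             2.0,
    }
    total = sum(stages_ms.values())
    print(f"total: {total:.1f} ms of a {budget_ms:.0f} ms budget "
          f"({budget_ms - total:.1f} ms of headroom)")
    # total: 19.0 ms of a 20 ms budget (1.0 ms of headroom)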
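
The ~1ms round trip per 60 miles quoted for edge placement follows directly from the speed of light plus the stated 50% overhead for fiber slowdown and intermediate equipment; a quick check:

    # Round-trip propagation delay for a 60-mile edge radius.
    SPEED_OF_LIGHT_MI_PER_S = 186_000   # miles per second, in vacuum
    distance_miles = 60
    overhead = 1.5                      # the ~50% overhead cited above

    one_way_ms = distance_miles / SPEED_OF_LIGHT_MI_PER_S * 1000 * overhead
    round_trip_ms = 2 * one_way_ms
    print(f"round trip for {distance_miles} miles: {round_trip_ms:.2f} ms")
    # round trip for 60 miles: 0.97 ms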


Download Cloud VR White Paper
