High Fidelity


We believe that both the hardware and the internet infrastructure are now available to give people around the world access to an interconnected Metaverse that will offer a broad range of capabilities for creativity, education, exploration, and play. And by using all of our computers together in an open shared network, we can simulate this space at a far larger scale than would be possible by any single company or centrally hosted system.

By using a range of new hardware devices like the Oculus Rift, Samsung Gear VR, Leap Motion, PrioVR, Sixense, and depth cameras, the experience of exploring these worlds can be incredibly immersive, and the interaction with others lifelike and emotional.

Our ongoing development and R&D work focuses on several areas:

Low-latency sensor-based interaction between avatars

We use inexpensive webcams and motion controllers to capture gaze, facial expressions, and body language, which are then streamed at low latency along with 3D positional audio to establish lifelike presence. We also use head-mounted displays like the Oculus Rift for full immersion, as well as hand and full-body motion controllers.
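To illustrate why this kind of streaming can stay low-latency, here is a minimal sketch of packing one avatar sensor frame (gaze direction plus facial-expression weights) into a compact binary packet. The field layout, names, and quantization choices are hypothetical illustrations, not High Fidelity's actual wire protocol:

```python
import struct

def pack_avatar_frame(seq, gaze, blendshapes):
    """Pack one avatar sensor frame into a small binary packet.

    seq         -- 16-bit sequence number for detecting packet loss
    gaze        -- (x, y, z) gaze direction from eye/face tracking
    blendshapes -- facial-expression weights in [0, 1], quantized to
                   one byte each to keep per-frame packets tiny
    """
    header = struct.pack("<HB", seq, len(blendshapes))
    gaze_bytes = struct.pack("<3f", *gaze)
    weights = bytes(min(255, max(0, int(w * 255))) for w in blendshapes)
    return header + gaze_bytes + weights

def unpack_avatar_frame(packet):
    """Inverse of pack_avatar_frame: recover seq, gaze, and weights."""
    seq, n = struct.unpack_from("<HB", packet, 0)
    gaze = struct.unpack_from("<3f", packet, 3)  # gaze starts after 3-byte header
    weights = [b / 255 for b in packet[15:15 + n]]
    return seq, gaze, weights
```

A frame like this is a few dozen bytes, small enough to send over UDP at 60+ Hz alongside a positional-audio stream; the sequence number lets the receiver drop stale frames rather than wait for retransmits.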

Content scalability

Virtual world servers store content in a spatial tree structure and are nested inside each other, dynamically assigned to handle content load. 3D content in multiple formats can be loaded into the world and presented at multiple levels of detail.
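The spatial tree idea can be sketched as an octree: each cell of space holds content until it fills up, then subdivides into eight child cells, each of which could in principle be handed to its own server. This is a simplified illustration of the data structure, with invented class and method names, not High Fidelity's actual server code:

```python
class OctreeNode:
    """A cell of 3D space that subdivides when it holds too much content."""

    CAPACITY = 4  # max objects per cell before subdividing

    def __init__(self, center, half_size):
        self.center = center        # (x, y, z) center of this cell
        self.half_size = half_size  # half the cell's edge length
        self.objects = []           # (position, payload) pairs in this leaf
        self.children = None        # 8 child cells after subdivision

    def insert(self, pos, payload):
        if self.children is None:
            self.objects.append((pos, payload))
            if len(self.objects) > self.CAPACITY:
                self._subdivide()
            return
        self._child_for(pos).insert(pos, payload)

    def _subdivide(self):
        # Split into 8 octants and push existing content down the tree.
        h = self.half_size / 2
        cx, cy, cz = self.center
        self.children = [
            OctreeNode((cx + dx * h, cy + dy * h, cz + dz * h), h)
            for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
        ]
        for pos, payload in self.objects:
            self._child_for(pos).insert(pos, payload)
        self.objects = []

    def _child_for(self, pos):
        # Octant index from the sign of each coordinate vs. the cell center.
        idx = (4 * (pos[0] >= self.center[0])
               + 2 * (pos[1] >= self.center[1])
               + (pos[2] >= self.center[2]))
        return self.children[idx]
```

Because each subtree is self-contained, a crowded region of the world can be delegated to another server simply by handing off one node, and distant subtrees can be rendered at a coarser level of detail.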

Audience scalability

We’re building an architecture that can dynamically recruit servers from user-contributed pools of devices to scale across rapidly varying audiences.
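One way to picture this kind of recruitment is a scheduler that always hands a newly crowded region to the least-loaded server in a user-contributed pool. This is a toy sketch with hypothetical names, not the actual assignment architecture:

```python
import heapq

class ServerPool:
    """Toy scheduler over a pool of user-contributed servers:
    each new region is assigned to the least-loaded server."""

    def __init__(self, server_ids):
        # Min-heap of (current_load, server_id); all servers start idle.
        self.heap = [(0, s) for s in server_ids]
        heapq.heapify(self.heap)
        self.assignments = {}  # region -> (server, load)

    def recruit(self, region, load):
        """Assign `region` (with estimated `load`) to the least-loaded server."""
        current, server = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (current + load, server))
        self.assignments[region] = (server, load)
        return server
```

With a heap, each assignment costs O(log n) in the pool size, so the pool can absorb a sudden audience spike by spreading regions across whichever contributed machines currently have headroom.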

For more information, check out the website.