
High Fidelity System Architecture Diagram

Our plan, as described in this diagram (download as PDF), is to create the software and protocols enabling VR to reach the scale of today’s consumer internet. High Fidelity will allow many different people and institutions to deploy virtual world servers, interconnect those servers so that people and digital objects can travel among them, and harness shared computing devices to scale their content and load. When used with the new display and input devices coming to market, High Fidelity will enable a planetary-scale virtual space with room for billions of people, served by billions of computers.

The Domain Server is the master server that is started up to create a virtual world. The domain server creates a number of smaller servers as needed to stream the audio, avatar, and world contents to connecting clients. Those clients can be either people connecting interactively, or ‘agents’ that might, for example, be an AI avatar or a piece of interactive content. The virtual world contained inside a particular domain server can be of any size, and can contain any number of servers, people, and content. Domains can be connected to each other, and can also occupy fixed locations within a larger ‘metaverse’.
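To make that relationship concrete, here is a minimal sketch, with made-up class names and an assumed per-mixer capacity, of how a domain might spin up additional assignment instances as clients connect. It is only an illustration, not the actual domain-server code.

```javascript
// Illustrative sketch only: the names and the capacity threshold below are
// invented for explanation, not taken from the High Fidelity codebase.
const MAX_CLIENTS_PER_MIXER = 50; // assumed capacity of a single mixer instance

class DomainServer {
  constructor() {
    this.assignments = { audioMixer: [], avatarMixer: [], voxelServer: [] };
    this.clients = [];
  }

  // When a client (a person or a scripted agent) connects, make sure each
  // assignment type has enough instances to carry the load.
  addClient(client) {
    this.clients.push(client);
    for (const type of Object.keys(this.assignments)) {
      const needed = Math.ceil(this.clients.length / MAX_CLIENTS_PER_MIXER);
      while (this.assignments[type].length < needed) {
        this.assignments[type].push({ type, id: this.assignments[type].length });
      }
    }
  }
}

const domain = new DomainServer();
for (let i = 0; i < 120; i++) domain.addClient({ id: i });
console.log(domain.assignments.audioMixer.length); // 3 audio mixers for 120 clients
```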

The Nameserver is a service run by High Fidelity that allows domain servers and places inside them to be discovered via unique text names. So, for example, you can type @SanFrancisco or @HighFidelity to jump instantly to the location and orientation inside the appropriate domain server for that name. The Nameserver can also authenticate people, so a particular domain server might be created that allows access only to a specific set of people by checking their identity with the Nameserver. It is not necessary to use the High Fidelity Nameserver to deploy a domain server. In a manner similar to websites, the Nameserver uses SSL and OAuth to authenticate accounts, with High Fidelity operating as a certificate authority, so that entrants to a particular virtual world can securely disclose chosen aspects of their identity.
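As a thought experiment, name resolution can be pictured as a lookup from a text name to a domain address plus a position and orientation. The record format and field names below are assumptions for illustration, not the real Nameserver protocol.

```javascript
// Hypothetical name records; domains and coordinates are made up.
const nameRecords = {
  '@SanFrancisco': { domain: 'sf.example.org:40102', position: { x: 10, y: 0, z: -4 }, yaw: 90 },
  '@HighFidelity': { domain: 'hq.example.org:40102', position: { x: 0, y: 1, z: 0 }, yaw: 180 },
};

function resolvePlaceName(name) {
  const record = nameRecords[name];
  if (!record) throw new Error(`Unknown place name: ${name}`);
  // A real client would now connect to record.domain and move the avatar
  // to record.position with the stored orientation.
  return record;
}

console.log(resolvePlaceName('@SanFrancisco'));
```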

The Voxel Server stores and serves the content that is found inside the virtual world. Objects inside the virtual world are stored in a sparse voxel octree. The octree subdivides the world into smaller and smaller nested cubes, and only creates a smaller cube when there is something inside it that needs to be stored. This allows an enormous physical space to be stored efficiently, because the ‘empty’ areas don’t take up any extra space. More importantly, it allows huge numbers of objects to be viewed from a great distance, because cubes/voxels can be averaged together. So, for example, a forest of thousands of highly detailed trees will ultimately become just a single pixel/voxel on the horizon. High Fidelity voxel servers can be recursively nested inside each other; for example, your apartment in a large virtual city can be managed by a separate voxel server containing all the content inside it. In this way, cities and other interesting virtual spaces can be created containing an almost infinite amount of content.
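Here is a minimal sketch of the idea, assuming a simple point-insert interface: cubes subdivide only where content exists, and each cube keeps an averaged color so distant regions can be drawn as a single voxel. This is illustrative, not High Fidelity’s voxel-server code.

```javascript
// Sparse octree sketch: nodes are created lazily, and every node keeps an
// averaged color of its occupied children for distant level-of-detail.
class VoxelNode {
  constructor(size, origin) {
    this.size = size;       // edge length of this cube
    this.origin = origin;   // [x, y, z] of the cube's minimum corner
    this.children = null;   // allocated only when something is inserted
    this.color = null;      // averaged color of everything below this node
  }

  insert(point, color, minSize) {
    if (this.size <= minSize) { this.color = color; return; }
    if (!this.children) this.children = new Array(8).fill(null);
    const half = this.size / 2;
    // Pick which of the 8 sub-cubes contains the point.
    let index = 0;
    const childOrigin = [...this.origin];
    for (let axis = 0; axis < 3; axis++) {
      if (point[axis] >= this.origin[axis] + half) {
        index |= 1 << axis;
        childOrigin[axis] += half;
      }
    }
    if (!this.children[index]) this.children[index] = new VoxelNode(half, childOrigin);
    this.children[index].insert(point, color, minSize);
    // Re-average this cube's color from its occupied children.
    const occupied = this.children.filter(Boolean);
    this.color = occupied
      .reduce((sum, c) => sum.map((v, i) => v + c.color[i] / occupied.length), [0, 0, 0]);
  }
}

const root = new VoxelNode(1024, [0, 0, 0]);   // a kilometer-scale world cube
root.insert([3, 1, 2], [34, 139, 34], 1);      // a green "tree" voxel nearby
root.insert([600, 0, 700], [139, 69, 19], 1);  // a brown voxel far away
console.log(root.color);                       // averaged color seen from a distance
```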

The Audio Mixer serves a 3D spatialized audio stream for the clients that want to ‘hear’ the virtual world, and does so by mixing together the live audio coming from people and objects in the virtual world. When an audio mixer becomes overloaded with clients, the domain server can deploy new audio mixers that are connected together in a multicast tree to scale to any number of listeners. As described below, these new mixers can optionally be obtained from the High Fidelity assignment server, allowing load to be balanced among nearby peers.
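The core of spatialized mixing can be sketched as summing every other source into a listener’s output frame with a gain that falls off with distance. The simple 1/d falloff and the names below are assumptions for illustration, not the exact mixer algorithm.

```javascript
// Distance between two positions {x, y, z}.
function distance(a, b) {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// sources: [{ position: {x, y, z}, samples: Float32Array }]
function mixForListener(listener, sources, frameLength) {
  const out = new Float32Array(frameLength);
  for (const source of sources) {
    if (source === listener) continue;          // don't echo the listener back
    const gain = 1 / Math.max(1, distance(listener.position, source.position));
    for (let i = 0; i < frameLength; i++) {
      out[i] += source.samples[i] * gain;       // quieter the farther away
    }
  }
  return out;
}

// Example: two sources, one listener, 4-sample frames.
const a = { position: { x: 0, y: 0, z: 0 }, samples: Float32Array.from([0.5, 0.5, 0.5, 0.5]) };
const b = { position: { x: 10, y: 0, z: 0 }, samples: Float32Array.from([1, 1, 1, 1]) };
console.log(mixForListener(a, [a, b], 4)); // b arrives at 1/10 gain
```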

The Avatar Mixer collects and re-transmits information about avatars such as facial expressions and body movements, and scales in a manner similar to the audio mixer.

The Assignment Server is a High Fidelity service that allows people to share their computers with each other to act as servers or as scripted interactive content. Devices register with the assignment server as being available for work, and the assignment server delegates them to domain servers that want to use them. Users of the assignment server will exchange units of a cryptocurrency to compensate one another for the use of each other’s devices. The assignment server can analyze the bandwidth, latency, and Network Address Translation (NAT) capabilities of the contributed devices to best assign them to jobs. So, for example, an iPhone connected over home WiFi might become a scripted animal wandering around the world, while a well-connected home PC on an adequately permissive router might be used as a voxel server.
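One way to picture the matching step: check each contributed device against the requirements of the available job types and hand it the most demanding job it can support. The thresholds and job names below are invented for illustration only.

```javascript
// Hypothetical job requirements, listed from most to least demanding.
const jobs = {
  voxelServer: { minBandwidthMbps: 20, maxLatencyMs: 60, needsOpenNAT: true },
  scriptedAgent: { minBandwidthMbps: 1, maxLatencyMs: 250, needsOpenNAT: false },
};

function assignJob(device) {
  // Prefer the most demanding job the device can actually handle.
  for (const [name, req] of Object.entries(jobs)) {
    const ok =
      device.bandwidthMbps >= req.minBandwidthMbps &&
      device.latencyMs <= req.maxLatencyMs &&
      (!req.needsOpenNAT || device.openNAT);
    if (ok) return name;
  }
  return null; // nothing suitable right now
}

console.log(assignJob({ bandwidthMbps: 50, latencyMs: 20, openNAT: true }));  // "voxelServer"
console.log(assignJob({ bandwidthMbps: 5, latencyMs: 120, openNAT: false })); // "scriptedAgent"
```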

The Digital Marketplace allows people to buy, sell and transfer digital goods to and from each other, and move these goods among different domain servers.

The Currency Server provides the wallet services and other APIs needed to allow people to quickly and easily share their computing devices, as well as buy and sell digital goods, using a cryptocurrency.

JavaScript is the language used by the High Fidelity system to create interactive in-world content, avatar attachments, and UI extensions to the interactive client. JavaScript code can also be deployed to additional devices through the assignment server or domain server, allowing interactive content to run across as many machines as desired to create very complex simulations. Examples of different scripts can be found here.
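A small example of the kind of script this enables: register a per-frame callback and occasionally place a voxel in the world. The Script and Voxels objects here stand in for the client’s scripting interface; the exact object and method names may differ from the current API.

```javascript
// Illustrative in-world script; API names are approximate, not authoritative.
var frame = 0;

function update(deltaTime) {
  frame++;
  if (frame % 60 === 0) {
    // Roughly once a second, place a small red voxel at a random spot nearby.
    var x = Math.random() * 10;
    var z = Math.random() * 10;
    Voxels.setVoxel(x, 0, z, 0.5, 255, 0, 0);
  }
}

// Scripts register a callback that the client runs every frame.
Script.update.connect(update);
```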

The Domain Server, Audio Mixer, Avatar Mixer, Voxel Server, Interactive Client, and other software components needed to create and deploy virtual worlds are released under the Apache 2.0 open source license, and the code can be found at the High Fidelity GitHub repository.


Philip Rosedale talking to Emily at the SVVR Meetup, 2014

"About a week ago Philip Rosedale, founder of the virtual world Second Life and High Fidelity, was a guest speaker at the Silicon Valley Virtual Reality Meetup where he spoke and showed a few very interesting things about his new Virtual Reality project."

Jo Yardley continues with a thoughtful and complete summary of the evening, while SVVR put together a nice 1.5 hour video of the full event.



In the real world, we don’t have name tags floating over our heads. The metaverse shouldn’t be any different. The exchange of information like a name (whether real or made up), a place of work, or a city of origin has to be something that is up to you. Giving your name is an important part of the basic exchange of trust that often begins with a handshake… “Hi, my name is Philip”. It is important that we control the decision as to when and with whom we do it. There are unusual cases where we have those names on stickers on our chest or hanging around our necks, but in general we don’t. Most of the time we don’t want to be identified until we are ready. On the flip side, once we are ready to be identified, we often need a secure way of doing it, like a passport or driver’s license.

A ‘metaverse’ of connected internet servers run by different people and containing different parts of the virtual world poses an additional challenge: not only do you need the choice of when and to whom to disclose parts of your identity, you also cannot always trust the particular server you are ‘inside’ with different aspects of your identity. This is similar to visiting a new website and being unwilling to give credit card information, or unwilling to log in using Twitter or Facebook, until you understand and trust the site.

Our design for High Fidelity seems like the best solution to meet these goals: operators of different virtual world servers (we call these ‘domains’) can decide on the level of identity security with which they wish to challenge people arriving at their locations. This can range from nothing (meaning that disclosure of identity information is totally up to you), to a requirement akin to cookies on websites (I want a token that I can use to identify you the next time you log in here, but I don’t need to know who you really are), to a request for unambiguous identity information (I want to know your real name to allow you to log in here).
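Sketched as code, the three levels might look like the following. The policy strings and field names are made up for the sake of the example, not part of any actual implementation.

```javascript
// Thought-experiment only: three entry policies a domain operator might choose.
function checkEntry(policy, visitor) {
  switch (policy) {
    case 'open': {
      return { admitted: true };   // identity disclosure entirely up to the visitor
    }
    case 'token': {
      // Like a cookie: recognize the visitor next time without knowing who they are.
      const token = visitor.returningToken || ('anon-' + Math.random().toString(36).slice(2));
      return { admitted: true, token };
    }
    case 'verified': {
      // Require a name attested by the identity service.
      return visitor.verifiedName
        ? { admitted: true, name: visitor.verifiedName }
        : { admitted: false, reason: 'verified identity required' };
    }
  }
}

console.log(checkEntry('open', {}));
console.log(checkEntry('token', {}));
console.log(checkEntry('verified', { verifiedName: 'Philip' }));
```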

To make this possible, High Fidelity will run a global service that lets you optionally store and validate identity information (such as your true RL name, a unique avatar name, or proof of connection to other identity services like Twitter or Facebook), and then also lets you selectively show this information to other people in the virtual world, regardless of which location/server you are currently in. You won’t have to use it, but it will hopefully be useful for many people, and will be one of the ways that we will be able to make money as a business.

Thoughts on this direction are welcome. The details of virtual world identity are something that will need to be examined and scrutinized in a suitably open forum, in the same way that we have together made things like SSL and OAuth work for the web. We’ll look for ways to organize these events and discussions as things develop.


The Inquiry
How to create virtual touch? Without haptic feedback rigs or direct stimulation to the brain, how can we get closer to that special, sometimes intimate, sometimes intricate, sometimes magical feeling that is touch? We’re trying a lot of different approaches, but this video illustrates one combination: a front-facing PrimeSense depth camera, the FaceShift facial tracking SDK, the Leap Motion controller, and the Hifi virtual world software. There’s no physical feeling for either party, but as you’ll see, Ryan is virtually touching Emily’s hair, and that’s one step in the right direction.

Emily on screen with hifi setup with Leap Motion and PrimeSense depth camera

The Setup
Emily and Ryan both sat at MacBook Pros with PrimeSense depth cameras clipped to the top of their screens (we 3D printed the clip), the Faceshift SDK extracting head position and facial features, and our Interface software processing and streaming the data to control the avatar’s face and body. Ryan’s hand is detected by the Leap Motion controller. The end-to-end latency is about 100 milliseconds. For headphones and a microphone, we usually use this Sennheiser headset.

You might notice that the audio is noisy. This is because we applied some gain to bring the levels up. “But you claim high fidelity audio,” you might be thinking. Well, one of the brilliant things about our audio architecture is that it works much like the real world: the further away you are, the harder it is to hear. But this doesn’t work well for recording and capturing, something we’ve yet to optimize for.

We captured the video with the Screen Recording functionality in QuickTime Player, piping the sound out from Interface and into QuickTime using Soundflower. To capture, I logged in as my avatar and stood next to Ryan and Emily, recording what I observed.

When you see Emily’s right hand raise, it’s because she’s moving her mouse. In our current version, moving your mouse cursor will also move your hand and arm.

Still curious and question-full? Leave a comment; ask away.


Today, the Google I/O conference is happening here in San Francisco. The talk Voiding your Warranty: Hacking Glass included the above video, which features our cofounder Ryan showing off his hacker skills with the Glass. Here’s the story behind the video.

Using an avatar as a proxy for communication has many benefits. Your avatar can always look good, be well lit and in an interesting location. However, even the most immersive virtual worlds fall flat when trying to deliver the emotional data from real world facial expressions and body language.

From video game controllers to tracking your sleep behavior, there is a good deal of experimentation being done with wearable sensor hardware right now. In addition to soldering our own creations together, we have been checking out work done by others as fast as we can, all with the goal of enabling rich emotional avatar communication.

As you can imagine, when we received our beautiful new Google Glass as part of the Explorer Program, we were eager to see if we could access its sensors and drive our avatar’s head movement (caveat: Google Ventures is one of our investors).

Being the only white guy with a beard here at High Fidelity, working with Glass fell to me ;) This was a great exercise because it gave us an opportunity to abstract the input layer for multiple device support (we also got Oculus working! Stay tuned for that blog).

We had previously created an Android app that grabbed all the phone’s sensor data and sent it over UDP to a configurable port. Imagine holding your phone and being able to twist and move your avatar’s hand. Kinda like turning any phone (with sensors) into a Wii controller. Lo and behold, when we plugged our Glass in and tried to run the Android app from our IDE, Glass showed up as a device and it “just worked”. We could not edit the fields in the GUI on Glass, but we could see from the log that it was transmitting the data.
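For a sense of how simple the receiving end of that stream can be, here is a standalone Node.js sketch that listens for such packets. The packet layout (three little-endian floats for yaw, pitch, and roll) and the port number are assumptions for illustration, not the format our app actually used.

```javascript
// Minimal UDP listener for head-orientation packets (illustrative only).
const dgram = require('dgram');

const PORT = 7777;                 // the "configurable port" mentioned above
const socket = dgram.createSocket('udp4');

socket.on('message', (msg) => {
  if (msg.length < 12) return;     // expect at least three 32-bit floats
  const yaw = msg.readFloatLE(0);
  const pitch = msg.readFloatLE(4);
  const roll = msg.readFloatLE(8);
  // In Interface, values like these would drive the avatar's head orientation.
  console.log(`head orientation: yaw=${yaw} pitch=${pitch} roll=${roll}`);
});

socket.bind(PORT, () => console.log(`listening for sensor packets on port ${PORT}`));
```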

For obvious reasons, Glass has some pretty aggressive energy saving behavior which made it tricky to keep the transmission alive. We ended up moving the sensor data transmission to a service layer. To stop transmission we just turn Glass off.

You can see in the video that we have a very low latency connection between human and avatar head movement using Glass!