
[Image: Philip Rosedale talking to Emily at the SVVR Meetup, 2014]

“About a week ago Philip Rosedale, founder of the virtual world Second Life and High Fidelity, was a guest speaker at the Silicon Valley Virtual Reality Meetup where he spoke and showed a few very interesting things about his new Virtual Reality project.”

Jo Yardley continues with a thoughtful and complete summary of the evening, while SVVR put together a nice 1.5-hour video of the full event.


Identity

In the real world, we don’t have name tags floating over our heads, and the metaverse shouldn’t be any different. The exchange of information like a name (whether real or made up), a place of work, or a city of origin has to be something that is up to you. Giving your name is an important part of the basic exchange of trust that often begins with a handshake: “Hi, my name is Philip.” It is important that we control when and with whom we make that exchange. There are unusual cases where we wear our names on stickers on our chests or on badges around our necks, but in general we don’t; most of the time we don’t want to be identified until we are ready. On the flip side, once we are ready to be identified, we often need a secure way of doing it, like a passport or driver’s license.

A ‘metaverse’ of connected internet servers run by different people and containing different parts of the virtual world poses an additional challenge: not only do you need the choice of when and to whom to disclose parts of your identity, you also cannot always trust the particular server you are ‘inside’ with those aspects of your identity. This is similar to visiting a new website and being unwilling to give credit card information, or unwilling to log in using Twitter or Facebook, until you understand and trust the site.

Our design for High Fidelity seems like the best way to meet these goals: operators of different virtual world servers (we call these ‘domains’) can decide on the level of identity security with which they wish to challenge people arriving at their locations. This can range from nothing (meaning that disclosure of identity information is totally up to you), to a requirement akin to cookies on websites (I want a token that I can use to identify you the next time you log in here, but I don’t need to know who you really are), to a request for unambiguous identity information (I want to know your real name to allow you to log in here).
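
As a rough illustration of those three levels, here is a small hypothetical sketch; the enum, names, and check below are made up for this post, not our actual code or protocol:

    // Hypothetical sketch of a domain's identity challenge levels.
    // The names and logic are illustrative only.
    public final class DomainIdentityPolicy {

        public enum Requirement {
            NONE,            // disclosure is entirely up to the visitor
            RETURNING_TOKEN, // cookie-like: a stable token, but no real identity
            VERIFIED_NAME    // the domain wants a validated real name
        }

        private final Requirement requirement;

        public DomainIdentityPolicy(Requirement requirement) {
            this.requirement = requirement;
        }

        // Decide whether an arriving visitor satisfies this domain's policy.
        public boolean admits(boolean presentsToken, boolean presentsVerifiedName) {
            switch (requirement) {
                case NONE:
                    return true;
                case RETURNING_TOKEN:
                    return presentsToken || presentsVerifiedName;
                case VERIFIED_NAME:
                    return presentsVerifiedName;
                default:
                    return false;
            }
        }
    }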

To make this possible, High Fidelity will run a global service that lets you optionally store and validate identity information (such as your true RL name, a unique avatar name, or proof of connection to other identity services like Twitter or Facebook), and then also lets you selectively show this information to other people in the virtual world, regardless of which location/server you are currently in. You won’t have to use it, but it will hopefully be useful for many people, and will be one of the ways that we will be able to make money as a business.
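
To make the idea of selective disclosure a bit more concrete, here is a hypothetical sketch of a stored profile that only reveals a field to someone its owner has explicitly approved; the field names and methods are placeholders for illustration, not the real service:

    // Hypothetical sketch of selective disclosure from a stored identity
    // profile. Field names and the API are illustrative only.
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Optional;
    import java.util.Set;

    public final class IdentityProfile {
        // Validated identity fields, e.g. "real_name", "avatar_name", "twitter".
        private final Map<String, String> fields = new HashMap<>();
        // Which fields the owner has chosen to reveal to which viewer.
        private final Map<String, Set<String>> grants = new HashMap<>();

        public void store(String field, String value) {
            fields.put(field, value);
        }

        public void revealTo(String viewerId, String field) {
            grants.computeIfAbsent(viewerId, k -> new HashSet<>()).add(field);
        }

        // A viewer only sees a field the owner has explicitly revealed,
        // no matter which domain either of them is currently connected to.
        public Optional<String> viewField(String viewerId, String field) {
            Set<String> allowed = grants.getOrDefault(viewerId, Collections.emptySet());
            return allowed.contains(field) ? Optional.ofNullable(fields.get(field)) : Optional.empty();
        }
    }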

Thoughts on this direction are welcome. The details of virtual world identity are something that will need to be examined and scrutinized in a suitably open forum, in the same way that we have together made things like SSL and OAuth work for the web. We’ll look for ways to organize these events and discussions as things develop.


The Inquiry
How to create virtual touch? Without haptic feedback rigs or direct stimulation to the brain, how can we get closer to that special, sometimes intimate, sometimes intricate, sometimes magical feeling that is touch? We’re trying a lot of different approaches, but this video illustrates one combination: a front-facing PrimeSense depth camera, the FaceShift facial tracking SDK, the Leap Motion controller, and the Hifi virtual world software. There’s no physical feeling for either party, but as you’ll see, Ryan is virtually touching Emily’s hair, and that’s one step in the right direction.

[Image: Emily on screen with the Hifi setup, Leap Motion controller, and PrimeSense depth camera]

The Setup
Emily and Ryan both sat at MacBook Pros with PrimeSense depth cameras clipped to the top of their screens (we 3D printed the clip), the Faceshift SDK extracting head position and facial features, and our Interface software processing and streaming the data to control the avatar’s face and body. Ryan’s hand is detected by the Leap Motion controller. The end-to-end latency is about 100 milliseconds. For headphones and microphone, we usually use this Sennheiser headset.

You might notice that the audio is noisy. This is because we applied some gain to bring the levels up. “But you claim high fidelity audio,” you might be thinking. Well, one of the brilliant things about our audio architecture is that it works much like the real world: the further away you are, the harder you are to hear. But this doesn’t work well for recording and capturing, something we’ve yet to optimize for.
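
As a toy sketch of what distance-based attenuation means in practice (the reference distance and rolloff curve here are illustrative, not necessarily the exact curve our audio pipeline uses):

    // Toy illustration of distance-based attenuation: the gain applied to a
    // voice falls off with its distance from the listener. The reference
    // distance and inverse-distance curve are illustrative choices.
    public final class DistanceGain {
        // Distance (in meters) inside which no attenuation is applied.
        private static final float REFERENCE_DISTANCE = 1.0f;

        // Simple inverse-distance rolloff, clamped to a maximum gain of 1.
        public static float gainForDistance(float distanceMeters) {
            if (distanceMeters <= REFERENCE_DISTANCE) {
                return 1.0f;
            }
            return REFERENCE_DISTANCE / distanceMeters;
        }

        public static void main(String[] args) {
            // A speaker 4 meters away comes through at a quarter of the level.
            System.out.println(gainForDistance(4.0f)); // 0.25
        }
    }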

We captured the video with the Screen Recording functionality in QuickTime Player, piping the sound out from Interface and into QuickTime using Soundflower. To capture, I logged in as my avatar and stood next to Ryan and Emily, recording what I observed.

When you see Emily’s right hand raise, it’s because she’s moving her mouse. In our current version, moving your mouse cursor will also move your hand and arm.

Still curious and question-full? Leave a comment; ask away.


Today, the Google I/O conference is happening here in San Francisco. The talk “Voiding your Warranty: Hacking Glass” included the above video, which features our cofounder Ryan showing off his hacker skills with Glass. Here’s the story behind the video.

Using an avatar as a proxy for communication has many benefits. Your avatar can always look good, be well lit, and be in an interesting location. However, even the most immersive virtual worlds fall flat when trying to deliver the emotional data from real-world facial expressions and body language.

From video game controllers to tracking your sleep behavior, there is a good deal of experimentation being done with wearable sensor hardware right now. In addition to soldering our own creations together, we have been checking out work done by others as fast as we can, all with the goal of enabling rich emotional avatar communication.

As you can imagine, when we received our beautiful new Google Glass as part of the Explorer Program, we were eager to see if we could access its sensors and drive our avatar’s head movement (caveat: Google Ventures is one of our investors).

Being the only white guy with a beard here at High Fidelity, working with Glass fell to me ;) This was a great exercise because it gave us an opportunity to abstract the input layer for multiple device support (we also got Oculus working! Stay tuned for that blog).
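
For a sense of what that abstraction might look like, here is a rough sketch; the interface and class names are simplified placeholders for illustration, not our actual code:

    // Rough sketch of an abstracted head-tracking input layer: each device
    // implements the same small interface, so the avatar code does not care
    // whether data came from Glass, a phone, or a webcam. Names are
    // placeholders for illustration.
    public interface HeadTracker {
        // Yaw, pitch, and roll of the user's head, in degrees.
        float[] headRotation();
    }

    class GlassHeadTracker implements HeadTracker {
        private volatile float[] latest = new float[] {0f, 0f, 0f};

        // Called whenever a datagram of sensor data arrives from Glass.
        void onSensorPacket(float yaw, float pitch, float roll) {
            latest = new float[] {yaw, pitch, roll};
        }

        @Override
        public float[] headRotation() {
            return latest;
        }
    }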

We had previously created an Android app that grabbed all the phone’s sensor data and sent it over UDP to a configurable port. Imagine holding your phone and being able to twist and move your avatar’s hand, kinda like turning any phone (with sensors) into a Wii controller. Lo and behold, when we plugged our Glass in and tried to run the Android app from our IDE, Glass showed up as a device and it “just worked”. We could not edit the fields in the GUI on Glass, but we could see from the log that it was transmitting the data.
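
A minimal sketch of that kind of sensor-to-UDP bridge looks something like the following; the class name, payload format, and destination are simplified stand-ins for illustration, not the actual app:

    // Minimal sketch of an Android SensorEventListener that streams rotation
    // data over UDP. The class name, destination, and payload format are
    // illustrative; this is not the actual High Fidelity app.
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class SensorUdpStreamer implements SensorEventListener {
        private final ExecutorService network = Executors.newSingleThreadExecutor();
        private final String host;  // machine receiving the data, e.g. the one running Interface
        private final int port;     // configurable destination port
        private DatagramSocket socket;

        public SensorUdpStreamer(String host, int port) {
            this.host = host;
            this.port = port;
        }

        // Call with a SensorManager obtained from getSystemService(Context.SENSOR_SERVICE).
        public void start(SensorManager sensorManager) {
            Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
            sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            // Pack the rotation vector into a simple text datagram.
            final String payload = event.values[0] + "," + event.values[1] + "," + event.values[2];
            network.execute(() -> {
                try {
                    if (socket == null) {
                        socket = new DatagramSocket();
                    }
                    byte[] bytes = payload.getBytes(StandardCharsets.UTF_8);
                    socket.send(new DatagramPacket(bytes, bytes.length, InetAddress.getByName(host), port));
                } catch (Exception e) {
                    // Drop this packet; the next sensor event will try again.
                }
            });
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }
    }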

For obvious reasons, Glass has some pretty aggressive energy-saving behavior, which made it tricky to keep the transmission alive. We ended up moving the sensor data transmission to a service layer. To stop transmission, we just turn Glass off.
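
Sketched roughly, that service layer amounts to hosting the streamer in an Android Service so it keeps running outside the foreground activity; the names and the host/port below are placeholders:

    // Hypothetical sketch of hosting the sensor streamer in an Android
    // Service so transmission keeps running outside the foreground activity.
    // Names, host, and port are placeholders for illustration.
    import android.app.Service;
    import android.content.Intent;
    import android.hardware.SensorManager;
    import android.os.IBinder;

    public class SensorStreamService extends Service {
        private SensorUdpStreamer streamer;

        @Override
        public int onStartCommand(Intent intent, int flags, int startId) {
            SensorManager sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
            streamer = new SensorUdpStreamer("192.168.1.10", 6000); // placeholder host and port
            streamer.start(sensorManager);
            return START_STICKY; // ask Android to restart the service if it is killed
        }

        @Override
        public IBinder onBind(Intent intent) {
            return null; // started service only, no binding needed
        }
    }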

You can see in the video that we have a very low latency connection between human and avatar head movement using Glass!


[Image: Hacking the 3D printer]

The last few days, we’ve been giving our trusty MakerBot Replicator a bit of a workout as we mock up parts for our ongoing office experiments. Ryan figured out the platform wasn’t staying hot enough to keep the printout together, so he hacked around it by placing a small space heater nearby to keep the temperature around the printer a bit warmer. Worked like a charm!