
The Inquiry
How do we create virtual touch? Without haptic feedback rigs or direct stimulation of the brain, how can we get closer to that special, sometimes intimate, sometimes intricate, sometimes magical feeling that is touch? We’re trying a lot of different approaches, but this video illustrates one combination: a front-facing PrimeSense depth camera, the Faceshift facial tracking SDK, the Leap Motion controller, and the Hifi virtual world software. There’s no physical feeling for either party, but as you’ll see, Ryan is virtually touching Emily’s hair, and that’s one step in the right direction.

Emily on screen with the Hifi setup: Leap Motion and PrimeSense depth camera

The Setup
Emily and Ryan both sat at MacBook Pros with PrimeSense depth cameras clipped to the tops of their screens (we 3D printed the clips). The Faceshift SDK extracts head position and facial features, and our Interface software processes and streams that data to control the avatar’s face and body. Ryan’s hand is detected by the Leap Motion controller. End-to-end latency is about 100 milliseconds. For headphones and a microphone, we usually use this Sennheiser headset.
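
To make the data flow concrete, here is a minimal Python sketch of how one frame of face and hand tracking data might be bundled for streaming. It is illustrative only; the field names, layout, and units are assumptions rather than the actual Interface protocol or the Faceshift and Leap Motion APIs.

```python
import struct
from dataclasses import dataclass

@dataclass
class FaceFrame:
    head_yaw: float        # head rotation in degrees, from the face tracker
    head_pitch: float
    head_roll: float
    blendshapes: list      # facial feature weights (brow, jaw, smile...) in [0, 1]

@dataclass
class HandFrame:
    palm_position: tuple   # (x, y, z) palm position in meters, from the hand tracker

def pack_avatar_update(face: FaceFrame, hand: HandFrame) -> bytes:
    """Serialize one frame of tracking data into a compact binary packet."""
    payload = struct.pack("<3f", face.head_yaw, face.head_pitch, face.head_roll)
    payload += struct.pack("<B", len(face.blendshapes))
    payload += struct.pack(f"<{len(face.blendshapes)}f", *face.blendshapes)
    payload += struct.pack("<3f", *hand.palm_position)
    return payload

# One simulated frame; a real client would poll the trackers many times per second.
face = FaceFrame(head_yaw=5.0, head_pitch=-2.0, head_roll=0.5,
                 blendshapes=[0.1, 0.6, 0.0, 0.3])
hand = HandFrame(palm_position=(0.12, 1.35, -0.40))
print(len(pack_avatar_update(face, hand)), "bytes per frame")
```

Because only the extracted parameters are streamed rather than the camera video itself, each frame’s payload stays small.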

You might notice that the audio is noisy. This is because we applied some gain to bring the levels up. “But you claim high fidelity audio,” you might be thinking. Well, one of the brilliant things about our audio architecture is that it works much like the real world: the further away you are, the harder it is to hear. But this doesn’t work well for recording and capturing, something we’ve yet to optimize for.
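
The noisy capture follows from that design. As a rough illustration, here is a simple inverse-distance model of the falloff; the exact curve our mixer uses isn’t described here, so treat the numbers as assumptions.

```python
import math

def distance_gain(listener_pos, source_pos, reference_distance=1.0):
    """Inverse-distance attenuation: full volume inside the reference
    distance, then gain falls off as 1/d beyond it."""
    d = math.dist(listener_pos, source_pos)
    return 1.0 if d <= reference_distance else reference_distance / d

# A speaker 4 meters away arrives at a quarter of full gain, which is why
# a capture made from an observing avatar needs gain added back afterwards.
print(distance_gain((0.0, 0.0, 0.0), (4.0, 0.0, 0.0)))  # 0.25
```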

We captured the video with the Screen Recording feature in QuickTime Player, piping the sound out from Interface and into QuickTime using Soundflower. To capture, I logged in as my avatar and stood next to Ryan and Emily, recording what I observed.

When you see Emily’s right hand rise, it’s because she’s moving her mouse. In our current version, moving your mouse cursor also moves your hand and arm.
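
Conceptually, the cursor acts as a 2D control for a 3D hand target that the arm then reaches toward. A minimal sketch of that mapping, with made-up reach and height values rather than the actual Interface implementation, might look like this:

```python
def mouse_to_hand_target(mouse_x, mouse_y, screen_w, screen_h,
                         reach=0.6, shoulder_height=1.4):
    """Map a 2D cursor position to a 3D hand target in front of the avatar.

    Horizontal cursor motion sweeps the hand left/right, vertical motion
    raises and lowers it; an IK-style arm pose would then reach the target.
    """
    nx = (mouse_x / screen_w) * 2.0 - 1.0      # normalize to [-1, 1]
    ny = 1.0 - (mouse_y / screen_h) * 2.0      # screen y grows downward
    x = nx * reach                              # left/right of the avatar
    y = shoulder_height + ny * 0.4              # up/down around the shoulder
    z = -reach                                  # fixed distance in front
    return (x, y, z)

# A cursor near the top-right of the screen raises the hand up and to the right.
print(mouse_to_hand_target(1700, 100, 1920, 1080))
```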

Still curious and question-full? Leave a comment; ask away.

Susan Wilson on December 1, 2013

I think that this is already miles ahead of current technology. The facial expression while talking adds so much compared to Second Life’s “move mouth when talking”. Also, whether or not you can actually feel the touch, just being able to reach out to someone like this in an intuitive and unexpected way is incredible, considering that pre-recorded animations are currently required. A certain amount of feeling is intuitive, the way you feel chilly when you go to a snow sim and want a winter jacket. The unpredictability of gestures will make them all the more real.


Alan on December 5, 2013

That’s really awesome! Is there video of Ryan in front of his Mac, using the Leap Motion?


Adrian McCarlie on December 7, 2013

This is going to be amazing, but I hope there is an option to log into the world without all the expensive hardware; otherwise it could severely limit uptake. Many people can barely afford to upgrade their video cards, let alone buy all the accessories, but they still want to be part of the scene.
If the ancillary hardware is mandatory I fear you would lose up to 80% of the potential audience due to its exclusivity.
I know you want total uptake, so it must be accessible to all for that to happen. Will the world be accessible without the extra hardware?


Stijn on December 9, 2013

Hi Grayson

That looks absolutely astonishing. I can imagine it did feel intimate for Ryan and Emily. The facial expression, and the projection of that expression onto someone else by means of gestures, is indeed where the magic starts.

Using a mouse, however, seems like a pain. Two mice, one for each arm, would look like someone doing an old-fashioned puppet-on-a-string act, which looks funny at best. I’m sure you are already aware, but what about smart clothing with embedded motion sensors? By what means do you envision expressing gestures?

Best regards
Stijn


Adam on December 10, 2013

I don’t code, but this is insane. Good luck with the project; you’ll surely get there.

What all of you are thinking about is massive.


Bruce Thomson on December 26, 2013

This whole topic of ‘virtual physicality’ greatly interests me. To save time, I’d be happy to confer by Skype if you like (ID is PalmyTomo).
- As long as a person is presented visually and convincingly (hypnotically) with a scenario in an engaging way, we seem able to react as if it is physically real. We may not need physical haptics like gloves & suits & hydraulic platforms of great accuracy, just headsets that deliver audio-visual-olfactory cues, and hydraulic platforms that ‘hint’ enough to support those main sensations.
- Our minds seem quite eager to adopt virtualities, ignoring imperfections in favour of enjoying the benefits. Examples:
- I bet if you simply made a haptics glove ‘buzz’ very gently, the user’s mind would exaggerate that enough to get most of the happiness of touching Emily.
- We can laugh our heads off at even a stick-figure cartoon, or weep over a very abstract, encoded virtuality in a thing called a book.
- With *zero* physical stimulus to our sleep-paralysed bodies, we can wake terrified by a dream of falling or drowning or being pursued.
Regards,
Bruce Thomson in New Zealand.


Nika Talaj on December 31, 2013

You’ve come pretty far pretty fast, congrats! Can you say anything about your goals for the level of client graphics processors required to render Hifi’s world? Can you say anything about whether there will be different levels of fidelity available on different devices – e.g. phones vs. gaming rigs?


Tracy on February 2, 2014

Absolutely fantastic, totally fascinated by how good it looks.


Luke Scotney on February 6, 2014

I cannot begin to grasp how awesome this is! I couldn’t see myself enjoying the benefits of 3D cameras, motion sensors, etc., but seeing this in action I think to myself, “Damn, I gotta get me some gear for this!”

It’ll be awesome to see how this project takes off once it’s public. I have high hopes and will certainly be excited to join in the fun!


Ruark on February 20, 2014

You guys are absolutely right! There is suddenly SO much more life in these avatars! Even as basic as they are, they convey more emotion & depth than anything I’ve ever seen in SL! Looking good!


John E. on March 1, 2014

Would there be any interest in using gloves, Leap Motion, or Kinect with details down to individual fingers? This would enable people to communicate using American Sign Language.


Maya on April 27, 2014

Well done!
Using the Faceshift SDK to extract and conform a live video stream of a person’s face via a webcam or the front-facing camera on a smartphone, then mapping it to an avatar’s face in real time, would be the next step forward.

Bringing Second Life into the real world is where it’s at.
How? An excerpt below, from the hard science novel “Memories with Maya”:

“So,” Krish said, in true geek style… “Dan knows where we are, because my phone is logged in and registered into the virtual world we have created. We use a digital globe to fly to any location. We do that by using exact latitude and longitude coordinates.”
Krish looked at the prof, who nodded. “So this way we can pick any location on Earth to meet at, provided of course, I’m physically present there.”
“I understand,” said the prof. “Otherwise, it would be just a regular online multi-player game world.”

“Precisely,” Krish said. “What’s unique here is a virtual person interacting with a real human in the real world. We’re now on the campus Wifi.” He circled his hand in front of his face as though pointing out to the invisible radio waves. “But it can also use a high-speed cell data network. The phone’s GPS, gyro, and accelerometer updates as we move.”

Krish explained the different sensor data to Professor Kumar. “We can use the phone as a sophisticated joystick to move our avatar in the virtual world that, for this demo, is a complete and accurate scale model of the real campus.”

The prof was paying rapt attention to everything Krish had to say. “I laser scanned the playground and the food-court. The entire campus is a low rez 3D model,” he said. “Dan can see us move around in the virtual world because my position updates. The front camera’s video stream is also mapped to my avatar’s face, so he can see my expressions.”

“Now all we do is not render the virtual buildings, but instead, keep Daniel’s avatar and replace it with the real-world view coming in through the phone’s camera,” explained Krish.

“Hmm… so you also do away with render overhead and possibly conserve battery life?” the prof asked.
“Correct. Using GPS, camera and marker-less tracking algorithms, we can update our position in the virtual world and sync Dan’s avatar with our world.”

“And we haven’t even talked about how AI can enhance this,” I said.


    H M on August 14, 2014

    Wow, yes definitely a step in the right direction. This is exciting stuff!




Author

Grayson Stebbins