
Our project consisted of three major components:

 

  • Oculus Rift DK2

    • VR headset

    • Translational and rotational tracking

 

  • Leap Motion camera

    • Robust hand tracking

    • Integrated with Oculus

 

  • PR2 robot

    • Two-armed robot

    • Head-mounted Kinect (pan/tilt)

 

Working with Oculus

 

The Rift essentially comprises two devices: a set of sensors connected via USB (the head tracker) and a monitor connected via HDMI or DVI (the display).

Both of these devices are managed by the Oculus SDK. The Rift contains sensor hardware (an inertial measurement unit, or IMU) that measures linear acceleration on 3 axes and rotation speed on 3 axes; the 3 rotation axes and 3 acceleration axes add up to six degrees of freedom (6DOF). Additionally, the DK2 includes an infrared camera that tracks the position and orientation of an array of infrared LEDs built into the surface of the Rift.
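The SDK fuses these sensor streams internally; purely as an illustration of what rate integration involves (this is plain Python, not the Oculus SDK), here is a minimal sketch that dead-reckons one rotation axis from gyro samples:

```python
# Minimal sketch (not the Oculus SDK): dead-reckoning one rotation axis
# by integrating gyroscope rate samples over time. The real SDK fuses
# gyro, accelerometer, and the DK2's LED camera to correct drift.

def integrate_yaw(rate_samples, dt):
    """Integrate angular rate samples (deg/s) taken every dt seconds."""
    yaw = 0.0
    for rate in rate_samples:
        yaw += rate * dt
    return yaw

# 100 samples of a constant 30 deg/s turn at 100 Hz -> roughly 30 degrees
print(integrate_yaw([30.0] * 100, 0.01))
```

Integrating rates alone drifts over time, which is exactly why the DK2 adds the external LED camera as an absolute position and orientation reference.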

 

A simple OpenGL program was written to extract the current orientation of the Oculus HMD as yaw, pitch, and roll values in degrees. The result is shown in the video below.


Head tracking data from Oculus HMD
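Under the hood the SDK reports orientation as a quaternion; a minimal sketch of converting a (w, x, y, z) quaternion into yaw/pitch/roll in degrees (plain Python using the common Z-Y-X convention, not the SDK's actual API):

```python
import math

# Sketch (not the Oculus SDK API): convert an orientation quaternion
# (w, x, y, z) into yaw/pitch/roll in degrees, using the common
# aerospace Z-Y-X convention.

def quat_to_yaw_pitch_roll(w, x, y, z):
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    # Clamp guards against tiny numerical overshoot outside [-1, 1].
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    return tuple(math.degrees(a) for a in (yaw, pitch, roll))

# Identity quaternion -> no rotation on any axis
print(quat_to_yaw_pitch_roll(1.0, 0.0, 0.0, 0.0))
```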

 

Stereo rendering on the Oculus requires the scene to be rendered in split-screen stereo, with half the screen used for each eye (left/right). The lenses in the Rift magnify the image to provide a very wide field of view (FOV) that enhances immersion, but they also introduce pincushion distortion. An equal and opposite barrel distortion is applied in software to cancel out the lens distortion.
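The correction is typically a radial polynomial: each point is pushed outward by a factor that grows with its squared distance from the lens center. A minimal sketch of such a function (the k coefficients here are illustrative, not the DK2's calibrated values):

```python
# Sketch of a radial barrel-distortion function of the kind applied in
# the Rift's distortion pass. Coordinates are normalized so the lens
# center is (0, 0); the k coefficients are illustrative only.

def barrel_distort(x, y, k=(1.0, 0.22, 0.24)):
    r2 = x * x + y * y                      # squared distance from center
    scale = k[0] + k[1] * r2 + k[2] * r2 * r2
    return x * scale, y * scale

# The center is unchanged; points away from the center move outward,
# canceling the lens's pincushion effect.
print(barrel_distort(0.0, 0.0))
print(barrel_distort(0.5, 0.0))
```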


Each of the PR2's arms has 7 DOF, and hand tracking is provided by the Leap Motion camera mounted on top of the Oculus.
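Before Leap-tracked hand positions can drive an arm, they have to be mapped from the camera frame into the robot's reachable workspace. A hypothetical scale-and-clamp sketch of that mapping (the function name, scale, and limits are assumptions for illustration, not the project's actual calibration):

```python
# Hypothetical sketch: map a Leap-tracked hand position (millimeters in
# the camera frame) into a robot workspace (meters) by scaling and
# clamping each axis. The scale and limits are illustrative assumptions.

def hand_to_workspace(p_mm, scale=0.002, limits=((-0.5, 0.5),) * 3):
    out = []
    for coord, (lo, hi) in zip(p_mm, limits):
        v = coord * scale                   # mm -> m with a gain factor
        out.append(min(hi, max(lo, v)))     # clamp to reachable range
    return tuple(out)

# The y axis here exceeds the limit and is clamped to 0.5 m.
print(hand_to_workspace((100.0, 300.0, -50.0)))
```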

