Technologies

We use a variety of software and hardware in this project.

Sensor package

  • ATmega128RFA1 MCU
  • 802.15.4 wireless technology
  • 9DOF sensors including accelerometer, gyroscope, and magnetometer
  • Lithium-polymer battery powered
  • Micro-USB charging

TetherCam

  • Xbox Kinect v1
  • AprilTags library

Synthesis

  • Kalman filtering

3D consumption

  • Unity 3D
  • Oculus Rift

Architecture

The overall architecture of the project is as follows:

Physical object

The physical object represents the real-world object we will be placing sensors on. The user can directly interact with the physical object by moving it around or changing its orientation, effectively using it as an “input” to our system. The object will be fully battery-powered and untethered.

Identification tag

The identification tag is a visual mark on the physical object that will “communicate” with the Kinect via visible light. We are using the AprilTags library for both tag generation and detection.
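
The AprilTags library itself is distributed in Java and C; as a rough illustration of what detection looks like, here is a minimal sketch assuming the `apriltag` Python bindings and an OpenCV-loaded image (the file name is a placeholder for a live Kinect frame):

    import cv2
    import apriltag

    # Detector configured for the standard tag36h11 family (assumed choice).
    detector = apriltag.Detector(apriltag.DetectorOptions(families="tag36h11"))

    frame = cv2.imread("kinect_frame.png")          # placeholder for a live Kinect frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the detector expects grayscale input

    for detection in detector.detect(gray):
        print("tag id:", detection.tag_id)
        print("center (px):", detection.center)     # tag center in image coordinates
        print("corners (px):", detection.corners)   # four corners, used later for pose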

Object sensors

We use a 9DOF combination of sensors for orientation detection: an accelerometer, a gyroscope, and a magnetometer.

Wireless module

The wireless module is responsible for wirelessly sending sensor data to the computer. The ATmega128RFA1 has a built-in 2.4 GHz radio, which we will use to communicate over the 802.15.4 protocol.
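
On the computer side, the receiving 802.15.4 node is assumed to be attached over USB and to appear as a serial port. The sketch below reads fixed-size packets with a hypothetical layout (nine little-endian int16 values: accelerometer, gyroscope, magnetometer); the real packet format will be defined by our firmware:

    import serial
    import struct

    PACKET_FORMAT = "<9h"                       # ax, ay, az, gx, gy, gz, mx, my, mz
    PACKET_SIZE = struct.calcsize(PACKET_FORMAT)

    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        while True:
            packet = port.read(PACKET_SIZE)
            if len(packet) != PACKET_SIZE:
                continue                        # timed out or partial read; try again
            ax, ay, az, gx, gy, gz, mx, my, mz = struct.unpack(PACKET_FORMAT, packet)
            print("accel:", (ax, ay, az), "gyro:", (gx, gy, gz), "mag:", (mx, my, mz))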

Microsoft Kinect sensor

The Microsoft Kinect contains a camera and depth sensor, used to track the identification tag (described above). The Kinect will output visual and depth data about the scene the sensor observes. The Kinect might be worn by the user so that the tracking is relative to their point of view.
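
As a rough sketch, one way to pull frames on the computer is the libfreenect Python wrapper (`freenect`); this is only one possible driver, and the project may end up using a different one:

    import freenect

    rgb, _ = freenect.sync_get_video()    # 640x480 RGB image as a NumPy array
    depth, _ = freenect.sync_get_depth()  # 640x480 depth map (raw 11-bit values)

    print("rgb shape:", rgb.shape, "depth shape:", depth.shape)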

Computer

The computer will serve as the processing unit for the user experience. It will receive all incoming data (vision data from the Kinect and orientation data from the sensors via the wireless module), contain all of the software, and perform all processing.

Vision processing

The vision processing runs on the computer and will use the AprilTags library, along with other libraries, to position the tag in 3D space.
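
As an illustration of the positioning step, once the tag's four corners are detected, a perspective-n-point solve (here OpenCV's solvePnP) can recover the tag's 3D position relative to the camera. The tag size and camera intrinsics below are placeholder values, not our calibrated numbers:

    import numpy as np
    import cv2

    TAG_SIZE = 0.10  # edge length of the printed tag, in meters (assumed)

    # 3D corner coordinates in the tag's own frame (z = 0 plane).
    object_points = np.array([
        [-TAG_SIZE / 2, -TAG_SIZE / 2, 0.0],
        [ TAG_SIZE / 2, -TAG_SIZE / 2, 0.0],
        [ TAG_SIZE / 2,  TAG_SIZE / 2, 0.0],
        [-TAG_SIZE / 2,  TAG_SIZE / 2, 0.0],
    ], dtype=np.float64)

    # Placeholder pinhole intrinsics for the Kinect's RGB camera.
    camera_matrix = np.array([
        [525.0,   0.0, 319.5],
        [  0.0, 525.0, 239.5],
        [  0.0,   0.0,   1.0],
    ])
    dist_coeffs = np.zeros((5, 1))

    def tag_position(corners_px):
        """corners_px: 4x2 array of detected corner pixels, ordered like object_points."""
        ok, rvec, tvec = cv2.solvePnP(object_points,
                                      np.asarray(corners_px, dtype=np.float64),
                                      camera_matrix, dist_coeffs)
        return tvec.ravel() if ok else None   # tag position in camera coordinates (meters)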

Orientation processing

The orientation processing runs on the computer and uses the raw sensor data received wirelessly to determine the actual orientation of the object. This requires sensor fusion, for which a Kalman filter will be used.
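
The full 9DOF fusion is still being designed; as a simplified stand-in, the sketch below shows a single-axis Kalman filter that fuses a gyro rate with an accelerometer-derived angle while also tracking the gyro bias. The noise constants are illustrative, not tuned values:

    import math

    class AngleKalman:
        def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
            self.angle = 0.0          # fused angle estimate (rad)
            self.bias = 0.0           # estimated gyro bias (rad/s)
            self.P = [[0.0, 0.0], [0.0, 0.0]]   # estimate covariance
            self.q_angle, self.q_bias, self.r_measure = q_angle, q_bias, r_measure

        def update(self, gyro_rate, accel_angle, dt):
            # Predict: integrate the bias-corrected gyro rate.
            rate = gyro_rate - self.bias
            self.angle += dt * rate
            P = self.P
            P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
            P[0][1] -= dt * P[1][1]
            P[1][0] -= dt * P[1][1]
            P[1][1] += dt * self.q_bias
            # Correct: blend in the accelerometer angle measurement.
            S = P[0][0] + self.r_measure
            K0, K1 = P[0][0] / S, P[1][0] / S
            y = accel_angle - self.angle
            self.angle += K0 * y
            self.bias += K1 * y
            P00, P01 = P[0][0], P[0][1]
            P[0][0] -= K0 * P00
            P[0][1] -= K0 * P01
            P[1][0] -= K1 * P00
            P[1][1] -= K1 * P01
            return self.angle

    # One common convention for the accelerometer-derived pitch angle.
    def accel_pitch(ax, ay, az):
        return math.atan2(-ax, math.sqrt(ay * ay + az * az))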

Unity 3D

Unity 3D is our 3D rendering engine. It will take in information about the object’s position and orientation, appropriately place the object in the virtual world, and render the 3D scene.

Oculus Rift

The Oculus Rift is connected to the computer and will be the primary output device used to consume the experience. Users will interact with our system using the Oculus Rift.

Use cases & behavior

Our system is designed to be easy to use and natural in feel. Since our goal is to bring a physical object into a virtual world seamlessly, we are making sure that our system is simple and intuitive. After plugging the transceiver into the computer and starting one of the demo applications, the user must hold the device with the sensors still for approximately 5-10 seconds. The device's position and orientation will be initialized by the computer, and the motion of the object in the real world will then be modeled by our system and displayed accordingly in the demo's virtual space. In addition, our system will allow for custom inputs from the device by attaching a component such as a button or knob. Finally, the sensor package can be recharged by plugging it into the computer or wall AC, and a green light will indicate that the device is charging.
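
As a sketch of what the hold-still initialization might do, the snippet below averages readings while the object is at rest to estimate the gyro bias and record a gravity reference; `read_sensor_packet()` is a hypothetical helper that returns one (accel, gyro, mag) sample:

    import time

    def calibrate(read_sensor_packet, duration_s=7.0):
        """Average sensor readings over the hold-still period."""
        accel_sum = [0.0, 0.0, 0.0]
        gyro_sum = [0.0, 0.0, 0.0]
        count = 0
        end = time.time() + duration_s
        while time.time() < end:
            accel, gyro, _ = read_sensor_packet()
            for i in range(3):
                accel_sum[i] += accel[i]
                gyro_sum[i] += gyro[i]
            count += 1
        gyro_bias = [g / count for g in gyro_sum]      # subtracted from later gyro readings
        gravity_ref = [a / count for a in accel_sum]   # resting accelerometer vector
        return gyro_bias, gravity_ref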