Chris Zulinov's profile

Multi-touch Interactive Spherical Display

As hands are placed on the sphere, they are tracked by an infrared camera. Their locations are sent as XML data to ActionScript, where they are parsed into XY coordinates. Flash then uses these coordinates to give the user feedback, which can be visual, auditory, or both. From there, the only limitation is imagination.
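The parsing step above can be sketched outside of ActionScript as well. Here is a minimal Python illustration of turning a blob snapshot into XY coordinates; the element and attribute names (`blobs`, `blob`, `id`, `x`, `y`) are assumptions for illustration, since the actual schema depends on how CCV is configured:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload in the style of CCV's blob output.
SAMPLE = """
<blobs>
  <blob id="1" x="0.25" y="0.60"/>
  <blob id="2" x="0.75" y="0.40"/>
</blobs>
"""

def parse_blobs(xml_text):
    """Return a list of (id, x, y) tuples from one blob XML snapshot."""
    root = ET.fromstring(xml_text)
    points = []
    for blob in root.findall("blob"):
        points.append((int(blob.get("id")),
                       float(blob.get("x")),
                       float(blob.get("y"))))
    return points

print(parse_blobs(SAMPLE))  # one (id, x, y) tuple per touch point
```

In ActionScript the equivalent work would be done with its native E4X/XML support, but the shape of the logic is the same: walk the blob elements and pull out numeric coordinates for the feedback layer to consume.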


The first step in this process is capturing the locations of the contact points on the sphere. These ‘blobs’ are obtained using an open-source program called Community Core Vision (CCV, version 1.3). In this setup, CCV uses an infrared array and an infrared camera to accurately track the differences in light that appear on the surface of the sphere when it is touched. CCV is generally used on a flat surface, but as this project has proven, with heavy alterations to the configuration file it also works on a sphere. The detected light differences are translated into ‘blobs’, each with its own specific parameters: not only its location on an XY plane, but also its acceleration and velocity. CCV can export this data in several different formats; for this project I used XML. The XML data was then sent over a LAN to a second computer, which handled the interaction and animations.
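The receiving side of that LAN hand-off can be sketched as a small listener that accepts the XML stream and extracts each blob's position, velocity, and acceleration. This is a hedged Python illustration, not the project's actual Flash code: the port number and the attribute names (`x`, `y`, `vx`, `vy`, `accel`) are assumptions standing in for whatever the CCV configuration actually emits:

```python
import socket
import xml.etree.ElementTree as ET

def parse_blob_packet(xml_text):
    """Extract (x, y, vx, vy, accel) tuples from one XML snapshot."""
    root = ET.fromstring(xml_text)
    blobs = []
    for blob in root.findall("blob"):
        blobs.append((float(blob.get("x")),
                      float(blob.get("y")),
                      float(blob.get("vx", 0)),   # velocity components
                      float(blob.get("vy", 0)),
                      float(blob.get("accel", 0))))  # scalar acceleration
    return blobs

def serve(host="0.0.0.0", port=3000):
    """Accept one TCP connection from the tracking machine and print blobs."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(65536).decode("utf-8")
            for x, y, vx, vy, a in parse_blob_packet(data):
                print(f"blob at ({x:.2f}, {y:.2f}), "
                      f"velocity ({vx:.2f}, {vy:.2f}), accel {a:.2f}")
```

Splitting tracking and rendering across two machines like this keeps the camera-processing load off the computer driving the animations; the network only has to carry a few small XML snapshots per frame.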