Summary form only given. Human-computer interfaces (HCIs) have evolved from keyboard-and-mouse interaction based on text and mouse events to multi-touch screens and more exotic approaches, such as special gloves or other devices that translate human actions into application controls.
One trend currently in vogue is the control of computer applications and computer-controlled devices through human gestures and body movements. There have been, and continue to be, many attempts to produce computer control scripts via gesture-based interfaces, and the research literature on this subject is abundant.
However, few of these attempts are of practical value, because gesture-based computer control must be robust and operate in real time; any lag in the response can lead users to abandon the system. More recently, 3D IR cameras have offered a promising path to producing depth images. The depth information extracted by a 3D IR camera is mapped to grey levels in a black-and-white image.
This image is then interpreted by image-processing software. 3D IR cameras use near-infrared illumination, so objects closer to the camera appear brighter than the rest of the scene. As a result, computer vision modules can distinguish hands and fingers from the rest of a person's body far more accurately, even in complete darkness.
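The segmentation step described above can be sketched in a few lines: since closer objects map to brighter grey levels, a simple intensity threshold separates a hand near the camera from the darker background. The function name and the threshold value below are illustrative assumptions, not part of the system described in the talk; a real pipeline would calibrate the cutoff per scene.

```python
import numpy as np

def segment_foreground(depth_image, threshold=200):
    """Return a binary mask of pixels brighter (i.e., closer) than threshold.

    depth_image: 2-D uint8 array from a near-IR depth camera,
                 where brighter grey levels mean nearer objects.
    threshold:   illustrative grey-level cutoff (assumption).
    """
    return depth_image > threshold

# Toy 4x4 "depth image": right half is a bright (close) object,
# left half is the dim (far) background.
frame = np.array([
    [ 10,  20, 230, 240],
    [ 15,  25, 235, 245],
    [ 12,  18, 228, 250],
    [ 11,  22, 232, 244],
], dtype=np.uint8)

mask = segment_foreground(frame)
```

In practice this binary mask would feed further stages (contour extraction, fingertip detection), but the core advantage of the near-IR depth image is visible even here: the foreground separates cleanly with a single comparison.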
In this talk, we will discuss various technologies for building a 3D IR camera, a review that will conclude with a discussion of the 3D IR camera designed and implemented by our group. Based on the data this camera provides, a series of related image-processing applications will be presented.
A demonstration of the 3D IR camera system will be presented, and applications related to gaming, learning, and immersion in a 3D virtual-reality space will be given at the end.