Questions and Answers

The Dual Aperture camera uses a 4-color sensor to capture both visible and infrared spectrum data, which it combines to generate an image with accurate depth information.
A 4-color sensor utilizes 4 pixel types: red, green, blue, and infrared. The RGB pixels capture visible light spectrum data and are used to create a conventional RGB image. The infrared pixels capture infrared light spectrum data and are used to enhance the image's sharpness. Differences in sharpness between the two data sets are used to estimate the depth of objects in the image.
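To illustrate how the four pixel types coexist on one sensor, the NumPy sketch below splits a raw mosaic into its R, G, B, and IR planes. The 2x2 layout shown is a hypothetical example; the actual Dual Aperture pixel arrangement is not specified here.

```python
import numpy as np

def split_rgbir_mosaic(raw):
    """Split a raw 4-color (RGB-IR) mosaic into its four sub-planes.

    Assumes a hypothetical repeating 2x2 pattern:
        R  G
        B  IR
    The real sensor layout may differ; the point is only that each 2x2
    cell contributes one sample per channel on the same pixel grid.
    """
    r  = raw[0::2, 0::2]
    g  = raw[0::2, 1::2]
    b  = raw[1::2, 0::2]
    ir = raw[1::2, 1::2]
    return r, g, b, ir

if __name__ == "__main__":
    raw = np.random.randint(0, 1024, size=(8, 8), dtype=np.uint16)  # fake 10-bit raw frame
    r, g, b, ir = split_rgbir_mosaic(raw)
    print(r.shape, g.shape, b.shape, ir.shape)  # each plane is (4, 4)
```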
The Dual Aperture camera has two apertures – one for visible light and one for infrared light – with the infrared aperture narrower than the visible-light aperture. This difference creates a disparity in focus between the RGB and infrared images that can then be used to measure the depth of objects in the image.
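The effect of the narrower IR aperture can be sketched with the thin-lens circle-of-confusion formula: an out-of-focus point blurs less in the IR image than in the RGB image, and the difference grows with distance from the focal plane. The focal length, focus distance, and aperture diameters below are hypothetical values chosen only for illustration.

```python
def blur_circle_diameter(obj_dist, focus_dist, focal_len, aperture_diam):
    """Thin-lens circle-of-confusion diameter (all lengths in metres):
    c = A * f * |s - s_f| / (s * (s_f - f))."""
    return (aperture_diam * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))

if __name__ == "__main__":
    f = 0.004                    # hypothetical 4 mm mobile lens
    focus = 1.0                  # lens focused at 1 m
    a_rgb, a_ir = 0.002, 0.001   # hypothetical visible / IR aperture diameters

    for s in (0.3, 0.5, 2.0, 5.0):
        c_rgb = blur_circle_diameter(s, focus, f, a_rgb)
        c_ir = blur_circle_diameter(s, focus, f, a_ir)
        # The RGB blur grows faster with defocus than the IR blur;
        # this disparity is what encodes depth.
        print(f"depth {s:4.1f} m   RGB blur {c_rgb * 1e6:6.1f} um   IR blur {c_ir * 1e6:6.1f} um")
```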
A Dual Aperture camera can capture the 3-Dimensional depth information of objects in an image. The 3D depth information can be used to:
  1. Refocus the image after the picture is taken
  2. Generate a 3D image for 3D TVs
  3. Track 3D gestures
The infrared and RGB color components are focused differently according to the depth of the objects in the image. Because both are captured on a single sensor, these differences make it possible to estimate depth at each pixel without aligning the IR and RGB data.
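One way to picture this per-pixel estimate is to compare local sharpness in the IR plane against local sharpness in the RGB luminance. The NumPy sketch below uses gradient energy as the sharpness measure and returns a relative depth cue; it is a toy illustration of the principle under that assumption, not Dual Aperture's actual depth algorithm.

```python
import numpy as np

def local_sharpness(img, win=7):
    """Average gradient energy over a win x win window (a simple local
    sharpness measure)."""
    gy, gx = np.gradient(img.astype(np.float64))
    energy = gx * gx + gy * gy
    pad = win // 2
    padded = np.pad(energy, pad, mode="edge")
    # Box filter via an integral image (keeps the sketch NumPy-only).
    s = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = energy.shape
    return (s[win:win + h, win:win + w] - s[:h, win:win + w]
            - s[win:win + h, :w] + s[:h, :w]) / (win * win)

def relative_depth_cue(ir_plane, rgb_luma, win=7, eps=1e-6):
    """Per-pixel ratio of IR to RGB local sharpness.

    The IR plane, taken through the narrower aperture, stays sharper than
    the RGB image as a region moves out of focus, so the ratio grows with
    defocus. It assumes the IR and RGB planes are already on the same
    pixel grid, which holds when both come from the same sensor.
    """
    return local_sharpness(ir_plane, win) / (local_sharpness(rgb_luma, win) + eps)
```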
Depth measurement enables the camera's software to refocus the image without the loss of resolution that light-field cameras incur. This creates several advantages for the DA camera, including the ability to adjust the depth of field and the amount of blur.
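As a rough sketch of post-capture refocusing, the code below blurs each pixel in proportion to how far its depth lies from a chosen focal plane, approximating a spatially varying blur by blending a small stack of uniformly blurred layers (SciPy's gaussian_filter is assumed). The layered blend is an illustrative shortcut, not the camera's actual rendering method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refocus(image, depth, focus_depth, max_sigma=6.0, n_layers=8):
    """Toy post-capture refocus: blur each pixel in proportion to how far
    its depth lies from the chosen focal plane.

    image:  2-D grayscale image
    depth:  per-pixel depth map, same shape, arbitrary units
    focus_depth: depth value that should remain sharp
    """
    # Target blur amount per pixel, normalised to [0, max_sigma].
    error = np.abs(depth - focus_depth)
    sigma_map = max_sigma * error / (error.max() + 1e-9)

    # Blend a stack of uniformly blurred layers, weighting each layer by
    # how close its sigma is to the per-pixel target sigma.
    out = np.zeros_like(image, dtype=np.float64)
    weight = np.zeros_like(image, dtype=np.float64)
    sigmas = np.linspace(0.0, max_sigma, n_layers)
    step = sigmas[1] - sigmas[0]
    for s in sigmas:
        layer = image if s == 0 else gaussian_filter(image, sigma=s)
        w = np.maximum(0.0, 1.0 - np.abs(sigma_map - s) / step)
        out += w * layer
        weight += w
    return out / np.maximum(weight, 1e-9)
```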
The Dual Aperture camera uses depth information to filter out unwanted movements in the background or foreground, e.g. fingers in front of the face, that would normally confuse 2D gesture-tracking algorithms. It can also recognize gestures with varying depth, e.g. back-and-forth movements, to further enhance the user experience.
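A minimal sketch of depth gating: keep only pixels inside the expected hand-depth band before handing the frame to a 2D tracker, so clutter in front of or behind the hand is ignored. The function names and the centroid-based "tracker" are illustrative stand-ins, not the DA gesture pipeline.

```python
import numpy as np

def depth_gate(frame, depth, near, far):
    """Zero out everything outside the [near, far] depth band so that a
    2-D gesture tracker only sees the hand, not the face or background."""
    mask = (depth >= near) & (depth <= far)
    gated = np.where(mask, frame, 0)
    return gated, mask

def hand_centroid(mask):
    """Centroid of the gated region -- a minimal stand-in for tracking."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```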
Kinect and Leap Motion systems require multiple sensors with elaborate LED lighting sources. The Dual Aperture system requires only a single sensor using ambient light, which makes it possible to implement gesture tracking in small-form-factor cameras, e.g. mobile device cameras. The DA system also works under direct, bright sunlight (unlike Kinect) and captures high-quality color images (like conventional cameras).
The Dual Aperture camera captures images on par with those from cameras equipped with Bayer-pattern sensors of similar specifications (number of pixels, noise levels, and color quality).
The infrared color component captures additional sharpness information on top of the RGB data. This information can then be used either to sharpen the image or in noise-reduction algorithms applied to it.
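One plausible use of the extra IR sharpness, sketched below, is an unsharp-mask-style detail transfer: high-frequency content extracted from the IR plane is added back to the RGB luminance. The actual enhancement and noise-reduction algorithms are not described here, so treat this purely as an illustration (SciPy's gaussian_filter is assumed).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ir_detail_transfer(luma, ir, strength=0.5, sigma=1.5):
    """Add high-frequency detail from the (sharper) IR plane to the RGB
    luminance channel, assuming both planes share the same pixel grid."""
    ir = ir.astype(np.float64)
    ir_detail = ir - gaussian_filter(ir, sigma)   # high-pass of the IR plane
    return luma.astype(np.float64) + strength * ir_detail
```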
The Dual Aperture camera uses the same form factor as most traditional cameras, including those in mobile devices. The only change to the manufacturing process is the use of 2 apertures for the optical lens: the visible-light aperture is the same as in a traditional camera, and the IR aperture is created by opening a small hole in the IR filter.
The Dual Aperture camera has lower manufacturing costs when compared to other similar depth measurement technologies that require additional sensors, special infrared light sources, and complex computing for image alignment.
The Dual Aperture camera has the same power consumption as a conventional camera when capturing an image; however, generating depth information does require additional processing on the mobile device's application processor (AP). For gesture tracking, DA keeps power and computing requirements low by using a minimal number of pixels and a low video frame rate.
Dual Aperture is well suited for mobile applications that incorporate photography and gesture control. There are also use cases that extend to controlling TVs via gestures and enhancing safety in automobiles.
Dual Aperture does not manufacture the cameras itself. DA licenses the technology to sensor and camera manufacturers, along with the software algorithms for touch refocus, 3D images, 3D depth estimation, and image enhancement.
Yes. A 3.2-megapixel mobile sensor sample is currently available. Sensors at other resolutions are under development and will be made available in 2014.
The first camera prototype will be available in Q1 2014 as a standalone USB camera.
Users can expect to capture regular pictures and manipulate the depth information via various photography applications.