
Multi-Sensor Sensing and Fusion
Core Competencies


Transforming Imaging with Multi-Sensor Fusion

Our innovative multi-lens camera solutions offer unparalleled imaging capabilities, providing stunning 180/360-degree panoramic views, advanced sensor fusion applications, and precise depth-sensing stereo images.

 

Key Benefits
  • 180/360-Degree Panoramas: Capture immersive and breathtaking panoramic views with ease.
  • Sensor Fusion: Combine captured image data from multiple sensors for enhanced image quality, depth perception, and more.
  • Depth-Sensing Stereo Image: Create accurate 3D models and enable innovative applications like augmented reality and virtual reality.
  • Versatility: Our solutions are designed to be flexible, fitting a wide range of applications from photography and videography to industrial inspection and scientific research.

 

Applications
  • Photography and Videography: Capture stunning panoramic images and videos for creative expression.
  • Virtual and Augmented Reality: Create immersive experiences with accurate depth perception and reconstructed 3D models.
  • Industrial Inspection: Perform detailed inspections and measurements with high-precision imaging.
  • Scientific Research: Collect and capture live data for various scientific fields, such as environmental monitoring and robotics.

 

[Image: Hand holding a lens showing a sharp corrected image of a highway scene, symbolizing multi-sensor fusion and depth imaging capabilities at Ability Enterprise]

 

 

Dynamic Real-Time Image Stitching: The Future of Imaging

Ability's innovative dynamic stitching technology is a game-changer in the world of imaging. By dividing the stitching area into multiple smaller areas, our algorithm can stitch images directly on the camera device, eliminating the need for time-consuming post-processing on a PC or app.

 

Key Benefits
  • Instant Stitching: Enjoy stitched images immediately after capturing.
  • Seamless Integration: Stitching is performed directly on the camera, simplifying the workflow.
  • Enhanced Image Quality: Our advanced stitching algorithm ensures high-quality, seamless panoramas.
  • Versatility: Compatible with a wide range of camera models and applications.

 

How It Works
  • Image Segmentation: The stitching area is divided into multiple overlapping segments.
  • Real-Time Stitching: Our algorithm processes each segment individually, stitching them together in real time.
  • Optimization: Advanced optimization techniques ensure seamless transitions and high-quality results.
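The segment-by-segment idea above can be sketched in a few lines. This is a minimal Python/NumPy illustration, not Ability's on-device implementation: the function name `stitch_pair`, the vertical-strip segmentation, and the linear alpha blend are all assumptions made for the sketch.

```python
import numpy as np

def stitch_pair(left, right, overlap, n_segments=4):
    """Blend two horizontally overlapping grayscale images.

    The overlap region is divided into `n_segments` vertical strips and
    each strip is blended independently with a linear alpha ramp,
    mimicking per-segment stitching that can run incrementally on-device.
    """
    h, wl = left.shape[:2]
    wr = right.shape[1]
    out_w = wl + wr - overlap
    out = np.zeros((h, out_w) + left.shape[2:], dtype=np.float64)
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    seg_w = overlap / n_segments
    for s in range(n_segments):                      # one strip at a time
        x0 = int(round(s * seg_w))
        x1 = int(round((s + 1) * seg_w))
        for x in range(x0, x1):
            alpha = (x + 0.5) / overlap              # 0 → left, 1 → right
            out[:, wl - overlap + x] = (
                (1 - alpha) * left[:, wl - overlap + x] + alpha * right[:, x]
            )
    return out
```

Because each strip is processed independently, strips can be blended as soon as their pixels arrive from the sensor, which is what makes in-camera stitching feasible.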

 

Applications
  • Photography: Capture stunning panoramic landscapes, cityscapes, and group photos.
  • Videography: Create immersive 360-degree videos for virtual reality and social media.
  • Surveillance: Monitor large areas with a single panoramic view.
  • Industrial Inspection: Inspect large structures and equipment with ease.


Immersive 360° Panoramas: Beyond the Limits of Vision

Traditional image stitching techniques are not only slow but also generate distorted images due to perspective differences and thus deteriorate the viewing experience. Ability's proprietary dynamic algorithm instantly analyzes and corrects perspective differences of objects at various distances, ensuring seamless and natural stitched images.

 

Key Benefits
  • True-to-life Scenes: Presents a more realistic view as seen by natural human eyes.
  • Seamless Stitching: Eliminates stitching artifacts for a more immersive viewing experience.
  • Real-time Processing: Instantly generates panoramic images for various application needs.
  • Application Scenarios: Virtual reality, live streaming, conference rooms, exhibition venues, and more.


Addressing Image Variations in Multi-Image Stitching

When stitching multiple images together, a critical challenge arises: handling disparities between individual images. These disparities primarily stem from two factors:

Disparities
  • Varying Perspectives: Each image sensor captures the scene from a unique angle, resulting in different environmental information being recorded.
  • Individual Sensor/Lens Tolerances: Variations in sensor and lens characteristics contribute to differences in image quality and color rendering.

Consequently, a more sophisticated approach beyond simple single-image IQ adjustments is necessary. 

 

The key considerations for addressing image discrepancies are as follows:

  • Feature-based alignment: Accurately matching corresponding features across multiple images to establish a common reference frame.
  • Color balancing: Ensuring consistent color representation across all images.
  • Exposure compensation: Adjusting exposure levels in accordance with varying lighting conditions.
  • Seamless blending: Creating smooth transitions between images to minimize visible seams.

By carefully addressing these factors, we can achieve high-quality stitched images that accurately represent the original scene.
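Of the considerations above, exposure compensation is the simplest to illustrate. The sketch below (Python/NumPy, with hypothetical names) estimates a single multiplicative gain from the shared overlap region; real pipelines typically fit per-channel color transforms and solve gains jointly across all images, but even a scalar gain removes most visible brightness steps.

```python
import numpy as np

def match_exposure(ref_overlap, src_overlap, src_image, white=255):
    """Scale `src_image` so its overlap brightness matches the reference.

    `ref_overlap` and `src_overlap` are the pixel arrays the two images
    contribute to the same physical overlap region; their mean-intensity
    ratio gives a gain that equalizes exposure before blending.
    """
    gain = ref_overlap.mean() / src_overlap.mean()
    return np.clip(src_image * gain, 0, white)
```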


Pre-Calibration for Improved Image Stitching

To address the inherent brightness variations at the edges of images captured by different lenses, we've developed a pre-calibration process. By placing the camera within an integrating sphere that emits uniform light, we can accurately measure and compensate for these variations. 

 

Key Benefits
  • Tolerates Lens Variations: This method accommodates differences in brightness between individual lens units, relaxing the stringent consistency requirements during the manufacturing process.
  • Reduces Costs: By mitigating the need for extremely high-precision lenses, the overall unit cost can be significantly reduced.
  • Enhances Image Quality: The pre-calibration process minimizes the brightness falloff commonly observed at the edges of images, resulting in more consistent and visually appealing stitched panoramas.

 

Process Overview
  • Integrating Sphere Placement: The camera is placed inside an integrating sphere that emits uniform light.
  • Brightness Measurement: The brightness distribution across the image is measured, identifying areas with lower luminance.
  • Calibration Data Generation: Calibration data is generated based on the measured brightness variations.
  • Software Compensation: During image stitching, the software applies the calibration data to compensate for brightness differences, producing more seamless and visually appealing stitched images.

By implementing this pre-calibration process, we can deliver high-quality stitched panoramas while reducing material costs and increasing manufacturing flexibility.
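A minimal sketch of the compensation step, assuming the flat-field frame captured inside the integrating sphere is available as a NumPy array (the function names are illustrative, not the production pipeline): each pixel's gain is the ratio of the frame's peak reading to that pixel's actual reading, so dimmer edge pixels receive proportionally larger gains.

```python
import numpy as np

def build_flat_field_gain(flat_frame, eps=1e-6):
    """Per-pixel gain derived from an integrating-sphere flat-field capture.

    Under uniform illumination every pixel should read the same value;
    dividing the frame's maximum by each pixel's reading yields a gain
    map that cancels edge brightness falloff.
    """
    flat = flat_frame.astype(np.float64)
    return flat.max() / np.maximum(flat, eps)

def apply_flat_field(image, gain, white=255):
    """Apply the calibration gain before stitching."""
    return np.clip(image.astype(np.float64) * gain, 0, white)
```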


Horizon Stabilization for 360 Video

Our horizon stabilization algorithm makes use of the Inertial Measurement Unit (IMU) data to counteract camera shake and maintain a level horizon in 360-degree video footage. By analyzing the IMU's data, we can accurately determine the camera's orientation and apply appropriate corrections to the video frames.

 

Advantages
  • No External Stabilizers Needed: Image stabilization is performed entirely by the camera's internal algorithms, making for a more compact and cost-effective solution.
  • Improved Image Quality: By reducing image distortion from camera shake, the generated video is smoother and more visually appealing, especially when viewing in VR or on large displays.

 

How It Works
  • IMU Data Acquisition: The IMU continuously collects orientation information of the camera.
  • Orientation Estimation: Our algorithm processes the IMU data to estimate the camera's respective orientation in 3D space.
  • Image Correction: The estimated orientation is used to correct for any tilt or rotation in the captured images to ensure a level horizon.
  • Stitching Optimization: The stabilized images are then stitched together to create a seamless 360-degree panorama.
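The orientation-estimation and correction steps can be illustrated with the classic Rodrigues construction: given the IMU's gravity estimate in the camera frame, build the rotation that re-levels it onto the world "down" axis. This is a simplified sketch under the assumption of a static gravity reading; the production algorithm fuses gyroscope and accelerometer data over time.

```python
import numpy as np

def horizon_correction(gravity):
    """Rotation matrix that re-levels the camera from an IMU gravity read.

    `gravity` is the accelerometer's gravity estimate in the camera
    frame; the returned matrix rotates it onto world "down" (0, 0, -1).
    Applied to each frame's orientation, it keeps the horizon level.
    (Degenerate when the camera is fully upside-down, c ≈ -1.)
    """
    g = np.asarray(gravity, dtype=np.float64)
    g = g / np.linalg.norm(g)
    down = np.array([0.0, 0.0, -1.0])
    v = np.cross(g, down)              # rotation axis (unnormalized)
    c = float(np.dot(g, down))         # cosine of the tilt angle
    if np.isclose(c, 1.0):             # already level
        return np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues formula for the rotation aligning g with "down".
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```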

 

Real-World Application
  • A good example is a 360-degree action camera mounted on a helmet. The IMU data is used to stabilize the video, even during intense activities like mountain biking or skiing, producing smooth and immersive footage.

By incorporating horizon stabilization into our 360-degree cameras, we provide users with a more stable and enjoyable viewing experience.


Rolling Shutter Correction

Our rolling shutter correction algorithm makes use of the Inertial Measurement Unit (IMU) data to mitigate the Jello Effect caused by the rolling shutter in CMOS sensors. By analyzing the IMU's acceleration and angular velocity data, we can accurately estimate the camera's motion and apply appropriate corrections to the captured images.

 

Advantages
  • Cost-Effective Solution: Rolling shutter sensors are widely used because they cost considerably less than global shutter sensors. Our algorithm delivers many of the benefits of global shutter exposure using existing CMOS technology.
  • Improved Image Quality: By correcting rolling shutter distortions, our algorithm enhances image quality, reducing image distortion and improving overall visual fidelity.
  • Versatility: Our algorithm can be applied to various applications, including action cameras, drones, and surveillance systems, where rolling shutter distortions can be a significant issue.

 

How It Works
  • IMU Data Acquisition: The IMU continuously collects the camera's acceleration and angular velocity.
  • Motion Estimation: Our algorithm processes the IMU data to estimate the camera's motion during the exposure time.
  • Image Correction: Based on the estimated motion, the algorithm applies appropriate corrections to the captured image to compensate for rolling shutter distortions.
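As a rough illustration of the per-row nature of the correction, the sketch below computes the horizontal shift each sensor row needs under a constant yaw rate during readout. The names and the constant-rate assumption are ours for the sketch, not the production algorithm, which handles arbitrary motion from the full IMU stream.

```python
import numpy as np

def rolling_shutter_shifts(omega_yaw, focal_px, n_rows, readout_s):
    """Horizontal correction, in pixels, for each sensor row.

    With a rolling shutter, each row is exposed `readout_s / n_rows`
    later than the previous one; a yaw rate `omega_yaw` (rad/s) during
    readout skews vertical lines.  Shifting row r back by
    focal_px * omega_yaw * t_r straightens them.
    """
    row_times = np.arange(n_rows) * (readout_s / n_rows)
    return focal_px * omega_yaw * row_times
```

Undoing the skew then amounts to resampling each row of the frame left by its computed shift.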

 

Real-World Applications
  • Action Cameras: Reduce the "jello effect" often seen in action camera footage, resulting in smoother and more stable videos.
  • Drones: Minimize image distortions caused by drone vibrations and fast movements.
  • Surveillance Systems: Improve the accuracy of object tracking and motion analysis.

By incorporating rolling shutter correction into your imaging systems, you can enhance image quality, reduce image distortion, and improve the overall performance of your applications.


Stereo Depth Sensing

Our stereo depth sensing algorithm involves two key steps:

  • Stereo Calibration: The cameras are carefully calibrated to ensure they are aligned correctly with consistent optical characteristics. This calibration process establishes the baseline distance between the cameras, which is crucial for depth calculation.
  • Parallax Calculation: Once calibrated, the algorithm analyzes the disparity between corresponding points in the left and right images. This disparity, known as parallax, is directly related to the depth of the scene. By calculating parallax, we can estimate the distance from the camera to the particular objects in the scene.
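The parallax-to-depth relationship in the second step reduces to Z = f · B / d, where f is the focal length in pixels, B the calibrated baseline, and d the disparity. The sketch below is illustrative only; the actual pipeline computes dense disparity maps across the whole image pair.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth from stereo parallax: Z = f * B / d.

    After calibration fixes the baseline B (meters) and focal length f
    (pixels), a point whose projections differ by `disparity_px` pixels
    between the left and right images lies at depth Z meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: nearby objects produce large disparities, while distant objects produce disparities approaching zero, which is why depth precision degrades with range.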

Advantages
  • Wider Exploration Area: Compared to laser-based distance measurement, stereo vision provides a wider field of view, and thus a larger area can be scanned.
  • Lower Cost: Stereo vision systems typically use less expensive components than laser-based systems, making them a more cost-effective solution for many applications.

 
