3D Recon V2

3D Recon V2 is a subsea stereo imaging system that produces high-resolution, geospatially accurate 3D models of what is seen subsea. Housed in a compact subsea enclosure rated for depths up to 4,000 meters, it features two high-resolution machine vision cameras and a high-performance MEMS IMU. By integrating inertial technology with imaging sensors, 3D Recon V2 can generate real-time sparse point clouds for navigation and quality control, as well as high-density 3D models that help engineers make informed decisions about the integrity of their subsea assets.


3D Recon V2 Applications

While 3D Recon V2 was initially developed for Integrity Management (IM) and Inspection, Repair, and Maintenance (IRM) purposes, its versatility has since led to adoption in a range of other applications, including:

  • Jumper/Spool Metrology
  • Mooring Chain Link Inspection
  • Hull Inspection
  • Pipeline Out-of-Straightness (OOS) Surveys
  • Pre-Install/As-Built Documentation
  • Real-Time Relative-to-Structure Positioning
  • Real-Time Object Detection and Pose Estimation of Underwater Objects


How 3D Recon V2 Works

3D Recon V2 combines 3D reconstruction, advanced computer vision, and inertial navigation techniques to generate camera poses, point clouds, and object detections with pose estimates of underwater objects. It supports both real-time and highly optimized offline dense point cloud generation. Here's how each pipeline works:

Real-Time Dense Point Cloud:

  • Alignment: IMU data determines the sensor's orientation (heading, pitch, roll).
  • Feature Detection: Cameras capture images at 5-10Hz, and features are identified based on intensity variations.
  • Feature Matching: Descriptor vectors are computed for features and used for matching between stereo cameras and sequentially from frame to frame, creating a sparse point cloud (approximately 5-10cm pixel resolution).
  • Navigation Integration: The navigation engine integrates acceleration and angular velocity data to update navigation states (position, velocity, attitude).
  • Global Matching: New features are matched against existing features in the global map.
  • Position Projection: Features are projected from the camera frame to the global frame.
  • Consistency Checks: Checks ensure alignment with the global map, utilizing RANSAC and optimization.
  • Navigation Update: Successful feature matches update the state and covariance; new features are added to the map.
  • Path Optimization: Keyframes, triggered by specific events, are collected for map optimization.
  • Dense Point Cloud Generation: High-density point clouds are generated using computed position and orientation and utilize disparity maps created by comparing light intensity between the left and right cameras.
  • SLAM: Camera pose, trajectory estimation, and dense point cloud generation occur in real time, allowing the operator to see their position relative to the structure, track their path, and assess coverage. When previously mapped areas are revisited, a loop closure event is triggered, correcting accumulated drift in camera pose and trajectory to ensure globally consistent dense point clouds.
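As a rough sketch of the dense-generation step above, each matched pixel's depth follows from its stereo disparity via z = f·B/d, where f is the focal length in pixels, B the stereo baseline, and d the disparity; x and y then follow by similar triangles. The calibration values below are illustrative placeholders, not 3D Recon V2's actual parameters:

```python
import numpy as np

# Hypothetical calibration values for illustration only; not the real
# 3D Recon V2 parameters.
FOCAL_PX = 1400.0   # focal length in pixels
BASELINE_M = 0.30   # stereo baseline in meters

def disparity_to_points(disparity, cx, cy, focal=FOCAL_PX, baseline=BASELINE_M):
    """Back-project a left/right disparity map to 3D points in the
    left-camera frame using the standard stereo relation z = f * B / d."""
    v, u = np.indices(disparity.shape)                 # pixel row/col grids
    valid = disparity > 0                              # zero disparity = no match
    safe_d = np.where(valid, disparity, 1.0)           # avoid divide-by-zero
    z = np.where(valid, focal * baseline / safe_d, np.nan)
    x = (u - cx) * z / focal                           # similar triangles
    y = (v - cy) * z / focal
    return np.dstack([x, y, z])                        # H x W x 3 point image

# A uniform 10 px disparity with f = 1400 px, B = 0.30 m gives z = 42 m.
d = np.full((4, 4), 10.0)
pts = disparity_to_points(d, cx=2.0, cy=2.0)
```

In the real pipeline these camera-frame points would then be rotated and translated into the global frame using the navigation engine's pose estimate.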

Offline Dense Point Cloud:

  • Feature Detection: As in the real-time pipeline, features are detected in every image captured by the cameras, but with higher-quality feature extraction.
  • Feature Matching: Descriptor vectors are computed for features and used to match every image collected. Unlike the real-time dense pipeline, this pipeline also performs non-sequential image matching.
  • Structure from Motion: Structure from Motion and Bundle Adjustment algorithms are utilized to optimally estimate the 3D positions of features and the camera poses for each captured image. This process creates a sparse 3D point cloud along with the estimated camera poses at the time each image was taken.
  • Dense Point Cloud Generation: High-density point clouds are generated using computed camera poses and disparity maps created by comparing light intensity between patches around sparse features across all images.
  • Mesh Generation: Once the dense point cloud is generated, surface normals are computed for each point using its three nearest neighbors, which define a local plane. The oriented points are then meshed into a continuous surface, resulting in a highly detailed 3D model of the structure.
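The normal-from-three-neighbors step can be sketched as follows; the function and sample points are illustrative, not Zupt's implementation:

```python
import numpy as np

def normal_from_neighbors(neighbors):
    """Estimate a surface normal from a point's three nearest neighbors.

    The three neighbors define a local plane; the unit normal is the
    normalized cross product of two in-plane edge vectors."""
    a, b, c = (np.asarray(p, dtype=float) for p in neighbors)
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

# Three neighbors lying in the z = 1 plane -> normal along the z axis.
n = normal_from_neighbors([[1, 0, 1], [0, 1, 1], [-1, 0, 1]])
```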

Real-Time Detection and Pose Estimation of Underwater Objects:

  • Deep Learning Model Training: State-of-the-art deep learning models are trained on collected and annotated data to recognize a specific object. The model learns to detect the object and to generate bounding boxes with confidence scores for its predictions.
  • Keypoint Prediction: The model and pipeline also generate specific keypoints used for pose estimation.
  • Pose Estimation: Using known correspondences between the 2D keypoints and the object's known dimensions, the pose of the object relative to the cameras is calculated using the Perspective-n-Point (PnP) algorithm.
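A PnP solver searches for the rotation and translation that minimize the reprojection error between the object's known 3D keypoints and the detected 2D keypoints. The residual being minimized can be sketched as below; the intrinsics, keypoints, and pose are made-up illustrative values:

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project object-frame 3D keypoints into the image for pose (R, t)
    under a pinhole model with intrinsic matrix K."""
    cam = points_3d @ R.T + t           # object frame -> camera frame
    uvw = cam @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide -> pixel coords

def reprojection_error(points_3d, points_2d, R, t, K):
    """Mean pixel residual that a PnP solver minimizes over (R, t)."""
    diff = project(points_3d, R, t, K) - points_2d
    return np.linalg.norm(diff, axis=1).mean()

# Hypothetical setup: four keypoints on a 1 m square face of an object,
# with made-up intrinsics (not 3D Recon V2 calibration values).
K = np.array([[1400.0, 0.0, 1024.0],
              [0.0, 1400.0, 1024.0],
              [0.0, 0.0, 1.0]])
obj_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
R_true = np.eye(3)
t_true = np.array([0.0, 0.0, 5.0])      # object 5 m in front of the camera
img_pts = project(obj_pts, R_true, t_true, K)  # simulated keypoint detections
```

At the true pose the residual vanishes; perturbing the pose increases it, which is the signal the solver follows.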


3D Recon V2 Key Features

  • Linear, Angular, and Area Accuracy: Because of the tightly coupled inertial navigation solution, 3D Recon V2 offers accurate spatial scaling within the dense delivered models. This precision ensures reliable data for measurement-based assessments.
  • Change Detection: A key advantage of 3D Recon V2 is automated change detection. By comparing historical 3D models, operators can monitor structural changes over time and make proactive IM decisions.
  • Additional Sensor Integration: 3D Recon V2 can integrate data from other sensors (Contactless CP, hydrocarbon sniffers, etc.), providing a comprehensive model or heat map for structural integrity analysis. 
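As a minimal illustration of comparing models between survey epochs, one common change-detection approach is a cloud-to-cloud nearest-neighbor distance; the function name and threshold below are assumptions for illustration, not Zupt's implementation:

```python
import numpy as np

def cloud_change_distances(reference, survey):
    """For each survey point, the distance to its nearest reference point.

    Brute-force for clarity; a KD-tree would be used at real model sizes."""
    diffs = survey[:, None, :] - reference[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min(axis=1)

# Reference model vs. a later survey in which the last point has moved 0.5 m.
ref = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=float)
new = np.array([[0, 0, 0], [1, 0, 0.05], [2, 0, 0.5]], dtype=float)
dists = cloud_change_distances(ref, new)
changed = dists > 0.1    # hypothetical 10 cm change threshold
```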


Deliverables of 3D Recon V2

3D Recon V2 delivers high-resolution, geospatially accurate 3D models with submillimeter pixel resolution. These precise models support the subsea integrity community in making informed decisions about structural health over time. 3D Recon V2 integrates seamlessly into existing workflows, providing high-resolution still images, conventional video, and 3D models generated both in real time for verification and offline for detailed inspection.


Data Acquisition Requirements

A survey mux is required to integrate the 3D Recon V2 spread. All power, serial communications, and image data are connected to the mux/ROV through a single PBOF cable. The system uses one RS232 port and one Gigabit Ethernet port for configuration and data transmission. Power requirements are 24 VDC at 150 W.


Product Specs

  1. Type of Application – Marine
  2. Year of Introduction – 2020
  3. Length - 77.9 cm
  4. Width - 59.4 cm
  5. Depth - 30.0 cm
  6. Weight in Air - 34 kg
  7. Weight in Water - 23 kg
  8. Power Requirement - 24 VDC, 6 A / 150 W
  9. Serial Communications - RS232 (230400 bps)
  10. Housing – Titanium
  11. Rated - 4,000 m
  12. Sensor Type - Active Pixel CMOS, Global Shutter
  13. Resolution - 2028 x 2448
  14. FoV Horizontal - 85°
  15. FoV Vertical - 65°
  16. Frame Rate - 5-10 fps (Copper) / 10-20 fps (SM Fiber)
  17. Start-Up Time - 0.02 s
  18. Communication - 1 Gbps Ethernet (Copper) & 10 Gbps SM Fiber (Single Fiber Channel)
  19. Software – Included
  20. Export Image Formats - Color-Palettized Mesh, Point Cloud *.ply (Binary)
  21. Battery Type - No Battery


Contact Zupt today to learn more about 3D Recon and how it can enhance your subsea operations.