3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion

Yu Zhang, Mao Ye, Dinesh Manocha, Ruigang Yang

Research output: Contribution to journal › Article

  • 1 Citation

Abstract

We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras or even to our human visual system. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera (e.g., the Microsoft Kinect camera) with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or a simple parametric surface. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.
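The fusion idea described above — classify each depth-map pixel as opaque (valid camera depth), transparent (camera failed but the acoustic sensor returns a finite range), or infinity (too far to measure), then fill the transparent region under a planar assumption — can be sketched as a toy example. This is an illustrative reading of the abstract, not the authors' algorithm: the function name `fuse_depth`, the parameter `max_range`, and the fronto-parallel fill are all assumptions made here for clarity (the paper fits general piecewise-planar or parametric surfaces from multiple sparse acoustic samples).

```python
import numpy as np

def fuse_depth(depth, acoustic_range, max_range=4.0):
    """Toy depth/acoustic fusion sketch (hypothetical, not the paper's method).

    depth          : HxW array of depths in meters; 0 marks pixels the depth
                     camera failed to measure (candidate glass/mirror pixels).
    acoustic_range : one ultrasonic distance reading in meters, assumed to
                     hit the surface responsible for the missing pixels.
    max_range      : readings beyond this are treated as "infinity"
                     (too far to measure), so the pixels stay unfilled.
    """
    fused = depth.copy()
    missing = depth == 0  # pixels invisible to the depth camera

    if acoustic_range < max_range:
        # Planar assumption (simplified here to a fronto-parallel plane):
        # fill the transparent region at the acoustically measured distance.
        fused[missing] = acoustic_range
    # Otherwise the missing pixels are classified as "infinity" and left at 0.
    return fused, missing
```

A real implementation would accumulate one acoustic point per frame across many frames, segment the missing region, and fit a plane or simple parametric surface to those sparse samples rather than assuming a constant depth.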

Language: English (US)
Pages: 1785-1798
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 40
Issue number: 8
DOI: 10.1109/TPAMI.2017.2723883
State: Published - Aug 1 2018

Keywords

  • 3D reconstruction
  • sensor fusion
  • transparent/mirrored surface modeling
  • ultrasonic range finding

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics

Cite this

3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion. / Zhang, Yu; Ye, Mao; Manocha, Dinesh; Yang, Ruigang.

In: IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40, No. 8, 01.08.2018, p. 1785-1798.

@article{efe53964a48c4b15a9255be1b1cafc3b,
title = "3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion",
abstract = "We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras or even to our human visual system. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera (e.g., the Microsoft Kinect camera) with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or a simple parametric surface. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.",
keywords = "3D reconstruction, sensor fusion, transparent/mirrored surface modeling, ultrasonic range finding",
author = "Yu Zhang and Mao Ye and Dinesh Manocha and Ruigang Yang",
year = "2018",
month = "8",
day = "1",
doi = "10.1109/TPAMI.2017.2723883",
language = "English (US)",
volume = "40",
pages = "1785--1798",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
issn = "0162-8828",
publisher = "IEEE Computer Society",
number = "8",

}

TY - JOUR

T1 - 3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion

AU - Zhang, Yu

AU - Ye, Mao

AU - Manocha, Dinesh

AU - Yang, Ruigang

PY - 2018/8/1

Y1 - 2018/8/1

AB - We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras or even to our human visual system. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera (e.g., the Microsoft Kinect camera) with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or a simple parametric surface. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.

KW - 3D reconstruction

KW - sensor fusion

KW - transparent/mirrored surface modeling

KW - ultrasonic range finding

UR - http://www.scopus.com/inward/record.url?scp=85023178308&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85023178308&partnerID=8YFLogxK

U2 - 10.1109/TPAMI.2017.2723883

DO - 10.1109/TPAMI.2017.2723883

M3 - Article

VL - 40

SP - 1785

EP - 1798

JO - IEEE Transactions on Pattern Analysis and Machine Intelligence

T2 - IEEE Transactions on Pattern Analysis and Machine Intelligence

JF - IEEE Transactions on Pattern Analysis and Machine Intelligence

SN - 0162-8828

IS - 8

ER -