Sensor Fusion

89 papers with code • 0 benchmarks • 2 datasets

Sensor fusion is the process of combining sensor data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. [Wikipedia]
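
A quick way to see the "less uncertainty" claim is inverse-variance weighting of two independent, unbiased measurements of the same quantity: the fused variance is always below either sensor's. A minimal sketch with illustrative values only:

```python
# Minimal illustration of why fusing sensors reduces uncertainty:
# combining two independent, unbiased measurements of the same quantity
# with inverse-variance weights yields a lower-variance estimate.

def fuse(z1, var1, z2, var2):
    """Inverse-variance (maximum-likelihood) fusion of two scalar measurements."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    z_fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)  # always <= min(var1, var2)
    return z_fused, var_fused

# Example: a range reading from lidar (low noise) and radar (higher noise).
z, var = fuse(z1=10.2, var1=0.04, z2=9.8, var2=0.25)
print(z, var)  # fused variance ~0.034, below either sensor's variance
```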

Most implemented papers

Improvements to Target-Based 3D LiDAR to Camera Calibration

UMich-BipedLab/extrinsic_lidar_camera_calibration 7 Oct 2019

The homogeneous transformation between a LiDAR and monocular camera is required for sensor fusion tasks, such as SLAM.
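
As a minimal sketch of how such an extrinsic is used once estimated (this is not the paper's calibration algorithm, and all matrix values below are placeholders): LiDAR points are mapped through the homogeneous transform into the camera frame, then projected with the camera intrinsics.

```python
import numpy as np

# Sketch of using a calibrated LiDAR-to-camera extrinsic
# (illustrative placeholder values, not output of the paper's method).

# Homogeneous transform T_cam_lidar: maps points from LiDAR frame to camera frame.
R = np.eye(3)                      # placeholder rotation
t = np.array([0.1, -0.05, 0.2])    # placeholder translation (meters)
T_cam_lidar = np.eye(4)
T_cam_lidar[:3, :3] = R
T_cam_lidar[:3, 3] = t

K = np.array([[700.0, 0.0, 320.0],   # placeholder camera intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # Nx4 homogeneous
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # into camera frame
    in_front = pts_cam[:, 2] > 0                         # keep points ahead of camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]                        # perspective divide
    return uv, in_front

uv, mask = project_lidar_to_image(np.array([[5.0, 1.0, 0.5], [3.0, -1.0, 0.2]]))
```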

A General Optimization-based Framework for Global Pose Estimation with Multiple Sensors

HKUST-Aerial-Robotics/VINS-Fusion 11 Jan 2019

We highlight that our system is a general framework, which can easily fuse various global sensors in a unified pose graph optimization.
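
To illustrate the flavor of such a fusion (a toy 1-D analogy, not the paper's implementation): relative odometry factors chain the poses together, sparse absolute "global" measurements such as GPS anchor the graph, and a weighted least-squares solve balances both.

```python
import numpy as np

# Toy 1-D pose-graph fusion: odometry factors constrain consecutive poses,
# absolute GPS-like factors anchor the graph, and both are solved jointly.

odometry = [1.0, 1.0, 1.0]              # relative motion x[i+1] - x[i]
gps = {0: 0.0, 3: 3.3}                  # absolute fixes at poses 0 and 3
n = len(odometry) + 1

rows, rhs, weights = [], [], []
for i, d in enumerate(odometry):        # odometry factor: x[i+1] - x[i] = d
    a = np.zeros(n); a[i] = -1.0; a[i + 1] = 1.0
    rows.append(a); rhs.append(d); weights.append(1.0)
for i, z in gps.items():                # global factor: x[i] = z
    a = np.zeros(n); a[i] = 1.0
    rows.append(a); rhs.append(z); weights.append(2.0)   # trust GPS more

A = np.array(rows) * np.array(weights)[:, None]
b = np.array(rhs) * np.array(weights)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)   # poses pulled toward the GPS fixes while respecting odometry
```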

LiDARTag: A Real-Time Fiducial Tag System for Point Clouds

UMich-BipedLab/LiDARTag 23 Aug 2019

Because of the nature of LiDAR sensors, rapidly changing ambient lighting does not affect detection of a LiDARTag; hence, the proposed fiducial marker can operate in a completely dark environment.

PointPainting: Sequential Fusion for 3D Object Detection

Song-Jingyu/PointPainting CVPR 2020

Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature.
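
The paper's core "painting" step is conceptually simple; a rough sketch (simplified, with a hypothetical `project_to_image` helper) appends per-pixel semantic segmentation scores to each projected lidar point before running a standard lidar detector.

```python
import numpy as np

# Sketch of the "painting" step (simplified): each lidar point is projected
# into the output of an image semantic-segmentation network, and the class
# scores at that pixel are appended to the point as extra channels.
# `project_to_image` is a hypothetical helper returning pixel coords + validity.

def paint_points(points, seg_scores, project_to_image):
    """points: Nx3 lidar points; seg_scores: HxWxC per-pixel class scores."""
    uv, valid = project_to_image(points)          # Nx2 pixels, N bool mask
    h, w, c = seg_scores.shape
    painted = np.zeros((points.shape[0], 3 + c))
    painted[:, :3] = points
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    painted[valid, 3:] = seg_scores[v[valid], u[valid]]   # append class scores
    return painted                                 # feed to any lidar detector
```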

PointFusion: Deep Sensor Fusion for 3D Bounding Box Estimation

mialbro/PointFusion CVPR 2018

We present PointFusion, a generic 3D object detection method that leverages both image and 3D point cloud information.
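
A rough sketch of this kind of dense fusion (simplified relative to the paper; the feature extractors are stubbed out with random arrays): a global image feature is tiled and concatenated with per-point geometry features, so every point carries appearance context.

```python
import numpy as np

# Sketch of the fusion step (simplified): a global image feature from a CNN
# is tiled and concatenated with per-point geometry features, giving each
# point access to appearance context.

def fuse_features(point_feats, image_feat):
    """point_feats: NxDp per-point features; image_feat: Di global image vector."""
    n = point_feats.shape[0]
    tiled = np.tile(image_feat, (n, 1))                   # NxDi, same vector per point
    return np.concatenate([point_feats, tiled], axis=1)   # Nx(Dp+Di)

fused = fuse_features(np.random.randn(1024, 64), np.random.randn(512))
# A per-point head would then regress 3D box parameters from `fused`.
```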

Multi-Resolution Multi-Modal Sensor Fusion For Remote Sensing Data With Label Uncertainty

GatorSense/MIMRF 2 May 2018

It is valuable to fuse outputs from multiple sensors to boost overall performance.

MonoLayout: Amodal scene layout from a single image

hbutsuak95/monolayout 19 Feb 2020

We dub this problem amodal scene layout estimation, which involves "hallucinating" scene layout for even parts of the world that are occluded in the image.

CenterFusion: Center-based Radar and Camera Fusion for 3D Object Detection

mrnabati/CenterFusion 10 Nov 2020

In this paper, we focus on the problem of radar and camera sensor fusion and propose a middle-fusion approach to exploit both radar and camera data for 3D object detection.
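
One way to picture a middle-fusion association step (a simplified stand-in for the paper's frustum association, with a hypothetical `project_to_image` helper): radar returns falling inside a camera detection's 2D box contribute depth and velocity features to that detection.

```python
import numpy as np

# Sketch of a middle-fusion association (simplified): radar returns are
# matched to camera detections by checking whether their image projection
# falls inside the detection's 2D box; the nearest match contributes
# depth/velocity features. The projection helper is assumed, not real API.

def associate_radar(detections, radar_points, project_to_image):
    """detections: list of (x1, y1, x2, y2) boxes; radar_points: Mx4 [x, y, z, v]."""
    uv = project_to_image(radar_points[:, :3])       # Mx2 pixel coordinates
    features = []
    for (x1, y1, x2, y2) in detections:
        inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
        if inside.any():
            cand = radar_points[inside]
            nearest = cand[np.argmin(np.linalg.norm(cand[:, :3], axis=1))]
            features.append(nearest[[2, 3]])         # depth and radial velocity
        else:
            features.append(np.zeros(2))             # no radar support
    return np.stack(features)  # concatenated with image features downstream
```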

EagerMOT: 3D Multi-Object Tracking via Sensor Fusion

aleksandrkim61/EagerMOT 29 Apr 2021

Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time.
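
As a toy illustration of the data-association core of 3D MOT (far simpler than the paper's two-stage 2D/3D scheme): greedily match each track to its nearest detection within a gating distance, and spawn new tracks for the leftovers.

```python
import numpy as np

# Toy greedy data association for 3D MOT: each track is matched to the
# nearest unmatched detection center within a gate; unmatched detections
# start new tracks.

def associate(tracks, detections, gate=2.0):
    """tracks: dict id -> 3D center; detections: Nx3 centers. Returns updated tracks."""
    unmatched = list(range(detections.shape[0]))
    for tid, center in list(tracks.items()):
        if not unmatched:
            break
        dists = [np.linalg.norm(detections[i] - center) for i in unmatched]
        j = int(np.argmin(dists))
        if dists[j] < gate:
            tracks[tid] = detections[unmatched.pop(j)]   # update matched track
    next_id = max(tracks, default=-1) + 1
    for i in unmatched:                                   # spawn new tracks
        tracks[next_id] = detections[i]
        next_id += 1
    return tracks
```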

R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual tightly-coupled state Estimation and mapping package

hku-mars/r3live 10 Sep 2021

Moreover, to make R3LIVE more extensible, we develop a series of offline utilities for reconstructing and texturing meshes, which further narrows the gap between R3LIVE and various 3D applications such as simulators and video games (see our demo video).