no code implementations • 19 Apr 2022 • Qi Chen, Sourabh Vora
We propose a simple yet effective proposal-free architecture for lidar panoptic segmentation.
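The abstract snippet does not spell out the architecture, so the following is only a generic sketch of one common proposal-free pattern (per-point semantic predictions combined with predicted offsets to object centers, then clustered into instances). The function names, tensor shapes, and the DBSCAN grouping are illustrative assumptions, not the paper's method.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical per-point network outputs (shapes are illustrative):
#   points:        (N, 3)  lidar xyz
#   sem_labels:    (N,)    predicted semantic class per point
#   center_offset: (N, 3)  predicted offset from each point to its object center
def group_instances(points, sem_labels, center_offset, thing_classes, eps=0.5):
    """Cluster 'thing' points into instances by shifting them toward their
    predicted object centers and running density-based clustering."""
    instance_ids = np.zeros(len(points), dtype=np.int64)  # 0 = stuff / no instance
    next_id = 1
    for cls in thing_classes:
        mask = sem_labels == cls
        if not mask.any():
            continue
        shifted = points[mask] + center_offset[mask]            # pull points toward centers
        labels = DBSCAN(eps=eps, min_samples=5).fit(shifted).labels_
        valid = labels >= 0                                     # -1 = noise, keeps no instance id
        ids = np.zeros_like(labels)
        ids[valid] = labels[valid] + next_id
        instance_ids[mask] = ids
        next_id += labels.max() + 1 if valid.any() else 0
    return instance_ids
```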
no code implementations • NeurIPS 2021 • Qi Chen, Sourabh Vora, Oscar Beijbom
Recent works recognized lidar as an inherently streaming data source and showed that the end-to-end latency of lidar perception models can be reduced significantly by operating on wedge-shaped point cloud sectors rather than the full point cloud.
no code implementations • 14 Jun 2021 • Qi Chen, Sourabh Vora, Oscar Beijbom
Recent works recognized lidar as an inherently streaming data source and showed that the end-to-end latency of lidar perception models can be reduced significantly by operating on wedge-shaped point cloud sectors rather than the full point cloud.
Ranked #23 on LIDAR Semantic Segmentation on nuScenes
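A minimal sketch of the streaming idea described above: the sweep is partitioned into wedge-shaped azimuth sectors so each sector can be processed as soon as the sensor has scanned it, rather than waiting for the full revolution. The sector count and layout here are illustrative, not the configuration used in the paper.

```python
import numpy as np

def split_into_sectors(points, num_sectors=8):
    """Partition a lidar sweep (N, 3+) into wedge-shaped azimuth sectors.

    In a streaming setup each sector would be fed to the model as soon as the
    sensor finishes scanning it, instead of waiting for the full 360-degree sweep.
    """
    azimuth = np.arctan2(points[:, 1], points[:, 0])                   # in (-pi, pi]
    sector_idx = ((azimuth + np.pi) / (2 * np.pi) * num_sectors).astype(int)
    sector_idx = np.clip(sector_idx, 0, num_sectors - 1)
    return [points[sector_idx == s] for s in range(num_sectors)]

# Illustrative usage: process sectors one by one as they "arrive".
sweep = np.random.randn(100000, 4).astype(np.float32)                  # x, y, z, intensity
for sector_points in split_into_sectors(sweep, num_sectors=8):
    pass  # run detection / segmentation on this wedge only
```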
4 code implementations • CVPR 2020 • Sourabh Vora, Alex H. Lang, Bassam Helou, Oscar Beijbom
Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature.
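One way to approach that gap, sketched below under assumed inputs, is sequential fusion: lidar points are projected into the camera image and decorated with per-pixel semantic scores before being handed to a lidar detector. The calibration matrices, score map, and function name are placeholders, and this is a simplified illustration rather than a faithful reproduction of the paper's pipeline.

```python
import numpy as np

def paint_points(points_xyz, seg_scores, lidar_to_cam, cam_intrinsic):
    """Decorate lidar points with image semantic-segmentation scores.

    points_xyz:    (N, 3) lidar points
    seg_scores:    (H, W, C) per-pixel class scores from an image network
    lidar_to_cam:  (4, 4) lidar-to-camera extrinsic transform (assumed given)
    cam_intrinsic: (3, 3) camera intrinsics (assumed given)
    Returns (N, 3 + C) "painted" points; points outside the image keep zero scores.
    """
    n = len(points_xyz)
    h, w, c = seg_scores.shape
    homo = np.concatenate([points_xyz, np.ones((n, 1))], axis=1)   # (N, 4) homogeneous coords
    cam = (lidar_to_cam @ homo.T)[:3]                              # (3, N) in camera frame
    in_front = cam[2] > 0.1                                        # keep points ahead of the camera
    uv = cam_intrinsic @ cam                                       # project to pixel coordinates
    uv = (uv[:2] / np.maximum(uv[2], 1e-6)).T                      # (N, 2)
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    painted = np.zeros((n, c), dtype=seg_scores.dtype)
    painted[valid] = seg_scores[v[valid], u[valid]]
    return np.concatenate([points_xyz, painted], axis=1)
```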
15 code implementations • CVPR 2020 • Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, Oscar Beijbom
Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar.
Ranked #312 on 3D Object Detection on nuScenes (using extra training data)
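For readers who want to inspect the multi-sensor data directly, a short sketch using the nuscenes-devkit (assuming the devkit is installed and the mini split is downloaded; the dataroot path is a placeholder):

```python
from nuscenes.nuscenes import NuScenes

# Assumes `pip install nuscenes-devkit` and a local copy of the mini split;
# the dataroot path below is a placeholder.
nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)

sample = nusc.sample[0]                                   # one annotated keyframe
lidar_token = sample['data']['LIDAR_TOP']                 # lidar sweep for this keyframe
cam_token = sample['data']['CAM_FRONT']                   # synchronized front camera image
radar_token = sample['data']['RADAR_FRONT']               # front radar point cloud

lidar_path, boxes, _ = nusc.get_sample_data(lidar_token)  # file path + 3D boxes in sensor frame
print(lidar_path, len(boxes))
```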
17 code implementations • CVPR 2019 • Alex H. Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, Oscar Beijbom
These benchmarks suggest that PointPillars is an appropriate encoding for object detection in point clouds.
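For context, a minimal sketch of the pillar idea: points are grouped into vertical columns on an x-y grid and reduced to a dense pseudo-image that a standard 2D convolutional backbone can consume. The grid ranges and the mean-pooling stand-in for the learned per-pillar encoder are simplifying assumptions, not the paper's exact encoder.

```python
import numpy as np

def pillar_pseudo_image(points, x_range=(0.0, 70.4), y_range=(-40.0, 40.0), resolution=0.16):
    """Group lidar points into vertical pillars on an x-y grid and reduce each
    pillar to a single feature vector (here: the mean point), producing a dense
    (C, H, W) pseudo-image for a 2D convolutional backbone."""
    nx = int((x_range[1] - x_range[0]) / resolution)
    ny = int((y_range[1] - y_range[0]) / resolution)

    ix = ((points[:, 0] - x_range[0]) / resolution).astype(int)
    iy = ((points[:, 1] - y_range[0]) / resolution).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    points, ix, iy = points[keep], ix[keep], iy[keep]

    c = points.shape[1]
    pseudo_image = np.zeros((c, ny, nx), dtype=np.float32)
    counts = np.zeros((ny, nx), dtype=np.float32)

    # Simple mean pooling per pillar stands in for the learned per-pillar encoder.
    np.add.at(pseudo_image, (slice(None), iy, ix), points.T)
    np.add.at(counts, (iy, ix), 1.0)
    pseudo_image /= np.maximum(counts, 1.0)
    return pseudo_image  # (C, H, W), ready for a 2D CNN
```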
no code implementations • 8 Feb 2018 • Sourabh Vora, Akshay Rangesh, Mohan M. Trivedi
Finally, we evaluate our best-performing model on the publicly available Columbia Gaze Dataset, comprising images from 56 subjects with varying head poses and gaze directions.
no code implementations • 31 Jan 2018 • Sujitha Martin, Sourabh Vora, Kevan Yuen, Mohan M. Trivedi
The study and modeling of a driver's gaze dynamics is important because whether and how the driver is monitoring the driving environment is vital for driver assistance in manual mode, for take-over requests in highly automated mode, and for semantic perception of the surroundings in fully autonomous mode.