Visual Perception System for Autonomous Driving

The recent surge of interest in autonomous driving stems from its rapidly developing capacity to enhance safety, efficiency, and convenience. A pivotal aspect of autonomous driving technology is its perception systems, where advances in core algorithms have yielded increasingly precise methods applicable to autonomous driving, including vision-based Simultaneous Localization and Mapping (SLAM), object detection, and tracking. This work introduces a vision-based perception system for autonomous driving that integrates trajectory tracking and prediction of moving objects to prevent collisions while addressing the localization and mapping requirements of autonomous driving. The system leverages motion cues from pedestrians to track and forecast their movements while simultaneously mapping the environment. This integrated approach jointly resolves camera localization and the tracking of other moving objects in the scene, and subsequently generates a sparse map to facilitate vehicle navigation. The performance, efficiency, and resilience of the approach are demonstrated through comprehensive evaluations on both simulated and real-world datasets.
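The abstract does not specify the prediction model used for pedestrian trajectories. As a minimal, hypothetical sketch of the track-and-forecast idea it describes, the following Python snippet tracks a pedestrian's 2D position with a constant-velocity Kalman filter and extrapolates the track forward for collision checking; all names and parameters here are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: constant-velocity Kalman filter for pedestrian
# trajectory tracking and forecasting. The paper does not specify its
# prediction model; this only illustrates the general idea.
import numpy as np

class ConstantVelocityTracker:
    """Tracks a pedestrian's 2D position and forecasts its future motion."""

    def __init__(self, dt: float = 0.1):
        self.x = np.zeros(4)               # state: [px, py, vx, vy]
        self.P = np.eye(4) * 10.0          # state covariance
        self.F = np.eye(4)                 # constant-velocity transition
        self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)              # we observe position only
        self.Q = np.eye(4) * 0.01          # process noise
        self.R = np.eye(2) * 0.5           # measurement noise

    def update(self, z: np.ndarray) -> None:
        """Fuse one detected position (e.g., from an object detector)."""
        # Predict forward one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measurement.
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def forecast(self, steps: int) -> np.ndarray:
        """Extrapolate the track `steps` frames ahead (collision checking)."""
        x, path = self.x.copy(), []
        for _ in range(steps):
            x = self.F @ x
            path.append(x[:2].copy())
        return np.array(path)

tracker = ConstantVelocityTracker(dt=0.1)
for z in [np.array([0.0, 0.0]), np.array([0.1, 0.05]), np.array([0.2, 0.1])]:
    tracker.update(z)
print(tracker.forecast(10))  # predicted positions over the next second
```

In a full system like the one described, such per-object forecasts would run alongside the SLAM front end, so that dynamic pedestrians are tracked and predicted rather than corrupting the sparse map used for vehicle navigation.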
