Search Results for author: Daniel Asmar

Found 8 papers, 2 papers with code

H-SLAM: Hybrid Direct-Indirect Visual SLAM

1 code implementation • 12 Jun 2023 • Georges Younes, Douaa Khalil, John Zelek, Daniel Asmar

The recent success of hybrid methods in monocular odometry has led to many attempts to generalize the performance gains to hybrid monocular SLAM.

OSPC: Online Sequential Photometric Calibration

no code implementations • 28 May 2023 • Jawad Haidar, Douaa Khalil, Daniel Asmar

Photometric calibration is essential to many computer vision applications.

Visual Odometry

The benefits of synthetic data for action categorization

no code implementations • 20 Jan 2020 • Mohamad Ballout, Mohammad Tuqan, Daniel Asmar, Elie Shammas, George Sakr

In this paper, we study the value of using synthetically produced videos as training data for neural networks used for action categorization.

Optical Flow Estimation

A Unified Formulation for Visual Odometry

no code implementations • 11 Mar 2019 • Georges Younes, Daniel Asmar, John Zelek

Monocular Odometry systems can be broadly categorized as being either Direct, Indirect, or a hybrid of both.

Depth Estimation • Visual Odometry

FDMO: Feature Assisted Direct Monocular Odometry

no code implementations • 15 Apr 2018 • Georges Younes, Daniel Asmar, John Zelek

Visual Odometry (VO) can be categorized as being either direct or feature-based.

Visual Odometry

Keyframe-based monocular SLAM: design, survey, and future directions

1 code implementation • 2 Jul 2016 • Georges Younes, Daniel Asmar, Elie Shammas, John Zelek

Extensive research in the field of monocular SLAM for the past fifteen years has yielded workable systems that found their way into various applications in robotics and augmented reality.

Identifying Good Training Data for Self-Supervised Free Space Estimation

no code implementations • CVPR 2016 • Ali Harakeh, Daniel Asmar, Elie Shammas

This paper proposes a novel technique to extract training data from free space in a scene using a stereo camera.
