Topological Semantic Mapping by Consolidation of Deep Visual Features

24 Jun 2021 · Ygor C. N. Sousa, Hansenclever F. Bassani

Many works in the recent literature introduce semantic mapping methods that use CNNs (Convolutional Neural Networks) to recognize semantic properties in images. The types of properties (e.g., room size, place category, and objects) and their classes (e.g., kitchen and bathroom, for place category) are usually predefined and restricted to a specific task. Thus, all the visual data acquired and processed during the construction of the maps are lost, and only the recognized semantic properties remain on the maps. In contrast, this work introduces a topological semantic mapping method that uses deep visual features, extracted by a CNN (GoogLeNet) from 2D images captured from multiple views of the environment as the robot operates, to create, through averaging, consolidated representations of the visual features acquired in the region covered by each topological node. These representations allow flexible recognition of the semantic properties of the regions and can be reused in other visual tasks. Experiments with a real-world indoor dataset showed that the method is able to consolidate the visual features of regions and to use them to recognize objects and place categories as semantic properties, and to indicate the topological location of images, with very promising results.
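Since no code implementation is available, the sketch below illustrates the core idea described in the abstract under some assumptions: a torchvision GoogLeNet (with its classifier head removed) stands in for the paper's feature extractor, and each topological node keeps a running average of the 1024-d features from every view captured in its region. The names `TopologicalNode`, `consolidate`, and `localize` are illustrative, not from the paper.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

# GoogLeNet backbone; replacing the classifier head with Identity exposes
# the 1024-d pooled feature vector used here as the "deep visual features".
backbone = models.googlenet(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image):
    """Extract a 1024-d GoogLeNet feature vector from a PIL image."""
    with torch.no_grad():
        x = preprocess(image).unsqueeze(0)
        return backbone(x).squeeze(0).numpy()

class TopologicalNode:
    """Consolidates the features of all views seen in the node's region."""
    def __init__(self, feature_dim=1024):
        self.count = 0
        self.representation = np.zeros(feature_dim, dtype=np.float32)

    def consolidate(self, feature):
        # Incremental mean: avg <- avg + (x - avg) / n, i.e. a running
        # average over every view captured while the robot is in this region.
        self.count += 1
        self.representation += (feature - self.representation) / self.count

def localize(feature, nodes):
    """Return the index of the node whose consolidated representation is
    most similar (cosine similarity) to the query image's features."""
    def cos(a, b):
        return float(np.dot(a, b) /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return max(range(len(nodes)), key=lambda i: cos(feature, nodes[i].representation))
```

The incremental mean lets each node be updated online as the robot operates, without storing past images, which is consistent with the abstract's point that the consolidated representations, rather than the raw visual data, are what remain on the map. How the paper actually matches images to nodes and recognizes semantic properties may differ from this cosine-similarity stand-in.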
