Abstract
Visual Simultaneous Localization and Mapping (vSLAM) has gained much attention for the localization and mapping of autonomous vehicles, and many impressive and robust vSLAM systems have been developed and have achieved considerable performance in recent years. However, some problems remain unsolved because of the limited information provided by geometric features. In this paper, we provide a comparative analysis of computationally efficient pixel-wise semantic segmentation algorithms that can be used in visual semantic SLAM.