Please use this identifier to cite or link to this item: https://hdl.handle.net/11147/12088
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Çınaroğlu, İbrahim (en_US)
dc.contributor.author: Baştanlar, Yalın (en_US)
dc.date.accessioned: 2022-06-23T06:41:13Z
dc.date.available: 2022-06-23T06:41:13Z
dc.date.issued: 2022-11
dc.identifier.uri: https://doi.org/10.1016/j.jestch.2022.101098
dc.identifier.uri: https://hdl.handle.net/11147/12088
dc.description: This work was supported by the Scientific and Technological Research Council of Turkey (Grant No. 120E500). We also acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. (en_US)
dc.description.abstract: Vision-based solutions for the localization of vehicles have become popular recently. In this study, we employ an image-retrieval-based visual localization approach, in which database images are stored with GPS coordinates and the location of the retrieved database image serves as the position estimate of the query image in a city-scale driving scenario. Most existing studies of this approach use only descriptors extracted from RGB images and do not exploit semantic content. We show that localization can be improved via descriptors extracted from semantically segmented images, especially when the environment is subjected to severe illumination, seasonal, or other long-term changes. We worked on two separate visual localization datasets, one of which (Malaga Streetview Challenge) was generated by us and made publicly available. Following the extraction of semantic labels in images, we trained a CNN model for localization in a weakly supervised fashion with a triplet ranking loss. The optimized semantic descriptor can be used on its own for localization or, preferably, combined with a state-of-the-art RGB-image-based descriptor in a hybrid fashion to improve accuracy. Our experiments reveal that the proposed hybrid method increases the localization performance of the standard (RGB-image-based) approach by up to 7.7% in Top-1 Recall. (en_US)
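The abstract describes retrieval-based localization: the query descriptor is matched against GPS-tagged database descriptors, and a hybrid variant fuses RGB and semantic descriptor similarities before ranking; training uses a triplet ranking loss. The sketch below is a minimal, hypothetical illustration of those ideas in NumPy — all function names, the fusion weight `w`, and the margin value are assumptions for illustration, not the authors' actual code.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    # Normalize descriptors so that a dot product equals cosine similarity.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def localize(query_desc, db_descs, db_gps):
    """Return the GPS tag of the database image whose descriptor is
    most similar to the query descriptor (Top-1 retrieval)."""
    sims = l2_normalize(db_descs) @ l2_normalize(query_desc)
    best = int(np.argmax(sims))
    return db_gps[best], float(sims[best])

def hybrid_localize(q_rgb, q_sem, db_rgb, db_sem, db_gps, w=0.5):
    """Fuse RGB and semantic similarities with weight w (an assumed
    fusion rule for illustration), then rank as above."""
    sims = (w * (l2_normalize(db_rgb) @ l2_normalize(q_rgb))
            + (1 - w) * (l2_normalize(db_sem) @ l2_normalize(q_sem)))
    return db_gps[int(np.argmax(sims))]

def triplet_ranking_loss(anchor, positive, negative, margin=0.3):
    # Weakly supervised training objective: the anchor should be closer
    # to a geographically nearby (positive) image than to a distant
    # (negative) one, by at least the margin.
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)
```

In this sketch, `w` controls how much the RGB modality dominates the semantic one; the paper reports that the hybrid combination outperforms the RGB-only baseline by up to 7.7% in Top-1 Recall.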
dc.language.iso: en (en_US)
dc.publisher: Elsevier (en_US)
dc.relation.ispartof: Engineering Science and Technology, an International Journal (en_US)
dc.rights: info:eu-repo/semantics/openAccess (en_US)
dc.subject: Autonomous driving (en_US)
dc.subject: Image matching (en_US)
dc.subject: Image-based localization (en_US)
dc.title: Long-term image-based vehicle localization improved with learnt semantic descriptors (en_US)
dc.type: Article (en_US)
dc.authorid: 0000-0001-8712-9461 (en_US)
dc.authorid: 0000-0002-3774-6872 (en_US)
dc.institutionauthor: Çınaroğlu, İbrahim (en_US)
dc.institutionauthor: Baştanlar, Yalın (en_US)
dc.department: İzmir Institute of Technology. Computer Engineering (en_US)
dc.identifier.wos: WOS:000807515200009 (en_US)
dc.identifier.scopus: 2-s2.0-85125251322 (en_US)
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member (en_US)
dc.identifier.doi: 10.1016/j.jestch.2022.101098
dc.contributor.affiliation: 01. Izmir Institute of Technology (en_US)
dc.contributor.affiliation: 01. Izmir Institute of Technology (en_US)
dc.relation.issn: 2215-0986 (en_US)
dc.description.volume: 35 (en_US)
dc.identifier.scopusquality: Q1
item.fulltext: With Fulltext
item.grantfulltext: open
item.openairetype: Article
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.languageiso639-1: en
item.cerifentitytype: Publications
crisitem.author.dept: 03.04. Department of Computer Engineering
Appears in Collections:Computer Engineering / Bilgisayar Mühendisliği
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Files in This Item:
File: 1-s2.0-S2215098622000064-main.pdf | Description: Article | Size: 4.3 MB | Format: Adobe PDF
SCOPUS Citations: 6 (checked on Apr 5, 2024)
Web of Science Citations: 4 (checked on Mar 23, 2024)
Page view(s): 30,480 (checked on Apr 29, 2024)
Download(s): 562 (checked on Apr 29, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.