Analysis of Deep Feature Matching Algorithms in UAV Visual Localization
This work analyzes the application of deep learning-based methods for keypoint extraction and matching in the context of map-aided UAV visual localization. A method for visual localization in three degrees of freedom is proposed that employs the pretrained SuperPoint and LightGlue networks both for short-term optical flow and for global frame-to-map matching. The ability of SuperPoint and LightGlue, without fine-tuning, to refine the prediction of a less accurate location estimator was assessed. The accuracy of the proposed method was evaluated, and a detailed analysis of keypoint-based image matching algorithms was conducted, on the AdM_UAV dataset. Performance evaluation on the NVIDIA Jetson Orin Nano shows acceptable results for real-time visual localization on this hardware platform.
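The frame-to-map matching component described above pairs keypoints between a camera frame and a reference map by comparing learned descriptors. The following is a minimal, illustrative sketch of one common pairing rule, mutual nearest-neighbor matching of descriptor vectors, written in plain Python; it is not the paper's implementation (LightGlue uses a learned attention-based matcher), and the function and variable names are hypothetical.

```python
from math import dist

def mutual_nn_matches(desc_a, desc_b):
    """Match two descriptor sets by mutual nearest neighbor.

    desc_a, desc_b: lists of equal-length feature vectors (tuples of floats).
    Returns (i, j) index pairs where desc_a[i] and desc_b[j] are each
    other's nearest neighbors in Euclidean distance. The mutual check
    discards one-sided matches, a simple way to suppress outliers.
    """
    # Nearest neighbor in desc_b for every descriptor in desc_a, and vice versa.
    nn_ab = [min(range(len(desc_b)), key=lambda j: dist(a, desc_b[j]))
             for a in desc_a]
    nn_ba = [min(range(len(desc_a)), key=lambda i: dist(desc_a[i], b))
             for b in desc_b]
    # Keep only pairs that agree in both directions.
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Toy example with 2-D "descriptors" (real descriptors are high-dimensional).
frame_desc = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
map_desc = [(1.1, 0.9), (4.9, 5.1), (0.1, -0.1)]
print(mutual_nn_matches(frame_desc, map_desc))  # → [(0, 2), (1, 0), (2, 1)]
```

In a real pipeline the resulting index pairs would feed a robust pose estimator (e.g. RANSAC over a homography) to produce the 3-DoF location update.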