Keypoint Matches Filtering in Computer Vision: Comparative Analysis of RANSAC and USAC Variants

Authors

  • Andriy Fesiuk
  • Yuriy Furgala

Keywords

SIFT, SURF, ORB, BRISK, RANSAC, USAC, detection, description, keypoints

Abstract

In this study, a detailed analysis is conducted to evaluate the efficiency of various keypoint match filtering methods: RANSAC and its USAC variants, namely USAC-DEFAULT, USAC-FAST, USAC-ACCURATE, USAC-MAGSAC, and USAC-PROSAC. Keypoints are detected and described using the SIFT, SURF, ORB, and BRISK methods. The aim of this work is to assess the impact of filtering methods on the accuracy, stability, and processing speed of image analysis. The results show that while RANSAC is the slowest method, it provides the highest stability, with a similarity coefficient deviation of 0.5%. RANSAC with modified parameters demonstrates higher accuracy and significantly faster processing than standard RANSAC, outperforming it by approximately 2.5 times and achieving a 4% accuracy improvement over USAC-DEFAULT. The fastest methods are USAC-PROSAC and USAC-FAST, whereas USAC-MAGSAC has the longest execution time among the USAC variations. Accuracy analysis of the different detectors shows that SIFT achieves the highest similarity coefficient values; SURF is slightly less accurate than SIFT, BRISK performs below SURF, and ORB is the least effective of the evaluated detectors. This work emphasizes the importance of an adaptive approach to selecting keypoint match filtering methods in order to achieve high accuracy, stability, and processing speed across computer vision applications. The findings of this study will assist developers and researchers in choosing optimal filtering methods and in improving the efficiency of image processing algorithms for specific tasks.

References

F. Zhu, H. Li, J. Li, B. Zhu, and S. Lei, “Unmanned aerial vehicle remote sensing image registration based on an improved oriented FAST and rotated BRIEF-random sample consensus algorithm,” Engineering Applications of Artificial Intelligence, vol. 126, 106944, 2023. https://doi.org/10.1016/j.engappai.2023.106944.

A. Urdapilleta and A. Agudo, “Comparative study of feature localization methods for endoscopy image matching,” Proceedings of the 2023 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), Kuala Lumpur, Malaysia, 2023, pp. 3719-3723. https://doi.org/10.1109/ICIPC59416.2023.10328381.

P. Arora, R. Mehta, and R. Ahuja, “An adaptive medical image registration using hybridization of teaching learning-based optimization with affine and speeded up robust features with projective transformation,” Cluster Comput, vol. 27, pp. 607–627, 2024. https://doi.org/10.1007/s10586-023-03974-3.

A. Kaur, M. Kumar, and M. K. Jindal, “Cattle identification system: a comparative analysis of SIFT, SURF and ORB feature descriptors,” Multimed Tools Appl, vol. 82, pp. 27391–27413, 2023. https://doi.org/10.1007/s11042-023-14478-y.

Y. M. Furgala and B. P. Rusyn, “Peculiarities of Melin transform application to symbol recognition,” Proceedings of the 2018 14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering (TCSET), Lviv-Slavske, Ukraine, 2018, pp. 251–254. https://doi.org/10.1109/TCSET.2018.8336200.

Y. Cui, Y. Hao, Q. Wu, et al., “An optimized RANSAC for the feature matching of 3D LiDAR point cloud,” Proceedings of the 2024 5th ACM International Conference on Computing, Networks and Internet of Things (CNIOT'24), 2024, pp. 287–291. https://doi.org/10.1145/3670105.3670153.

J. Yang, Z. Huang, S. Quan, Z. Cao and Y. Zhang, “RANSACs for 3D rigid registration: A comparative evaluation,” IEEE/CAA Journal of Automatica Sinica, vol. 9, no. 10, pp. 1861-1878, 2022. https://doi.org/10.1109/JAS.2022.105500.

Y. Furgala, Y. Mochulsky, and B. Rusyn, “Evaluation of objects recognition efficiency on maps by various methods,” Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 2018, pp. 595-598. https://doi.org/10.1109/DSMP.2018.8478435.

D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, issue 2, pp. 91–110, 2004. https://doi.org/10.1023/B:VISI.0000029664.99615.94.

H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” Computer Vision and Image Understanding, vol. 110, issue 3, pp. 346–359, 2008. https://doi.org/10.1016/j.cviu.2007.09.014.

E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, “ORB: An efficient alternative to SIFT or SURF,” Proceedings of the 2011 IEEE International Conference on Computer Vision, pp. 2564–2571, 2011. https://doi.org/10.1109/ICCV.2011.6126544.

S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: Binary robust invariant scalable keypoints,” Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 2011, pp. 2548–2555. https://doi.org/10.1109/ICCV.2011.6126542.

M. Kobasyar and B. Rusyn, “The Radon transform application for accurate and efficient curve,” Proceedings of the International Conference Modern Problems of Radio Engineering, Telecommunications and Computer Science, Lviv-Slavsko, Ukraine, 2004, pp. 223-224. https://ieeexplore.ieee.org/abstract/document/1365928.

M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981. https://doi.org/10.1145/358669.358692.

R. Raguram, O. Chum, M. Pollefeys, J. Matas and J.-M. Frahm, “USAC: A universal framework for random sample consensus,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 2022-2038, 2013. https://doi.org/10.1109/TPAMI.2012.257.

M. Ivashechkin, D. Baráth, and J. Matas, “USACv20: Robust essential, fundamental and homography matrix estimation,” arXiv:2104.05044v1, 2021. https://doi.org/10.48550/arXiv.2104.05044.

C. G. Melek, E. Battini Sönmez, and H. Ayral, “Development of a hybrid method for multi-stage end-to-end recognition of grocery products in shelf images,” Electronics, vol. 12, 3640, 2023. https://doi.org/10.3390/electronics12173640.

M. Erkin Yücel and C. Ünsalan, “Planogram compliance control via object detection, sequence alignment, and focused iterative search,” Multimedia Tools and Applications, vol. 83, issue 8, pp. 24815-24839, 2024. https://doi.org/10.1007/s11042-023-16427-1.

A. Tonioni and L. Di Stefano, “Product recognition in store shelves as a sub-graph isomorphism problem,” in S. Battiato, G. Gallo, R. Schettini, and F. Stanco (Eds.), Image Analysis and Processing – ICIAP 2017, Lecture Notes in Computer Science, vol. 10484, 2017. Springer, Cham. https://doi.org/10.1007/978-3-319-68560-1_61.

C. G. Melek, E. Battini Sönmez, and S. Varlı, “Datasets and methods of product recognition on grocery shelf images using computer vision and machine learning approaches: An exhaustive literature review,” Engineering Applications of Artificial Intelligence, vol. 133, 108452, 2024. https://doi.org/10.1016/j.engappai.2024.108452.

N. Mohtaram and F. Achakir, “Automatic detection and recognition of products and planogram conformity analysis in real time on store shelves,” in G. Bebis et al. (Eds.), Advances in Visual Computing. ISVC 2022. Lecture Notes in Computer Science, vol. 13599, 2022. Springer, Cham. https://doi.org/10.1007/978-3-031-20716-7_6.

S. A. Khan Tareen and R. H. Raza, “Potential of SIFT, SURF, KAZE, AKAZE, ORB, BRISK, AGAST, and 7 more algorithms for matching extremely variant image pairs,” Proceedings of the 2023 4th IEEE International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), Sukkur, Pakistan, 2023, pp. 1-6. https://doi.org/10.1109/iCoMET57998.2023.10099250.

M. Muja and D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in A. Ranchordas and H. Araújo (Eds.), VISAPP (1), INSTICC Press, pp. 331-340, 2009.

X. Ling, J. Liu, Z. Duan, and J. Luan, “A robust mismatch removal method for image matching based on the fusion of the local features and the depth,” Remote Sensing, vol. 16, no. 11, 1873, 2024. https://doi.org/10.3390/rs16111873.

A. Zelinsky, “Learning OpenCV – Computer vision with the OpenCV library (Bradski, G.R. et al.; 2008) [On the Shelf],” IEEE Robotics & Automation Magazine, vol. 16, no. 3, pp. 100-100, 2009. https://doi.org/10.1109/MRA.2009.933612.

J. Howse, and J. Minichino, Learning OpenCV 4 Computer Vision with Python 3: Get to Grips with Tools, Techniques, and Algorithms for Computer Vision and Machine Learning, Packt Publishing Ltd, 2020.

O. Chum, J. Matas, and J. Kittler, “Locally optimized RANSAC,” in B. Michaelis and G. Krell (Eds.), Pattern Recognition: 25th DAGM Symposium, Magdeburg, Germany, September 10–12, 2003, Proceedings, vol. 2781, pp. 236–243. Springer Berlin Heidelberg, 2003. https://doi.org/10.1007/978-3-540-45243-0_3.

D. Barath and J. Matas, “Graph-cut RANSAC: Local optimization on spatially coherent structures,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 9, pp. 4961-4974, 2021. https://doi.org/10.1109/TPAMI.2021.3071812.

D. Baráth, J. Noskova, M. Ivashechkin and J. Matas, “MAGSAC++, a Fast, Reliable and Accurate Robust Estimator,” Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 1301-1309. https://doi.org/10.1109/CVPR42600.2020.00138.

O. Chum and J. Matas, “Matching with PROSAC - progressive sample consensus,” Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 2005, vol. 1, pp. 220-226. https://doi.org/10.1109/CVPR.2005.221.

D. R. Myatt, P. H. S. Torr, S. J. Nasuto, J. M. Bishop, and R. Craddock, “NAPSAC: High noise, high dimensional robust estimation - it’s in the bag,” Proceedings of the British Machine Vision Conference 2002, Cardiff, UK, 2–5 September 2002, pp. 458–467.

M. Rodríguez, G. Facciolo, and J.-M. Morel, “Robust homography estimation from local Affine maps,” Image Processing On Line, vol. 13, pp. 65–89, 2023. https://doi.org/10.5201/ipol.2023.356.

A. Fesiuk and Y. Furgala, “The impact of parameters on the efficiency of keypoints detection and description,” Proceedings of the 2023 IEEE 13th International Conference on Electronics and Information Technologies (ELIT), Lviv, Ukraine, 2023, pp. 261-264. https://doi.org/10.1109/ELIT61488.2023.10310866.

A. Fesiuk and Yu. Furgala, “Keypoints on the images: Comparison of detection by different methods,” Electronics and Information Technologies, no. 21, pp. 15–23, 2023. https://doi.org/10.30970/eli.21.2.

T. Luan, F. Lv, M. Sun, X. Ban and Z. Zhou, “Multiobject tracking algorithm with adaptive noise calculation and information fusion in crowded environments,” IEEE Sensors Journal, vol. 24, no. 17, pp. 27666-27676, 2024. https://doi.org/10.1109/JSEN.2024.3425842.

A. Barroso-Laguna, E. Brachmann, V. A. Prisacariu, G. Brostow and D. Turmukhambetov, “Two-view geometry scoring without correspondences,” Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 2023, pp. 8979-8989. https://doi.org/10.1109/CVPR52729.2023.00867.

Published

2025-07-01

How to Cite

Fesiuk, A., & Furgala, Y. (2025). Keypoint Matches Filtering in Computer Vision: Comparative Analysis of RANSAC and USAC Variants. International Journal of Computing, 24(2), 343-350. Retrieved from https://www.computingonline.net/computing/article/view/4018

Section

Articles