We are happy to announce that the Seventh Workshop on Image Matching: Local Features and Beyond will be held at CVPR 2025 on June 11-12, 2025, in Nashville, TN, USA. The workshop will once again feature an open challenge on Kaggle, which will be announced soon. If you wish to receive announcements, please join our mailing list (expect 2-3 emails a year).
Matching two or more images across wide baselines is a core computer vision problem, with applications to stereo, 3D reconstruction, re-localization, SLAM, and retrieval, among many others. One of the last bastions of traditional handcrafted methods until recently, it too has begun to give way to learned alternatives. Interestingly, these new solutions still rely heavily on the design intuitions behind handcrafted methods. In short, we are clearly in a transition stage, and our workshop, held every year at CVPR since 2019, aims to address this by bringing together researchers across academia and industry to assess the true state of the field. We aim to establish what works, what doesn’t, what’s missing, and which research directions are most promising, with a focus on experimental validation.
Towards this end, every workshop edition has included an open challenge on local feature matching. Its results support this view, as solutions have evolved from carefully tuned traditional baselines (e.g. SIFT keypoints with learned patch descriptors) to more modern solutions (e.g. transformers). Local features might have an expiration date, but true end-to-end solutions still seem far away. More importantly, the results of the Image Matching Challenges have shown that comprehensive benchmarking with downstream metrics is crucial to understanding how novel techniques compare with their traditional counterparts. Our ultimate goal is to understand the performance of algorithms in real-world scenarios, to characterize their failure modes and how to address them, and to uncover problems that emerge in practical settings but are sometimes ignored by academia. We believe that this effort provides a valuable feedback loop to the community.
Topics include (but are not limited to):
We invite paper submissions of up to 8 pages, excluding references and acknowledgements. Submissions should use the CVPR template (reviews are double-blind, so please remove author information from the PDF) and be submitted via the CMT site:
Submissions must contain novel work and will be indexed in IEEE Xplore/CVF. Each submission will receive at least two double-blind reviews.
We welcome self-nominations to the program committee. If you are willing to review for the workshop, please contact us at image-matching@googlegroups.com.
(All deadlines are at 11:59 PM Pacific Time unless stated otherwise.)
2019-2021 Image Matching Benchmark (used in previous editions of the challenge).
Please reach out to us with any questions at image-matching@googlegroups.com.