We are happy to announce that the Fifth Workshop on Image Matching: Local Features and Beyond will be held at CVPR 2023 on June 19, 2023 (morning, Pacific Time) in Vancouver, Canada. The workshop will once again feature an open challenge which will be announced in the following weeks. We plan to hold it on Kaggle, as we did last year. Further details will be announced before the conference.
11:30-11:40 ORGANIZERS: The 2023 Image Matching Challenge at Kaggle
11:40-11:47 CHALLENGE TALK: 1st place: Team ZJU3DV
11:47-11:54 CHALLENGE TALK: 2nd place: Team Current | Resistance | Voltage
11:54-12:01 CHALLENGE TALK: 4th place: Team Roni Heka
12:01-12:08 CHALLENGE TALK: 5th place: Team Kohei
12:08-12:15 CHALLENGE TALK: 6th place: Team MaxChen303
12:15-12:22 CHALLENGE TALK: 8th place: Team RMD-3DV
12:22-12:30 ORGANIZERS: Closing
Matching two or more images across wide baselines is a core computer vision problem, with applications to stereo, 3D reconstruction, re-localization, SLAM, and retrieval, among many others. Until recently it was one of the last bastions of traditional handcrafted methods, but it too has begun to be replaced with learned alternatives. Interestingly, these new solutions still rely heavily on the design intuitions behind handcrafted methods. In short, we are clearly in a transition stage, and our workshop, held every year at CVPR since 2019, aims to address this, bringing together researchers across academia and industry to assess the true state of the field. We aim to establish what works, what doesn't, what's missing, and which research directions are most promising, while focusing on experimental validation.
Towards this end, every workshop edition has included an open challenge on local feature matching. Its results support this view, as solutions have evolved from carefully tuned traditional baselines (e.g. SIFT keypoints with learned patch descriptors) to more modern solutions (e.g. transformers). Local features may have an expiration date, but true end-to-end solutions still seem far away. More importantly, the results of the Image Matching Challenges have shown that comprehensive benchmarking with downstream metrics is crucial to understanding how novel techniques compare with their traditional counterparts. Our ultimate goal is to understand the performance of algorithms in real-world scenarios, their failure modes, and how to address them, and to identify problems that emerge in practical settings but are sometimes ignored by academia. We believe that this effort provides a valuable feedback loop to the community.
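To make the "carefully tuned traditional baselines" concrete, the matching stage of such a pipeline can be sketched as mutual nearest-neighbour descriptor matching with Lowe's ratio test. This is a minimal illustrative sketch, not challenge code: the function names and the toy 2-D descriptors are invented for the example, and real pipelines operate on 128-D SIFT or learned descriptors and follow matching with geometric verification (e.g. RANSAC).

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Mutual nearest-neighbour matching with Lowe's ratio test.

    desc_a, desc_b: lists of descriptor vectors (lists of floats).
    Returns a list of (i, j) index pairs, i into desc_a, j into desc_b.
    """
    def best_two(query, pool):
        # Indices of the nearest and second-nearest descriptors in `pool`.
        order = sorted(range(len(pool)), key=lambda k: euclidean(query, pool[k]))
        return order[0], (order[1] if len(order) > 1 else None)

    matches = []
    for i, da in enumerate(desc_a):
        j, j2 = best_two(da, desc_b)
        # Ratio test: best match must be clearly better than the runner-up.
        if j2 is not None:
            d1 = euclidean(da, desc_b[j])
            d2 = euclidean(da, desc_b[j2])
            if d2 > 0 and d1 / d2 >= ratio:
                continue
        # Mutual check: i must also be the nearest neighbour of j in desc_a.
        i_back, _ = best_two(desc_b[j], desc_a)
        if i_back == i:
            matches.append((i, j))
    return matches

# Toy 2-D descriptors: the first two in `a` correspond to `b`; the
# third is a distractor roughly equidistant from both, so the ratio
# test rejects it.
a = [[0.0, 0.0], [10.0, 10.0], [5.0, 5.0]]
b = [[0.1, 0.0], [10.0, 10.1]]
print(match_descriptors(a, b))  # → [(0, 0), (1, 1)]
```

The ratio test is exactly the kind of handcrafted design intuition the paragraph above refers to: learned matchers such as transformer-based ones replace this hard threshold with learned mutual attention, but the underlying nearest-neighbour-plus-verification structure persists.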
Topics include (but are not limited to):
Formulations of keypoint extraction and matching pipelines with deep networks.
Application of geometric constraints into the training of deep networks.
Leveraging additional cues such as semantics and mono-depth estimates.
Methods addressing adversarial conditions where current methods fail (weather changes, day versus night, etc.).
Attention mechanisms to match salient image regions.
Integration of differentiable components into 3D reconstruction frameworks.
Connecting local descriptors/image matching with global descriptors/image retrieval.
Matching across different data modalities such as aerial versus ground.
Large-scale evaluation of classical and modern methods for image matching, by means of our open challenge.
New perception devices such as event-based cameras.
Other topics related to image matching, structure from motion, mapping, and re-localization, such as privacy-preserving representations.
Call for Papers
We invite paper submissions of up to 8 pages, excluding references and acknowledgements. They should use the CVPR template and be submitted to the CMT site. Submissions must contain novel work; accepted papers will be indexed in IEEE Xplore and the CVF open-access archive, and each submission will receive at least two double-blind reviews.