What is MMCV?

MMCV is a multimodal claim verification dataset featuring natural, multi-hop claims, with strong supervision for supporting facts to enable more explainable fact-checking systems. It was collected by a team of NLP researchers at Illinois Institute of Technology and Emory University.

For more details about MMCV, please refer to our paper: https://aclanthology.org/2025.coling-main.498/

Getting started

MMCV is distributed under a CC BY-SA 4.0 License. The combined dataset, as well as separate subsets for each hop, can be downloaded below.

A more comprehensive guide covering data download, preprocessing, baseline model training, and evaluation is included in our GitHub repository.
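
As a quick orientation, the snippet below is a minimal sketch of loading a downloaded split, assuming the data ships as a JSON list of examples. The file name and the "claim" and "label" field names are illustrative assumptions, not the official schema; please check the repository for the actual file layout and keys.

import json

# Hypothetical path; substitute the actual file from the download links
# above or from the GitHub repository.
DATA_PATH = "mmcv_combined.json"

with open(DATA_PATH, encoding="utf-8") as f:
    examples = json.load(f)

print(f"Loaded {len(examples)} examples")

# "claim" and "label" are assumed field names, used here only to show
# the access pattern; consult the repository README for the real keys.
for ex in examples[:3]:
    print(ex.get("claim"), "->", ex.get("label"))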

Evaluation

As explained in Section 5 of the paper, we evaluate zero-shot multimodal claim verification using various MLLMs under two settings. In the closed-book setting, the model does not retrieve information from external knowledge sources and must rely on its parametric (internal) knowledge to verify the claim. In the open-book setting, the model is provided with a set of gold evidence. Please refer to our evaluation script provided below for calculating the performance metrics.
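
The sketch below illustrates the difference between the two settings as prompt construction. The build_prompt helper and its wording are hypothetical, not the prompts used in our evaluation script, and multimodal evidence is flattened to text purely for illustration.

def build_prompt(claim, evidence=None):
    # Closed-book: no evidence is given, so the MLLM must verify the
    # claim from its parametric (internal) knowledge alone.
    if evidence is None:
        return (f"Claim: {claim}\n"
                "Is this claim supported or refuted? Answer:")
    # Open-book: the gold evidence pieces are supplied with the claim.
    # Non-text evidence (images, tables) is rendered as strings here for
    # illustration; real MLLM calls would pass images natively.
    rendered = "\n".join(f"- {piece}" for piece in evidence)
    return (f"Evidence:\n{rendered}\n\n"
            f"Claim: {claim}\n"
            "Based only on the evidence, is this claim supported or refuted? Answer:")

# Example usage with a hypothetical claim:
print(build_prompt("The Eiffel Tower is taller than the Chrysler Building."))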

Citation

If you use MMCV in your research, please cite our paper with the following BibTeX entry:

@inproceedings{wang-etal-2025-piecing,
    title = "Piecing It All Together: Verifying Multi-Hop Multimodal Claims",
    author = "Wang, Haoran  and
      Rangapur, Aman  and
      Xu, Xiongxiao  and
      Liang, Yueqing  and
      Gharwi, Haroon  and
      Yang, Carl  and
      Shu, Kai",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.498/",
    pages = "7453--7469",
    abstract = "Existing claim verification datasets often do not require systems to perform complex reasoning or effectively interpret multimodal evidence. To address this, we introduce a new task: multi-hop multimodal claim verification. This task challenges models to reason over multiple pieces of evidence from diverse sources, including text, images, and tables, and determine whether the combined multimodal evidence supports or refutes a given claim. To study this task, we construct MMCV, a large-scale dataset comprising 15k multi-hop claims paired with multimodal evidence, generated and refined using large language models, with additional input from human feedback. We show that MMCV is challenging even for the latest state-of-the-art multimodal large language models, especially as the number of reasoning hops increases. Additionally, we establish a human performance benchmark on a subset of MMCV. We hope this dataset and its evaluation task will encourage future research in multimodal multi-hop claim verification."
}

Leaderboard

To verify an MMCV claim, the system must first retrieve the supporting facts from the corpus and then predict whether the claim is supported or not. The retrieved facts are evaluated against the ground truth to yield accuracy and F1 scores; a sketch of these metrics follows the table below.

Rank  Date          Model                                             Closed-Book (1 / 2 / 3 / 4-Hop)  Open-Book (1 / 2 / 3 / 4-Hop)
1     Sep 16, 2024  MMCV Baseline, Illinois Tech (Wang et al. 2024)   71.79 / 63.87 / 66.76 / 64.64    79.20 / 71.66 / 65.86 / 66.97
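
For reference, here is a minimal sketch of how these metrics might be computed. This is not our official evaluation script (linked above); the verdict labels and the flat-list input formats are assumptions for illustration.

def verdict_accuracy(predicted, gold):
    # Fraction of claims whose predicted verdict matches the gold verdict.
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

def supporting_fact_f1(predicted_facts, gold_facts):
    # Set-level F1 between retrieved and ground-truth supporting facts.
    pred, gold = set(predicted_facts), set(gold_facts)
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: two claims with one correct verdict; retrieved facts {a, b}
# against gold facts {b, c} give precision = recall = 0.5.
print(verdict_accuracy(["supported", "refuted"], ["supported", "supported"]))  # 0.5
print(supporting_fact_f1(["a", "b"], ["b", "c"]))  # 0.5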