Abstract
Automated scientific fact-checking is essential for validating the accuracy of scientific claims, ensuring the integrity of information, and fostering public trust in research findings. However, existing benchmarks for scientific fact-checking suffer from limitations such as reliance on crowd-sourced claims and a lack of quantitative experimental data. To address these gaps, we introduce SCITAB, a novel dataset comprising 1.2K challenging scientific claims paired with their original scientific tables. Verifying these claims demands compositional reasoning, making them more representative of real-world scientific fact-checking needs.
Project Members and Collaborators
- Xinyuan Lu (Ph.D. student)
- Dr. Liangming Pan (Postdoc, UCSB)
- Dr. Qian Liu (Research Scientist, SEA AI Lab)
- Prof. Preslav Nakov (Professor, MBZUAI)
- Prof. Min-Yen Kan (Professor and Advisor)
Publications