Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) have demonstrated impressive capabilities in tasks such as natural language understanding, visual question answering, and image captioning. To investigate their performance in the data science domain, many data science benchmarks have been proposed. Despite this progress, existing data science benchmarks have a significant limitation: they often do not align well with realistic applications. To address this gap, we introduce DSBench, a comprehensive benchmark designed to evaluate data science agents on realistic tasks. The benchmark includes 466 data analysis tasks and 74 data modeling tasks, sourced from Eloquence and Kaggle competitions. DSBench offers a realistic setting by encompassing long contexts, multimodal task backgrounds, reasoning over large data files and multi-table structures, and end-to-end data modeling. Our evaluation of state-of-the-art LLMs, LVLMs, and agents shows that they struggle with most tasks: the best agent solves only 34.12% of the data analysis tasks and achieves a Relative Performance Gap (RPG) of 34.74%. These findings underscore the need for further advances toward more practical, intelligent, and autonomous data science agents.
@misc{jing2024dsbenchfardatascience,
  title={DSBench: How Far Are Data Science Agents to Becoming Data Science Experts?},
  author={Liqiang Jing and Zhehui Huang and Xiaoyang Wang and Wenlin Yao and Wenhao Yu and Kaixin Ma and Hongming Zhang and Xinya Du and Dong Yu},
  year={2024},
  eprint={2409.07703},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2409.07703},
}