Zelai Xu, Zhexuan Xu, Xiangmin Yi, Huining Yuan, Xinlei Chen, Yi Wu, Chao Yu, Yu Wang
VS-Bench is a new benchmark for evaluating Vision Language Models (VLMs) in multi-agent environments, revealing significant gaps in current models' strategic reasoning and decision-making abilities.
Vision Language Models (VLMs) are increasingly applied to complex tasks that combine visual and language inputs, yet most existing benchmarks evaluate them in simple, single-agent settings. VS-Bench is a new benchmark that measures how well these models handle multi-agent interaction in visual environments. The results show that current models fall well short of optimal performance in these settings, pointing to strategic reasoning and decision-making as key directions for future research.