⚠️ WARNING: Our data contains model outputs that may be considered offensive.
AutoTrust aims to provide a thorough assessment of the trustworthiness of Vision-Language Models (VLMs) in autonomous driving tasks.
This project is designed to help researchers and practitioners better understand the trustworthiness issues associated with deploying state-of-the-art VLMs for autonomous driving.
It is organized around five primary perspectives of trustworthiness:
@misc{xing2024autotrustbenchmarkingtrustworthinesslarge,
      title={AutoTrust: Benchmarking Trustworthiness in Large Vision Language Models for Autonomous Driving},
      author={Shuo Xing and Hongyuan Hua and Xiangbo Gao and Shenzhe Zhu and Renjie Li and Kexin Tian and Xiaopeng Li and Heng Huang and Tianbao Yang and Zhangyang Wang and Yang Zhou and Huaxiu Yao and Zhengzhong Tu},
      year={2024},
      eprint={2412.15206},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.15206},
}
If you have questions, please contact us at tzz@tamu.edu.
We thank the SQuAD team and the DecodingTrust team for sharing their website template.