Large Language Model (LLM) in Autonomous Driving

Talk By Hongyang LI

Nov 23, 2023 Thursday


We present DriveLM, a new task, dataset, set of metrics, and baseline for end-to-end autonomous driving. It considers Graph Visual Question Answering (GVQA), where question-answer pairs are interconnected via logical dependencies at the object level, i.e., interactions between object pairs, and at the task level, e.g., perception to prediction to planning. In this talk, I will cover recent work and trending topics on how large language models (LLMs) could facilitate autonomous driving. Some preliminary results are provided and discussed to validate the zero-shot ability of the algorithm proposed at OpenDriveLab. For more details, please visit
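The GVQA formulation above can be sketched as a small dependency graph over question-answer pairs. The schema below is an illustrative assumption for this announcement, not the actual DriveLM data format: node fields, stage names, and the example questions are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class QANode:
    """One question-answer pair in a GVQA graph (hypothetical schema)."""
    qid: str
    stage: str                  # e.g., "perception", "prediction", "planning"
    question: str
    answer: str
    parents: list = field(default_factory=list)  # logical dependencies on earlier QAs

def build_graph(nodes):
    """Index nodes by id and verify every logical dependency is present."""
    index = {n.qid: n for n in nodes}
    for n in nodes:
        for p in n.parents:
            if p not in index:
                raise ValueError(f"missing dependency: {p}")
    return index

# Task-level chain (perception -> prediction -> planning) expressed as edges:
nodes = [
    QANode("q1", "perception", "What objects are ahead of the ego vehicle?",
           "A pedestrian near the curb and a parked car."),
    QANode("q2", "prediction", "Will the pedestrian cross the road?",
           "Likely yes.", parents=["q1"]),
    QANode("q3", "planning", "What should the ego vehicle do?",
           "Slow down and yield.", parents=["q1", "q2"]),
]
graph = build_graph(nodes)
```

The point of the structure is that an answer at the planning stage can be traced back through its `parents` to the perception and prediction QAs that justify it, which is what distinguishes GVQA from independent single-turn VQA.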





RmE1-101, GZ Campus


628 334 1826 (PW: 234567)

Bilibili Live:

ID: 30748067

Speaker Bio:

Hongyang Li

Research Scientist, Shanghai AI Lab

Hongyang received his PhD from The Chinese University of Hong Kong in 2019. He is currently a Research Scientist at OpenDriveLab, Shanghai AI Lab. His research focuses on perception and cognition, end-to-end autonomous driving, and foundation models. He has served multiple times as Area Chair for top-tier conferences, including CVPR and NeurIPS. As PI, he won the CVPR 2023 Best Paper Award, and he proposed BEVFormer, a renowned baseline for 3D object detection that was selected among the Top 100 AI Papers in 2022.