Pretrained Models, Alignment Tech and Agents

Talk By Jiaxing ZHANG

Nov 09, 2023 Thursday


Large Language Models (LLMs) are fundamentally changing how we approach AI technology. These pretrained models offer a wide range of general and specialized capabilities that serve as the foundation for a multitude of downstream alignments and applications. By treating LLMs as the cognitive core and harnessing the power of tools and knowledge bases, AI agents can be designed and built to interact seamlessly with both physical and digital environments. We are entering an exciting new era of large models, and in this presentation we will delve into the practical aspects and profound implications of this paradigm shift.


RmE1-101, GZ Campus


Meeting: 628 334 1826 (PW: 234567)

Bilibili Live:

ID: 30748067

Speaker Bio:

Dr. Jiaxing ZHANG

Chair Scientist, IDEA

PhD degree from Peking University

Dr. Jiaxing Zhang is currently a chair scientist at the International Digital Economy Academy (IDEA). He received his PhD from Peking University and previously held positions at companies including Microsoft Research Asia and Ant Group, where he developed expertise in large language models, deep learning, and natural language processing. He has published more than 30 papers in top academic conferences and journals (NIPS, OSDI, ACL, CVPR, SIGMOD, NSDI, OOPSLA, ICSE, etc.) in the fields of artificial intelligence, deep learning, and distributed systems, and has filed more than 70 patents. Presently, Dr. Zhang is deeply dedicated to LLM technology and NLP research. The "Fengshenbang" large models built by the team under his leadership have become the largest open-source Chinese LLM system.