Zeyu Wang is an Assistant Professor in the Artificial Intelligence (AI) and Computational Media and Arts (CMA) Thrusts at the Hong Kong University of Science and Technology (Guangzhou) and an Affiliate Assistant Professor in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. He leads the Creative Intelligence and Synergy (CIS) Lab @HKUST(GZ). He received a PhD from the Department of Computer Science at Yale University, advised by Profs. Julie Dorsey and Holly Rushmeier, and a BS from the School of Artificial Intelligence at Peking University, advised by Profs. Hongbin Zha and Katsushi Ikeuchi. He has published 20 papers in top international journals and conferences on computer graphics and human-computer interaction, including ACM Transactions on Graphics (TOG), the ACM Conference on Human Factors in Computing Systems (CHI), and the ACM Journal on Computing and Cultural Heritage (JOCCH). He serves as a reviewer for TOG, SIGGRAPH, CHI, VR, CGF, EG, PG, JOCCH, VRST, GVC, etc. His research has been recognized by an Adobe Research Fellowship, a Franke Interdisciplinary Research Fellowship, a Best Paper Award, and a Best Demo Honorable Mention Award.
Honors & Awards
• Adobe Research Fellowship
• Franke Interdisciplinary Research Fellowship
• Best Paper Award
• Best Demo Honorable Mention Award
Research Interests
• Drawing analysis and intelligent interfaces, including non-photorealistic rendering and sketch-based modeling
• Virtual/augmented/extended reality, including 3D scene capture, processing, and editing using LiDAR data
• Procedural modeling, including parametric generation of visual media such as 3D shapes, appearances, and videos
• AI-based content generation, including neural rendering, digital humans, and multimodal creation with text-audio-visual information
• Applications in art, design, perception, and cultural heritage, such as the digitization of the Longmen Grottoes and Dunhuang dance