
Yuang Peng (彭雨昂)

Master's Student at ITML Group, Tsinghua University.

Research Intern,
Foundation Model Group, Megvii Research (Face++).

Research: I am a researcher and engineer specializing in large language models. My primary focus is on the development of efficient and scalable methods for multimodal data modeling, with particular emphasis on text, images, and videos. My interests span multiple areas, including generative modeling, representation learning, reinforcement learning, and embodied AI. My ultimate ambition is to cultivate multimodal perception, reasoning, and generation capabilities for Artificial General Intelligence (AGI), with the goal of creating fully intelligent systems and robots that can enhance human lives.

Experience: I am currently pursuing my Master's degree in Computer Science at Tsinghua University, advised by Shutao Xia and Bin Chen. I was a research intern at the Foundation Model Group, Megvii Research, and at the Shanghai Artificial Intelligence Laboratory. I was also a short-term visiting scholar at the Artificial Intelligence Group, University of Cambridge, advised by Pietro Liò. I obtained my Bachelor's degree in Computer Science from Wuhan University, where I was recognized as a distinguished graduate and graduated summa cum laude.

News

Sep 20, 2023 Introducing our multimodal LLM: DreamLLM

Selected Publications

  1. arXiv
    DreamLLM: Synergistic Multimodal Comprehension and Creation
    Runpei Dong*, Chunrui Han*, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong Yang, Liang Zhao, Jianjian Sun, Hongyu Zhou, Haoran Wei, Xiangwen Kong, Xiangyu Zhang, Kaisheng Ma, and Yi Li
    arXiv preprint arXiv:2309.11499, 2023
  2. arXiv
    ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning
    Liang Zhao*, En Yu*, Zheng Ge, Jinrong Yang, Haoran Wei, Hongyu Zhou, Jianjian Sun, Yuang Peng, Runpei Dong, Chunrui Han, and Xiangyu Zhang
    arXiv preprint arXiv:2307.09474, 2023