I am currently a fourth-year undergraduate student in Software Engineering at the School of Software, Tsinghua University. I am a research intern at the AI4CE Lab at New York University and the Litchi Lab at the College of AI, Tsinghua University, advised by Prof. Chen Feng and Prof. Yiming Li. Previously, I worked with Prof. Jiachen Li at the University of California, Riverside. My research interests lie at the intersection of spatial intelligence, 3D vision, and embodied AI. I am actively seeking PhD opportunities for Fall 2026 and am always open to communication and collaboration. Please feel free to contact me!
2022.09-2026.06 | B.E. Software Engineering, School of Software, Tsinghua University
2025.03-2025.11 | Wanderland: Geometrically Grounded Simulation for Open-World Embodied AI
I joined the AI4CE Lab at NYU Tandon to work on the Wanderland project under the guidance of Prof. Chen Feng. Wanderland reconstructs large-scale, diverse urban environments from handheld LiDAR–IMU–RGB sensor rigs, training 3D Gaussian Splatting (3DGS) models that are integrated into Isaac Sim. We investigated whether visual realism alone is sufficient for embodied AI and concluded that it is not: trustworthy benchmarking still demands the metric-scale geometric grounding that previous pipelines lack. My contributions included designing the data acquisition and reconstruction pipeline, optimizing sensor calibration and trajectory estimation, and evaluating the impact of geometric constraints on reconstruction quality and reinforcement-learning-based navigation performance. This experience significantly deepened my understanding of how geometric priors can enhance embodied AI in real-world scenarios.
2024.06-2025.01 | Multi-Robot Social Navigation with Cooperative Occupancy Prediction
I visited the TASL at the University of California, Riverside as a visiting scholar to conduct this research under the guidance of Prof. Jiachen Li. In this project, we integrated cooperative perception methods into a multi-agent reinforcement learning framework to address social navigation tasks using Occupancy Grid Map Prediction (OGMP). Our approach yielded performance improvements when navigating complex environments with multiple agents. Through this experience, I gained an understanding of reinforcement learning and social navigation, as well as experience communicating and leading research within a diverse team.
2023.08-2024.05 | Feature Aggregation for Real-time Free-viewpoint Dynamic Human Video from Sparse Views
This project was directed by Prof. Feng Xu and Wenbin Lin, focusing on reconstructing real-time human body motion, meshes, and lighting textures from sparse viewpoints. Specifically, I used a small number of mobile phone videos as input to estimate human pose and render it into a mesh model in real time. The paper related to this project is currently under review.
2023.03-2023.07 | Quality of Experience (QoE) Improvement in Mobile Live Streaming Program
I joined the Student Research Training program on QoE Improvement in Mobile Live Streaming, advised by Prof. Lifeng Sun. In this project, I extensively searched and reviewed the relevant literature and implemented a method for evaluating video streaming hotspots in C++. This method was ultimately applied to our laboratory's adaptive bitrate system.
2024.11 | Awarded Comprehensive Scholarship (~Top 10%) at School of Software, Tsinghua University.
2024.10 | Received the Weng Scholarship at Tsinghua University.
2023.09 | Received the 2023 Hantex Scholarship.
Course Project
Team Project
Homework
Homework
I am interested in the trombone and classical music. I am also a member of TUSO. Here are some videos of our performances:
Tsinghua University New Year Special Concert, 2024.01
TUSO’s 30th Anniversary Concert, 2023.06
I like traveling and photography. My profile picture was taken during my trip to Kyoto.
I hope to share my travel experiences here, and you will see them soon…
I plan to document my volunteer activities here, which you will be able to see soon…
Powered by Jekyll and Minimal Light theme.