About Me

Jie Feng is currently a postdoctoral researcher in the Department of Electronic Engineering at Tsinghua University. From 2021 to 2023, he worked at Meituan as a researcher specializing in intelligent decision-making and large language models. He received his B.S. and Ph.D. degrees (advised by Prof. Yong Li) in electronic engineering from Tsinghua University in 2016 and 2021, respectively. His research focuses on large language models, spatiotemporal data mining, and urban science, with over 40 papers published in top-tier venues including KDD, WWW, UbiComp, NAACL, AAAI, and TKDE. His work has garnered more than 3000 citations on Google Scholar. His research is supported by the Shuimu Tsinghua Scholar Program and the China Postdoctoral Talent Plan.

We are seeking self-motivated interns and collaborators to conduct research with us, remotely or at Tsinghua, on spatiotemporal data mining, large language models, embodied agents, and related topics. Our interns have a strong record of publishing first-author papers in CCF-A conferences and journals. I also have strong connections with industry (e.g., Meituan, Tencent) and can recommend good opportunities for internships or full-time positions. Feel free to contact me via email if you are interested.

Research Interests

  • Large Language Models: investigating techniques for building domain-specific, multi-modal LLMs for urban systems, e.g., CityGPT and CityBench.
  • LLM-Based Agents: building powerful intelligent agents for urban applications, e.g., AgentMove, TrajAgent and LMP.
  • Spatiotemporal Data Mining: trajectory mining, traffic forecasting, e.g., UniST.
  • Urban Science: human dynamics, urban dynamics, in-depth and data-driven analyses of urban issues.

News

  1. New![2025.04] I was honored to present an invited talk, “LLM-Based Agentic Framework: A Novel Paradigm for Modeling Human Mobility”, at the Spatial Data Intelligence 2025 Conference, which introduced our recent works AgentMove and TrajAgent.
  2. New![2025.04] We released a survey on LLM-powered spatial intelligence across scales, covering embodied agents, smart cities, and Earth science: Spatial Intelligence Across Scales! I was honored to present an invited talk on this survey at the Spatial Data Intelligence 2025 Conference in Xiamen. For a Chinese translation of the survey, please refer to this article by Zhuanzhi.
  3. New![2025.04] I am glad to be selected as a 2025 Spatial Data Intelligence Rising Star! See the news.
  4. New![2025.02] We released a preprint paper evaluating the basic spatial abilities of VLMs from a psychometric perspective, Basic Spatial Abilities.
  5. New![2025.01] We released a survey about enhancing the reasoning abilities of large language models, Towards Large Reasoning Models, which has attracted significant attention from the research community.
  6. New![2025.01] AgentMove has been accepted as a main conference paper at NAACL 2025, and the code has been released. Feel free to try it out!
  7. [2024.12] The datasets and implementation code for the CityGPT and CityBench projects are now open source. We welcome any suggestions and collaborations!
  8. [2024.12] I was honored to present an invited talk, “CityGPT: From LLM to Urban Generative Intelligence,” at the CCF ChinaData 2024 conference.
  9. [2024.11] We released a survey about world models, Understanding World or Predicting Future.
  10. [2024.10] We released a preprint paper about an urban simulation platform with massive LLM-based agents, OpenCity.
  11. [2024.10] We released a preprint paper about an LLM-based agent framework for automated and unified trajectory modeling across datasets, models and tasks, TrajAgent.
  12. [2024.10] I am glad to be selected as one of the Stanford/Elsevier Top 2% Scientists 2024.
  13. [2024.08] We released two preprint papers about LLM-based agent assisted mobility prediction: AgentMove and LMP.
  14. [2024.06] We released a preprint paper about enhancing the spatial cognition of large language models, CityGPT.
  15. [2024.06] We released a preprint paper about benchmarking (multi-modal) large language models for urban tasks, CityBench.
  16. [2024.05] Two papers were accepted at KDD 2024! High Efficiency Delivery Network and UniST.
  17. [2023.12] We released a preprint paper about Urban Generative Intelligence (UGI) in the era of large language models. For more details, please refer to arXiv.
  18. [2023.04] Meituan’s Real-Time Intelligent Dispatching Algorithms was named an INFORMS 2023 Edelman Award Finalist. I am glad to have contributed to the “Divide-and-Conquer” framework; the algorithmic details are introduced in High Efficiency Delivery Network.

Survey

  1. Feng, Jie, et al. “A Survey of Large Language Model-Powered Spatial Intelligence Across Scales: Advances in Embodied Agents, Smart Cities, and Earth Science.” preprint 2025.
  2. Xu, Fengli, et al. “Towards Large Reasoning Models: A Survey of Reinforced Reasoning with Large Language Models.” preprint 2025.
  3. Ding, Jingtao, et al. “Understanding World or Predicting Future? A Comprehensive Survey of World Models.” preprint 2024.
  4. Li, Fuxian, et al. “Dynamic graph convolutional recurrent network for traffic prediction: Benchmark and solution.” ACM Transactions on Knowledge Discovery from Data 17.1 (2023): 1-21.

Talks

  1. “LLM-Based Agentic Framework: A Novel Paradigm for Modeling Human Mobility”, ACM SIGSPATIAL CHINA Spatial Data Intelligence 2025 Conference, Xiamen
  2. “CityGPT: From LLM to Urban Generative Intelligence”, CCF ChinaData 2024 Conference, Boao