About

Hi 👋, welcome to my website! I am a Ph.D. student at the Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS), advised by Prof. Shenghua Liu and Yiwei Wang. I work on Large Language Model (LLM) Reasoning 🤖, especially Vision-Language Model (VLM) reasoning and solving complex logic problems with LLMs.

I completed research internships at Tsinghua University, focusing on large language model post-training, and at Microsoft Software Technology Center Asia (STCA), working on feature engineering and language model training.

Beyond my core research, I explore Web3 + AI and Ethereum 🔗 (in collaboration with PharosInsight).

I am a member of sigmir and welcome research collaborations 🤝!

News


Interests
  • Large Language Model Reasoning
  • Vision-Language Model Reasoning
  • Solving Logic Problems with LLMs
Education
  • Ph.D. Candidate, 2024-202X

    Institute of Computing Technology

  • B.S. in Computer Science, 2020-2024

    North China University of Technology

Highlight Publications

(2025). Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning. arXiv.

(2025). Can Graph Descriptive Order Affect Solving Graph Problems with LLMs? Association for Computational Linguistics (ACL) Main, 2025 (CCF-A).

(2025). A Survey of Context Engineering for Large Language Models. arXiv.

(2025). Innate Reasoning is Not Enough: In-Context Learning Enhances Reasoning Large Language Models with Less Overthinking. arXiv.

(2023). Attack based on data: A novel perspective to attack sensitive points directly. Cybersecurity (CCF-C).

Other Publications

(2025). Are All Prompt Components Value-Neutral? Understanding the Heterogeneous Adversarial Robustness of Dissected Prompt in Large Language Models. arXiv.

(2025). Who is in the Spotlight: The Hidden Bias Undermining Multimodal Retrieval-Augmented Generation. Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) Oral, 2025 (CCF-B).

(2025). PIS: Linking Importance Sampling and Attention Mechanisms for Efficient Prompt Compression. arXiv.

Experience

Microsoft Software Technology Center (STC) Asia
Software Engineer Intern
April 2024 – September 2024, Beijing, China
 
Tsinghua University
Machine Learning Engineer Intern
September 2023 – March 2024, Beijing, China
 
PaddlePaddle, Baidu
Software Engineer Intern
July 2021 – December 2021, Beijing, China

Services

✍🏻 Reviewer for

  • ACM International Conference on Information and Knowledge Management (CIKM) 2025
  • AAAI Conference on Artificial Intelligence (AAAI) 2026