LLM Researcher
Email: [email protected]
LLM Blog: Wei Shen’s LLM Blog
**2025**
[AAAI2025] J Pan, W Shen, S Huang, Q Zhou, Y Zhang. Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model.
[EMNLP2025] J Hu, X Wu, W Shen, JK Liu, W Wang, S Jiang, H Wang, H Chen, B Chen, W Fang, Xianyu, Y Cao, H Xu, Y Liu. OpenRLHF: A Ray-based Easy-to-use, Scalable and High-performance RLHF Framework.
[NeurIPS2025] W Shen, G Liu, Z Wu, R Zhu, Q Yang, C Xin, Y Yue, L Yan. Exploring Data Scaling Trends and Effects in Reinforcement Learning from Human Feedback.
[NeurIPS2025] C Zhang, T Pearce, P Zhang, K Wang, X Chen, W Shen, L Zhao, J Bian. What Do Latent Action Models Actually Learn?
[NeurIPS2025] X Xu, X Zhao, H Xiang, X Zhang, W Shen, H Hu, L Qi. HPSERec: A Hierarchical Partitioning and Stepwise Enhancement Framework for Long-Tailed Sequential Recommendation.
[ICML2025] C Zhang*, W Shen*, L Zhao, X Zhang, X Xu, W Dou, J Bian. Policy Filtration for RLHF to Mitigate Noise in Reward Models.
[ICML2025] Y Liu, J Lu, Z Chen, C Qu, JK Liu, C Liu, Z Cai, Y Xia, L Zhao, J Bian, C Zhang, W Shen, Z Lin. AdaptiveStep: Automatically Dividing Reasoning Step through Model Confidence.
[ACL2025] Z Hu, Y Liu, J Zhao, S Wang, Y Wang, W Shen, Q Gu, AT Luu, SK Ng. LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models.
[ACL2025] Z Xiao, Z Wang, W Ma, Y Zhang, W Shen, Y Wang, L Gong, Z Liu. Mitigating Posterior Salience Attenuation in Long-Context LLMs with Positional Contrastive Decoding.
[ICLR2025] W Shen, C Yin, Y Liu, Z Xiao, X He. Why RoPE Struggles to Maintain Long-Term Decay in Long Sequences?
**2024**
[EMNLP2024] J Zhou, C Jiang, W Shen, X Zhou, X He. Leveraging Web-Crawled Data for High-Quality Fine-Tuning.
**2023**
[AAAI2023] Y Cai, C Zhang, W Shen, X Zhang, W Ruan, L Huang. RePreM: Representation Pre-training with Masked Model for Reinforcement Learning.