Jiawei Liu
I am a researcher at OpenAI 🌉.
I received my CS PhD from the University of Illinois Urbana-Champaign, supported by fellowships from Amazon and the Yee Memorial Fund, and by an Illinois Innovation Award. During my four years at UIUC, I worked with Lingming Zhang on novel software engineering problems at the intersection of machine learning and programming languages. My PhD research includes early work on evaluating and training coding models, as well as model-synthesis methods for fuzz testing AI compilers.
Coding Models
- Training models [StarCoder2] to reason [PurpCode] [Code-R1] and follow diverse instructions [Magicoder]
- Evaluating code for correctness [EvalPlus] and efficiency [EvalPerf], and judging code without hard verifiers [CodeFavor]
- Making code editing real-time by accelerating it with multi-layer speculation [Blazedit]
Automated Testing
Research Impact
Research (Full List)
- PurpCode: Reasoning for Safer Code Generation. The Thirty-ninth Annual Conference on Neural Information Processing Systems (NeurIPS’25). 2025. 🥇 1st Place in Amazon Nova AI Challenge 2025
- Proc. ACM Softw. Eng. 2 (ISSTA). Jun 2025
- Forty-first International Conference on Machine Learning. Jun 2024. Adopted by Meta Llama 3.1, Google CodeGemma, and IBM Granite
- arXiv preprint arXiv:2402.19173. Jun 2024
- Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. Jun 2023. 🏆 ACM SIGSOFT Distinguished Paper Award
- Proceedings of the ACM on Programming Languages 6 (OOPSLA1). Apr 2022
Awards & Honors
🥇 1st Place, Amazon Nova AI Challenge ($250K) 2025
Jane Street Fellowship Honorable Mention
Amazon Nova AI Challenge Research Grant ($250K) 2024
OpenAI Researcher Access Program
Machine Learning and Systems Rising Stars
Warren W. Yee Memorial Fellowship
Invited Talks
NLP+SE Seminar, UT Austin: Smelling the Quality of LLM-generated Code Mar 2025
Programming Systems, Uber: Evaluating LLMs for Correct & Efficient Code Generation Sep 2024
ARiSE Lab, Columbia University: Simplify the Making of Great Software in the ML Era Apr 2024
Snowflake GenAI: Rigorous Evaluation of LLMs for Code (Slides) Feb 2024
AST Lab, ETH Zürich: Generating Test-Cases for ML Compilers (Slides) Jan 2024
GAI4SE, NC State University: LLMs for Software Testing (Guest Lecture) Nov 2023
Apache TVM Conference: Automating DL Compiler Bug Finding with NNSmith Mar 2023
SAMPL, University of Washington: Coverage-Guided Tensor Compiler Fuzzing May 2022