Microsoft Research AI for Science
No. 5 Danling Street, Beijing 100080, China.
Email: liuchangsmail at gmail dot com; changliu at microsoft dot com.
Biography
I am currently a senior researcher at Microsoft Research AI for Science (Asia branch; 科学智能中心).
Before that, I was a member of the Machine Learning Group at Microsoft Research Asia.
I received my Ph.D. degree (cum laude) in 2019 from the TSAIL Group at the Department of Computer Science and Technology of Tsinghua University, supervised by Prof. Jun Zhu.
I received my B.Sc. degree in 2014 from the Department of Physics of Tsinghua University.
I also visited Prof. Lawrence Carin's group at Duke University from Oct. 2017 to Oct. 2018.
My research focuses on machine learning, including sampling methods (mainly in the context of Bayesian inference, e.g., variational inference and MCMC), generative models (e.g., diffusion models), and AI methods for molecular science problems (e.g., electronic structure, molecular dynamics, and statistical mechanics).
Research
[Google Scholar]
[GitHub]
[Semantic Scholar]
("*": corresponding author; "†": equal contribution)
Preprints
-
MatterSim: A Deep Learning Atomistic Model Across Elements, Temperatures and Pressures.
Han Yang†, Chenxi Hu†, Yichi Zhou†, Xixian Liu†, Yu Shi†, Jielan Li†, Guanzhi Li†, Zekun Chen†, Shuizhou Chen†,
Claudio Zeni, Matthew Horton, Robert Pinsler, Andrew Fowler, Daniel Zügner, Tian Xie, Jake Smith, Lixin Sun, Qian Wang, Lingyu Kong, Chang Liu,
Hongxia Hao, Ziheng Lu.
[Paper & Appendix]
[arXiv]
Publications
-
Physical Consistency Bridges Heterogeneous Data in Molecular Multi-Task Learning.
Yuxuan Ren†, Dihan Zheng†, Chang Liu*, Peiran Jin, Yu Shi, Lin Huang, Jiyan He, Shengjie Luo, Tao Qin, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2024.
[Paper & Appendix]
[arXiv]
[Slides]
[Poster]
-
Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models.
Zun Wang, Chang Liu, Nianlong Zou, He Zhang, Xinran Wei, Lin Huang, Lijun Wu, Bin Shao.
Neural Information Processing Systems (NeurIPS), 2024.
[Paper & Appendix]
[arXiv]
-
Self-Consistency Training for Density-Functional-Theory Hamiltonian Prediction.
He Zhang, Chang Liu*, Zun Wang, Xinran Wei, Siyuan Liu, Nanning Zheng*, Bin Shao, Tie-Yan Liu.
International Conference on Machine Learning (ICML), 2024.
[Paper & Appendix]
[arXiv]
[Slides]
[Poster]
-
Predicting Equilibrium Distributions for Molecular Systems with Deep Learning.
Shuxin Zheng†*, Jiyan He†, Chang Liu†*, Yu Shi†, Ziheng Lu†,
Weitao Feng, Fusong Ju, Jiaxi Wang, Jianwei Zhu, Yaosen Min, He Zhang, Shidi Tang, Hongxia Hao, Peiran Jin, Chi Chen, Frank Noé,
Haiguang Liu†*, Tie-Yan Liu*.
Nature Machine Intelligence, 2024.
[Paper]
[Supplementary Information]
[arXiv]
[Project Homepage]
[Slides]
[BibTeX]
-
Overcoming the Barrier of Orbital-Free Density Functional Theory for Molecular Systems Using Deep Learning.
He Zhang†, Siyuan Liu†, Jiacheng You, Chang Liu*, Shuxin Zheng*, Ziheng Lu, Tong Wang, Nanning Zheng, Bin Shao*.
Nature Computational Science, 2024.
[Full-text access]
[arXiv]
[Code (inference)]
[Model checkpoints & evaluation data]
[Slides]
[Podcast]
[BibTeX]
-
Invertible Rescaling Network and Its Extensions.
Mingqing Xiao, Shuxin Zheng, Chang Liu*, Zhouchen Lin*, Tie-Yan Liu.
International Journal of Computer Vision, 2023.
[Full-text access]
[arXiv]
[Code]
[BibTeX]
-
Geometry in sampling methods: A review on manifold MCMC and particle-based variational inference methods.
Chang Liu, Jun Zhu.
In Advancements in Bayesian Methods and Implementation (Handbook of Statistics, vol. 47), Elsevier, 2022.
[Full-text paper]
[BibTeX]
-
Direct Molecular Conformation Generation.
Jinhua Zhu, Yingce Xia*, Chang Liu*, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, Tie-Yan Liu.
Transactions on Machine Learning Research, 2022.
[Paper & Appendix]
[arXiv]
[Code]
[BibTeX]
-
Rapid Inference of Nitrogen Oxide Emissions Based on a Top-Down Method with a Physically Informed Variational Autoencoder.
Jia Xing, Siwei Li, Shuxin Zheng, Chang Liu, Xiaochun Wang, Lin Huang, Ge Song, Yihan He, Shuxiao Wang, Shovan Kumar Sahu, Jia Zhang, Jiang Bian, Yun Zhu, Tie-Yan Liu, Jiming Hao.
Environmental Science & Technology, 2022.
[BibTeX]
-
Sampling with Mirrored Stein Operators.
Jiaxin Shi, Chang Liu, Lester Mackey.
International Conference on Learning Representations (ICLR; Spotlight), 2022.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[BibTeX]
-
PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior.
Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu.
International Conference on Learning Representations (ICLR), 2022.
[Paper & Appendix]
[arXiv]
[Code: vocoder, acoustic model]
[BibTeX]
-
On the Generative Utility of Cyclic Conditionals.
Chang Liu*, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Video]
[Media (Chinese)]
[BibTeX]
-
Learning Causal Semantic Representation for Out-of-Distribution Prediction.
Chang Liu*, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Video]
[Media (Chinese)]
[BibTeX]
-
Recovering Latent Causal Factor for Generalization to Distributional Shifts.
(Formerly named "Latent Causal Invariant Model")
Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper]
[Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Media (Chinese)]
[BibTeX]
-
Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning.
Jongjin Park†, Younggyo Seo†, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Media (Chinese)]
[BibTeX]
-
Generalizing to Unseen Domains: A Survey on Domain Generalization.
Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Wenjun Zeng, Tao Qin.
International Joint Conference on Artificial Intelligence (IJCAI; survey track), 2021.
An extended version was published in IEEE Transactions on Knowledge and Data Engineering (TKDE), 2022.
[Paper]
[arXiv]
[BibTeX]
-
Invertible Image Rescaling.
Mingqing Xiao, Shuxin Zheng*, Chang Liu*, Yaolong Wang, Di He, Guolin Ke, Jiang Bian, Zhouchen Lin, Tie-Yan Liu.
European Conference on Computer Vision (ECCV; Oral), 2020.
[Paper & Appendix]
[arXiv]
[Code]
[BibTeX]
-
Variance Reduction and Quasi-Newton for Particle-Based Variational Inference.
Michael H. Zhu*, Chang Liu*, Jun Zhu*.
International Conference on Machine Learning (ICML), 2020.
[Paper]
[Appendix]
[Code]
[Slides]
[Video]
[BibTeX]
-
Understanding MCMC Dynamics as Flows on the Wasserstein Space.
Chang Liu, Jingwei Zhuo, Jun Zhu.
International Conference on Machine Learning (ICML), 2019.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[BibTeX]
-
Understanding and Accelerating Particle-Based Variational Inference.
Chang Liu, Jingwei Zhuo, Pengyu Cheng, Ruiyi Zhang, Jun Zhu, Lawrence Carin.
International Conference on Machine Learning (ICML), 2019.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[BibTeX]
-
Variational Annealing of GANs: A Langevin Perspective.
Chenyang Tao, Shuyang Dai, Liqun Chen, Ke Bai, Junya Chen, Chang Liu, Ruiyi Zhang, Georgiy Bobashev, Lawrence Carin.
International Conference on Machine Learning (ICML), 2019.
[Paper]
[Appendix]
[BibTeX]
-
Straight-Through Estimator as Projected Wasserstein Gradient Flow.
Pengyu Cheng, Chang Liu, Chunyuan Li, Dinghan Shen, Ricardo Henao, Lawrence Carin.
Bayesian Deep Learning Workshop at NeurIPS, 2018.
[Paper]
[arXiv]
[BibTeX]
-
Message Passing Stein Variational Gradient Descent.
Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, Bo Zhang.
International Conference on Machine Learning (ICML), 2018.
[Paper]
[Appendix]
[arXiv]
[BibTeX]
-
Riemannian Stein Variational Gradient Descent for Bayesian Inference.
Chang Liu, Jun Zhu.
AAAI Conference on Artificial Intelligence (AAAI), 2018.
[Paper]
[Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[BibTeX]
-
Stochastic Gradient Geodesic MCMC Methods.
Chang Liu, Jun Zhu, Yang Song.
Neural Information Processing Systems (NeurIPS), 2016.
[Paper]
[Appendix]
[Code]
[Slides]
[Poster]
[BibTeX]
Technical Reports
-
Learning to Match Distributions for Domain Adaptation.
Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu. 2020.
-
Modeling Lost Information in Lossy Image Compression.
Yaolong Wang, Mingqing Xiao, Chang Liu, Shuxin Zheng, Tie-Yan Liu. 2020.
Thesis
Talk & Teaching Materials
Academic Service
-
Area Chair: NeurIPS (2022-2024), ICML (2023-2024), ICLR (2024-2025).
-
Reviewer: NeurIPS (2016, 2018-2021), ICML (2019-2022), ICLR (2021-2022), AAAI (2020-2021), UAI (2019), IJCAI (2021), AISTATS (2023). ICML 2020 top-33% reviewer.
Teaching
-
Lecturer, Summer School on Deep Generative Models, Department of Mathematical Sciences, Tsinghua University. Jun. 2022.
-
Lecturer, Advanced Machine Learning (generative model section), jointly hosted by Microsoft Research Asia and Department of Electronic Engineering, Tsinghua University. 2019-2022.