Microsoft Research AI4Science
No. 5 Danling Street
Beijing 100080, China
Email: liuchangsmail at gmail dot com; changliu at microsoft dot com;
chang-li14 at mails dot tsinghua dot edu dot cn
Biography
I am currently a senior researcher at Microsoft Research AI4Science (Asia) (科学智能中心).
Before that, I was with the Machine Learning Group at Microsoft Research Asia.
I received my Ph.D. degree (with honors) in 2019 from the TSAIL Group at the Department of Computer Science and Technology, Tsinghua University, supervised by Prof. Jun Zhu.
I received my B.Sc. degree in 2014 from the Department of Physics of Tsinghua University.
I also visited Prof. Lawrence Carin's group at Duke University from Oct. 2017 to Oct. 2018.
My research interests focus on machine learning, including Bayesian inference methods (e.g., variational inference, MCMC) and generative modeling,
as well as their interaction with scientific computing (e.g., quantum chemistry, molecular dynamics, statistical mechanics).
Research
[Google Scholar]
[GitHub]
[Semantic Scholar]
Publications
("*" denotes equal contribution)
-
Geometry in sampling methods: A review on manifold MCMC and particle-based variational inference methods
(free-access link available before Nov. 12).
Chang Liu, Jun Zhu.
Handbook of Statistics (vol. 46), Elsevier, 2022.
[Full-text paper]
[BibTeX]
-
Direct Molecular Conformation Generation.
Jinhua Zhu, Yingce Xia, Chang Liu, Lijun Wu, Shufang Xie, Yusong Wang, Tong Wang, Tao Qin, Wengang Zhou, Houqiang Li, Haiguang Liu, Tie-Yan Liu.
Transactions on Machine Learning Research, 2022.
[arXiv]
[Code]
-
Invertible Rescaling Network and Its Extensions.
Mingqing Xiao, Shuxin Zheng, Chang Liu, Zhouchen Lin, Tie-Yan Liu.
International Journal of Computer Vision, 2022.
[Full-text access]
[arXiv]
[Code]
[BibTeX]
-
Rapid Inference of Nitrogen Oxide Emissions Based on a Top-Down Method with a Physically Informed Variational Autoencoder.
Jia Xing, Siwei Li, Shuxin Zheng, Chang Liu, Xiaochun Wang, Lin Huang, Ge Song, Yihan He, Shuxiao Wang, Shovan Kumar Sahu, Jia Zhang, Jiang Bian, Yun Zhu, Tie-Yan Liu, Jiming Hao.
Environmental Science & Technology, 2022.
[BibTeX]
-
Sampling with Mirrored Stein Operators.
Jiaxin Shi, Chang Liu, Lester Mackey.
International Conference on Learning Representations (ICLR; Spotlight), 2022.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[BibTeX]
-
PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior.
Sang-gil Lee, Heeseung Kim, Chaehun Shin, Xu Tan, Chang Liu, Qi Meng, Tao Qin, Wei Chen, Sungroh Yoon, Tie-Yan Liu.
International Conference on Learning Representations (ICLR), 2022.
[Paper & Appendix]
[arXiv]
[Code: vocoder, acoustic model]
[BibTeX]
-
On the Generative Utility of Cyclic Conditionals.
Chang Liu, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Video]
[Media (Chinese)]
[BibTeX]
-
Learning Causal Semantic Representation for Out-of-Distribution Prediction.
Chang Liu, Xinwei Sun, Jindong Wang, Haoyue Tang, Tao Li, Tao Qin, Wei Chen, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Video]
[Media (Chinese)]
[BibTeX]
-
Recovering Latent Causal Factor for Generalization to Distributional Shifts.
(Formerly named "Latent Causal Invariant Model")
Xinwei Sun, Botong Wu, Xiangyu Zheng, Chang Liu, Wei Chen, Tao Qin, Tie-yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper]
[Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Media (Chinese)]
[BibTeX]
-
Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning.
Jongjin Park*, Younggyo Seo*, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu.
Neural Information Processing Systems (NeurIPS), 2021.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[Media (Chinese)]
[BibTeX]
-
Generalizing to Unseen Domains: A Survey on Domain Generalization.
Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Wenjun Zeng, Tao Qin.
International Joint Conference on Artificial Intelligence (IJCAI; survey track), 2021.
An extended version was published in IEEE Transactions on Knowledge and Data Engineering (TKDE), 2022.
[Paper]
[arXiv]
[BibTeX]
-
Invertible Image Rescaling.
Mingqing Xiao, Shuxin Zheng, Chang Liu, Yaolong Wang, Di He, Guolin Ke, Jiang Bian, Zhouchen Lin, Tie-Yan Liu.
European Conference on Computer Vision (ECCV; Oral), 2020.
[Paper & Appendix]
[arXiv]
[Code]
[BibTeX]
-
Variance Reduction and Quasi-Newton for Particle-Based Variational Inference.
Michael H. Zhu, Chang Liu, Jun Zhu.
International Conference on Machine Learning (ICML), 2020.
[Paper]
[Appendix]
[Code]
[Slides]
[Video]
[BibTeX]
-
Understanding MCMC Dynamics as Flows on the Wasserstein Space.
Chang Liu, Jingwei Zhuo, Jun Zhu.
International Conference on Machine Learning (ICML), 2019.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[BibTeX]
-
Understanding and Accelerating Particle-Based Variational Inference.
Chang Liu, Jingwei Zhuo, Pengyu Cheng, Ruiyi Zhang, Jun Zhu, Lawrence Carin.
International Conference on Machine Learning (ICML), 2019.
[Paper & Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[BibTeX]
-
Variational Annealing of GANs: A Langevin Perspective.
Chenyang Tao, Shuyang Dai, Liqun Chen, Ke Bai, Junya Chen, Chang Liu, Ruiyi Zhang, Georgiy Bobashev, Lawrence Carin.
International Conference on Machine Learning (ICML), 2019.
[Paper]
[Appendix]
[BibTeX]
-
Straight-Through Estimator as Projected Wasserstein Gradient Flow.
Pengyu Cheng, Chang Liu, Chunyuan Li, Dinghan Shen, Ricardo Henao, Lawrence Carin.
NeurIPS 2018 Bayesian Deep Learning Workshop, 2018.
[Paper]
[arXiv]
[BibTeX]
-
Message Passing Stein Variational Gradient Descent.
Jingwei Zhuo, Chang Liu, Jiaxin Shi, Jun Zhu, Ning Chen, Bo Zhang.
International Conference on Machine Learning (ICML), 2018.
[Paper]
[Appendix]
[arXiv]
[BibTeX]
-
Riemannian Stein Variational Gradient Descent for Bayesian Inference.
Chang Liu, Jun Zhu.
AAAI Conference on Artificial Intelligence (AAAI), 2018.
[Paper]
[Appendix]
[arXiv]
[Code]
[Slides]
[Poster]
[BibTeX]
-
Stochastic Gradient Geodesic MCMC Methods.
Chang Liu, Jun Zhu, Yang Song.
Neural Information Processing Systems (NeurIPS), 2016.
[Paper]
[Appendix]
[Code]
[Slides]
[Poster]
[BibTeX]
Technical Reports
-
Learning to Match Distributions for Domain Adaptation.
Chaohui Yu, Jindong Wang, Chang Liu, Tao Qin, Renjun Xu, Wenjie Feng, Yiqiang Chen, Tie-Yan Liu. 2020.
-
Modeling Lost Information in Lossy Image Compression.
Yaolong Wang, Mingqing Xiao, Chang Liu, Shuxin Zheng, Tie-Yan Liu. 2020.
Thesis
Talk & Teaching Materials
Academic Service
-
Area Chair: NeurIPS (2022).
-
Reviewer: NeurIPS (2016, 2018-2021), ICML (2019-2022), ICLR (2021-2022), AAAI (2020-2021), UAI (2019), IJCAI (2021), AISTATS (2023). Named a top-33% reviewer at ICML 2020.
Teaching
-
Lecturer: Summer School on Deep Generative Models, Department of Mathematical Sciences, Tsinghua University. Jun. 2022.
-
Lecturer (generative models section): Advanced Machine Learning, jointly hosted by Microsoft Research Asia and the Department of Electronic Engineering, Tsinghua University. 2019-2022.
-
Teaching Assistant: Duke-Tsinghua Machine Learning Summer School. Aug. 2016.
-
Teaching Assistant: Advanced Calculus (undergraduate), instructed by Prof. Yinghua Ai, Department of Mathematical Sciences, Tsinghua University. Sep. 2014 - Jan. 2015.