Research Overview

My research began with a focus on projection methods for solving convex feasibility problems (particularly in the context of CT imaging). This morphed into the study of operators in greater generality and of operator splitting methods (e.g., ADMM, forward-backward splitting), which cover a wide class of continuous optimization problems. The latest shift in focus is the subject of my thesis research: how to fuse the advantages of machine learning machinery with the theoretical guarantees afforded by operator-based methods. In the context of deep learning, there are many exciting and ongoing developments connecting these two fields. Several works are in progress and will be released this spring (e.g., submissions to NeurIPS 2021).
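
To give a concrete flavor of the operator splitting methods mentioned above, below is a minimal sketch of forward-backward splitting (proximal gradient) applied to a toy LASSO problem, min_x 0.5*||Ax - b||^2 + lam*||x||_1: the forward step is a gradient step on the smooth term, and the backward step is the proximal (soft-thresholding) step on the nonsmooth term. The problem data, step size, and iteration count are illustrative assumptions, not taken from any of the publications listed below.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, iters=500):
    # Step size alpha <= 1/L, where L = ||A||_2^2 is the gradient's
    # Lipschitz constant (illustrative choice).
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                            # forward (gradient) step
        x = soft_threshold(x - alpha * grad, alpha * lam)   # backward (prox) step
    return x

# Toy usage with random data (illustrative only).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = forward_backward(A, b, lam=0.1)
print(np.round(x_hat[:5], 3))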

Keywords: Convex Optimization, Deep Learning, Plug and Play (PnP), Learning to Optimize (L2O), Implicit Depth Learning, Wasserstein GANs, Deep Equilibrium Models, Deep Unrolling, Operator Splitting, Convex Feasibility Problems

Selected Publications

J. Shen, X. Chen, H. Heaton, T. Chen, J. Liu, W. Yin, Z. Wang.

Learning A Minimax Optimizer: A Pilot Study.

ICLR, 2021.

H. Heaton, S. Wu Fung, A. T. Lin, S. Osher, W. Yin.

Projecting to Manifolds via Unsupervised Learning. 

arXiv preprint: 2008.02200, 2020.

H. Heaton, X. Chen, Z. Wang, W. Yin.

Safeguarded Learned Convex Optimization.

arXiv preprint: 2003.01880, 2020.

H. Heaton, Y. Censor.

Asynchronous sequential inertial iterations for common fixed points problems with an application to linear systems.

Journal of Global Optimization, 2019.

Y. Censor, H. Heaton, R. Schulte.

Derivative-free superiorization with component-wise perturbations.

Numerical Algorithms, 2018.

©2021 by Howard Heaton