My photo

Usman A. Khan (bio)

Amazon Scholar, Amazon Robotics
Professor, Electrical and Computer Engineering, Tufts
Professor, Computer Science, Secondary Appointment, Tufts

Office: 318.B
574 Boston Ave
Medford, MA 02155
Phone: (617) 627-5299
Email: khan AT ece DOT tufts DOT edu


Education

Postdoc, University of Pennsylvania, 2009-2010
PhD, Carnegie Mellon University, 2009 (graduation photo with Prof. José Moura)
MS, University of Wisconsin-Madison, 2004
BS, University of Engineering and Technology, Lahore, Pakistan, 2002

Research Interests

Signal processing, optimization and control, stochastic dynamical systems


Awards and Professional Service

  • 2022 EURASIP Best Paper Award for our paper FORST
  • Best Student Paper Awards:
    S. Safavi (IEEE Asilomar 2016, 2014)
    M. Doostmohammadian (IEEE ISNSC 2014)

  • Lead Guest Editor, Proceedings of the IEEE Special Issue on Optimization for Data-driven Learning and Control, vol. 118, no. 11, Nov. 2020

  • Associate Editor, IEEE Transactions on Signal Processing, 2021-present
  • Associate Editor, IEEE Open Journal of Signal Processing, 2019-present
  • Associate Editor, IEEE Transactions on Signal and Information Processing over Networks, 2019-present

  • Guest Associate Editor, IEEE Control Systems Letters Special Issue on Learning and Control, vol. 4, no. 3, Jul. 2020
  • Associate Editor, IEEE Control Systems Letters, 2018-2020
  • Editor, IEEE Transactions on Smart Grid, 2014-2017

  • Technical Area Chair, Track C: Networks, Asilomar Conference on Signals, Systems, and Computers, Monterey, CA, Nov. 2020
  • Guest Professor, ACCESS Linnaeus Centre, KTH Sweden, Spring 2015
  • NSF CAREER award, Jan. 2014

Talks and Demos


  • Oct. 2021: A stochastic proximal gradient framework for decentralized non-convex composite optimization: Topology-independent sample complexity. arXiv

  • Sep. 2021: A fast randomized incremental gradient method for decentralized non-convex optimization, to appear in IEEE Transactions on Automatic Control. arXiv; IEEE Xplore
  • Sep. 2021: A near-optimal stochastic gradient method for decentralized non-convex finite-sum optimization, to appear in SIAM Journal on Optimization. arXiv
  • May 2021: A hybrid variance-reduced method for decentralized stochastic non-convex optimization, accepted in ICML 2021 (acceptance rate: ~21%). arXiv; MLR Press
  • Oct. 2020: Variance-reduced decentralized stochastic optimization with accelerated convergence, published in IEEE Transactions on Signal Processing. arXiv; IEEE Xplore
  • Aug. 2020: A general framework for decentralized optimization with first-order methods, published in the Proceedings of the IEEE. arXiv; IEEE Xplore
  • Feb. 2020: Decentralized stochastic optimization and machine learning, published in IEEE Signal Processing Magazine. arXiv; IEEE Xplore