Daniel Hsu


450 Computer Science Building
Mail Code 0401

Tel: (212) 853-8473

Daniel Hsu develops algorithms for statistical analysis and machine learning, focusing especially on settings that involve high-dimensional data or interaction. His research has produced the first computationally efficient algorithms for several statistical estimation tasks, provided new algorithmic frameworks for solving interactive machine learning problems, and led to the creation of scalable tools for machine learning applications.

Research Interests

Algorithmic statistics, machine learning, privacy

Statistical models posit the presence of interesting structure in data; the goal of estimation is to measure and quantify this structure. Estimating the parameters of statistical models is especially challenging when some of the variables in the model are not observed (as in mixture models and hidden Markov models). Indeed, classical methods often require solving intractable optimization problems (e.g., maximum likelihood estimation) and are therefore difficult to apply to large, high-dimensional data sets. Hsu develops and analyzes computationally efficient algorithms for estimation in hidden variable models (including mixture models, hidden Markov models, and topic models), as well as for other estimation problems in which spectral analysis plays a central role.

Interactive machine learning concerns learning agents that adaptively make decisions which ultimately affect the data available to the agent. For example, the agent may be the learning procedure used by a data scientist, and it may interact with the data scientist to jointly construct an accurate classifier. Such scenarios extend beyond the standard frameworks for understanding machine learning algorithms. Hsu's research has developed new frameworks, algorithms, and analytic techniques for interactive learning.
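As a toy illustration of the moment-method idea behind this line of work (a hypothetical sketch, not any specific algorithm from the publications below): for an equal-weight mixture of two unit-variance Gaussians, the first two moments already determine the two component means, so the means can be recovered by solving a quadratic instead of a nonconvex likelihood optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an equal-weight mixture of two unit-variance Gaussians
# with (hypothetical) component means -2 and 3.
mu1_true, mu2_true = -2.0, 3.0
n = 200_000
z = rng.integers(0, 2, size=n)
x = np.where(z == 0, mu1_true, mu2_true) + rng.standard_normal(n)

# Method of moments: match the first two empirical moments.
m1 = x.mean()         # E[X]   = (mu1 + mu2) / 2
m2 = (x ** 2).mean()  # E[X^2] = 1 + (mu1^2 + mu2^2) / 2

s = 2 * m1                # mu1 + mu2
p = 2 * m1 ** 2 - m2 + 1  # mu1 * mu2, from s^2 = mu1^2 + mu2^2 + 2*mu1*mu2

# The means are the roots of t^2 - s*t + p = 0.
mu_est = np.sort(np.real(np.roots([1.0, -s, p])))
print(mu_est)  # close to [-2, 3]
```

The higher-dimensional analogues studied in this research replace the scalar moments with moment matrices and tensors, whose spectral decompositions play the role of the quadratic formula here.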

Hsu holds a BS in computer science and engineering (2004) from UC Berkeley, and an MS (2007) and PhD (2010) in computer science from UC San Diego. He did postdoctoral research at Microsoft Research New England and in the Departments of Statistics at Rutgers University and the University of Pennsylvania. He received a 2014 Yahoo ACE Award, was selected by IEEE Intelligent Systems as one of "AI's 10 to Watch" in 2015, and received a 2016 Sloan Research Fellowship. He also serves on the editorial board of the ACM Transactions on Algorithms. Hsu joined the faculty of Columbia Engineering in 2013.


Professional Experience

  • Postdoctoral researcher, Microsoft Research New England, 2011–2013
  • Postdoctoral associate, Department of Statistics, Rutgers University, 2010–2011
  • Visiting postdoctoral scholar, Department of Statistics, University of Pennsylvania, 2010–2011


Academic Appointments

  • Assistant professor of computer science, Columbia University, 2013–


Professional Memberships

  • Association for Computing Machinery


Grants and Awards

  • NSF award IIS-1563785 (09/2016–08/2020; $1,196,617; co-PI with PI Gravano), III: Medium: Adaptive Information Extraction from Social Media for Actionable Inferences in Public Health
  • Alfred P. Sloan Research Fellowship (09/2016–09/2018; $55,000; PI)
  • Bloomberg Research Grant (04/2016; $60,000; PI with co-PI Chaudhuri)
  • DARPA award W911NF-16-2-0035 (10/2015–09/2018; $11,998,962; co-PI with PI Shaman, co-PI Anthony, co-PI Chung, co-PI Freyer, co-PI Planet, and co-PI Rabadan), The Virome of Manhattan: A Testbed for Radically Advancing Understanding and Forecast of Viral Respiratory Infections
  • NSF award DMREF-1534910 (10/2015–09/2018; $982,786; co-PI with PI Billinge and co-PI Du), DMREF: Deblurring our View of Atomic Arrangements in Complex Materials for Advanced Technologies


Selected Publications

  • Lee H. Dicker, Dean P. Foster, and Daniel Hsu. Kernel ridge vs. principal component regression: minimax bounds and adaptability of regularization operators. Electronic Journal of Statistics, 11(1):1022–1047, 2017.
  • Ji Xu, Daniel Hsu, and Arian Maleki. Global analysis of Expectation Maximization for mixtures of two Gaussians. In Advances in Neural Information Processing Systems 29, 2016.
  • Daniel Hsu, Aryeh Kontorovich, and Csaba Szepesvári. Mixing time estimation in reversible Markov chains from a single sample path. In Advances in Neural Information Processing Systems 28, 2015.
  • Cun Mu, Daniel Hsu, and Donald Goldfarb. Successive rank-one approximations for nearly orthogonally decomposable symmetric tensors. SIAM Journal on Matrix Analysis and Applications, 36(4):1638–1659, 2015.
  • Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15(Aug):2773–2831, 2014.
  • Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. Taming the monster: a fast and simple algorithm for contextual bandits. In Thirty-First International Conference on Machine Learning, 2014.
  • Daniel Hsu and Sham M. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In Fourth Innovations in Theoretical Computer Science, 2013.
  • Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Alexander Rakhlin. Stochastic convex optimization with bandit feedback. SIAM Journal on Optimization, 23(1):213–240, 2013.
  • Daniel Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, 78(5):1460–1480, 2012.
  • Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems 20, 2007.