I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures. One research focus is dynamic algorithms, i.e., data structures that maintain properties of dynamically changing graphs and matrices, such as distances in a graph or the solution of a linear system.

Selected talks: Learning and Games Program, Simons Institute, 2022; Young Researcher Workshop, Cornell ORIE, Sept. 2021; ACO Student Seminar, Georgia Tech, Sept. 2021; NeurIPS Spotlight presentation, Dec. 2019.
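As a minimal illustration of this flavor of dynamic data structure (a sketch of my own, not taken from any paper listed on this page): a union-find structure maintains connectivity of a graph under edge insertions, answering queries in near-constant amortized time.

```python
# Sketch: maintaining connectivity of a dynamically growing graph.
# Union-find with path compression and union by rank; each operation
# runs in amortized near-constant (inverse-Ackermann) time.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]  # path compression
            u = self.parent[u]
        return u

    def add_edge(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru                 # union by rank
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1

    def connected(self, u, v):
        return self.find(u) == self.find(v)

uf = UnionFind(5)
uf.add_edge(0, 1)
uf.add_edge(1, 2)
print(uf.connected(0, 2))  # True
print(uf.connected(0, 4))  # False
```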
", "A special case where variance reduction can be used to nonconvex optimization (monotone operators). with Yair Carmon, Arun Jambulapati, Qijia Jiang, Yin Tat Lee, Aaron Sidford and Kevin Tian
Slides from my talk at ITCS. Emphasis will be on providing mathematical tools for combinatorial optimization.

- A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization. In Symposium on Foundations of Computer Science (FOCS 2015). Machtey Award for Best Student Paper. (arXiv)
- Efficient Inverse Maintenance and Faster Algorithms for Linear Programming. In Symposium on Foundations of Computer Science (FOCS 2015). (arXiv)
- Competing with the Empirical Risk Minimizer in a Single Pass. With Roy Frostig, Rong Ge, and Sham Kakade. In Conference on Learning Theory (COLT 2015). (arXiv)
- Un-regularizing: Approximate Proximal Point and Faster Stochastic Algorithms for Empirical Risk Minimization. In International Conference on Machine Learning (ICML 2015). (arXiv)
- Uniform Sampling for Matrix Approximation. With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng. In Innovations in Theoretical Computer Science (ITCS 2015). (arXiv)
- Path-Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow. In Symposium on Foundations of Computer Science (FOCS 2014). Best Paper Award and Machtey Award for Best Student Paper. (arXiv)
- Single Pass Spectral Sparsification in Dynamic Streams. With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
- An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations. With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia. In Symposium on Discrete Algorithms (SODA 2014).
- Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems. In Symposium on Foundations of Computer Science (FOCS 2013). (arXiv)
- A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time. With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen-Zhu. In Symposium on the Theory of Computing (STOC 2013). SIAM Journal on Computing. (arXiv before merge)
- Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space. With Jack Murtagh, Omer Reingold, and Salil Vadhan. Book chapter in Building Bridges II: Mathematics of László Lovász, 2020. (arXiv)
- Lower Bounds for Finding Stationary Points II: First-Order Methods.
ReSQueing Parallel and Private Stochastic Convex Optimization.
I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group. BayLearn, 2019. "Computing a stationary solution for multi-agent RL is hard: indeed, computing a CCE for simultaneous games and an NE for turn-based games are both PPAD-hard." [pdf] [talk]
Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space. In Innovations in Theoretical Computer Science (ITCS 2018). (arXiv) [pdf]
Aaron Sidford joins Stanford's Management Science & Engineering department, launching new winter class CS 269G / MS&E 313: "Almost Linear Time Graph Algorithms."
Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization Algorithms. This improves upon the previous best known running times of O(nr^{1.5} T_ind) due to Cunningham in 1986 and Õ(n^2 T_ind + n^3) due to Lee, Sidford, and Wong in 2015.
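For readability, here are the same bounds set side by side (n = ground-set size, r = rank, T_ind = the cost of one independence-oracle query; the problem these bounds refer to is presumably matroid intersection, as in the Lee-Sidford-Wong line of work):

```latex
\[
  \underbrace{O\bigl(nr\log r\cdot T_{\mathrm{ind}}\bigr)}_{\text{new bound}}
  \quad\text{vs.}\quad
  \underbrace{O\bigl(nr^{1.5}\,T_{\mathrm{ind}}\bigr)}_{\text{Cunningham 1986}}
  \quad\text{and}\quad
  \underbrace{\tilde{O}\bigl(n^{2}\,T_{\mathrm{ind}}+n^{3}\bigr)}_{\text{Lee--Sidford--Wong 2015}}
\]
```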
I am affiliated with the Stanford Theory Group and Stanford Operations Research Group. with Sepehr Assadi, Arun Jambulapati, Aaron Sidford and Kevin Tian
with Aaron Sidford
Aaron Sidford (sidford@stanford.edu). Welcome! This page has information and lecture notes from the course "Introduction to Optimization Theory" (MS&E 213 / CS 269O), which I taught in Fall 2019. Oral Presentation for Misspecification in Prediction Problems and Robustness via Improper Learning. "A new Catalyst framework with a relaxed error condition for faster finite-sum and minimax solvers." MS&E welcomes new faculty member, Aaron Sidford!
Michael B. Cohen, Jonathan Kelner, Rasmus Kyng, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford. Abstract: We show how to solve directed Laplacian systems in nearly-linear time.

In Symposium on Discrete Algorithms (SODA 2018) (arXiv): Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes; Efficient Õ(n/ε) Spectral Sketches for the Laplacian and its Pseudoinverse; Stability of the Lanczos Method for Matrix Function Approximation.

Aaron Sidford is an Assistant Professor in the departments of Management Science and Engineering and Computer Science at Stanford University. [pdf] [poster]
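For background on the directed Laplacian systems mentioned in the abstract above (the definition below is the standard one; this sketch only assembles the matrix and does a slow dense solve as a baseline, it does not reproduce the paper's nearly-linear-time solver):

```python
# Sketch: a directed Laplacian L = D - A^T, where A[i, j] is the weight of
# edge i -> j and D is the diagonal matrix of weighted out-degrees.
# Every column of L sums to zero, so L is singular; a system L x = b is
# consistent when the entries of b sum to zero.
import numpy as np

A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [3.0, 0.0, 0.0]])      # edges 0->1 (w=2), 1->2 (w=1), 2->0 (w=3)

D = np.diag(A.sum(axis=1))           # weighted out-degrees
L = D - A.T

print(L.sum(axis=0))                 # [0. 0. 0.]: columns sum to zero

b = np.array([1.0, -1.0, 0.0])       # entries sum to zero, so solvable
x, *_ = np.linalg.lstsq(L, b, rcond=None)   # dense baseline, cubic time
print(np.allclose(L @ x, b))         # True
```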
Discrete Mathematics and Algorithms: An Introduction to Combinatorial Optimization: I used these notes to accompany the course Discrete Mathematics and Algorithms. International Conference on Machine Learning (ICML), 2020, Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
with Yair Carmon, Aaron Sidford and Kevin Tian
My CV. My research focuses on the design of efficient algorithms based on graph theory, convex optimization, and high dimensional geometry (CV). [pdf] [talk]
(ACM Doctoral Dissertation Award, Honorable Mention.) In Symposium on Foundations of Computer Science (FOCS 2020). Invited to the special issue. (arXiv) With Jack Murtagh, Omer Reingold, and Salil P. Vadhan. Lower bounds for finding stationary points II: first-order methods. I am broadly interested in optimization problems, sometimes at their intersection with machine learning.
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022, Stochastic Bias-Reduced Gradient Methods
Navajo Math Circles Instructor.

"We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives." (A concrete form of this guarantee appears after the publication list below.) "Faster algorithms for separable minimax, finite-sum and separable finite-sum minimax." I am generally interested in algorithms and learning theory, particularly developing algorithms for machine learning with provable guarantees.

- The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin, Vidya Muthukumar, Aaron Sidford. To appear in Innovations in Theoretical Computer Science (ITCS 2023). (arXiv)
- Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin. To appear in Advances in Neural Information Processing Systems (NeurIPS 2022). (arXiv)
- On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood. With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur.
- Improved Lower Bounds for Submodular Function Minimization. With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang. In Symposium on Foundations of Computer Science (FOCS 2022). (arXiv)
- RECAPP: Crafting a More Efficient Catalyst for Convex Optimization. With Yair Carmon, Arun Jambulapati, and Yujia Jin. International Conference on Machine Learning (ICML 2022). (arXiv)
- Efficient Convex Optimization Requires Superlinear Memory. With Annie Marsden, Vatsal Sharan, and Gregory Valiant. Conference on Learning Theory (COLT 2022).
- Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method. Conference on Learning Theory (COLT 2022). (arXiv)
- Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
- Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching. With Arun Jambulapati, Yujia Jin, and Kevin Tian. International Colloquium on Automata, Languages and Programming (ICALP 2022). (arXiv)
- Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
- Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers. With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng. In Symposium on Theory of Computing (STOC 2022). (arXiv)
- Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space. With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian. In Symposium on Discrete Algorithms (SODA 2022). (arXiv)
- Algorithmic trade-offs for girth approximation in undirected graphs. With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick. In Symposium on Discrete Algorithms (SODA 2022).
- Computing Lewis Weights to High Precision. With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
- With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin. In Advances in Neural Information Processing Systems (NeurIPS 2021). (arXiv)
- Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. In Conference on Learning Theory (COLT 2021). (arXiv)
- The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood. With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
- Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. In International Conference on Machine Learning (ICML 2021). (arXiv)
- Minimum cost flows, MDPs, and ℓ1-regression in nearly linear time for dense instances. With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Theory of Computing (STOC 2021). (arXiv)
- Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers. In Symposium on Discrete Algorithms (SODA 2021). (arXiv)
- Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration. In Innovations in Theoretical Computer Science (ITCS 2021). (arXiv)
- Acceleration with a Ball Optimization Oracle. With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian. In Conference on Neural Information Processing Systems (NeurIPS 2020).
- Instance Based Approximations to Profile Maximum Likelihood. In Conference on Neural Information Processing Systems (NeurIPS 2020). (arXiv)
- Large-Scale Methods for Distributionally Robust Optimization. With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
- High-precision Estimation of Random Walks in Small Space. With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan. In Symposium on Foundations of Computer Science (FOCS 2020). (arXiv)
- Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs. With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang. In Symposium on Foundations of Computer Science (FOCS 2020).
- With Yair Carmon, Yujia Jin, and Kevin Tian.
- Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time. Invited to the special issue. (arXiv before merge)
- Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity. In International Conference on Artificial Intelligence and Statistics (AISTATS 2020). (arXiv)
- Efficiently Solving MDPs with Stochastic Mirror Descent. In International Conference on Machine Learning (ICML 2020). (arXiv)
- Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond. With Oliver Hinder and Nimit Sharad Sohoni. In Conference on Learning Theory (COLT 2020). (arXiv)
- Solving Tall Dense Linear Programs in Nearly Linear Time. With Jan van den Brand, Yin Tat Lee, and Zhao Song. In Symposium on Theory of Computing (STOC 2020).

In particular, it achieves nearly linear time for DP-SCO in low-dimension settings.
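The concrete guarantee promised above (quoting from memory the headline bound of the paper "Accelerated methods for nonconvex optimization" cited later on this page, so treat log factors and constants as approximate): to find a point x with ‖∇f(x)‖ ≤ ε under Lipschitz gradient and Hessian, the accelerated method uses

```latex
% Gradient and function evaluations to reach an \epsilon-stationary point,
% versus the O(\epsilon^{-2}) bound for plain gradient descent.
\[
  O\!\bigl(\epsilon^{-7/4}\,\log(1/\epsilon)\bigr)
\]
```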
Prof. Erik Demaine. TAs: Timothy Kaler, Aaron Sidford. [Home] [Assignments] [Open Problems] [Accessibility] Data structures play a central role in modern computer science. International Conference on Machine Learning (ICML), 2021, Acceleration with a Ball Optimization Oracle
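For context on the last entry, a ball optimization oracle of radius r returns a (near-)minimizer of the objective over a small ball around the query point (this formalization is my paraphrase of the standard setup, not text from the paper):

```latex
% Ball optimization oracle: solve the problem restricted to a radius-r ball.
\[
  \mathcal{O}_r(x) \in \operatorname*{arg\,min}_{y:\,\|y-x\|\le r} f(y)
\]
```

The point of the paper is that such local solves can be combined into a globally accelerated method using far fewer than the naive R/r oracle calls needed to walk across a domain of radius R.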
Google Scholar:
- The Complexity of Infinite-Horizon General-Sum Stochastic Games
- The Complexity of Optimizing Single and Multi-player Games
- A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions
- On the Sample Complexity for Average-reward Markov Decision Processes
- Stochastic Methods for Matrix Games and its Applications
- Acceleration with a Ball Optimization Oracle
- Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
Another research focus is optimization algorithms. In particular, this work presents a sharp analysis of: (1) mini-batching, a method of averaging many samples of a stochastic gradient in each update (a minimal sketch follows below). July 2015. pdf, Szemerédi Regularity Lemma and Arithmetic Progressions, Annie Marsden.
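A minimal sketch of mini-batching for least-squares regression (my own toy illustration; the batch size and step size here are arbitrary, and this is not the tuned scheme analyzed in the paper):

```python
# Sketch: mini-batched SGD for least squares.  Each update averages the
# stochastic gradients of `batch` randomly sampled examples, which reduces
# the variance of the update direction relative to single-sample SGD.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
step, batch = 0.1, 32
for _ in range(500):
    idx = rng.integers(0, n, size=batch)               # sample a mini-batch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch    # averaged gradient
    w -= step * grad

print(np.linalg.norm(w - w_true))  # small residual error
```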
New articles related to this author's research:
- Path finding methods for linear programming: solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow
- Accelerated methods for nonconvex optimization
- An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
- A faster cutting plane method and its implications for combinatorial and convex optimization
- Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems
- A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
- Uniform sampling for matrix approximation
- Near-optimal time and sample complexities for solving Markov decision processes with a generative model
- Single pass spectral sparsification in dynamic streams
- Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
- Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
- Accelerating stochastic gradient descent for least squares regression
- Efficient inverse maintenance and faster algorithms for linear programming
- Lower bounds for finding stationary points I
- Streaming PCA: matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
- Convex Until Proven Guilty: dimension-free acceleration of gradient descent on non-convex functions
- Competing with the empirical risk minimizer in a single pass
- Variance reduced value iteration and faster algorithms for solving Markov decision processes
- Robust shift-and-invert preconditioning: faster and more sample efficient algorithms for eigenvector computation

Given an independence oracle, we provide an exact O(nr log r · T_ind) time algorithm. In Sidford's dissertation, Iterative Methods, Combinatorial Optimization, and Linear Programming Beyond the Universal Barrier.
Faster energy maximization for faster maximum flow. Multicalibrated Partitions for Importance Weights. Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder. ALT, 2022. arXiv. AISTATS, 2021. Stability of the Lanczos Method for Matrix Function Approximation. Cameron Musco, Christopher Musco, Aaron Sidford. ACM-SIAM Symposium on Discrete Algorithms (SODA) 2018. July 8, 2022: Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)! "Sample complexity for average-reward MDPs?" He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner. With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford. arXiv | conference pdf (alphabetical authorship). Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan, Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales. With Yair Carmon, John C. Duchi, and Oliver Hinder. "Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains." (A sketch of the variance-reduction idea follows below.) ICML, 2016. Nima Anari, Yang P. Liu, Thuy-Duong Vuong, Maximum Flow and Minimum-Cost Flow in Almost Linear Time, FOCS 2022, Best Paper. If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together. International Conference on Machine Learning (ICML), 2022, Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space
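To unpack "variance-reduced" in that blurb: an SVRG-style estimator keeps a full gradient at a reference point and corrects single-sample gradients with it, so the estimator's variance shrinks as the iterates approach the reference. Below is my own generic least-squares illustration, far simpler than the matrix-game setting of the paper:

```python
# Sketch: SVRG-style variance-reduced gradient estimator for least squares.
# g_i(w) - g_i(w_ref) + full_grad(w_ref) is an unbiased gradient estimate
# whose variance vanishes as w approaches w_ref.
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def grad_i(w, i):                       # single-example gradient
    return X[i] * (X[i] @ w - y[i])

w = np.zeros(d)
step = 0.05
for _ in range(20):                     # outer epochs
    w_ref = w.copy()
    full_grad = X.T @ (X @ w_ref - y) / n   # anchor: exact gradient at w_ref
    for _ in range(100):                # inner stochastic steps
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_ref, i) + full_grad
        w -= step * g

print(np.linalg.norm(w - w_true))       # near-exact recovery (noiseless data)
```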
Many of my results use fast matrix multiplication.
with Arun Jambulapati, Aaron Sidford and Kevin Tian
475 Via Ortega, Stanford University
172 Gates Computer Science Building, 353 Jane Stanford Way, Stanford University