Taking Control by Convex Optimization

Abstract: Linear dynamical systems (LDSs) are a class of time-series models widely used in robotics, finance, engineering, and meteorology. In its general form, when the state-transition dynamics are unknown, learning an LDS is a classic non-convex problem, typically tackled with heuristics such as gradient descent ("backpropagation through time") or the EM algorithm. I will present our new "spectral filtering" approach to the identification and control of discrete-time general LDSs with multi-dimensional inputs, outputs, and a latent state.
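The filter construction behind spectral filtering can be illustrated concretely. As a hedged sketch (not the exact implementation from the talk): the filters are the top eigenvectors of a fixed Hankel matrix that depends only on the horizon T, and predictions are made by regressing onto projections of the input history onto these filters. A minimal NumPy sketch of the filter computation:

```python
import numpy as np

def spectral_filters(T, k):
    """Top-k eigenpairs of the Hankel matrix Z_T with entries
    Z[i, j] = 2 / ((i + j)^3 - (i + j)) (1-based indices); the
    leading eigenvectors act as convolutional filters over the
    last T inputs in the spectral-filtering method."""
    idx = np.arange(1, T + 1)
    s = idx[:, None] + idx[None, :]
    Z = 2.0 / (s ** 3 - s)
    vals, vecs = np.linalg.eigh(Z)        # eigenvalues in ascending order
    # Reverse to descending order and keep the top k.
    return vals[::-1][:k], vecs[:, ::-1][:, :k]
```

The eigenvalues of this matrix decay very rapidly, which is why a small number k of filters suffices to summarize a long input history.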
Full Thesis text

Abstract: Semidefinite programming is a convex optimization problem widely used in numerous fields of science and engineering. In combinatorial optimization and machine learning in particular, many algorithms based on solving semidefinite programs have been developed in recent years.
Although there exist polynomial-time algorithms that solve general semidefinite programs accurately, and even faster algorithms that solve such programs only approximately, their running time may be prohibitive in practice when applied to the very large-scale problems that are now ubiquitous in machine learning.
Thus there is a constant need for faster algorithms for solving these programs. In this thesis we present an algorithm for approximately solving general semidefinite programs whose running time is sublinear in the number of entries in the semidefinite instance, and which therefore computes an approximate solution after reading only a small portion of the input.
The algorithm also has the benefit of producing low-rank solutions, which are computationally favorable.
Our algorithm is based on solving a Lagrangian relaxation of the semidefinite program using the well-known Multiplicative Weights Update method, combined with recent algorithmic machinery from online learning and random sampling.
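To make the multiplicative-weights idea concrete, here is a hedged, simplified sketch (not the thesis algorithm, which additionally uses random sampling for sublinear running time): the Matrix Multiplicative Weights method maintains a density matrix as the matrix exponential of accumulated constraint matrices, shifting mass toward directions that reduce the current worst violation.

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def mmw_feasibility(constraints, eta=0.1, T=200):
    """Illustrative sketch: seek a density matrix X (X >= 0, tr X = 1)
    with <A_i, X> >= 0 for all symmetric A_i, via Matrix Multiplicative
    Weights. Parameters eta and T are arbitrary choices for the demo."""
    n = constraints[0].shape[0]
    S = np.zeros((n, n))              # running sum of violated constraints
    avg = np.zeros((n, n))
    for _ in range(T):
        E = expm_sym(eta * S)         # Gibbs state of the accumulated gains
        X = E / np.trace(E)
        viols = [float(np.tensordot(A, X)) for A in constraints]
        j = int(np.argmin(viols))
        if viols[j] >= 0:             # all constraints satisfied
            return X
        S = S + constraints[j]        # push X toward the violated A_j
        avg += X
    return avg / T                    # fall back to the average iterate

```

Each update tilts the density matrix toward the positive eigendirections of the most violated constraint; the regret guarantee of matrix multiplicative weights bounds how many rounds this can take.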
We also present lower bounds on the running time of any approximation algorithm for semidefinite programming, which demonstrate that our algorithm is close to optimal in certain cases.

My doctoral thesis is on new frameworks for optimization algorithms.
In , I went to Princeton/IAS for a postdoc, where I worked on machine learning and optimization theory with Prof. Elad Hazan, Prof. Avi Wigderson, and a number of amazing students.

Learning in Sequential Decision Problems. PhD Thesis.
Université Paris-Sud. Showed that mixing the uniform distribution into EXP3 is unnecessary.

Elad Hazan and Satyen Kale. Better Algorithms for Benign Bandits. JMLR.
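The EXP3 claim above can be illustrated. As a hedged sketch (the variant and parameters are my own choices, not necessarily those of the cited thesis): the loss-based formulation of EXP3 samples directly from the exponential weights and needs no explicit uniform-mixing term.

```python
import numpy as np

def exp3(loss_fn, K, T, eta=None, rng=None):
    """EXP3 for adversarial bandits with losses in [0, 1], sampling
    directly from the exponential weights (no gamma-mixing term).
    Returns the total loss incurred over T rounds."""
    if rng is None:
        rng = np.random.default_rng(0)
    if eta is None:
        eta = np.sqrt(np.log(K) / (K * T))
    L = np.zeros(K)                          # cumulative loss estimates
    total = 0.0
    for t in range(T):
        w = np.exp(-eta * (L - L.min()))     # shift for numerical stability
        p = w / w.sum()
        arm = rng.choice(K, p=p)
        loss = loss_fn(t, arm)               # only the pulled arm is observed
        total += loss
        L[arm] += loss / p[arm]              # importance-weighted estimate
    return total
```

On a toy instance with two arms, one always incurring loss 0 and the other loss 1, this sketch concentrates its pulls on the good arm well within the first few hundred rounds.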
Research advisor: Dr. Elad Hazan. Title: Research Staff Member at Bay .
Ph.D. thesis, Princeton.
Elad Hazan and Satyen Kale. Approximating Quadratic Programs with Positive Semidefinite Constraints. Princeton Tech Report TR.
Maryam Fazel, Rong Ge, Sham M. Kakade, and Mehran Mesbahi. Global Convergence of Policy Gradient Methods for Linearized Control Problems. In ICML.