Overall Objectives
Research Program
Application Domains
New Software and Platforms
New Results
- On Structured Prediction Theory with Calibrated Convex Surrogate Losses
- Domain-Adversarial Training of Neural Networks
- Linearly Convergent Randomized Iterative Methods for Computing the Pseudoinverse
- Sharp asymptotic and finite-sample rates of convergence of empirical measures in Wasserstein distance
- Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization
- Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains
- AdaBatch: Efficient Gradient Aggregation Rules for Sequential and Parallel Stochastic Gradient Methods
- Structure-Adaptive, Variance-Reduced, and Accelerated Stochastic Optimization
- Exponential convergence of testing error for stochastic gradient methods
- Optimal algorithms for smooth and strongly convex distributed optimization in networks
- Stochastic Composite Least-Squares Regression with convergence rate O(1/n)
- Sharpness, Restart and Acceleration
- PAC-Bayes and Domain Adaptation
- Kernel Square-Loss Exemplar Machines for Image Retrieval
- Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization
- PAC-Bayesian Analysis for a two-step Hierarchical Multiview Learning Approach
- Integration Methods and Accelerated Optimization Algorithms
- GANs for Biological Image Synthesis
- Nonlinear Acceleration of Stochastic Algorithms
- Algorithmic Chaining and the Role of Partial Feedback in Online Nonparametric Learning
- Frank-Wolfe Algorithms for Saddle Point Problems
- Convex optimization over intersection of simple sets: improved convergence rate guarantees via an exact penalty approach
- A Generic Approach for Escaping Saddle points
- Tracking the gradients using the Hessian: A new look at variance reducing stochastic methods
- Combinatorial Penalties: Which structures are preserved by convex relaxations?
- On the Consistency of Ordinal Regression Methods
- Iterative hard clustering of features
- Asaga: Asynchronous Parallel Saga
- Sparse Accelerated Exponential Weights
Bilateral Contracts and Grants with Industry
Partnerships and Cooperations
Bibliography