ePrints@IISc

Random Separating Hyperplane Theorem and Learning Polytopes

Bhattacharyya, C and Kannan, R and Kumar, A (2024) Random Separating Hyperplane Theorem and Learning Polytopes. In: 51st International Colloquium on Automata, Languages, and Programming, ICALP 2024, 8 July 2024 through 12 July 2024, Tallinn.

PDF (lei_int_pro_inf_297_2024 - Published Version, 880kB) - Restricted to registered users only
Official URL: https://doi.org/10.4230/LIPIcs.ICALP.2024.25

Abstract

The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. The theorem asserts that for a point a not in a closed convex set K, there is a hyperplane with K on one side and a strictly on the other side. Our first result, Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between a and a polytope K with k vertices and unit diameter in R^d is at least δ, where δ is a fixed constant in (0,1), then a randomly chosen hyperplane separates a and K with probability at least 1/poly(k) and margin at least Ω(δ/√d). RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the "Hausdorff problem", of learning a unit diameter polytope K within Hausdorff distance δ, given an optimization oracle for K. Using RSH, we show that with polynomially many random queries to the optimization oracle, K can be approximated within error O(δ). To our knowledge, this is the first provable algorithm for the Hausdorff Problem in this setting. Building on this result, we show that if the vertices of K are well-separated, then an optimization oracle can be used to generate a list of points, each within distance O(δ) of K, with the property that the list contains a point close to each vertex of K. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling, LDA, optimization oracles do exist provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption. This assumption states that each vertex of K is far from the convex hull of the remaining vertices of K, and is much weaker than other assumptions behind algorithms in the literature which find vertices of the latent polytope.
© Chiranjib Bhattacharyya, Ravindran Kannan, and Amit Kumar.
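The RSH guarantee described in the abstract can be checked empirically. The sketch below (illustrative only, not the authors' algorithm; the test polytope and function names are assumptions) samples random unit normals in R^d and measures how often the hyperplane through a random direction separates an outside point a from the vertex set of K, and with what margin.

```python
import math
import random

def random_unit_vector(d, rng):
    # Sample a standard Gaussian vector and normalize it:
    # this is uniform on the unit sphere in R^d.
    v = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def separation_margin(a, vertices, u):
    # Margin of the hyperplane with normal u: <u, a> minus the
    # maximum of <u, v> over the polytope's vertices. Positive
    # means u separates a from conv(vertices).
    dot = lambda x, y: sum(p * q for p, q in zip(x, y))
    return dot(u, a) - max(dot(u, v) for v in vertices)

def rsh_trial(a, vertices, trials, rng):
    # Fraction of random hyperplanes that separate a from the
    # polytope, and the best margin observed over all trials.
    hits, best = 0, -math.inf
    for _ in range(trials):
        u = random_unit_vector(len(a), rng)
        m = separation_margin(a, vertices, u)
        if m > 0:
            hits += 1
            best = max(best, m)
    return hits / trials, best
```

For example, with the standard simplex conv(e1, e2, e3) in R^3 and the outside point a = (1, 1, 1), a non-negligible fraction of random directions separate a with positive margin, consistent with the 1/poly(k) separation probability the theorem asserts.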

Item Type: Conference Paper
Publication: Leibniz International Proceedings in Informatics, LIPIcs
Publisher: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
Additional Information: The copyright for this article belongs to Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing.
Keywords: Approximation algorithms; Geometry; Topology; Closed convex sets; Convex geometry; Hausdorff; Learning polytope; Myriad applications; Optimisations; Optimization oracle; Polytopes; Separating hyperplane; Separating hyperplane theorem; Set theory
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 18 Dec 2024 05:01
Last Modified: 18 Dec 2024 05:01
URI: http://eprints.iisc.ac.in/id/eprint/85835
