ePrints@IISc

Improved learning of k-parities

Bhattacharyya, A and Gadekar, A and Rajgopal, N (2020) Improved learning of k-parities. In: Theoretical Computer Science, 840 . pp. 249-256.

the_com_sci_840_249-256_2020.pdf - Published Version

Official URL: https://doi.org/10.1016/j.tcs.2020.08.025


We consider the problem of learning k-parities in the online mistake-bound model: given a hidden vector x ∈ {0,1}^n of Hamming weight k and a sequence of “questions” a_1, a_2, … ∈ {0,1}^n, to each of which the algorithm must reply with 〈a_i, x〉 (mod 2), what is the best trade-off between the number of mistakes made by the algorithm and its time complexity? We improve on the previous best result of Buhrman et al. [3] by an exp(k) factor in the time complexity. Next, we consider the problem of learning k-parities in the PAC model in the presence of random classification noise of rate η. Here, we observe that even in the presence of classification noise of non-trivial rate, it is possible to learn k-parities in time better than (n choose k/2), whereas the current best algorithm for learning noisy k-parities, due to Grigorescu et al. [9], inherently requires time (n choose k/2) even when the noise rate is polynomially small.
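To make the online mistake-bound setting concrete, here is a minimal Python sketch of the classical baseline learner (not the paper's algorithm): maintain any hypothesis consistent with all past answers via Gaussian elimination over GF(2), which guarantees at most n mistakes but runs in time polynomial in n regardless of the sparsity k that the paper's trade-offs exploit. All names below are illustrative.

```python
def solve_gf2(constraints, n):
    """Return some x in {0,1}^n satisfying every constraint <a, x> = b (mod 2),
    via Gaussian elimination over GF(2). Assumes the system is consistent."""
    # Pack each constraint into an (n+1)-bit integer: coefficient bits plus RHS bit.
    rows = [sum(a[j] << j for j in range(n)) | (b << n) for a, b in constraints]
    basis = {}  # pivot column -> reduced row whose lowest set bit is that column
    for row in rows:
        for j in range(n):
            if not (row >> j) & 1:
                continue
            if j in basis:
                row ^= basis[j]  # eliminate this pivot and keep scanning
            else:
                basis[j] = row
                break
    # Back-substitute in decreasing pivot order; free variables default to 0.
    x = [0] * n
    for j in sorted(basis, reverse=True):
        row = basis[j]
        acc = (row >> n) & 1
        for k in range(j + 1, n):
            if (row >> k) & 1:
                acc ^= x[k]
        x[j] = acc
    return x

def online_parity_learner(questions, oracle, n):
    """Predict <a, h> mod 2 with the current hypothesis h; after each answer is
    revealed, record it as a constraint and recompute h. Each mistake forces a
    linearly independent constraint, so at most n mistakes occur."""
    constraints = []
    h = [0] * n
    mistakes = 0
    for a in questions:
        guess = sum(ai * hi for ai, hi in zip(a, h)) % 2
        truth = oracle(a)  # correct label revealed after the prediction
        if guess != truth:
            mistakes += 1
        constraints.append((a, truth))
        h = solve_gf2(constraints, n)
    return h, mistakes
```

The point of contrast: this learner ignores the promise that x has weight k, while the paper's contribution is a finer mistake/time trade-off for k-sparse targets.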

Item Type: Journal Article
Publication: Theoretical Computer Science
Publisher: Elsevier B.V.
Additional Information: The copyright of this article belongs to the Author.
Keywords: Economic and social effects; Learning systems; Hamming weights; Hidden vectors; Mistake-bound models; Noise rate; Non-trivial; Time complexity; Trade-off; Learning algorithms
Department/Centre: Division of Electrical Sciences > Computer Science & Automation
Date Deposited: 13 Sep 2022 10:11
Last Modified: 13 Sep 2022 10:16
URI: https://eprints.iisc.ac.in/id/eprint/76902
