Symmetry of information and bounds on nonuniform randomness extraction via Kolmogorov extractors

We prove a strong Symmetry of Information relation for random strings (in the sense of Kolmogorov complexity) and establish tight bounds on the amount of nonuniformity that is necessary for extracting a string with randomness rate 1 from a single source of randomness. More precisely, as instantiations of more general results, we show: (1) for all n-bit random strings x and y, x is random conditioned by y if and only if y is random conditioned by x, and (2) while an O(1) amount of advice regarding the source is not enough for extracting a string with randomness rate 1 from a source string with constant randomness rate, an ω(1) amount of advice is. The proofs use Kolmogorov extractors as the main technical device.


💡 Research Summary

The paper tackles two fundamental problems at the intersection of algorithmic information theory and randomness extraction. The first problem concerns the symmetry of information (SOI) for random strings measured by Kolmogorov complexity. The classic Kolmogorov–Levin theorem guarantees that for any strings x and y the difference |I(x : y) − I(y : x)| is bounded by O(log|x| + log|y|), where I(x : y) = C(y) − C(y|x). This logarithmic loss is unavoidable in the general case, but the authors show that for n‑bit strings that are individually random (i.e., C(x|n) ≥ n − O(1) and C(y|n) ≥ n − O(1)), the loss can be reduced to a constant. The key technical tool is a refined chain rule with only constant error for such pairs: as an instantiation, for random x and y, x is random conditioned by y if and only if y is random conditioned by x, i.e., C(x|y) ≥ n − O(1) if and only if C(y|x) ≥ n − O(1).
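The contrast between the classical bound and the refined one for random strings can be written out as follows; the O(1) form restates instantiation (1) from the abstract, while the exact constants and the general statement are in the paper itself:

```latex
% Classical Kolmogorov--Levin symmetry of information, valid for all
% n-bit strings x and y (logarithmic error term):
\[
  \bigl|\, C(x) + C(y \mid x) \;-\; \bigl( C(y) + C(x \mid y) \bigr) \,\bigr|
  \;=\; O(\log n).
\]

% Refined form for individually random strings, i.e. when
% C(x|n) >= n - O(1) and C(y|n) >= n - O(1): the loss drops to a constant,
% and in particular randomness conditioned on the other string is symmetric:
\[
  C(x \mid y) \;\ge\; n - O(1)
  \quad\Longleftrightarrow\quad
  C(y \mid x) \;\ge\; n - O(1).
\]
```

The first display is the textbook theorem; the second is the paper's strengthening, which fails for arbitrary (non-random) strings, where the logarithmic loss is known to be necessary.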

