On Semimeasures Predicting Martin-Loef Random Sequences

Notice: This research summary and analysis were automatically generated using AI technology. For accuracy, please refer to the original arXiv source.

Solomonoff’s central result on induction is that the posterior of a universal semimeasure M converges rapidly and with probability 1 to the true sequence-generating posterior μ, provided the latter is computable. Hence, M is eligible as a universal sequence predictor when μ is unknown. Despite some nearby results and proofs in the literature, the stronger result of convergence for all (Martin-Löf) random sequences remained open. Such a convergence result would be particularly interesting and natural, since randomness can be defined in terms of M itself. We show that there are universal semimeasures M which do not converge for all random sequences, i.e. we give a partial negative answer to the open problem. We also provide a positive answer for some non-universal semimeasures. We define the incomputable measure D as a mixture over all computable measures and the enumerable semimeasure W as a mixture over all enumerable nearly-measures. We show that W converges to D and D to μ on all random sequences. The Hellinger distance, measuring closeness of two distributions, plays a central role.


💡 Research Summary

Solomonoff’s seminal induction theorem states that the posterior of a universal semimeasure M converges rapidly and with probability 1 to the true generating posterior μ, provided μ is computable. This result underpins the use of M as a universal predictor when the underlying distribution is unknown. However, the theorem only guarantees convergence on a μ‑measure‑one set of sequences; it does not address whether convergence holds for every Martin‑Löf random sequence—i.e., for all sequences that are random with respect to M itself. The question of “strong convergence” on all random sequences has remained open.
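Stated quantitatively, Solomonoff's result is usually given in the following form (notation assumed here: w_μ denotes the prior weight of μ in the mixture M, K(μ) its prefix complexity, and the inner sum is the squared Hellinger distance between the one-step posteriors, which is the distance the paper builds on):

$$
\sum_{t=1}^{\infty} \mathbf{E}_{\mu}\!\left[\sum_{a}\Big(\sqrt{M(a\mid x_{<t})}-\sqrt{\mu(a\mid x_{<t})}\Big)^{2}\right]
\;\le\; \ln w_{\mu}^{-1} \;\le\; K(\mu)\ln 2 .
$$

Since the infinite sum is finite, the summands must tend to zero with μ-probability 1, which is exactly the rapid almost-sure convergence of M's posterior to μ's.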

The paper first shows that the strong convergence claim fails for the class of universal semimeasures. By exploiting the definition of a universal semimeasure as a weighted sum over all enumerable semimeasures, the authors construct a “malicious” universal semimeasure M* that still dominates every enumerable semimeasure but deliberately deviates from a given computable μ on a particular Martin-Löf random sequence x. The construction proceeds by fixing a computable μ, selecting a random sequence x, and then adjusting the mixture weights so that M* mimics μ on the initial segment of x but, after a certain point, shifts weight to other computable measures that disagree with μ on the continuation of x. As a result, while M* retains the usual almost-sure convergence to μ (because the set of sequences where it fails has μ-measure zero), it fails to converge on the specific random sequence x. This demonstrates that universality alone does not guarantee convergence on all Martin-Löf random sequences, giving a partial negative answer to the open problem; whether some other universal semimeasure achieves strong convergence is not settled by this construction.
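The universality property exploited here is, in its standard form, a mixture representation (notation assumed, following the usual Solomonoff-style definition, with ν₁, ν₂, … an enumeration of all enumerable semimeasures):

$$
M(x) \;=\; \sum_{i} w_i\, \nu_i(x), \qquad w_i > 0,\ \ \sum_i w_i \le 1,
$$

where a common choice of weights is $w_i = 2^{-K(\nu_i)}$. The “malicious” M* keeps this form, so it remains universal, but its weights are chosen adaptively along the target sequence x.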

Having established the negative result, the authors turn to a positive construction that does achieve strong convergence. They define an incomputable measure D as a mixture over all computable probability measures:

$$
D(x) \;:=\; \sum_{\nu \in \mathcal{M}_{\mathrm{comp}}} w_\nu\, \nu(x), \qquad w_\nu > 0,\ \ \sum_{\nu} w_\nu = 1,
$$

where $\mathcal{M}_{\mathrm{comp}}$ is the class of all computable probability measures; normalizing the weights to sum to one makes D itself a (incomputable) measure.
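While D and W are incomputable or only enumerable objects, the mixture-predictor mechanism they rely on can be illustrated with a finite, computable toy class. The following sketch is a hypothetical illustration, not the paper's construction: it forms a Bayes mixture over a small family of Bernoulli measures and shows the posterior weight concentrating on the true parameter as data accumulates.

```python
import random

def bernoulli_pred(theta, bit):
    """Probability that a Bernoulli(theta) measure assigns to the next bit."""
    return theta if bit == 1 else 1.0 - theta

def mixture_posterior(thetas, prior, seq):
    """Posterior weights over the candidate measures after observing seq,
    updated bit by bit via Bayes' rule (the finite analogue of a mixture
    semimeasure's posterior)."""
    w = list(prior)
    for bit in seq:
        likes = [bernoulli_pred(t, bit) for t in thetas]
        mix = sum(wi * li for wi, li in zip(w, likes))
        w = [wi * li / mix for wi, li in zip(w, likes)]
    return w

random.seed(0)
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]          # finite class of computable measures
prior = [1.0 / len(thetas)] * len(thetas)   # uniform prior weights
true_theta = 0.7
seq = [1 if random.random() < true_theta else 0 for _ in range(2000)]

post = mixture_posterior(thetas, prior, seq)
best = max(range(len(post)), key=lambda i: post[i])
print(thetas[best], round(post[best], 4))   # posterior concentrates on 0.7
```

With the candidate measures well separated and 2000 observations, essentially all posterior mass lands on the true parameter; this concentration is the finite-class shadow of the convergence results the paper proves for D and W on random sequences.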

