Exact Decoding on Latent Variable Conditional Models is NP-Hard
Latent variable conditional models, including latent conditional random fields (LCRFs) as a special case, are popular models for many natural language processing and computer vision tasks. The computational complexity of exact decoding/inference in latent conditional random fields has been unclear. In this paper, we clarify this complexity: we show that exact decoding is NP-hard even in a sequential labeling setting. Furthermore, we propose the latent-dynamic inference method (LDI-Naive) and its bounded version (LDI-Bounded), which perform exact or almost-exact inference by combining top-$n$ search with dynamic programming.
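The LDI idea described above can be illustrated with a brute-force toy sketch (illustrative model and sizes, not the paper's implementation): enumerate latent sequences $h$ in descending order of $p(h \mid x)$, project each onto its label sequence, and accumulate label-sequence mass; once the best label sequence's mass can no longer be overtaken by any rival plus the unexplored remainder, the answer is exact. In the real method the next-best $h$ would come from top-$n$ (A*-style) search rather than full enumeration.

```python
import itertools
import math
import random

random.seed(1)
labels_of = {0: "A", 1: "A", 2: "B", 3: "B"}   # disjoint latent states per label
states, m = [0, 1, 2, 3], 3                     # toy latent alphabet, sequence length
# Arbitrary transition scores standing in for learned weights.
w = {(a, b): random.uniform(-1, 1) for a in states for b in states}

def seq_score(h):
    return sum(w[h[i], h[i + 1]] for i in range(len(h) - 1))

# p(h|x) via global normalization over all latent sequences (brute force).
all_h = list(itertools.product(states, repeat=m))
Z = sum(math.exp(seq_score(h)) for h in all_h)
p = {h: math.exp(seq_score(h)) / Z for h in all_h}

mass = {}          # accumulated P(y|x) per label sequence seen so far
remaining = 1.0    # probability mass not yet enumerated
for h in sorted(all_h, key=p.get, reverse=True):   # stand-in for top-n search
    y = "".join(labels_of[s] for s in h)
    mass[y] = mass.get(y, 0.0) + p[h]
    remaining -= p[h]
    best_y, best_mass = max(mass.items(), key=lambda kv: kv[1])
    runner_up = max((v for k, v in mass.items() if k != best_y), default=0.0)
    # Exactness bound: no rival can overtake even if it gets all remaining mass.
    if best_mass >= runner_up + remaining:
        break
print(best_y)
```

The early-stopping bound is what makes the search exact rather than approximate: enumeration halts as soon as the leading label sequence is provably optimal, often long before all latent sequences are visited.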
💡 Research Summary
The paper investigates the computational complexity of exact decoding (inference) in latent variable conditional models, focusing on latent conditional random fields (LCRFs), which are widely used in natural language processing and computer vision to capture hidden structures. While conventional conditional random fields (CRFs) admit efficient exact inference via the Viterbi algorithm, the presence of latent variables in LCRFs makes inference non-trivial. The authors first formalize LCRFs: given an observation sequence $x = (x_1,\dots,x_m)$ and a label sequence $y = (y_1,\dots,y_m)$, each label $y_i$ is associated with a disjoint set of latent states $H(y_i)$. The conditional probability of a labeling is obtained by summing the probabilities of all compatible latent labelings:
$$P(y \mid x) = \sum_{h \in H(y_1) \times \cdots \times H(y_m)} P(h \mid x).$$
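The marginalization over compatible latent labelings can be sketched in a few lines (a minimal brute-force sketch with hypothetical sizes and random scores, not the paper's model): because each label owns a disjoint set of latent states, the latent sequences compatible with $y$ are exactly the Cartesian product $H(y_1) \times \cdots \times H(y_m)$, and the label-sequence probabilities sum to one.

```python
import itertools
import math
import random

random.seed(0)
H = {"A": [0, 1], "B": [2, 3]}   # H(y): disjoint latent states per label
states, m = [0, 1, 2, 3], 3      # toy latent alphabet, sequence length
# Arbitrary transition scores standing in for learned weights.
score = {(a, b): random.uniform(-1, 1) for a in states for b in states}

def seq_score(h):
    return sum(score[h[i], h[i + 1]] for i in range(len(h) - 1))

# p(h|x) by global normalization over all latent sequences (brute force).
all_h = list(itertools.product(states, repeat=m))
Z = sum(math.exp(seq_score(h)) for h in all_h)
p_h = {h: math.exp(seq_score(h)) / Z for h in all_h}

def p_label(y):
    # P(y|x): sum over latent sequences in H(y_1) x ... x H(y_m).
    return sum(p_h[h] for h in itertools.product(*[H[l] for l in y]))

total = sum(p_label(y) for y in itertools.product("AB", repeat=m))
print(round(total, 6))   # label-sequence probabilities sum to 1
```

Note that this brute force enumerates all $|H|^m$ latent sequences; the hardness result in the paper concerns finding the maximizing $y$, since the summation couples latent states across labels.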