Discriminated Belief Propagation

Reading time: 7 minutes
...

📝 Original Info

  • Title: Discriminated Belief Propagation
  • ArXiv ID: 0710.5501
  • Date: 2007-10-29
  • Authors: Uli Sorger

📝 Abstract

Near optimal decoding of good error control codes is generally a difficult task. However, for a certain type of (sufficiently) good codes an efficient decoding algorithm with near optimal performance exists. These codes are defined via a combination of constituent codes with low complexity trellis representations. Their decoding algorithm is an instance of (loopy) belief propagation and is based on an iterative transfer of constituent beliefs. The beliefs are thereby given by the symbol probabilities computed in the constituent trellises. Even though weak constituent codes are employed, close to optimal performance is obtained, i.e., the encoder/decoder pair (almost) achieves the information theoretic capacity. However, (loopy) belief propagation only performs well for a rather specific set of codes, which limits its applicability. In this paper a generalisation of iterative decoding is presented. It is proposed to transfer more values than just the constituent beliefs. This is achieved by the transfer of beliefs obtained by independently investigating parts of the code space. This leads to the concept of discriminators, which are used to improve the decoder resolution within certain areas, and defines discriminated symbol beliefs. It is shown that these beliefs approximate the overall symbol probabilities. This leads to an iteration rule that (below channel capacity) typically only admits the solution of the overall decoding problem. Via a Gauss approximation a low complexity version of this algorithm is derived. Moreover, the approach may then be applied to a wide range of channel maps without significant complexity increase.


📄 Full Content

Decoding error control codes is the inversion of the encoding map in the presence of errors. An optimal decoder finds the codeword with the least number of errors. However, optimal decoding is generally computationally infeasible due to the intrinsic non-linearity of the inversion operation. Up to now only simple codes can be optimally decoded, e.g., by a simple trellis representation. These codes generally exhibit poor performance or rate [11].

On the other hand, good codes can be constructed by a combination of simple constituent codes (see e.g., [14, pp.567ff]). This construction is interesting as then a trellis based inversion may perform almost optimally: BERROU et al. [2] showed that iterative turbo decoding leads to near capacity performance. The same holds true for iterative decoding of Low Density Parity Check (LDPC) codes [6]. Both decoders are conceptually similar and based on the (loopy) propagation of beliefs [16] computed in the constituent trellises. However, (loopy) belief propagation is often limited to idealistic situations: turbo decoding, for example, generally performs poorly for multiple constituent codes, complex channels, good constituent codes, and/or relatively short overall code lengths.

In this paper a concept called discrimination is used to generalise iterative decoding by (loopy) belief propagation. The generalisation is based on an uncertainty- or distance-discriminated investigation of the code space. The overall results of the approach are linked to basic principles in information theory such as typical sets and channel capacity [18,15,13].

Overview: The paper is organised as follows: First the combination of codes together with the decoding problem and its relation to belief propagation are reviewed. Then the concept of discriminators together with the notion of a common belief is introduced. In the second section local discriminators are discussed. By a local discriminator a controllable number of parameters (or generalised beliefs) is transferred. It is shown that this leads to a practically computable common belief that may be used in an iteration. Moreover, a fixed point of the obtained iteration is typically the optimal decoding decision. Section 3 finally considers a low complexity approximation and the application to more complex channel maps.

To review the combination of constituent codes we here consider only binary linear codes C given by the encoding map C : x = (x_1, ..., x_k) → c = (c_1, ..., c_n) = xG mod 2, with G the (k × n) generator matrix and x_i, c_i, G_{i,j} ∈ Z_2 = {0, 1}.

The map defines for rank(G) = k the event set E(C) of 2^k code words c. The rate of the code is R = k/n, and for an error correcting code it is smaller than one.

The event set E(C) is by linear algebra equivalently defined by an ((n − k) × n) parity matrix H with HG^T = 0 mod 2, and thus E(C) = {c : Hc^T = 0 mod 2}.
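The definitions above can be made concrete with a small worked sketch (not from the paper): the code below uses the well-known (7,4) HAMMING code as an example of a binary linear code with systematic generator G = [I | P] and parity matrix H = [P^T | I], and checks the relations c = xG mod 2 and Hc^T = 0 mod 2.

```python
# Illustrative sketch (not from the paper): the encoding map c = xG mod 2 and
# the parity check H c^T = 0 mod 2, using the (7,4) HAMMING code as example.
from itertools import product

k, n = 4, 7          # rate R = k/n = 4/7 < 1
P = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1],
     [1, 1, 1]]

# Systematic generator G = [I | P]; parity matrix H = [P^T | I],
# so H G^T = 0 mod 2 holds by construction.
G = [[int(i == j) for j in range(k)] + P[i] for i in range(k)]
H = [[P[i][r] for i in range(k)] + [int(j == r) for j in range(n - k)]
     for r in range(n - k)]

def encode(x):
    """c = xG mod 2 (row vector times generator matrix over Z_2)."""
    return [sum(x[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

def syndrome(c):
    """H c^T mod 2; the all-zero vector exactly when c lies in E(C)."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

# The event set E(C) contains exactly 2^k = 16 codewords.
E_C = {tuple(encode(list(x))) for x in product((0, 1), repeat=k)}
```

Enumerating E(C) by brute force is of course only feasible for such tiny codes; it merely illustrates why the restriction to a 2^k-element subset of the 2^n vectors is what enables error correction.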

Note that the modulo operation is in the sequel not explicitly stated.

E(C) is a subset of the set S of all 2^n binary vectors of length n. The restriction to a subset is interesting as it opens the possibility of correcting corrupted words. However, the correction is a difficult operation and can usually only be performed in practice for simple or short codes.

On the other hand long codes can be constructed by the use of such simple constituent codes. Such constructions are reviewed in this section.

The two constituent linear systematic coding maps C^(l) : x → c^(l) = x · [I P^(l)] with l = 1, 2 and a direct coupling give the overall code E(C^(a)) with c^(a) = x · [I P^(1) P^(2)].

The constituent codes used for turbo decoding [2] are two systematic convolutional codes [10] with low trellis decoding complexity (See Appendix A.1). The overall code is obtained by a direct coupling as depicted in the figure to the right. The encoding of the non-systematic part P (l) can be done by a recursive encoder. The Π describes a permutation of the input vector x, which significantly improves the overall code properties but does not affect the complexity of the constituent decoders. If the two codes have rate 1/2 then the overall code will have rate 1/3.
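The direct coupling c^(a) = x · [I P^(1) P^(2)] can be sketched in a few lines. The parity map below is a deliberately toy stand-in (a mod-2 accumulator, i.e., the recursive 1/(1+D) encoder) for the paper's rate-1/2 constituent convolutional encoders; the permutation Π and the input are chosen arbitrarily for illustration.

```python
# Hedged sketch of the direct coupling: systematic bits x, parity of x,
# and parity of the permuted input Π(x). The accumulator is a toy stand-in
# for a real recursive systematic convolutional encoder.

def accumulator_parity(x):
    """Toy recursive parity: p_t = p_{t-1} + x_t mod 2."""
    p, out = 0, []
    for bit in x:
        p = (p + bit) % 2
        out.append(p)
    return out

def turbo_encode(x, perm):
    """Direct coupling c^(a) = [x, P1(x), P2(Pi x)]."""
    x_perm = [x[i] for i in perm]
    return x + accumulator_parity(x) + accumulator_parity(x_perm)

x = [1, 0, 1, 1]
perm = [2, 0, 3, 1]        # the interleaver Pi (arbitrary example choice)
c = turbo_encode(x, perm)  # length 3k, i.e., overall rate k/3k = 1/3
```

Note that the permutation only reorders the second encoder's input, so the two constituent decoders keep their low trellis complexity unchanged, exactly as stated above.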

(Figure: direct coupling of two constituent encoders, with inputs x, outputs c, and interleaver Π.)

Remark 1 (Generalised Concatenation) A concatenation can be used to construct codes with defined properties, usually a large minimum HAMMING distance. Note that generalised concatenated [3,4] codes exhibit the same basic concatenation map. Their distance properties are investigated under an additional partitioning of the code G^(2).

Another possibility to couple codes is given in the following definition. This method will prove to be very general, albeit rather unintuitive, as the description is based on parity check matrices H.

The overall code C^(a) with the stacked parity check [H^(1); H^(2)] c^T = 0 is obtained by a dual coupling of the constituent codes C^(l) := E(C^(l)) = {c : H^(l) c^T = 0} for l = 1, 2.

By a dual coupling the code space is obtained as the intersection C^(a) = C^(1) ∩ C^(2) of the constituent code spaces.
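That stacking parity matrices realises the intersection can be verified by brute force on a tiny example (assumed here, not taken from the paper): H^(1) is a single overall parity check and H^(2) forces the first two bits to agree.

```python
# Brute-force check of dual coupling: stacking H^(1) and H^(2) yields
# exactly the intersection C^(1) ∩ C^(2) of the constituent code spaces.
from itertools import product

def code_space(H, n):
    """E(C) = all length-n binary vectors c with H c^T = 0 mod 2."""
    return {c for c in product((0, 1), repeat=n)
            if all(sum(h * x for h, x in zip(row, c)) % 2 == 0 for row in H)}

n = 4
H1 = [[1, 1, 1, 1]]   # overall parity check: even-weight words
H2 = [[1, 1, 0, 0]]   # constraint c_1 = c_2

C1, C2 = code_space(H1, n), code_space(H2, n)
C_a = code_space(H1 + H2, n)  # dual coupling via the stacked parity matrix
```

Here both constraints together force c_1 = c_2 and c_3 = c_4, leaving the four codewords 0000, 1100, 0011, 1111; the stacked-matrix code and the set intersection coincide.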

Example 2 A dual

…(Full text truncated)…


Reference

This content is AI-processed based on ArXiv data.
