Benchmarking simulation of hybrid decoding scheme for parity-encoded spin systems

Yoshihiro Nambu∗
NEC-AIST Quantum Technology Cooperative Research Laboratory, National Institute of Advanced Industrial Science and Technology
(Dated: March 31, 2026)

This paper presents classical benchmark simulations of a practical hybrid decoding scheme for parity-encoded spin systems, which is well suited to the development of quantum annealing devices based on on-chip superconducting technology. We compared the performance of finding the optimal solution using two embedding schemes for emulating all-to-all connectivity from local interactions: the SLHZ model, proposed by Sourlas, Lechner, Hauke, and Zoller, and the commonly used minor embedding (ME) scheme. We found that the SLHZ scheme is more efficient than the ME scheme when combined with postreadout classical decoding based on the classical bit-flipping algorithm, although the SLHZ scheme by itself is substantially less efficient than the ME scheme.

Quantum annealing (QA) is expected to hold great potential as a fast solver for hard combinatorial optimization problems [1, 2]. It is well known that most such problems, for example the quadratic unconstrained binary optimization (QUBO) problem, can be mapped to 2-local couplings between spin pairs [3–5]. Implementing a large number of spins with controllable all-to-all pairwise connectivity is therefore required for QA devices. However, this demand is challenging in actual devices because it requires individually programmable long-range interactions, which are incompatible with the short-range physical links through which interactions are implemented in practice. Several techniques can be used to circumvent this problem. One solution is based on the minor embedding (ME) scheme, whose name comes from graph theory [6, 7].
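As a concrete illustration of the QUBO-to-Ising mapping mentioned above, the sketch below converts a QUBO cost into 2-local Ising couplings via the substitution x_i = (1 − z_i)/2. The helper names and the matrix conventions are our own assumptions for illustration, not taken from the paper.

```python
import numpy as np

def qubo_to_ising(Q):
    """Map a QUBO cost x^T Q x (x_i in {0, 1}) to 2-local Ising couplings J,
    local fields h, and a constant offset via x_i = (1 - z_i) / 2 (z_i = +/-1).
    Hypothetical helper for illustration; conventions are our own."""
    n = Q.shape[0]
    J = np.zeros((n, n))
    h = np.zeros(n)
    offset = 0.0
    for i in range(n):
        for j in range(n):
            q = Q[i, j]
            if q == 0.0:
                continue
            if i == j:
                # Q_ii x_i = Q_ii / 2 - (Q_ii / 2) z_i
                h[i] -= q / 2.0
                offset += q / 2.0
            else:
                # Q_ij x_i x_j = (Q_ij / 4) (1 - z_i - z_j + z_i z_j)
                J[i, j] += q / 4.0
                h[i] -= q / 4.0
                h[j] -= q / 4.0
                offset += q / 4.0
    return J, h, offset

def qubo_energy(Q, x):
    return float(x @ Q @ x)

def ising_energy(J, h, z):
    # J has zero diagonal, so z @ J @ z sums each stored coupling once
    return float(z @ J @ z + h @ z)
```

By construction, the two energies agree up to the constant offset on every one of the 2^N configurations, which is what makes the 2-local spin formulation equivalent to the original QUBO.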
In the ME scheme, the N spins of the original Ising problem, which we refer to as the logical spins, are replaced by chains of physical spins implemented on the device. Strong ferromagnetic interactions within an individual chain impose energy penalties that align the physical spins so that each chain behaves as a single logical spin. As a result, the N spins of the logical problem are encoded into K = O(N²) physical spins. The ME scheme is actually used in D-Wave QA devices [8–10].

Alternatively, Lechner, Hauke, and Zoller have proposed another clever embedding scheme [11]. In their scheme, K = N(N−1)/2 physical spins are arranged on a two-dimensional lattice with a square unit cell, and the four spins at the corners of each unit cell are coupled via a local four-body interaction. The K physical spins encode the parities, i.e., whether they are aligned or anti-aligned, of the K possible pairs of logical spins, and the K logical couplings are mapped to local fields acting on the K physical spins, respectively. It is interesting to note that the same scheme was previously proposed by Sourlas as an instance of the soft-annealing concept, based on an isomorphism between the Ising model and classical error-correcting codes (ECC) [12–14]. From the viewpoint of ECC, the values of the physical spins correspond to the codeword, and the product of the values of the four physical spins at the corners of a unit cell corresponds to the syndrome. The four-body interactions within the unit cell impose an energy penalty on each syndrome to satisfy the parity constraints of the ECC. This scheme will be referred to as the SLHZ scheme hereafter. The advantages of this scheme are as follows. First, because it enables a quasiplanar layout, it is well suited to developing QA devices based on on-chip superconducting technology [15–18].

∗ y-nambu@aist.go.jp
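The parity encoding and its syndrome structure can be illustrated as follows. This is a generic sketch of the pair-parity map z_(i,j) = s_i s_j and of the closure constraints it implies; the actual SLHZ lattice fixes which quadruples of neighboring physical spins form the local four-body checks (with three-body checks at the boundary), so the layout details here are intentionally omitted.

```python
def parity_encode(s):
    """Pair-parity map: N logical spins s_i = +/-1 are encoded into the
    K = N(N-1)/2 physical parity spins z_(i,j) = s_i * s_j for i < j.
    Generic sketch of the SLHZ encoding, not its exact lattice layout."""
    n = len(s)
    return {(i, j): s[i] * s[j] for i in range(n) for j in range(i + 1, n)}

def cycle_check(z, i, j, k, l):
    """Syndrome of a four-spin check: the parity spins around the closed
    cycle i-j-k-l-i multiply to +1 in every valid codeword, because each
    logical spin enters the product twice and cancels."""
    def p(a, b):  # parity spin of an unordered pair
        return z[(min(a, b), max(a, b))]
    return p(i, j) * p(j, k) * p(k, l) * p(l, i)
```

A violated check (product −1) flags a readout error, which is exactly the syndrome information that the postreadout decoder described later exploits.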
In fact, several researchers are conducting studies based on the SLHZ scheme and superconducting device technology [19–23]. Second, this scheme demonstrates excellent scalability in hardware implementation as the number of logical qubits increases. Third, since the SLHZ scheme is closely related to low-density parity-check (LDPC) codes [24], we can use classical decoding techniques to speed up the search for an optimal solution, as shown below.

Both of these schemes require the same K = O(N²) physical resource cost when the N logical spins are embedded into them. These costs are obviously higher than the cost of the original logical problem. As a result, an overhead occurs for both schemes. In this paper, we present benchmarking experiments for these embedding schemes and show that the overhead depends not only on the number of physical spins but also on other important factors. We suggest that although the overhead of the original SLHZ scheme is far larger than that of the ME scheme [25], we can cancel the overhead by classical postreadout decoding. We believe this suggests the potential of the SLHZ scheme over the ME scheme as a platform for superconducting QA devices.

We used a classical Markov chain Monte Carlo (MCMC) sampler to benchmark the different embedding schemes. Although this benchmark does not necessarily reflect the performance of actual QA devices, it is enough to gain insight into what is important for reducing the overhead in the SLHZ scheme. We generated temporal sequences using the MCMC sampler. The ensemble of samples in a sequence has the statistical distribution associated with the Gibbs ensemble.
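As a minimal sketch of such a sampler, the following single-spin-flip Metropolis routine generates a temporal sequence whose stationary distribution is the Gibbs distribution exp(−βH). The paper does not specify its sampler's proposal or annealing schedule, so all details below (fixed β, uniform spin choice) are illustrative assumptions.

```python
import numpy as np

def metropolis_sequence(J, h, beta, n_steps, rng):
    """Single-spin-flip Metropolis sampler targeting exp(-beta * H) for
    H(z) = sum_ij J_ij z_i z_j + sum_i h_i z_i, with J assumed to have a
    zero diagonal. Illustrative sketch; not the paper's exact sampler."""
    n = len(h)
    z = rng.choice([-1, 1], size=n)  # random initial configuration
    samples = []
    for _ in range(n_steps):
        i = rng.integers(n)
        # field seen by spin i (J may be stored one-sided, so add both triangles)
        local = (J[i] + J[:, i]) @ z + h[i]
        dE = -2.0 * z[i] * local     # energy change if z_i is flipped
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            z[i] = -z[i]
        samples.append(z.copy())
    return samples
```

Because the acceptance rule satisfies detailed balance, the recorded sequence samples the Gibbs ensemble once the chain has mixed, which is the property the benchmark relies on.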
We investigated the probability that the ensemble, either generated as above or randomly generated as explained below, involves the optimal solution corresponding to the logical ground state; this will be referred to as the success probability hereafter. We evaluated the success probabilities as a function of sample size and compared the performance of each scheme based on the results. If the same probability is obtained with fewer samples, or if a higher probability is obtained with the same number of samples, we judge the performance to be superior.

In this work, four benchmark experiments, schematically shown in Fig. 1(a)-(d), were conducted for every embedding scheme presented later:

(a) As theoretical baseline data, we evaluated the success probability using M randomly generated samples, which amounts to an exhaustive search for the optimal state.

(b) We generated a temporal sequence of M samples from a randomly generated initial sample using the MCMC sampler, and evaluated the success probability using these M samples. This experiment amounts to a classical annealing-based search.

(c) We performed postreadout decoding on the M samples obtained by the MCMC sampler, and evaluated the success probability using the decoded samples. This experiment amounts to a classical annealing-based search combined with classical postreadout decoding.

(d) Additionally, we performed postreadout decoding on M randomly generated samples, and evaluated the success probability using the decoded samples. This experiment amounts to an exhaustive search combined with classical postreadout decoding.

We considered optimization based on the following all-to-all connected logical Ising Hamiltonian:

H_logical(Z) = −Σ_{i<j} J_ij Z_i Z_j
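The success-probability criterion used in experiments (a)-(d) can be sketched as follows: a sample set "succeeds" if it contains a configuration attaining the logical ground energy, and the probability is estimated over independently drawn sets. The function names and the estimator itself are our own illustration (assuming H(Z) = −Σ_{i<j} J_ij Z_i Z_j), not the paper's exact procedure.

```python
import numpy as np

def logical_energy(J, z):
    """All-to-all logical Ising energy, assuming the convention
    H(Z) = -sum_{i<j} J_ij Z_i Z_j (leading minus sign as in the text)."""
    return -float(np.sum(np.triu(J, 1) * np.outer(z, z)))

def success_probability(sample_sets, J, e_ground):
    """Fraction of independently drawn sample sets containing at least one
    configuration that reaches the logical ground energy. Hypothetical
    estimator for illustration."""
    hits = sum(
        any(np.isclose(logical_energy(J, z), e_ground) for z in sample_set)
        for sample_set in sample_sets
    )
    return hits / len(sample_sets)
```

Experiment (a), for instance, would feed this estimator sets of M uniformly random configurations, while (b)-(d) would feed it MCMC-generated and/or decoded samples instead.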