On the Effects of Pseudo and Quantum Random Number Generators in Soft Computing

Jordan J. Bird · Anikó Ekárt · Diego R. Faria

Received: date / Accepted: date

Jordan J. Bird
School of Engineering and Applied Science, Aston University
E-mail: birdj1@aston.ac.uk

Anikó Ekárt
School of Engineering and Applied Science, Aston University
E-mail: d.faria@aston.ac.uk

Diego R. Faria
School of Engineering and Applied Science, Aston University
E-mail: d.faria@aston.ac.uk

Abstract In this work, we argue that the implications of Pseudo and Quantum Random Number Generators (PRNG and QRNG) inexplicably affect the performances and behaviours of various machine learning models that require a random input. These implications are yet to be explored in Soft Computing until this work. We use a CPU and a QPU to generate random numbers for multiple Machine Learning techniques. Random numbers are employed in the random initial weight distributions of Dense and Convolutional Neural Networks, in which results show a profound difference in learning patterns for the two. In 50 Dense Neural Networks (25 PRNG/25 QRNG), QRNG increases over PRNG for accent classification by +0.1%, and QRNG exceeded PRNG for mental state EEG classification by +2.82%. In 50 Convolutional Neural Networks (25 PRNG/25 QRNG), the MNIST and CIFAR-10 problems are benchmarked; in MNIST, the QRNG experiences a higher starting accuracy than the PRNG but ultimately only exceeds it by 0.02%. In CIFAR-10, the QRNG outperforms PRNG by +0.92%. The n-random split of a Random Tree is enhanced towards a new Quantum Random Tree (QRT) model, which has differing classification abilities to its classical counterpart; 200 trees are trained and compared (100 PRNG/100 QRNG). Using the accent and EEG classification datasets, a QRT seemed inferior to an RT, as it performed worse on average by -0.12%.
This pattern is also seen in the EEG classification problem, where a QRT performs worse than an RT by -0.28%. Finally, the QRT is ensembled into a Quantum Random Forest (QRF), which also has a noticeable effect when compared to the standard Random Forest (RF). Ensembles of 10 to 100 trees are benchmarked for the accent and EEG classification problems. In accent classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.14% accuracy. In EEG classification, the best RF (100 RT) outperforms the best QRF (100 QRT) by 0.08% but is far more complex, requiring twice the number of trees in the committee. All differences are observed to be situationally positive or negative and thus are likely data dependent in their observed functional behaviour.

Keywords Quantum Computing · Soft Computing · Machine Learning · Neural Networks · Classification

1 Introduction

Quantum and Classical hypotheses of our reality are individually definitive and yet independently paradoxical, in that they are both scientifically verified though contradictory to one another. These concurrently antithetical, nevertheless infallible natures of the two models have inflamed debate between researchers since the days of Albert Einstein and Erwin Schrödinger in the early 20th century. Though the lack of a Standard Model of the Universe continues to pose a problem for physicists, the field of Computer Science thrives by making use of both in the Classical and Quantum computing paradigms, since both are independently observable in nature. Though the vast majority of computers available are classical, Quantum Computing has been emerging since the late 20th Century and is becoming increasingly available for use by researchers and private institutions.
Cloud platforms developed by industry leaders such as Google, IBM, Microsoft and Rigetti are quickly growing in resources and operational size. This rapidly expanding availability of quantum computational resources allows researchers to perform computational experiments, such as heuristic searches or machine learning, while making use of the laws of quantum mechanics in their processes. For example, for n computational bits in a state of entanglement, only one needs to be measured for the states of all n bits to be known, since they all exist in parallel or anti-parallel relationships. Through this process, computational complexity is reduced by a factor of n. Bounded-error Quantum Polynomial time (BQP) problems are a set of computational problems which cannot be solved by a classical computer in polynomial time, whereas a quantum processor has the ability to do so with its different laws of physics.

Optimisation is a large multi-field conglomeration of research, which is rapidly accelerating due to the growing availability of powerful computing hardware such as CUDA. Examples include Ant Colony Optimisation inspired by the pheromone-dictated behaviour of ants Deng et al. (2019), orthogonal translations to derive a Principal Component Analysis Zhao et al. (2019), velocity-based searches of particle swarms Deng et al. (2017), as well as entropy-based methods of data analysis and classification Zhao et al. (2018).

There are several main contributions presented by this research:

1. A comparison of the abilities of Dense Neural Networks with their initial random weight distributions derived by Pseudorandom and Quantum Random methods.
2. An exploration of Random Tree models compared to Quantum Random Tree models, which utilise Pseudorandom and Quantum Random Number Generators in their generation respectively.
3. A benchmark of the number of Random Trees in a Random Forest model compared to the number of Quantum Random Trees in a Quantum Random Forest model.
4. A comparison of the effects of Pseudo and True randomness in initial random weight distributions in Computer Vision, applied to Deep Neural Networks and Convolutional Neural Networks.

Although Quantum, Quantum-inspired, and Hybrid Classical/Quantum algorithms are explored, as well as the likewise methods for computing, the use of a Quantum Random Number Generator is rarely explored within a classical machine learning approach in which an RNG is required Kretzschmar et al. (2000).

This research aims to compare approaches to random number generation in Soft Computing under two laws of physics which directly defy one another: the Classical, in which true randomness is impossible, and the Quantum, in which true randomness is possible Calude and Svozil (2008). Through the application of both Classical and Quantum Computing, simulated and true random number generation are tested and compared via the use of a Central Processing Unit (CPU) and an electron spin-based Quantum Processing Unit (QPU), the latter by placing a subatomic particle into a state of quantum superposition. Logic would conjecture that the results of the two ought to be indistinguishable from one another, but experimentation within this study suggests otherwise.

The rest of this article is structured as follows: Section 2 gives an overview of the background to this project and important related theories and works; specifically, Quantum Computing, the differing ideas of randomness in both Classical and Quantum computing, applications of quantum theory in computing and, finally, a short subsection on the machine learning theories used in this study.
Section 3 describes the configuration of the models as well as the methods used specifically to realise the scientific studies in this article, before the results are presented and analysed in Section 4. The experimental results are divided into four individual experiments:

– Experiment 1 - On random weight distribution in Dense Neural Networks: Pseudorandom and Quantum Random Number Generators are used to initialise the weights in Neural Network models.
– Experiment 2 - On Random Tree splits: The n Random Splits for a Random Tree classifier are formed by Pseudo and Quantum Random numbers.
– Experiment 3 - On Random Tree splits in Random Forests: The Quantum Tree model derived from Experiment 2 is used in a Quantum Random Forest ensemble classifier.
– Experiment 4 - On Computer Vision: A Deep Neural Network and a Convolutional Neural Network are trained on two image recognition datasets with pseudo and true random weight distributions for the application of Computer Vision.

Experiments are separated in order to focus upon the effects of differing random number generators on a specific model. Explored in these are the effects of Pseudorandom and Quantum Random number generation in their processes, and a discussion of similarities and differences between the two in terms of statistics as well as their wider effect on the classification process. Section 5 outlines possible extensions to this study for future works and, finally, a conclusion is presented in Section 6.

2 Background and Related Works

2.1 Quantum Computing

Pioneered by Paul Benioff's 1980 work Benioff (1980), Quantum Computing is a system of computation that makes computational use of phenomena outside of classical physics, such as the entanglement and superposition of subatomic particles Gershenfeld and Chuang (1998).
Whereas classical computing is concerned with electronic bits that have values of 0 or 1 and logic gates to process them, quantum computing uses both classical bits and gates as well as new possible states, such as a bit being in a state of superposition (0 and 1) or entangled with other bits. Entanglement means that the value of a bit, even before measurement, can be assumed to be parallel or anti-parallel to another bit with which it is entangled Bell (1964). These extended laws allow for the solving of problems far more efficiently than classical computers. For example, a 64-bit system has approximately 9.22 quintillion (2⁶³ − 1) values with its individual bits at values 1 or 0, whereas, unlike the three-state ternary system which QPUs are often mistaken for, the laws of superposition and the degrees of state would allow a small array of qubits to represent all of these values at once - theoretically allowing quantum computers to solve problems that classical computers will never be able to solve. Since the stability of entanglement decreases as more computational qubits are used, only very small-scale experiments have been performed as of today. The Quantum Processing Units (QPUs) made available for use by Rigetti, Google and IBM have up to 16 available qubits for computing via their cloud platforms.

2.2 Randomness in Classical and Quantum Computing

In classical computing, randomness is not truly random; rather, it is simulated by a pseudo-random process. Processor architectures and Operating Systems have individual methods of generating pseudo-random numbers which must conform to cybersecurity standards such as NIST Barker and Kelsey (2007). Major issues arise with the possibility of backdoors; notably, for example, Intel's pseudo-random generator which, after hijacking, allowed for complete control of a computer system for malicious intent Degabriele et al. (2016); Schneier (2007).
The Intel issue was far from a lone incident: the RANDU system was cracked by the NSA for unprecedented access to the RSA BSAFE cryptographic library, and in 2006 Debian OpenSSL's random number generator was also cracked, leading to Debian being compromised for two years Markowsky (2014). Though there are many methods of deriving a pseudo-random number, all classical methods, due to the limitations imposed by the laws of classical physics, are sourced through arbitrary yet deterministic events Gallego et al. (2013), such as a combination of the time since the nth last key press, hardware temperature, system clock, lunar calendar, etc. This arbitration could possibly hamper or improve algorithms that rely on random numbers, since the state of the executing platform could directly influence their behaviour.

According to Bell's famous theorem, "No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics" Bell (1964). This directly argued against the position put forward by Einstein et al., in which it is claimed that the Quantum Mechanical 'paradox' is simply due to incomplete theory Einstein et al. (1935). Using Bell's theorem, demonstrably random numbers can be generated through the fact that observing a particle's state while in superposition gives a true 50/50 outcome (qubit value 0, 1) Pironio et al. (2010). This concretely random output for the value of a single bit can be used to build integers comprised of larger numbers of bits which, since the bits are all individually random, are random as a whole. This process is known as a Quantum Random Number Generator (QRNG). Behaviours in Quantum Mechanics such as, but not limited to, branching path superposition Jennewein et al. (2000), time of arrival Wayne et al. (2009), particle emission count Ren et al.
(2011), attenuated pulse Wei and Guo (2009), and vacuum fluctuations Gabriel et al. (2010) are all entirely random - and have been used to create true QRNGs. In 2000, it was observed that a true random number generator could be formed through the observation of photons Stefanov et al. (2000). Firstly, a beam of light is split into two streams of entangled photons; noise is reduced, after which the photons of both streams are observed. The two detectors correlate to 0 and 1 values, and a detection will append a bit to the result. The detection of a photon is non-deterministic between the two, and therefore a completely random series of values is the result of this experiment.

Fig. 1 The Famous Schrödinger's Cat Thought Experiment. When unobserved, the cat arguably exists in two opposite states (alive and dead), which itself constitutes a third superstate Schrödinger (1935).

This study makes use of the branching path superposition method for the base QRNG, in that the observed state of a particle c at time t is non-deterministic until after observation at time t. In the classical model, the law of superposition simply states that for properties A and B with outcomes X and Y, both properties can lead to state XY. For example, the translation and rotation of a wheel can lead to a rolling state Cullerne (2000), a third superstate of the two possible states. This translates into quantum physics, where quantum states can be superposed into an additional valid state Dirac (1981). This is best exemplified by Erwin Schrödinger's famous thought experiment, known as Schrödinger's Cat Schrödinger (1935). As seen in Fig. 1, a cat sits in a box along with a Geiger counter and a source of radiation. If alpha radiation is detected, which is a completely random event, the counter releases a poison into the box, killing the cat.
The thought experiment explains superposition in such a way that, although the cat has two states (Alive or Dead), when unobserved the cat is both simultaneously alive and dead. In terms of computing, this means that the two classical behaviours of a single bit, 1 or 0, can be superposed into an additional state, 1 and 0. Just as the cat only becomes alive or dead when observed, a superposed qubit only becomes 1 or 0 when measured.

A Bloch Sphere is a graphical representation of a qubit in superposition Bloch (1946) and can be seen in Fig. 2. In this diagram, the basis states are interpreted by each pole, denoted as |0⟩ and |1⟩. Other behaviours, the rotations of spin about points ψ, φ, and θ, are used to superpose the two states to a degree. Thus, depending on the method of interpretation, many values can be encoded within only a single bit of memory.

Fig. 2 A Bloch Sphere Represents the Two Basis States of a Qubit (0, 1) as well as the States of Superposition In-between.

The Hadamard Gate within a QPU is a logical gate which coerces a qubit into a state of superposition based on a basis (input) state. The basis state 0 is mapped as follows:

|0⟩ ↦ (|0⟩ + |1⟩) / √2    (1)

The other possible basis state, 1, is mapped as:

|1⟩ ↦ (|0⟩ − |1⟩) / √2    (2)

This single-qubit quantum Fourier transform is thus represented through the following matrix:

H = 1/√2 [ 1   1
           1  −1 ]    (3)

Just as in the thought experiment described, in which Schrödinger's cat is both alive and dead, the qubit now exists in a state of quantum superposition; it is both 1 and 0. That is, until it is measured, at which point there is an equal probability that the observed state is 1 or 0, giving a completely randomly generated bit value. This is the logical basis of all QRNGs.
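The Hadamard-based bit generation can be sketched classically. The following is our minimal Python simulation of the logic in Eqs. (1)-(3), not the paper's Quantum Assembly code from Appendix A; on a QPU the measurement outcome is truly random, whereas classically the Born-rule sampling must fall back on a PRNG, so this only illustrates the pipeline:

```python
import math
import random

# Minimal classical simulation (ours) of the QRNG logic in Eqs. (1)-(3).

INV_SQRT2 = 1 / math.sqrt(2)
H = [[INV_SQRT2, INV_SQRT2],
     [INV_SQRT2, -INV_SQRT2]]      # single-qubit Hadamard matrix, Eq. (3)

def apply(gate, state):
    """Multiply a 2x2 gate by a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

def measure_bit(rng=random):
    psi = apply(H, [1.0, 0.0])     # |0> -> (|0> + |1>)/sqrt(2), Eq. (1)
    p_one = psi[1] ** 2            # Born rule: P(measure 1) = 0.5
    return int(rng.random() < p_one)

psi = apply(H, [1.0, 0.0])
print(round(psi[1] ** 2, 10))  # 0.5 - an even chance of each outcome
```

Applying H twice returns the qubit to |0⟩, a quick sanity check that the matrix in Eq. (3) is its own inverse.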
2.3 Quantum Theory in Related State-of-the-art Computing Applications

The field of Quantum Computing is young, and thus there are many frontiers of research, none of which have yet been mastered. Quantum theory, though, has been shown in some cases to improve current ideas in Computer Science, as well as to endow a system with abilities that would be impossible on a classical computer. This section outlines some of the state-of-the-art applications of quantum theory in computing.

Quantum Perceptrons are a theoretical approach to deriving a quantum equivalent of a perceptron unit (neuron) within an Artificial Neural Network Schuld et al. (2014). Current lines of research focus on the possibilities of associative memory through quantum entanglement of internal states within the neurons of the network. The approach is heavily inspired by the notion that the biological brain may operate within both classical and quantum physical space Hagan et al. (2002). Preliminary works have found Quantum Neural Networks to have a slight statistical advantage over classical techniques within larger and more complex domains Narayanan and Menneer (2000). A very limited extent of research suggests quantum effects in a network to be a possible source of consciousness Hameroff and Penrose (1996), providing an exciting avenue for Artificial Intelligence research in the field of Artificial Consciousness. Inspiration from quantum mechanics has also led to the implementation of Neural Networks based on fuzzy logic systems Purushothaman and Karayiannis (1997); this research showed that QNNs are capable of structure recognition, which sigmoid-activated hidden units within a network cannot perform.

There are many statistical processes that are either more efficient or even simply possible through the use of Quantum Processors.
Simon's Problem provides initial proof that there are problems that can be solved exponentially faster when executed in quantum space Arora and Barak (2009). Based on Simon's Problem, Shor's Algorithm uses quantum computing to derive the prime factors of an integer in polynomial time Shor (1999), something which a classical computer is not able to do. Some of the most prominent lines of research in quantum algorithms for Soft Computing are the exploration of Computational Intelligence techniques in quantum space, such as meta-heuristic optimisation, heuristic search, and probabilistic optimisation. Pheromone trails in Ant Colony Optimisation searches, generated and measured in the form of qubits with operations of entanglement and superposition for measurement and state, scored highly on the Tennessee Eastman Process benchmark problem due to the optimal operations involved Wang et al. (2007). This work was applied by researchers who, in turn, found that combining Support Vector Machines with Quantum Ant Colony Optimisation search provided a highly optimised strategy for solving fault diagnosis problems Wang et al. (2008), greatly improving the base SVM. Parallel Ant Colony Optimisation has also been observed to greatly improve in performance when operating similar techniques You et al. (2010). Similar techniques have also been used in the genetic search of problem spaces: with quantum logic gates performing genetic operations and probabilistic representations of solution sets in superposition/entanglement, the technique is observed to be superior over its classical counterpart when benchmarked on the combinatorial optimisation problem Han et al. (2001).

Statistical and Deep Learning techniques are often useful in other scientific fields such as engineering Naderpour et al. (2019); Naderpour and Mirrashid (2019), medicine Khan et al.
(2001); Penny and Frost (1996), chemistry Schütt et al. (2019); Gastegger et al. (2019), and astrophysics Krastev (2019); Kimmy Wu et al. (2019), among a great many others Carlini and Wagner (2017). As of yet, quantum solutions have not been applied within these fields towards the possible improvement of soft computing techniques.

3 Experimental Setup and Design

For the generation of true random bit values, an electron-based superposition state is observed using a QPU. The Quantum Assembly Language code for this is given in Appendix A; an electron is transformed using a Hadamard Gate and thus exists in a state of superposition. When the bit is observed, it takes on a state of either 0 or 1, which is a non-deterministic 50/50 outcome, i.e. perfect randomness. A VM example of how these operations are formed into a random integer is given in Appendix B; the superposition-state particle is sequentially observed and each derived bit is appended to a result until 32 bits have been generated. These 32 bits are then treated as a single binary number. The result of this process is a truly random unsigned 32-bit integer.

For the generation of bounded random numbers, the result is normalised with the upper bound being the highest possible value of the intended number. For those that also have lower bounds below zero, a simple subtraction is performed on a higher bound of normalisation to give a range. For example, if a random weight distribution for neural network initialisation is to be generated between -0.5 and 0.5, the random 32-bit integer is normalised between 0 and 1, and 0.5 is subtracted from the result, giving the desired range. This process is used for the generation of both PRNs and QRNs, which are therefore directly comparable with one another and thus also directly relative in their effects upon a machine learning process.
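The bounding scheme just described can be illustrated with a short sketch. This is our Python illustration, not the study's code; `next_bit` stands in for the QPU observation loop of Appendix B and is backed here by a PRNG purely for demonstration:

```python
import random

# Sketch of the bounded-number scheme described above: 32 observed bits
# are concatenated into an unsigned integer, normalised to [0, 1], then
# shifted into a target range such as [-0.5, 0.5].

def bits_to_uint(next_bit, n_bits=32):
    value = 0
    for _ in range(n_bits):
        value = (value << 1) | next_bit()   # append one measured bit
    return value

def bounded_random(next_bit, low=-0.5, high=0.5, n_bits=32):
    u = bits_to_uint(next_bit, n_bits) / (2 ** n_bits - 1)  # normalise to [0, 1]
    return low + u * (high - low)                           # shift into [low, high]

next_bit = lambda: random.getrandbits(1)
w = bounded_random(next_bit)
print(-0.5 <= w <= 0.5)  # True
```

An all-zero bit stream maps to the lower bound and an all-one stream to the upper bound, so the full target range is reachable.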
For the first dataset in each experiment, a publicly available Accent Classification dataset is retrieved¹. This dataset was gathered from subjects from the United Kingdom and Mexico, all speaking the same seven phonetic sounds ten times each. A flat dataset is produced via 27 logs of their Mel-frequency Cepstral Coefficients every 200ms to produce a mathematical description of the audio data. A four-class problem arises in the prediction of the locale of the speaker (West Midlands, London, Mexico City, Chihuahua).

The second dataset in each experiment is an EEG brainwave dataset sourced from a previous study Bird et al. (2018)². The wave data has been extracted from the TP9, AF7, AF8 and TP10 electrodes, and has been processed in a similar way to the speech in the first dataset, except through a much larger set of mathematical descriptors. For the four-subject EEG dataset, a three-class problem arises: the concentrative state of the subject (concentrating, neutral, relaxed). The feature generation process for this dataset was observed to be effective for mental state classification in the aforementioned study, as well as for emotional classification from the same EEG electrodes Bird et al. (2019a).

For the final experiment, two image classification datasets are used. Firstly, the MNIST image dataset is retrieved³ LeCun and Cortes (2010) for the MLP. This dataset is comprised of 60,000 28x28 handwritten single digits 0-9, a 10-class problem with each class being that of the digit written. Secondly, the CIFAR-10 dataset is retrieved⁴ Krizhevsky et al. (2009) for a CNN. This, as with the MNIST dataset, is comprised of 60,000 10-class 32x32 images of entities (e.g. bird, cat, deer).

For the generation of pseudorandom numbers, an AMD FX8320 processor is used with given bounds for experiments 1a and 1b.
The Java Virtual Machine generates pseudorandom numbers for experiments 2 and 3. All of the pseudorandom number generators had their seed set to the order of execution, i.e. the first model has a seed of 1 and the nth model has a seed of n. Due to the high resource usage of training a large volume of neural networks, the CUDA cores of an Nvidia GTX980Ti were utilised and the networks were trained on a 70/30 train/test split of the datasets. For the Machine Learning models explored in Experiments 2 and 3, 10-fold cross validation was used due to the availability of computational resources to do so.

¹ https://www.kaggle.com/birdy654/speech-recognition-dataset-england-and-mexico
² https://www.kaggle.com/birdy654/eeg-brainwave-dataset-mental-state
³ http://yann.lecun.com/exdb/mnist/
⁴ https://www.cs.toronto.edu/~kriz/cifar.html

3.1 Experimental Process

In this subsection, a step-by-step process is given describing how each model is trained towards comparison between PRNG and QRNG methods. MLP and CNN RNG methods are operated through the same technique and as such are described together; following this, the Random Tree (RT) and Quantum Random Tree (QRT) are described. Finally, the ensembles of the two types of trees are described as Random Forest (RF) and Quantum Random Forest (QRF). Each set of models is tested and compared for two different datasets, as previously described. For replicability of these experiments, the code for Random Bit Generation is given in Appendix A (for construction of an n-bit integer). Construction of the n-bit integer through an electron observation loop is given in Appendix B.

For the Random Neural Networks, all use the ADAM Stochastic Optimiser for weight tuning Kingma and Ba (2014), and the activation function of all hidden layers is ReLU Agarap (2018).
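The per-model seeding scheme described above (the i-th pseudorandom model is seeded with its order of execution i) can be sketched as follows. This is our Python illustration rather than the JVM generator actually used, and the weight count is arbitrary:

```python
import random

# Our illustration of the paper's seeding scheme: seed 1 for the first
# model, seed n for the n-th, making every PRNG run reproducible.
# Python's Mersenne Twister stands in for the JVM generator; bounds
# follow the paper's [-0.5, 0.5] weight-initialisation range.

def initial_weights(seed, n_weights, low=-0.5, high=0.5):
    rng = random.Random(seed)               # model-specific generator
    return [rng.uniform(low, high) for _ in range(n_weights)]

# 25 PRNG-initialised models, seeds 1..25 (order of execution)
batches = [initial_weights(seed, n_weights=100) for seed in range(1, 26)]
print(len(batches), all(-0.5 <= w <= 0.5 for b in batches for w in b))  # 25 True
```

Reusing a seed reproduces a model's initial weights exactly, which is what makes the PRNG half of each experiment repeatable.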
For Random Trees, the K randomly chosen attributes (acquired via either the PRNG or the QRNG) are defined below; the minimum possible value for K is 1, and no pruning is performed. Minimum class variance is set to −inf since the datasets are well balanced, the maximum depth of the tree is not limited, and classification must always be performed even if confusion occurs. The chosen Random Tree attributes are also used for all trees within the Forests, where the random number generator for the selection of data subsets is likewise decided by a PRNG or QRNG. The algorithmic complexity of a Random Tree is given as O(v × n log(n)), where n is the number of data objects in the dataset and v is the number of attributes belonging to a data object in the set. The algorithmic complexity of the neural networks is dependent on the chosen topology for each problem, and is presented as an O(n²) problem.

Given n networks to be benchmarked for x epochs, the MLP and CNN experiments are generally automated as follows:

1. Initialise n/2 neural networks with initial random weights generated by an AMD CPU (pseudorandom).
2. Initialise n/2 neural networks with initial random weights generated by a Rigetti QPU (true random).
3. Train all n neural networks.
4. Consider classification accuracy at each epoch⁵ for comparison, as well as statistical analysis of each set of n/2 networks.

Given n trees with a decision variable Kx (K randomly chosen attributes at node x), the process of training Random Trees (RT) and Quantum Random Trees (QRT) is as follows:

1. Train n/2 Random Trees, in which the RNG for deciding set K for every x is executed by an AMD CPU (pseudorandom).
2. Train n/2 Quantum Random Trees, in which the RNG for deciding set K for every x is executed by a Rigetti QPU (true random).
3.
Considering the best and worst models, as well as the mean result, compare the two sets of n/2 models in terms of statistical difference⁶.

Finally, the Random Tree and Quantum Random Tree are benchmarked as ensembles, through Random Forests and Quantum Random Forests. This is performed mainly because the unpruned Random Tree is likely to overfit to the training data Hastie et al. (2005). The process is as follows⁷:

1. For the Random Forests, benchmark 10 forests containing {10, 20, 30 ... 100} Random Tree models (as generated in the Random Tree Experimental Process list above).
2. For the Quantum Random Forests, benchmark 10 forests containing {10, 20, 30 ... 100} Quantum Random Tree models (as generated in the Random Tree Experimental Process list above).
3. Compare the abilities of all 20 models, in terms of classification ability as well as the statistical differences, if any, between different numbers of trees in the forest.

⁵ Accuracy/epoch graphs are given in Section 4.
⁶ Box and whisker comparisons are given in Section 4.
⁷ For further detail on the Random Decision Forest classifier selected for this study, please refer to Breiman (2001).

4 Results and Discussion

In this section, results are presented and discussed for multiple Machine Learning models when their random numbers are either Pseudo-randomly or True (Quantum) Randomly generated. Please note that in the neural network training graphs, lines do not correlate on a one-to-one basis. Each line is the accuracy of a neural network throughout the training process, and line colour
Fig. 3 The Main Learning Curve Experienced for 50 Dense Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights in Accent Classification

defines how that network had its weights initialised, i.e. whether it has pseudo or quantum random numbers as its initial weights.

4.1 MLP: Random Initialisation of Dense Neural Network Weights

For Experiment 1, a total of fifty dense neural networks were trained for each dataset. All networks were identical except for their initial weight distributions. Initial random weights were set within the bounds of -0.5 and 0.5; 25 of the networks derived theirs from a PRNG, and the other 25 from a QRNG.

4.1.1 Accent Classification

For Experiment 1a, the accent classification dataset was used. In this experiment, we observed initially sparse learning processes before stabilisation occurs at approximately epoch 30 and the two converge upon a similar result. Fig. 3 shows this convergence of the learning processes during the initial learning curve experienced in the first half of the process; in this graph it can be observed that the behaviour of the pseudorandom weight distribution is far less erratic than that of the quantum random number generator. This shows that the two methods of random number generation do have an observable effect on the learning process of a neural network.

For PRNG, the standard deviation between all 25 final results was 0.00098, suggesting that a classification maximum was being converged upon. The standard deviation for QRNG was considerably larger, but statistically minimal at 0.0017. Mean final results were 98.73% for PRNG distributions and 98.8% for QRNG distributions. The maximum classification accuracy achieved by the PRNG initial distribution was 98.8%, whereas QRNG achieved a slightly higher result of 98.9% at epoch 49. For this problem, the differences between the initial distributions of PRNG and QRNG are minimal; QRNG distribution results are somewhat more entropic than PRNG, but otherwise the two sets of results are indistinguishable from one another, most likely simply due to random noise.

Fig. 4 The Full Learning Process of 50 Dense Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights in Mental State EEG Classification

Fig. 5 The Final Epochs of Learning for 50 Dense Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights in Mental State EEG Classification

4.1.2 Mental State Classification

For Experiment 1b, the Mental State EEG classification dataset was used Bird et al. (2018). Fig. 4 shows the full learning process of the networks from initial epoch 0 up until backpropagation epoch 100. Though this graph is erratic and crowded, the emergence of a pattern becomes obvious within epochs 20-30, where the learning processes split into two distinct groups. In this figure, a more uniform behaviour of the QRNG methods is noted, unlike in the previous experiment. The behaviours of the PRNG-distributed models are extremely erratic and, in some cases, very slow in terms of improvements made. Fig. 5 shows a higher-resolution view of the data at the end of the learning process when terminated at epoch 100; a clear distinction of results can be seen and a concrete separation can be drawn between the two groups of models, except for two intersecting processes. It should be noted that by this point the learning process has not settled towards a true best fitness, but a vast and clear separation has occurred.

For PRNG, the standard deviation between all 25 results was 0.98. The standard deviation for QRNG was somewhat smaller at 0.74. The mean of all results was 63.84% for PRNG distributions and 66.45% for QRNG distributions, a slightly superior result.
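The initialisation scheme common to both groups, uniform weights within -0.5 and 0.5 drawn from an arbitrary random source, can be sketched as follows. This is a minimal NumPy sketch; the function name and the use of 32-bit integers as the raw random source (as produced by the appendix code) are our assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def bits_to_uniform(raw_ints, low=-0.5, high=0.5, bit_depth=32):
    """Map raw random integers (from a PRNG or a QRNG) onto
    uniform floats within [low, high] for weight initialisation."""
    raw = np.asarray(raw_ints, dtype=np.uint64)
    # Scale each integer from [0, 2^bit_depth - 1] down to [0, 1],
    # then shift into the requested weight bounds.
    unit = raw / float(2**bit_depth - 1)
    return low + unit * (high - low)

# Example with a pseudorandom source; a QRNG would simply
# supply the raw integers instead of np.random.
rng = np.random.default_rng(seed=0)
raw = rng.integers(0, 2**32, size=(64, 10), dtype=np.uint64)
weights = bits_to_uniform(raw)
```

The same conversion serves both network groups; only the origin of the raw integers differs.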
The maximum classification accuracy achieved by the PRNG initial distribution was 65.35%, whereas QRNG achieved a somewhat higher best result of 68.17%. The worst-best result for the PRNG distribution networks was 62.28%, and was 65.31% for the QRNG distribution networks. For this problem, the differences between the initial distributions of PRNG and QRNG weights are noticeable; QRNG distribution results are consistently better than the PRNG approach to initial weight distribution.

4.2 Random Tree and Quantum Random Tree Classifiers

Experiments 2a and 2b make use of the same datasets as 1a and 1b respectively. In this experiment, 200 Random Tree classifiers are trained for each dataset. These are, again, comprised of two sets: firstly, 100 Random Tree (RT) classifiers which use pseudorandom numbers, and secondly, 100 Quantum Random Tree (QRT) classifiers, which source their random numbers from the QRNG. Random numbers are used to select the n-random attribute subsets at each split.

4.2.1 Accent Classification

The 200 experiments are graphically represented as a box-and-whisker plot in Fig. 6.

Fig. 6 A Comparison of results from 200 Random Tree Classifiers, 100 using PRNG and 100 using QRNG on the Accent Classification Dataset

The most superior classifier was the RT, with a best result of 86.64% and a worst of 85.68%; the QRT, on the other hand, achieved a best accuracy of 86.52% and a worst of 85.62%. The best and worst results of the two models are extremely similar. The standard deviation of the RT results was 0.19, and the QRT similarly had a standard deviation of 0.17. The range of the RT results was 0.96, and the QRT results had a similar range of 0.9. Interestingly, a similar pattern is found not only in the results, but also in the high outlier when considered relative to the model's median point.
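The only difference between the RT and QRT models is the source of randomness behind the n-random attribute subset chosen at each split. That selection step can be sketched as follows; this is a minimal Python sketch with a pluggable bit source standing in for the PRNG or QRNG, and the function name, parameter names, and the subset-size default (a common Random Tree heuristic, not necessarily the one used here) are our assumptions:

```python
import math
import random

def random_attribute_subset(n_attributes, next_bit, k=None):
    """Choose k attribute indices for one tree split, drawing all
    randomness from next_bit() -- a callable returning 0 or 1,
    which may be backed by a PRNG or a QRNG."""
    if k is None:
        # A common Random Tree default: ceil(log2(attributes)) + 1.
        k = int(math.ceil(math.log2(n_attributes))) + 1

    def rand_below(n):
        # Rejection sampling: build an unbiased integer in [0, n) from raw bits.
        bits = n.bit_length()
        while True:
            value = 0
            for _ in range(bits):
                value = (value << 1) | next_bit()
            if value < n:
                return value

    pool = list(range(n_attributes))
    chosen = []
    for _ in range(min(k, n_attributes)):
        chosen.append(pool.pop(rand_below(len(pool))))
    return chosen

# Example with a pseudorandom bit source; a QRNG would supply next_bit instead.
prng = random.Random(42)
subset = random_attribute_subset(26, lambda: prng.getrandbits(1))
```

Swapping the `next_bit` callable is the entire difference between an RT and a QRT split under this sketch.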
Though an overall slight superiority is seen in pseudorandom number generation, the two models are considerably similar in their abilities.

4.2.2 Mental State Classification

Fig. 7 shows the distribution for the 200 Random Tree classifiers trained on the Mental State dataset. The standard deviation of results from the RT was 0.81, whereas it was slightly lower for the QRT at 0.73. The best result achieved by the RT was 79.68% classification accuracy, whereas the best result from the QRT was 79.4%. The ranges of results for the RT and QRT were a similar 3.31 and 3.47 respectively. Overall, very little difference between the two models occurs. The distribution of results can be seen to be extremely similar to the first RT/QRT experiment when compared to Fig. 6.

Fig. 7 A Comparison of results from 200 Random Tree Classifiers, 100 using PRNG and 100 using QRNG on the Mental State EEG Dataset

4.3 Random Forest and Quantum Random Forest Classifiers

In this third experiment, the datasets are classified using two models: Random Forests (RF), which use a committee of Random Trees to vote on a class, and Quantum Random Forests (QRF), which use a committee of Quantum Random Trees to vote on a class. For each dataset, 10 of these models are trained, with committees of 10 to 100 trees respectively.

Fig. 8 Classification Accuracies of 10 Random Forest and 10 Quantum Forest Models on the Accent Classification Dataset

4.3.1 Accent Classification

The results from the Accent Classification dataset for the RF and QRF methods can be observed in Fig. 8. The most superior models both used a committee of 100 of their respective trees, scoring two similar results of 91.86% with pseudo-randomness and 91.78% with quantum randomness.
The standard deviation of the RF results was 0.5%, whereas the QRF had a slightly lower deviation of 0.43. The worst result by the RF was 90.31% classification accuracy at 10 Random Trees; the worst result by the QRF was similarly at 10 Quantum Trees, with 90.36% classification accuracy (+0.05). The range of the RF results was 1.55, compared to the QRF results with a range of 1.43.

Fig. 9 Classification Accuracies of 10 Random Forest and 10 Quantum Forest Models on the EEG Mental State Classification Dataset

4.3.2 Mental State Classification

The results from the Mental State EEG Classification dataset for the RF and QRF methods can be observed in Fig. 9. The most superior model for the RF scored 86.91% with a committee of 100 trees, whereas the best result for the QRF was 86.83%, achieved by committees of both 100 and 60 trees. The range of the QRF results was slightly lower than that of the RF, measured at 2.34 and 2.42 respectively. Although initially considered negligible, this same pattern was observed in the previous experiment in Fig. 8. Additionally, the standard deviation of the RF was higher at 0.69, compared to 0.65 for the QRF.

Though very similar results were produced, the best QRF result required approximately 60% of the computational resources needed to achieve the best RF result. Unlike the first forest experiment, the patterns of the two different models are vastly different and often alternate erratically. This suggests that the two models should both be benchmarked in order to increase the chances of discovering a superior model, considering the level of data dependency of the models' classification accuracies.

Fig. 10 The Full Learning Process of 50 Deep Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights in MNIST Image Dataset Classification

4.4 CNN: Initial Random Weight Initialisation for Computer Vision

Experiments 4a and 4b make use of the MNIST and CIFAR-10 image datasets respectively. In 4a, an ANN is initialised following the same PRNG and QRNG methods utilised in Experiment 1 and trained to classify the MNIST handwritten digits dataset. In 4b, the final dense layer of the CNN is initialised through the same methods.

4.4.1 MNIST Image Classification

For the purpose of scientific recreation, the architecture for MNIST classification is derived from the official Keras example^8. This is given as two sets of two identical layers: a hidden layer of 512 densely connected neurons followed by a dropout layer of 0.2 to prevent over-fitting. All hidden neurons, as with the other experiments in this study, are initialised randomly within the standard -0.5 to 0.5 range. 25 of these are generated by a PRNG and the other 25 by a QRNG, producing observable results for 50 models in total.

Due to the concise nature and close results observed in the full process shown in Fig. 10, two additional graphs are presented. Firstly, the graph in Fig. 11 shows the classification abilities of the models before any training occurs. Within this, a clear distinction can be made: the starting weights generated by the QRNG are almost exclusively superior to those generated by the PRNG, providing the QRNG models with a superior starting point for learning. The distinction continues to occur throughout the initial learning curve, observed in Fig. 12, not too dissimilar to the results in the previous experiment. In the pre-training abilities of the two methods of weight initialisation, dense areas can be observed at approximately 77.5%. Finally, at around epochs 10-14, the resultant models begin to converge and the separation becomes less prominent. This is shown through both sets of models having identical best classification accuracies of 98.64%, suggesting a true best fitness may possibly have been achieved. Worst-best accuracies are also indistinguishably close, 98.27% for the QRNG models and 98.25% for the PRNG models; population fitnesses are extremely dense and little entropy exists throughout the whole set of final results.

^8 https://github.com/keras-team/keras/tree/master/examples

Fig. 11 Initial (pre-training) Classification Abilities of 50 Deep Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights in MNIST Image Dataset Classification

Fig. 12 The Initial Learning Curve Experienced for 50 Deep Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights in MNIST Image Dataset Classification

Fig. 13 The Full Learning Process of 50 Convolutional Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights for the Final Hidden Dense Layer in CIFAR-10 Image Dataset Classification

4.4.2 CIFAR-10 Image Classification

In the CNN experiment, the CIFAR-10 image dataset is used to train a Convolutional Neural Network. The two number generators are applied for the initial random weight distribution of the final hidden dense layer, after feature extraction has been performed by the CNN operations. The network architecture is constructed as in the official Keras Development Team example for ease of scientific recreation of the experiment.
In this architecture, one hidden dense layer of 512 units precedes the final classification output, and weights are generated within the bounds of -0.5 and 0.5, as is standard in neural network generation. 50 CNNs are trained, all of which are structurally identical except that 25 have their dense layer weights initialised by the PRNG and the other 25 have their dense layer weights initialised by the QRNG.

Fig. 13 shows the full learning process of the two different methods of initial weight distribution. It can be observed that there are roughly three partitions of results between the two methods; the pattern is visually similar to the ANN learning curve in the MNIST Computer Vision experiment. Fig. 14 shows the pre-training classification abilities of the initial weights; the distribution is relatively equal and unremarkable unless compared to the final results of the training process in Fig. 15: the four best initial distributions of network weights, all of which were generated by the QRNG, continue to be the four superior overall models. It must be noted, though, that the rest of the models, regardless of RNG method, are extremely similar, and no other divide is seen by the end of the process.

Fig. 14 Initial (pre-training) Classification Abilities of 50 Convolutional Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights for the Final Hidden Dense Layer in CIFAR-10 Image Dataset Classification

Fig. 15 The Learning within the Final Epochs for 50 Convolutional Neural Networks, 25 with PRNG and 25 with QRNG Initially Distributed Weights for the Final Hidden Layer in CIFAR-10 Image Dataset Classification

The six overall most superior models were all initialised by the QRNG, the best result being a classification accuracy of 75.35% at epoch 50.
The seventh best model was the highest-scoring model that had its dense layer weights initialised by the PRNG, scoring a classification accuracy of 74.43%. The worst model produced by the QRNG had a classification accuracy of 71.91%; slightly behind this was the overall worst model from all experiments, a model initialised by the PRNG with an overall classification ability of 71.82%. The QRNG initialisation therefore outperformed the PRNG by 0.92 in the best case, and by 0.09 in the worst case. The average result between the two methods of distribution was equal, at 73.3% accuracy.

It must be noted that by epoch 50 the training process was still producing increasingly better results, but the computational resources available limited the 50 networks to being trained for this amount of time.

5 Future Work

It was observed that those experiments which did stabilise reached, as expected, closer similarities in results. With resources, future work should concern the further training of models to observe this pattern over a greater range of examples. Extensive computational resources would be required to train such an extensive number of networks.

Furthermore, the patterns in Fig. 9, Quantum versus Random Forest for Mental State Classification, suggest that the two forests have greatly different situational classification abilities and may produce a stronger overall model if both are used in an ensemble. This conjecture is strengthened through a preliminary experiment: a vote of maximum probability between the two best models in this experiment (QRF(60) and RF(100)) produces a result of 86.96%, a slight yet superior classification ability. The forests ensembled with other forests of their own type, on the other hand, do not improve.
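The maximum-probability vote used in this preliminary experiment can be sketched as follows. This is a minimal Python sketch; the paper does not give the implementation, so the function name, and the interpretation that the vote takes the more confident model's estimate per class, are our assumptions:

```python
def max_probability_vote(prob_a, prob_b):
    """Ensemble two classifiers' class-probability distributions for one
    instance by taking, for each class, whichever model is more confident,
    then predicting the class with the highest resulting confidence."""
    assert len(prob_a) == len(prob_b)
    merged = [max(a, b) for a, b in zip(prob_a, prob_b)]
    return max(range(len(merged)), key=lambda i: merged[i])

# Hypothetical per-class probabilities for one instance from the two models:
qrf_probs = [0.10, 0.55, 0.35]  # QRF(60) leans towards class 1
rf_probs = [0.05, 0.20, 0.75]   # RF(100) is more confident about class 2
predicted = max_probability_vote(qrf_probs, rf_probs)  # class 2 wins
```

Applied per instance over a test set, this vote lets whichever forest is situationally more confident decide, which is consistent with the two forests having differing situational classification abilities.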
With this discovery, a future study should consider ensemble methods between the two, both for deriving a stronger overall classification process and for exploring the patterns in ensembles of QRNG- and PRNG-based learning techniques. This, at the very minimum, would require the time and computational resources to train 100 models to explore the two sets of ten models produced in the related experiment, though exploring beyond this, or even a full brute-force search of each model, increasing their population of forests by 1 rather than 10, would produce a clearer view of the patterns within.

Among the most noticeable effects of QRNG and PRNG in machine learning, many of the neural network experiments show greatly differing learning patterns and overall results when using PRNG and QRNG methods to generate the initial weights for each neuron within the hidden layers. Following this, further types of neural network approaches should be explored to observe the similarities and differences that occur. In addition, the architectures of the networks are by no means at an optimum; the heuristic nature of the network should also be explored, by techniques such as a genetic search, for it too requires the idea of random influence Bird et al. (2019b,c).

6 Conclusion

To conclude, this study performed 8 individual experiments to observe the effects of Quantum and Pseudorandom Number Generators when applied to multiple machine learning techniques. Some of the results were somewhat unremarkable, as expected, but some effects presented profound differences between the two, many of which are as of yet greatly unexplored. Based on these effects, possibilities for future work have been laid out in order to properly explore them.
Though observing superposition provides perfectly true randomness, this also presents a scientific issue in the replication of experiments, since results cannot be coerced in the same manner as a PRNG can through a seed. In terms of cybersecurity this nature is ideal Yang et al. (2014); Stipcevic (2012), but it provides frustration in a research environment, since only generalised patterns at time t can be analysed Svore and Troyer (2016). This is overcome to an extent by the nature of repetition in the given experiments: many classifiers are trained to provide a more averaged overview of the systems.

The results for all of these experiments suggest that data dependency prevents any concrete conclusion of a positive or negative effect for the use of QRNG over PRNG, since there is no clearly superior method. Although this is true, and pseudo-randomness on modern processors is argued to be indistinguishable from true randomness, clear patterns have emerged between the two. The two methods do inexplicably produce different results from one another when employed in machine learning, an unprecedented and, as of yet, relatively unexplored line of scientific research. In some cases, this was observed to be a relatively unremarkable, small, and possibly coincidental difference; but in others, a clear division separated the two.

The results in this study are indicative of a profound effect on the patterns observed in machine learning techniques when random numbers are generated either by the rules of classical or quantum physics. Whether their effects are positive or negative is seemingly dependent on the data at hand; regardless, the fact that two methods of randomness ostensibly cause such disparate effects, juxtaposed with the current scientific processes of their usage, should not be underestimated. Rather, it should be explored.
Appendices

1 Quantum Assembly Language for Random Number Generation

Note: code comments (#) are not Quantum Assembly Language and are simply for explanatory purposes. The following code will place a qubit into superposition via the Hadamard gate and then subsequently measure the state and store the observed value. The state is equally likely to be observed as either 1 or 0.

#Apply the Hadamard gate to qubit zero
H 0
#Declare memory space 'ro' of one bit
DECLARE ro BIT[1]
#Measure qubit 0 into the 0th index of 'ro'
MEASURE 0 ro[0]

2 Python Code for Generating a String of Random Bits

The following code generates a random 32-bit integer by observing a qubit in superposition, which produces a true random result of either 1 or 0. The result is appended at each individual observation until 32 bits have been generated. Decimal conversion then takes place, and two files are generated: a raw text file containing the decimal results, and a CSV containing a column of binary integers and their decimal equivalents.

from pyquil import get_qc
from pyquil.quil import Program
from pyquil.gates import H

# Select the lattice of qubits
lattice = "Aspen-1-5Q-B"
# Initialise the QPU
qpu = get_qc(lattice)
# Place qubit 0 into superposition
program = Program(H(0))
# Measure the superposition
program = program.measure_all()
# Print the Quantum Assembly Language
print(program)
compiled_program = qpu.compile(program)

# Length of integer to generate
bits = 32
print("\nRandom number of " + str(bits) + " bits:")

for y in range(0, 10000):
    output = ""
    for x in range(0, bits):
        # Run the code on a Quantum Processing Unit
        result = qpu.run(compiled_program)
        # Observe the measured bit and append it
        output += str(result[0][0])
    print("\n\nRandom no. " + str(y) + " is: " + output)
    decimal = int(output, 2)
    with open("numbers.txt", "a") as myfile:
        myfile.write("\n" + str(decimal))
    with open("random.csv", "a") as myfile:
        myfile.write("\n" + str(output) + "," + str(decimal))

Acknowledgments The authors would like to thank Rigetti Computing for granting access to their Quantum Computing Platform.

References

Agarap AF (2018) Deep learning using rectified linear units (relu). arXiv preprint arXiv:180308375
Arora S, Barak B (2009) Computational complexity: a modern approach. Cambridge University Press
Barker EB, Kelsey JM (2007) Recommendation for random number generation using deterministic random bit generators (revised). US Department of Commerce, Technology Administration, National Institute of Standards and Technology
Bell JS (1964) On the Einstein Podolsky Rosen paradox. Physics Physique Fizika 1(3):195
Benioff P (1980) The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. Journal of Statistical Physics 22(5):563–591
Bird JJ, Manso LJ, Ribiero EP, Ekárt A, Faria DR (2018) A study on mental state classification using EEG-based brain-machine interface. In: 9th International Conference on Intelligent Systems, IEEE
Bird JJ, Ekárt A, Buckingham CD, Faria DR (2019a) Mental emotional sentiment classification with an EEG-based brain-machine interface. In: The International Conference on Digital Image and Signal Processing (DISP'19), Springer
Bird JJ, Ekárt A, Faria DR (2019b) Evolutionary optimisation of fully connected artificial neural network topology. In: SAI Computing Conference 2019, SAI
Bird JJ, Faria DR, Manso LJ, Ekárt A, Buckingham CD (2019c) A deep evolutionary approach to bioinspired classifier optimisation for brain-machine interaction. Complexity 2019, DOI 10.1155/2019/4316548, URL https://doi.org/10.1155/2019/4316548
Bloch F (1946) Nuclear induction.
Physical Review 70(7-8):460
Breiman L (2001) Random forests. Machine Learning 45(1):5–32
Calude CS, Svozil K (2008) Quantum randomness and value indefiniteness. Advanced Science Letters 1(2):165–168
Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), IEEE, pp 39–57
Cullerne J (2000) The Penguin dictionary of physics. Penguin Books
Degabriele JP, Paterson KG, Schuldt JC, Woodage J (2016) Backdoors in pseudorandom number generators: Possibility and impossibility results. In: Annual International Cryptology Conference, Springer, pp 403–432
Deng W, Zhao H, Yang X, Xiong J, Sun M, Li B (2017) Study on an improved adaptive PSO algorithm for solving multi-objective gate assignment. Applied Soft Computing 59:288–302
Deng W, Xu J, Zhao H (2019) An improved ant colony optimization algorithm based on hybrid strategies for scheduling problem. IEEE Access 7:20281–20292
Dirac PAM (1981) The principles of quantum mechanics. 27, Oxford University Press
Einstein A, Podolsky B, Rosen N (1935) Can quantum-mechanical description of physical reality be considered complete? Physical Review 47(10):777
Gabriel C, Wittmann C, Sych D, Dong R, Mauerer W, Andersen UL, Marquardt C, Leuchs G (2010) A generator for unique quantum random numbers based on vacuum states. Nature Photonics 4(10):711
Gallego R, Masanes L, De La Torre G, Dhara C, Aolita L, Acín A (2013) Full randomness from arbitrarily deterministic events. Nature Communications 4:2654
Gastegger M, Schütt K, Sauceda H, Müller KR, Tkatchenko A (2019) Modeling molecular spectra with interpretable atomistic neural networks. In: APS Meeting Abstracts
Gershenfeld N, Chuang IL (1998) Quantum computing with molecules. Scientific American 278(6):66–71
Hagan S, Hameroff SR, Tuszyński JA (2002) Quantum computation in brain microtubules: Decoherence and biological feasibility. Physical Review E 65(6):061901
Hameroff S, Penrose R (1996) Orchestrated reduction of quantum coherence in brain microtubules: A model for consciousness. Mathematics and Computers in Simulation 40(3-4):453–480
Han KH, Park KH, Lee CH, Kim JH (2001) Parallel quantum-inspired genetic algorithm for combinatorial optimization problem. In: Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), IEEE, vol 2, pp 1422–1429
Hastie T, Tibshirani R, Friedman J, Franklin J (2005) The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer 27(2):83–85
Jennewein T, Simon C, Weihs G, Weinfurter H, Zeilinger A (2000) Quantum cryptography with entangled photons. Physical Review Letters 84(20):4729
Khan J, Wei JS, Ringner M, Saal LH, Ladanyi M, Westermann F, Berthold F, Manfred S, Antonescu CR, Peterson C (2001) Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nature Medicine 7(6):673
Kimmy Wu W, Trivedi S, Caldeira J, Avestruz C, Story K, Nord B (2019) DeepCMB: Lensing reconstruction of the cosmic microwave background with deep neural networks. In: American Astronomical Society Meeting Abstracts #233, vol 233
Kingma DP, Ba J (2014) Adam: A method for stochastic optimization. arXiv preprint arXiv:14126980
Krastev PG (2019) Real-time detection of gravitational waves from binary neutron stars using artificial neural networks. arXiv preprint arXiv:190803151
Kretzschmar R, Bueler R, Karayiannis NB, Eggimann F (2000) Quantum neural networks versus conventional feedforward neural networks: an experimental study. In: Neural Networks for Signal Processing X. Proceedings of the 2000 IEEE Signal Processing Society Workshop (Cat. No. 00TH8501), IEEE, vol 1, pp 328–337
Krizhevsky A, Nair V, Hinton G (2009) CIFAR-10 (Canadian Institute For Advanced Research). URL http://www.cs.toronto.edu/~kriz/cifar.html
LeCun Y, Cortes C (2010) MNIST handwritten digit database. URL http://yann.lecun.com/exdb/mnist/
Markowsky G (2014) The sad history of random bits. Journal of Cyber Security and Mobility 3(1):1–24
Naderpour H, Mirrashid M (2019) Shear failure capacity prediction of concrete beam–column joints in terms of ANFIS and GMDH. Practice Periodical on Structural Design and Construction 24(2):04019006
Naderpour H, Mirrashid M, Nagai K (2019) An innovative approach for bond strength modeling in FRP strip-to-concrete joints using adaptive neuro–fuzzy inference system. Engineering with Computers pp 1–18
Narayanan A, Menneer T (2000) Quantum artificial neural network architectures and components. Information Sciences 128(3-4):231–255
Penny W, Frost D (1996) Neural networks in clinical medicine. Medical Decision Making 16(4):386–398
Pironio S, Acín A, Massar S, de La Giroday AB, Matsukevich DN, Maunz P, Olmschenk S, Hayes D, Luo L, Manning TA (2010) Random numbers certified by Bell's theorem. Nature 464(7291):1021
Purushothaman G, Karayiannis NB (1997) Quantum neural networks (QNNs): inherently fuzzy feedforward neural networks. IEEE Transactions on Neural Networks 8(3):679–693
Ren M, Wu E, Liang Y, Jian Y, Wu G, Zeng H (2011) Quantum random-number generator based on a photon-number-resolving detector. Physical Review A 83(2):023820
Schneier B (2007) Did NSA put a secret backdoor in new encryption standard? URL http://www.wired.com/politics/security/commentary/securitymatters/2007/11/securitymatters 1115:2007
Schrödinger E (1935) Die gegenwärtige Situation in der Quantenmechanik. Naturwissenschaften 23(49):823–828
Schuld M, Sinayskiy I, Petruccione F (2014) The quest for a quantum neural network. Quantum Information Processing 13(11):2567–2586
Schütt K, Gastegger M, Tkatchenko A, Müller KR, Maurer R (2019) Unifying machine learning and quantum chemistry–a deep neural network for molecular wavefunctions. arXiv preprint arXiv:190610033
Shor PW (1999) Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Review 41(2):303–332
Stefanov A, Gisin N, Guinnard O, Guinnard L, Zbinden H (2000) Optical quantum random number generator. Journal of Modern Optics 47(4):595–598
Stipcevic M (2012) Quantum random number generators and their applications in cryptography. In: Advanced Photon Counting Techniques VI, International Society for Optics and Photonics, vol 8375, p 837504
Svore KM, Troyer M (2016) The quantum future of computation. Computer 49(9):21–30
Wang L, Niu Q, Fei M (2007) A novel quantum ant colony optimization algorithm. In: International Conference on Life System Modeling and Simulation, Springer, pp 277–286
Wang L, Niu Q, Fei M (2008) A novel quantum ant colony optimization algorithm and its application to fault diagnosis. Transactions of the Institute of Measurement and Control 30(3-4):313–329
Wayne MA, Jeffrey ER, Akselrod GM, Kwiat PG (2009) Photon arrival time quantum random number generation. Journal of Modern Optics 56(4):516–522
Wei W, Guo H (2009) Quantum random number generator based on the photon number decision of weak laser pulses. In: Conference on Lasers and Electro-Optics/Pacific Rim, Optical Society of America, p TUP5 41
Yang YG, Jia X, Sun SJ, Pan QX (2014) Quantum cryptographic algorithm for color images using quantum Fourier transform and double random-phase encoding. Information Sciences 277:445–457
You Xm, Liu S, Wang Ym (2010) Quantum dynamic mechanism-based parallel ant colony optimization algorithm. International Journal of Computational Intelligence Systems 3(sup01):101–113
Zhao H, Yao R, Xu L, Yuan Y, Li G, Deng W (2018) Study on a novel fault damage degree identification method using high-order differential mathematical morphology gradient spectrum entropy. Entropy 20(9):682
Zhao H, Zheng J, Xu J, Deng W (2019) Fault diagnosis method based on principal component analysis and broad learning system. IEEE Access 7:99263–99272