arXiv:0808.0240v1 [physics.data-an] 2 Aug 2008
Applying Bayesian Neural Networks to Separate Neutrino Events from Backgrounds in Reactor Neutrino Experiments
Ye Xu∗, Yixiong Meng, Weiwei Xu

Department of Physics, Nankai University, Tianjin 300071, People's Republic of China
Abstract

A toy detector has been designed to simulate the central detectors of reactor neutrino experiments. Samples of neutrino events and of the three major backgrounds are generated in the signal region with a Monte Carlo simulation of the toy detector. Bayesian neural networks (BNN) are applied to separate the neutrino events from the backgrounds. As a result, most of the neutrino events and of the uncorrelated background events in the signal region can be identified with the BNN, and a fraction of the fast-neutron and $^{8}$He/$^{9}$Li background events in the signal region can be identified as well. The signal-to-noise ratio in the signal region is thereby enhanced. The neutrino discrimination increases with the neutrino rate in the training sample, whereas the background discriminations decrease as the background rates in the training sample decrease.
Keywords: Bayesian neural networks, neutrino oscillation, identification

PACS numbers: 07.05.Mh, 29.85.Fj, 14.60.Pq
1 Introduction
The main goals of reactor neutrino experiments are to detect $\bar{\nu}_e \rightarrow \bar{\nu}_x$ oscillation and to precisely measure the neutrino oscillation mixing angle $\theta_{13}$. The experiments are designed to detect reactor $\bar{\nu}_e$'s via the inverse $\beta$-decay reaction $\bar{\nu}_e + p \rightarrow e^{+} + n$.
The signature is a delayed coincidence between the $e^{+}$ signal and the neutron-capture signal. In this paper, only three important sources of background are taken into account: the uncorrelated background from natural radioactivity, and the correlated backgrounds from fast neutrons and $^{8}$He/$^{9}$Li.
These backgrounds, like the neutrino events, consist of two signals, a fast signal and a delayed signal. It is vital to separate the neutrino events from the backgrounds accurately in reactor neutrino experiments.

∗Corresponding author, e-mail address: xuye76@nankai.edu.cn
The selection of neutrino events based on cuts is a method in which the event space is divided into two regions by a hyper-cuboid defined by the cuts: the events inside the hyper-cuboid, called the signal region, are regarded as neutrino events, and the events outside the hyper-cuboid are regarded as backgrounds. In fact, the backgrounds inside the signal region cannot be rejected by this method.
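The cut-based selection described above can be sketched as a membership test against a hyper-cuboid. This is a minimal illustration only: the variable names and cut values below are hypothetical, not taken from the paper.

```python
# Hypothetical cut-based selection: an event passes if every observable
# lies inside the hyper-cuboid defined by per-variable (low, high) cuts.
# Variable names and cut windows are illustrative assumptions.
CUTS = {
    "prompt_energy": (1.0, 8.0),    # MeV, fast (e+) signal
    "delayed_energy": (6.0, 10.0),  # MeV, neutron-capture signal
    "delta_t": (2.0, 200.0),        # microseconds between the two signals
}

def in_signal_region(event):
    """Return True if the event falls inside the hyper-cuboid (signal region)."""
    return all(lo <= event[var] <= hi for var, (lo, hi) in CUTS.items())

events = [
    {"prompt_energy": 3.2, "delayed_energy": 8.1, "delta_t": 30.0},  # passes all cuts
    {"prompt_energy": 0.5, "delayed_energy": 8.1, "delta_t": 30.0},  # fails prompt cut
]
selected = [e for e in events if in_signal_region(e)]
print(len(selected))  # 1
```

The sketch makes the paper's point concrete: a background event whose observables all happen to fall inside the cut windows passes this test and cannot be rejected by the cuts alone.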
The Bayesian neural network (BNN)[1] is a neural-network algorithm trained by Bayesian statistics. It is not only a non-linear function, as a neural network is, but it also controls model complexity. Its flexibility makes it possible to discover more general relationships in data than traditional statistical methods can, and its preference for simple models makes it possible to handle the over-fitting problem better than general neural networks do[2].
BNN has been used for particle identification and event reconstruction in high-energy physics experiments, such as in Refs.[3, 4, 5]. In this paper, BNN is applied to discriminate the neutrino events from the background events in the signal region of reactor neutrino experiments.
2 The Classification with BNN[1, 5]
The idea of Bayesian neural networks is to regard the process of training a neural network as a Bayesian inference. Bayes' theorem is used to assign a posterior density to each point, $\bar{\theta}$, in the parameter space of the neural networks. Each point $\bar{\theta}$ denotes a neural network. In the method of the Bayesian neural network, one performs a weighted average over all points in the parameter space of the neural network, that is, over all neural networks. The method makes use of training data $(x_1, t_1), (x_2, t_2), \ldots, (x_n, t_n)$, where $t_i$ is the known label associated with the datum $x_i$. $t_i = 0, 1, \ldots, N-1$ if there are $N$ classes in the classification problem; $x_i$ has $P$ components if there are $P$ factors on which the classification depends. That is, the set of data $x = (x_1, x_2, \ldots, x_n)$ corresponds to the set of targets $t = (t_1, t_2, \ldots, t_n)$. The posterior density assigned to the point $\bar{\theta}$, that is, to a neural network, is given by Bayes' theorem

$$p\left(\bar{\theta} \mid x, t\right) = \frac{p\left(x, t \mid \bar{\theta}\right) p\left(\bar{\theta}\right)}{p\left(x, t\right)} = \frac{p\left(t \mid x, \bar{\theta}\right) p\left(x \mid \bar{\theta}\right) p\left(\bar{\theta}\right)}{p\left(t \mid x\right) p\left(x\right)} = \frac{p\left(t \mid x, \bar{\theta}\right) p\left(\bar{\theta}\right)}{p\left(t \mid x\right)} \quad (1)$$

where the data $x$ do not depend on $\bar{\theta}$, so $p\left(x \mid \bar{\theta}\right) = p\left(x\right)$.
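The weighted average over all points $\bar{\theta}$ can be illustrated numerically. The sketch below is a toy assumption, not the paper's implementation: it uses a one-hidden-unit network and draws the posterior samples from a standard normal, whereas a real BNN obtains them by Markov-chain Monte Carlo sampling of $p(\bar{\theta} \mid x, t)$.

```python
import math
import random

random.seed(0)

def network(x, theta):
    """A toy one-hidden-unit network y(x; theta) with a sigmoid output,
    standing in for the neural network defined by a point theta-bar."""
    w1, b1, w2, b2 = theta
    h = math.tanh(w1 * x + b1)
    return 1.0 / (1.0 + math.exp(-(w2 * h + b2)))

# Stand-in posterior samples of theta-bar (hypothetical: in practice they
# come from MCMC sampling of the posterior density in Eq. (1)).
posterior_samples = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(100)]

def bnn_predict(x, samples):
    """Bayesian prediction: an equal-weight average of the network output
    over the posterior samples, approximating the integral over theta-bar."""
    return sum(network(x, th) for th in samples) / len(samples)

p = bnn_predict(0.5, posterior_samples)
print(0.0 < p < 1.0)  # True
```

Averaging many networks rather than committing to a single trained one is what gives the BNN its control of model complexity: no individual point $\bar{\theta}$ dominates the prediction unless the posterior concentrates there.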
We need the likelihood $p\left(t \mid x, \bar{\theta}\right)$ and the prior density $p\left(\bar{\theta}\right)$ in order to assign the posterior density $p\left(\bar{\theta} \mid x, t\right)$ to a neural network defined by the point $\bar{\theta}$. $p\left(t \mid x\right)$ is called the evidence and pla
…(Full text truncated)…