arXiv:0812.2713v1 [physics.data-an] 15 Dec 2008
Applying Bayesian Neural Network to Determine Neutrino Incoming Direction in Reactor Neutrino Experiments and Supernova Explosion Location by Scintillator Detectors
Weiwei Xu^a, Ye Xu^a∗, Yixiong Meng^a, Bin Wu^a

^a Department of Physics, Nankai University, Tianjin 300071, The People's Republic of China
Abstract

In this paper we discuss, using Monte-Carlo simulation, the application of a Bayesian neural network (BNN) to determining the neutrino incoming direction in reactor neutrino experiments and locating supernova explosions with scintillator detectors. Compared to the method in Ref. [1], the uncertainty on the measurement of the neutrino direction obtained with the BNN is significantly improved: it is about 1.0° at the 68.3% C.L. for the reactor neutrino direction, and about 0.6° at the 68.3% C.L. for the supernova neutrino direction. The uncertainty attainable with the BNN is smaller than that of the method in Ref. [1] by a factor of about 20, and smaller than that of the Super-Kamiokande experiment (SK) by a factor of about 8.
Keywords: Bayesian neural network, neutrino incoming direction, reactor neutrino, supernova neutrino

PACS numbers: 07.05.Mh, 29.85.Fj, 14.60.Pq, 95.85.Ry
1 Introduction
The location of a $\nu$ source is very important for the study of galactic supernova explosions. The determination of the neutrino incoming direction can be used to locate a supernova, especially if the supernova is not optically visible. A method based on the inverse $\beta$ decay, $\bar\nu_e + p \rightarrow e^{+} + n$, has been discussed in Ref. [1]. The method can be applied to determine a reactor neutrino direction and a supernova neutrino direction. But the uncertainty on the location of the $\nu$ source attainable with that method is not small enough: it is almost 2 times as large as that of the Super-Kamiokande experiment (SK). So we try to apply the Bayesian neural network (BNN) [2] to locate $\nu$ sources, in order to decrease the uncertainty on the measurement of the neutrino incoming direction.
∗ Corresponding author, e-mail address: xuye76@nankai.edu.cn
A BNN is a neural network trained by Bayesian statistics. It is not only a non-linear function, as neural networks are, but it also controls model complexity. Its flexibility makes it possible to discover more general relationships in data than traditional statistical methods do, and its preference for simple models makes it possible to handle the over-fitting problem better than general neural networks [3]. BNNs have been used for particle identification and event reconstruction in high energy physics experiments, such as in Refs. [4, 5, 6, 7]. In this paper we discuss, using Monte-Carlo simulation, the application of the BNN method to determining the neutrino incoming direction in reactor neutrino experiments and locating supernova explosions with scintillator detectors.
2 Regression with BNN [2, 6]
The idea of BNN is to regard the process of training a neural network as a Bayesian inference. Bayes' theorem is used to assign a posterior density to each point, $\bar\theta$, in the parameter space of the neural networks. Each point $\bar\theta$ denotes a neural network. In the BNN method, one performs a weighted average over all points in the parameter space of the neural network, that is, over all neural networks. The method makes use of training data $\{(x_1, t_1), (x_2, t_2), \ldots, (x_n, t_n)\}$, where $t_i$ is the known target value associated with datum $x_i$, which has $P$ components if there are $P$ input values in the regression. That is, the set of data $x = (x_1, x_2, \ldots, x_n)$ corresponds to the set of targets $t = (t_1, t_2, \ldots, t_n)$. The posterior density assigned to the point $\bar\theta$, that is, to a neural network, is given by Bayes' theorem
$$p(\bar\theta \mid x, t) = \frac{p(x, t \mid \bar\theta)\, p(\bar\theta)}{p(x, t)} = \frac{p(t \mid x, \bar\theta)\, p(x \mid \bar\theta)\, p(\bar\theta)}{p(t \mid x)\, p(x)} = \frac{p(t \mid x, \bar\theta)\, p(\bar\theta)}{p(t \mid x)} \qquad (1)$$
where the data $x$ do not depend on $\bar\theta$, so $p(x \mid \bar\theta) = p(x)$. We need the likelihood $p(t \mid x, \bar\theta)$ and the prior density $p(\bar\theta)$ in order to assign the posterior density $p(\bar\theta \mid x, t)$ to a neural network defined by the point $\bar\theta$. $p(t \mid x)$ is called the evidence and plays the role of a normalizing constant, so we ignore it. That is,

$$\text{Posterior} \propto \text{Likelihood} \times \text{Prior} \qquad (2)$$
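The weighted average over all points in parameter space can be sketched numerically. The following toy example (a hypothetical one-parameter "network" $y(x, \theta) = \theta x$ with a Gaussian likelihood and a broad Gaussian prior, not the paper's model) shows how the posterior weights of Eq. (2) combine into a prediction; the evidence never needs to be computed, since it cancels in the normalization:

```python
import numpy as np

# Toy one-parameter "network": y(x, theta) = theta * x.
# Training data generated with true theta = 2 plus Gaussian noise.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 20)
t = 2.0 * x + rng.normal(scale=0.1, size=x.size)

thetas = np.linspace(0.0, 4.0, 401)           # grid of points in parameter space
log_prior = -0.5 * (thetas / 10.0) ** 2       # broad Gaussian prior p(theta)
# Gaussian likelihood p(t | x, theta) with known noise sigma = 0.1
log_lik = np.array([-0.5 * np.sum(((t - th * x) / 0.1) ** 2) for th in thetas])

log_post = log_lik + log_prior                # Posterior ∝ Likelihood × Prior (Eq. 2)
w = np.exp(log_post - log_post.max())
w /= w.sum()                                  # the evidence drops out here

x_new = 0.5
y_pred = np.sum(w * thetas * x_new)           # weighted average over all "networks"
```

With 20 low-noise training points the posterior concentrates near the true parameter, so the weighted-average prediction at $x = 0.5$ comes out close to $1.0$; in a real BNN this grid sum is replaced by a Monte-Carlo sum over networks sampled from the posterior.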
We consider a class of neural networks defined by the function

$$y(x, \bar\theta) = b + \sum_{j=1}^{H} v_j \sin\!\left(a_j + \sum_{i=1}^{P} u_{ij} x_i\right) \qquad (3)$$
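As a concrete illustration, the network function of Eq. (3) can be evaluated in a few lines. This is only a sketch of the functional form (the array shapes and the random toy parameters are our own choices, not the authors' code):

```python
import numpy as np

def network_output(x, u, v, a, b):
    """Evaluate Eq. (3): y(x, theta) = b + sum_j v_j * sin(a_j + sum_i u_ij * x_i).

    x: input vector of length P
    u: (P, H) matrix of weights u_ij
    v: length-H vector of weights v_j
    a: length-H vector of hidden biases a_j
    b: scalar output bias
    """
    hidden = np.sin(a + x @ u)   # activations of the H hidden nodes
    return b + hidden @ v        # single scalar output

# Toy network with P = 2 inputs and H = 3 hidden nodes
rng = np.random.default_rng(0)
u = rng.normal(size=(2, 3))
v = rng.normal(size=3)
a = rng.normal(size=3)
b = 0.5
y = network_output(np.array([0.1, -0.2]), u, v, a, b)
```

One point $\bar\theta$ in parameter space corresponds to one concrete set of arrays `(u, v, a, b)` here; the BNN prediction averages `network_output` over many such sets weighted by the posterior.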
The neural networks have $P$ inputs, a single hidden layer of $H$ hidden nodes and one output. In the particular BNN described here, each neural network has the same structure. The parameters $u_{ij}$ and $v_j$ are called the weights, and $a_j$ and $b$ are called the biases. Both sets of parameters are ge
…(Full text truncated)…