Kaleidoscopic Scintillation Event Imaging



Alex Bocchieri¹, John Mamish², David Appleyard³, Andreas Velten¹
¹University of Wisconsin-Madison  ²Georgia Institute of Technology  ³Ubicept
https://bocchs.github.io/kaleidoscopic_scintillator

Abstract

Scintillators are transparent materials that interact with high-energy particles and emit visible light as a result. They are used in state-of-the-art methods of measuring high-energy particles and radiation sources. Most existing methods use fast single-pixel detectors to detect and time scintillation events. Cameras provide spatial resolution but can only capture an average over many events, making it difficult to image the events associated with an individual particle. Emerging single-photon avalanche diode cameras combine speed and spatial resolution to enable capturing images of individual events. This allows us to use machine vision techniques to analyze events, enabling new types of detectors. The main challenge is the very low brightness of the events: techniques have to work with a very limited number of photons. We propose a kaleidoscopic scintillator to increase light collection in a single-photon camera while preserving the event's spatial information. The kaleidoscopic geometry creates mirror reflections of the event in known locations for a given event location that are captured by the camera. We introduce theory for imaging an event in a kaleidoscopic scintillator and an algorithm to estimate the event's 3D position. We find that the kaleidoscopic scintillator design provides sufficient light collection to perform high-resolution event measurements for advanced radiation imaging techniques using a commercial CMOS single-photon camera.

1. Introduction

High-energy particles are created in nuclear processes, including from man-made devices and natural interactions. There is a wide variety of particles with varying properties.
Detecting and characterizing them is crucial for a broad range of applications including nuclear security [34], nuclear reactor and stockpile imaging [7], medical imaging [16], archeology [25, 31], and astronomy [32]. A particularly useful property of these particles is that many of them can penetrate through dense materials and therefore allow us to image through barriers and inside solid structures.

Figure 1. Method overview. An image is composed of light emitted from the scintillation event that reaches the camera either directly or after reflecting off mirrors of a kaleidoscopic scintillator. The spatial relationship between the event and mirror reflections is embedded in a Gaussian mixture model whose likelihood is maximized to estimate the event's location using the EM algorithm.

The same penetrating properties, however, make it challenging to build a camera that can perform imaging or vision using high-energy particles. A detector needs to have a large volume and density for a particle to interact. At the same time, we would like to know the location and shape of the interaction if it happens. One common approach to achieve this is by using a scintillator.

A scintillator is a transparent crystal that converts ionizing radiation into visible light. It provides mass for an incident particle to be absorbed and detected with a photosensor. While a particle is propagating through a scintillator, it may deposit energy and cause a "scintillation event". In the case of gamma-ray radiation, a gamma ray collides with an electron in the scintillator and causes the electron to recoil over a random walk. A finite number of scintillation photons, proportional to how much energy is deposited, is emitted isotropically from the electron's path over a decay time. The photons propagate out of the scintillator and are captured by a sensor.
Measurement of the event, such as its position, time, and energy deposition, is performed with the obtained signal. These measurements are then used in various downstream tasks to characterize the radiation source. Regardless of the task, optimizing light collection onto the sensor maximizes the signal-to-noise ratio (SNR) and event measurement performance (e.g., spatial, temporal, and energy resolutions).

This work focuses on optimizing imaging geometry in designs that estimate the 3D position of individual scintillation events with a single-photon camera. Currently, such designs estimate an event's 3D position using depth from defocus and perspective projection [8]. These designs suffer from low light collection and are susceptible to noise from dark counts. High-speed, single-photon sensors are required for tasks that measure individual events due to high particle incidence rates and the finite number of photons emitted by an event. For these kinds of tasks, increasing exposure time does not increase light collection per event and decreases SNR, which makes maximizing light throughput crucial. In general, large, thick scintillators are desirable because they increase the likelihood of events. However, estimating the 3D position of events in a larger volume is more challenging. Applications where 3D event locations are measured include the Compton camera [28], the neutron scatter camera [35], positron emission tomography (PET), and single-photon emission computed tomography (SPECT) [16].

We propose a kaleidoscopic scintillator geometry with specular surfaces to increase light collection in a camera while preserving the event's spatial information. A kaleidoscope consists of planar mirrors oriented like a pyramid frustum that contains the object being imaged. Kaleidoscopes and light traps have been studied for imaging and reconstructing extended objects with abundant light [30, 36].
In this paper, we investigate the kaleidoscope as a technique for increasing light collection and reconstructing a point source in a photon-starved environment. We study the kaleidoscope in the context of scintillation event imaging and pose a radiation detection problem as a computer vision problem.

The image of an event in a kaleidoscopic scintillator contains direct light from the event and indirect light from the event's mirror reflections. Mirror reflections appear as events in locations determined by the event's location and the kaleidoscope's geometry, as illustrated in Fig. 1. The mirror reflections provide multiple views of the event and encode depth information with robustness to dark counts and low photon counts. Therefore, our proposed design improves event localization performance compared to previous non-kaleidoscopic methods. We introduce a Gaussian mixture model (GMM) in which the spatial relationships between the event and mirror reflections are embedded, and an algorithm that estimates the event's 3D location via maximum likelihood. An overview of the method is shown in Fig. 1.

Our contributions are summarized as follows:
• A new scintillator design for increasing light collection in a camera while preserving the event's spatial information.
• Theory for modeling light from a kaleidoscopic event that arrives at the camera sensor.
• A probabilistic model of the image of a kaleidoscopic event with very few photons.
• An algorithm to estimate an event's 3D location in a kaleidoscopic scintillator. The algorithm is validated on experimental data captured with a single-photon avalanche diode (SPAD) camera and a gamma-ray source. Improved 3D localization performance is demonstrated on experimentally-calibrated simulations.

2. Related Work

Kaleidoscopic vision. Mirrors and kaleidoscopic designs have been used for stereo vision [14, 15, 27] and multi-view 3D reconstructions [2, 3, 6, 11, 21, 26, 30, 33] of an extended object using one picture from a single camera. A "light-trap" design consists of mirrors oriented such that light entering the trap reaches nearly every position inside the trap [36]. Time-of-flight is used to reconstruct a 3D object inside the trap. In fact, the pyramid-shaped light trap was found to provide the best object coverage, which is the same shape as the kaleidoscope we use in this work.

Scintillation event measurement designs. Numerous designs exist for measuring scintillation events. Sensors such as silicon photomultiplier (SiPM) arrays or photomultiplier tubes (PMTs) can be coupled to the surface of a scintillator that is monolithic or pixelated with internal waveguides [16, 18, 20, 22, 23]. Camera designs measure events in a monolithic scintillator by imaging the scintillator with a lens [1, 4, 5, 8, 9, 13, 17, 24, 29, 37]. Using a camera to estimate the 2D position of an event in a thin scintillator, not including depth, is performed using center-of-mass algorithms [17, 19, 24].

There is limited work that estimates the 3D position of individual events with a camera. One work attempts to remove dark counts and estimate depth from defocus [8]. Another work uses stereo vision with two objectives and mirrors inside the imaging system to project two views of a cubic scintillator onto a hybrid photon detector [12]. Our proposed system provides multiple views, is more efficient, and uses a commercially available CMOS single-photon camera. A plenoptic SPAD camera was proposed to reconstruct events [10]. The plenoptic camera design's experimental demonstration was limited by low light collection, obtaining up to 4 photons in an image, while its simulated reconstruction of neutrino tracks achieved high resolution.
3. Kaleidoscopic Event Imaging Theory

We consider a square pyramid scintillator throughout this paper. Each face of the scintillator, except for the base, is a specular surface. The scintillator has an index of refraction n > 1 and height h.

3.1. Imaging Configuration and Model

The world coordinate system's origin is set at the pyramid's apex with the z-axis directed perpendicular toward the pyramid's base surface. The camera coordinate system's origin is at the center of the lens with its z-axis directed toward the scintillator, in the opposite direction of the world's z-axis. The x and y axes of the two coordinate systems point in the same directions. We denote a z-coordinate in the world coordinate system as z_w and in the camera coordinate system as z_c. Transforming between the two coordinate systems is done by subtracting the z-coordinate from the lens' z-coordinate in the world coordinate system.

We use a thin lens to model the camera. Three planes to note are the focal plane, the thin lens plane, and the sensor plane. The focal plane is set at the apparent depth of the pyramid's apex at z_w = h − h/n. A thin lens with diameter A is placed at a distance S_1 from the focal plane. The sensor is placed at a distance S_2 from the lens. The lens' focal length is set to f = (S_1^{-1} + S_2^{-1})^{-1}. Parameters are illustrated in Fig. 2a.

Throughout this paper, the scintillator is oriented so that its base surface's edges are parallel with the sensor's respective edges. In this manner, the x-y components of the normal vector for each specular surface of the scintillator are aligned along either the x or y axis. We denote the scintillator's four specular surfaces as the +x, +y, −x, and −y mirrors. A scintillation event is approximated as a point source of light emitting in all directions.
Since the optical setup is constrained to short imaging distances, the images of an event and its mirror reflections exhibit defocus blur and have nonzero diameters. An image's diameter on the sensor varies according to the event's distance from the focal plane, following the circle of confusion model. Note that the camera sees an event at its apparent depth rather than its real depth due to the scintillator's index of refraction. An event's real and apparent locations are separated by d − d/n along the z-dimension, where d is the event's real depth in the scintillator from the base surface. We denote an apparent location using the superscript "(a)" and a real location by the lack of a superscript. For an event at an apparent location (x_0, y_0, z_{c0}^{(a)}), the circle of confusion model yields

    c = A (S_2 / S_1) |S_1 − z_{c0}^{(a)}| / z_{c0}^{(a)}    (1)

where c is the image diameter at the sensor.

Figure 2. (a) Imaging parameters and coordinate systems. Only light emitted directly from the event to the camera is shown. (b) Image truncation example in 3D. The image of an event and one mirror reflection is shown. Light corresponding to the mirror reflection is truncated at the mirror's edge, resulting in a truncation line in the image. The truncation line only applies to that mirror reflection.

3.2. Mirror Reflections and Apertures

Consider the real location of an event at p_0. Mirror k produces a mirror reflection with a real location at p_k = T_k p_0, where

    T_k = I_{3×3} − 2 n_k n_k^T    (2)

is the mirror's transformation, I is the identity matrix, and n_k is the mirror's normal vector. The mirror reflection of an event is also a point source of light. The captured image of the mirror reflection is obtained from the photons that reflect off the mirror and into the camera, exhibiting the same defocus blur as if an event were located at p_k.
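As a concrete illustration, the apparent-depth offset, the circle-of-confusion relation of Eq. (1), and the mirror transformation of Eq. (2) can be sketched in a few lines of numpy (the function names are ours; the paper does not publish code):

```python
import numpy as np

def apparent_z_offset(d, n):
    """Offset between real and apparent locations along z: an event at
    real depth d (from the base surface) in a medium of refractive
    index n appears shallower by d - d/n."""
    return d - d / n

def circle_of_confusion(A, S1, S2, z_ca):
    """Eq. (1): image diameter c on the sensor for a point source at
    apparent camera-frame depth z_ca, given lens diameter A,
    lens-to-focal-plane distance S1, and lens-to-sensor distance S2."""
    return A * (S2 / S1) * abs(S1 - z_ca) / z_ca

def reflect(p0, n_k):
    """Eq. (2): mirror reflection p_k = T_k p_0 with
    T_k = I - 2 n_k n_k^T, a Householder reflection about a plane
    through the origin (the pyramid's apex) with unit normal n_k."""
    n_k = np.asarray(n_k, dtype=float)
    T = np.eye(3) - 2.0 * np.outer(n_k, n_k)
    return T @ np.asarray(p0, dtype=float)
```

A point on the focal plane (z_ca = S_1) has zero blur diameter, and reflecting about a plane whose normal lies along +x simply negates the x-coordinate.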
However, due to the finite mirror size, the image on the sensor may be truncated along lines corresponding to the mirror's edges. This occurs when p_k is behind another mirror from the camera's perspective. The photons that are truncated from the image are those that reflect off the mirror adjacent to mirror k near the shared edge, as shown in Fig. 2b. Essentially, mirror k behaves like an aperture to a light source at p_k. We denote the area on the sensor where photons from a mirror reflection cannot arrive as the "truncation zone" and its complement as the "acceptance zone". A line that separates the truncation zone and acceptance zone is a "truncation line". A 3D visualization of light truncation is shown in Fig. 2b. Supplementary Fig. 6 shows a 2D view of the propagation of light and how truncations form in various scenarios. Truncation lines are derived in Appendix A.

Light that reflects over multiple mirrors may be stopped by any of those mirrors' edges and also cause image truncations. We term a mirror reflection's "order" as the number of reflections its light underwent before forming. Light reflecting off mirror k can reflect off another mirror l and generate a mirror reflection at p_l = T_l p_k. The light incident on mirror l from p_k passes through the aperture of mirror k. If this light spans a partial area A_l of mirror l, then mirror l's aperture is A_l. Otherwise, mirror l's aperture is simply the mirror. Mirror l's aperture affects the light emitted from p_l toward another mirror for a higher-order reflection and toward the camera for imaging. Higher-order reflections and imaging continue in the same manner.

Figure 3. Simulated kaleidoscopic image with theoretical acceptance zones. Acceptance zones are derived for each mirror reflection and overlaid in gray on the image for the (a) +x, (b) +y, (c) −x, and (d) −y mirror reflections.

3.3. Imaging Theory Validation

We validate the theory on mirror reflections and image truncations by observing that theoretically derived truncation lines align with photon arrivals in a simulated image. We simulate the image of an event in a kaleidoscopic scintillator in the shape of a square pyramid using a thin lens and ray tracing. We simulate an unrealistically high number of photons so that image truncations are clearly observable. Simulation details are in Appendix B. Acceptance zones for each mirror reflection are derived as in Appendix A and overlaid on the image. The resulting image is shown in Fig. 3.

4. Event Localization Algorithm

4.1. Image Model

We adopt a 2D Gaussian

    N(t; μ, σ²) = (1 / (2πσ²)) exp(−||t − μ||² / (2σ²))    (3)

as the camera's point spread function and assume a circular Gaussian with covariance matrix Σ = σ² I_{2×2}. Here σ = ac, where a is a proportionality constant determined by the optical configuration, c is the circle of confusion diameter in Eq. (1), and t is a 2D coordinate on the sensor plane.

An event or mirror reflection at apparent location (x_k, y_k, z_{ck}^{(a)}) is modeled as a point source of light, so its image on the sensor consists of photon arrivals spatially distributed over a 2D Gaussian with mean

    μ_k = [ (S_2 / z_{ck}^{(a)}) x_k , (S_2 / z_{ck}^{(a)}) y_k ]    (4)

and standard deviation

    σ_k = a A (S_2 / S_1) |S_1 − z_{ck}^{(a)}| / z_{ck}^{(a)}    (5)

where Eq. (4) is derived from perspective projection.

We model the image of an event in a kaleidoscopic scintillator with K mirror reflections and N photons as a GMM. The GMM has K + 1 components that correspond to the event or a mirror reflection, and each photon in the image is a sample of the GMM. The location of each mirror reflection at p_k for k = 1…K generated from an event at p_0 = (x_0, y_0, z_{w0}) is known based on the kaleidoscope's geometry.
For any mirror reflection k, each coordinate in (x_k, y_k, z_{wk}) is a linear combination of the event's (x_0, y_0, z_{w0}) coordinates based on the mirror's reflection transformation. All p_k's can be written in terms of p_0 using Eq. (2), and each Gaussian component's μ_k and σ_k can be written in terms of p_0 by transforming to camera coordinates and using Eqs. (4) and (5). Thus, every Gaussian component is constrained to p_0. This results in an optimization problem for estimating p_0 that captures the global information of the event and all mirror reflections:

    argmax_{x_0, y_0, z_{w0}, π}  Q − λ Σ_{k=0}^{K} ||μ_k − μ_k^0||²    (6)

where

    Q = Σ_i Σ_k r_ik [ log(π_k) + log(w_i) − log(2π σ_k²) − (w_i / (2σ_k²)) ||t_i − μ_k||² ]    (7)

    r_ik = π_k N(t_i; μ_k, σ_k²) / Σ_{k'=0}^{K} π_{k'} N(t_i; μ_{k'}, σ_{k'}²)    (8)

Q is the expected value of the weighted complete-data log-likelihood, r_ik is the probability of photon i belonging to component k given current parameter values, w_i is the weight assigned to photon i, π_k is the mixing weight for component k, μ_k^0 is the initialization point for μ_k as described in the initialization procedure below, and λ is a regularization coefficient. We adopt a density-based weighting scheme for photon samples to minimize the influence of sparsely distributed dark counts and define

    w_i = Σ_{j ∈ S_i^q} exp(−||t_i − t_j||² / ν)

where S_i^q is the set of q nearest neighbors of photon i, and ν is a positive scalar. We use the EM algorithm to optimize Eq. (6). The above GMM formulation and definitions are derived in Appendix C. p_k is written in terms of p_0 in Sec. I.1.

4.2. Optimization Algorithm

We assume a square pyramid kaleidoscope geometry, up to first-order reflections, and the presence of at least two mirror reflections in an image. Each edge of the scintillator's square surface is parallel to the respective edge of the sensor.
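The image model of Eqs. (3)-(5), the density weights, and one EM pass over Eq. (8) can be sketched in numpy. This is a deliberately simplified illustration: it updates r_ik and π only, and omits the regularizer, σ clipping, and the gradient-ascent update of p_0 described below (all names are ours):

```python
import numpy as np

def gaussian2d(t, mu, sigma):
    """Circular 2D Gaussian of Eq. (3), evaluated at sensor points t."""
    d2 = ((t - mu) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def project(x, y, z_ca, S2):
    """Eq. (4): perspective projection of an apparent 3D location
    onto the sensor plane."""
    return np.array([S2 / z_ca * x, S2 / z_ca * y])

def component_sigma(a, A, S1, S2, z_ca):
    """Eq. (5): PSF standard deviation, a times the circle of
    confusion of Eq. (1)."""
    return a * A * (S2 / S1) * abs(S1 - z_ca) / z_ca

def density_weights(t, q=5, nu=100.0):
    """w_i = sum over the q nearest neighbors j of
    exp(-||t_i - t_j||^2 / nu); sparse dark counts get small weights."""
    t = np.asarray(t, dtype=float)
    d2 = ((t[:, None, :] - t[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)        # exclude self-distance
    nn = np.sort(d2, axis=1)[:, :q]     # q smallest distances per photon
    return np.exp(-nn / nu).sum(axis=1)

def em_iteration(t, mus, sigmas, pis):
    """E-step responsibilities of Eq. (8), then the mixing-weight
    update pi_k = (1/N) sum_i r_ik; rows that underflow get r_i = 0."""
    dens = np.stack([p * gaussian2d(t, m, s)
                     for p, m, s in zip(pis, mus, sigmas)], axis=1)
    denom = dens.sum(axis=1, keepdims=True)
    r = np.divide(dens, denom, out=np.zeros_like(dens), where=denom > 0)
    return r, r.mean(axis=0)
```

In the full algorithm the component means and widths are not free parameters: every (μ_k, σ_k) is a deterministic function of the single unknown p_0 through Eqs. (2), (4), and (5), which is what makes the M-step a 3D gradient-ascent update rather than a standard GMM mean update.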
In this configuration, mirror reflections' x-y coordinates will be located along either the ±x or ±y directions from the event's location. One or more mirror reflections may be missing from the image due to truncations, depending on the event's location. Therefore, determining which mirror reflections are present is required to compute Eq. (6). We run the following initialization procedure to determine the presence of mirror reflections and to initialize the event's estimated location for the EM algorithm.

Initialization procedure. The initialization procedure can be summarized as maximizing Eq. (6) over a set of possible event locations and C ∈ {3, 4, 5} clusters (the number of mirror reflections plus the event). First, centroids are computed using weighted KMeans with C clusters and the photons' assigned weights. Centroids are then classified as either the event, or the +x, −x, +y, or −y mirror reflection, based on their relative positioning. This is done by taking combinations of subsets of centroids and computing the standard deviation of a subset's coordinates along the x and y dimensions to determine which centroids are horizontally or vertically aligned. We denote these centroids as μ_k^0. A set of possible event locations that is uniformly spaced over the depth (z dimension) of the scintillator is computed using the event's centroid and Eq. (4). The number of mirror reflections, which reflections are present, and the initialization point for p_0 are those that correspond to the highest value of Eq. (6) out of this set and over C ∈ {3, 4, 5}. When computing Eq. (6), terms that correspond to a mirror reflection k are included only if that mirror reflection is determined to be present. π is initialized to the uniform distribution.

Optimization procedure. During the E-step, r_ik is updated using Eq. (8) and the current values of p_0 and π. If photon i is distant from every μ_k and computing the denominator in Eq. (8) results in underflow, then we set r_ik = 0 for all k and do not use photon i in computations. During the M-step, p_0 is updated by fixing π and optimizing Eq. (6) with gradient ascent. The gradient with respect to p_0 is derived in Appendix I. π is then updated using π_k = (1/N) Σ_{i=1}^{N} r_ik. In both the E and M steps, terms that correspond to a mirror reflection k are included in the computation only if that mirror reflection is present in the image.

Regularization. We introduce the regularization term λ Σ_{k=0}^{K} ||μ_k − μ_k^0||² to favor solutions where all components' μ's are close to their corresponding μ^0. This regularization is most helpful in cases where the mirror reflection photon clusters are close to each other. Without regularization, the algorithm may find an optimum that groups the event and one or more mirror reflections together into one component with a large σ while having other components' μ's that do not coincide with a photon cluster. An example of this is shown in Supplementary Fig. 11.

Figure 4. Experimental setup. (a) The setup without the gamma-ray source. (b) The kaleidoscopic scintillator.

5. Hardware Experiments

Hardware and data collection. The experimental hardware consists of a SPAD array, a lens, a scintillator, and a 1 μCi Co-60 gamma-ray source (1.17, 1.33 MeV). We use the SPAD512 (Pi Imaging) array with microlenses for increased fill factor, which has 512 × 512 pixels and 16 μm pixel pitch. The lens is a 50 mm focal length Nikkor lens set to an f/1.2 aperture. We use a GAGG(Ce)-HL scintillator (Epic Crystal), which has a 150 ns decay constant, a 530 nm emission peak, and an index of refraction of 1.91. Its geometry is a square pyramid with a 20 mm wide base, 5.77 mm height, and a 120 degree opening angle at the apex. This wide angle is chosen so that second-order reflections are unlikely to occur or be imaged.
Four surfaces of the scintillator are coated with enhanced specular reflector. The SPAD array is configured to capture 1-bit images with a 1.5 μs integration time to minimize the accumulation of dark counts while allowing time for scintillation light to be emitted. The camera's lateral field of view (FOV) covers approximately 5.5 × 5.5 mm at the focal plane. The scintillator is positioned such that its apex is in focus and centered in the camera's FOV, and such that the scintillator's base edges are parallel with the sensor's respective edges. An air gap of approximately 30 mm exists between the scintillator and the lens. The entire setup is placed inside a light-tight enclosure to keep ambient light out. The experimental setup and scintillator are shown in Fig. 4. The experimental camera focus is shown in Supplementary Fig. 16. Data collection took place at about 21 degrees Celsius ambient temperature.

Figure 5. Selected experimental images. Experimental images overlaid with the algorithm's estimated Gaussian components. Each dashed circle is centered on the Gaussian component's mean. The inner and outer circles are one and two standard deviations in radius, respectively. Red and blue circles represent the event and mirror reflections, respectively. Pixels with a photon are enlarged with a 3 × 3 filter for visualization purposes. The number of counts in the image and the algorithm's estimated event location are shown in each image.

We first captured 130,000 images without the gamma-ray source to compute a mask of the 5% of pixels with the highest dark count rates. We zero the pixels in this mask in all experimental images. After zeroing 5% of pixels, we observe a median of 4 dark counts per image. A histogram of dark counts per image is shown in Supplementary Fig. 9.
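The hot-pixel masking step can be reproduced with a short numpy routine. This is a sketch under the assumption that the source-free 1-bit frames are stacked into a 3D array (names are ours):

```python
import numpy as np

def hot_pixel_mask(dark_frames, frac=0.05):
    """Flag the fraction `frac` of pixels with the highest dark-count
    rate, estimated from a stack of source-free 1-bit frames of shape
    (n_frames, H, W)."""
    rate = np.asarray(dark_frames, dtype=float).mean(axis=0)  # per-pixel rate
    thresh = np.quantile(rate, 1.0 - frac)
    return rate >= thresh          # True = pixel to zero in all images

def apply_mask(image, mask):
    """Zero the masked pixels of a captured image."""
    out = np.array(image, copy=True)
    out[mask] = 0
    return out
```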
We collect data with the gamma-ray source placed adjacent to the scintillator's apex and passively capture images, since the timing of gamma-ray emissions cannot be controlled. All computations are performed in post-processing. We capture 13,000,000 images with the gamma-ray source present and discard images with fewer than 60 counts, resulting in 4,379 images. Histograms of counts in an image are shown in Supplementary Fig. 10.

Parameter values. In all experiments, we set λ = 10. We clip all σ_k's to a minimum value of 10 pixels to account for imperfect focusing and a non-ideal point source, and to prevent singularities. The gradient of σ_k is set to 0 if σ_k gets clipped to the minimum value. See Appendix D for the values of all the algorithm's parameters and Appendix E for how camera parameters are calibrated.

Kaleidoscopic image model validation. We select six experimental images and overlay the algorithm's estimated Gaussian components, shown in Fig. 5, to validate the presence of mirror reflections in accordance with the kaleidoscopic model. Additional experimental images are shown in Supplementary Fig. 13.

Localization algorithm cross-validation. Experimental events cannot be controlled, so their ground truth locations are unknown. Therefore, to validate that the algorithm is measuring the event's location, we report the agreement of multiple measurements of the event's location as follows. We use experimental images that contain the event and four mirror reflections as test images. The number of mirror reflections in an image is determined using the algorithm's initialization procedure. Then, we create new images of the event by removing combinations of one or two mirror reflections from the image.
Photon i is classified as belonging to mirror reflection k according to max_k r_ik, using the r_ik values obtained after running the algorithm's optimization procedure on the original test image with four mirror reflections. Thus, for one test image, we generate 4 images with one mirror reflection removed and 6 images with two mirror reflections removed, which together with the original test image gives 11 images corresponding to one event. We run the algorithm's initialization and optimization procedures on each image to obtain 11 measurements of the event's location. We compute the mean estimated event location over the 11 images corresponding to one event. We record the distance between the mean location and the estimated location for each individual image. The distribution of this distance over all images is used to report the agreement in event location measurements, where short distances indicate good agreement. This metric also provides a measure of experimental precision. Selecting images that the algorithm determines to contain four mirror reflections and at least 60 counts resulted in 1,606 test images. The median, mean, and standard deviation of distances are 0.10 mm, 0.17 mm, and 0.22 mm, respectively, over 17,666 distances. Distances are reported in a histogram in Supplementary Fig. 14.

To confirm that mirror reflections are being correctly identified and removed, we record the fraction of photon counts in an image that are removed. We assume that an event and each mirror reflection have the same average brightness. Therefore, we expect about 1/5 and 2/5 of counts in an image to be removed when removing one and two mirror reflections, respectively. The median, mean, and standard deviation of the fraction of counts removed in an image are, respectively, 0.19, 0.18, and 0.07 over 6,424 images where one mirror reflection has been removed, and 0.38, 0.37, and 0.08 over 9,636 images where two mirror reflections have been removed.
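The enumeration of reduced images used in this cross-validation can be sketched with itertools. Here `labels` holds each photon's max-responsibility component assignment, with the event as component 0 and the reflections as components 1-4 (the function name and encoding are ours):

```python
from itertools import combinations

def reduced_photon_sets(labels):
    """Enumerate the reduced images for one test image: drop every
    combination of one or two of the four mirror reflections
    (components 1-4). Returns (dropped components, kept photon
    indices) pairs; with the original image this yields 11
    measurements per event."""
    sets = []
    for r in (1, 2):
        for drop in combinations((1, 2, 3, 4), r):
            keep = [i for i, k in enumerate(labels) if k not in drop]
            sets.append((drop, keep))
    return sets
```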
The distributions of fractions of counts removed are reported in histograms in Supplementary Fig. 15.

Regularization ablation study. We perform an ablation study to demonstrate the effectiveness of regularization in Eq. (6). Details are in Appendix F.

Table 1. Simulation results. Average error and resolution in mm (lower is better).

    N_0   Algorithm             3D Error   x Res.   y Res.   z Res.
    30    Kaleid. (ours)          0.14      0.16     0.14     0.14
          Non-kaleid. (ours)      0.83      1.36     1.41     1.32
          Noise-removing [8]      0.64      1.04     1.02     1.04
    20    Kaleid. (ours)          0.16      0.15     0.17     0.16
          Non-kaleid. (ours)      0.79      1.17     1.24     1.21
          Noise-removing [8]      0.68      1.05     1.03     1.06
    10    Kaleid. (ours)          0.29      0.27     0.31     0.27
          Non-kaleid. (ours)      0.95      1.36     1.33     1.28
          Noise-removing [8]      0.80      1.20     1.13     1.14

6. Simulations

Comparison to prior art. Since scintillation events are difficult to produce at controlled locations, direct evaluation of the method requires simulated data. Here, we simulate our system and compare its localization performance to prior 3D localization methods, which estimate depth from defocus in the absence of mirror reflections [8]. We evaluate position accuracy and resolution over a grid of event locations and over different levels of event brightness. We use the same scintillator geometry, index of refraction, calibrated camera parameter values, and sensor array specifications as in the experiments.

Image generation. For a given event location, we generate 100 kaleidoscopic images for each of three brightness levels as follows. We compute the locations of the mirror reflections and draw Poisson(N_0) photons from each Gaussian component parameterized according to Eqs. (4) and (5) and accounting for the index of refraction. As in the experiments, all σ_k's are clipped to a minimum value of 10 pixels, which corresponds to z_w = 0.82 mm with the given experimental parameter values.
Acceptance zones are computed as described in Appendix A, and photons that lie outside their mirror reflection's acceptance zone are removed. We test brightness levels over N_0 ∈ {10, 20, 30}. Poisson(10) dark counts are uniformly randomly added over each image. At each location, we also generate 100 non-kaleidoscopic images in the same manner, except that no mirror reflections are generated.

Event locations. The grid of event locations spans x ∈ (0, 2.5) mm, y ∈ (0, 2.5) mm, and z_w ∈ (0.82, 3.5) mm. This grid is selected so that event locations span one x-y quadrant, due to the system's symmetry, and so that the σ_0 corresponding to the event's photon cluster is above the minimum clipping threshold. Grid points are equispaced over 10 points in each dimension. If fewer than two mirror reflections' centroids (μ_k's) are in their acceptance zones and within the sensor, then the event location is discarded. These discarded locations correspond to points near the edge of the camera's FOV, and we consider their resulting image to be inadequate for applying our proposed algorithm. Event locations that lie outside the scintillator are also discarded. This results in a total of 463 valid event locations.

Algorithms. The proposed algorithm is tested on the kaleidoscopic images with the same parameter values and initialization and optimization procedures as on the experimental data. A non-kaleidoscopic version of the proposed algorithm is tested on non-kaleidoscopic images. The non-kaleidoscopic version of the algorithm is the same as the kaleidoscopic version, except that initialization points are lower-bounded at z_w = 0.82 mm, only one cluster or Gaussian component is computed, and r_ik = 1{k = 0} for all photons.
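The image-generation procedure above can be sketched as follows. This is a simplified version: it draws Poisson(N_0) photons per Gaussian component and uniform dark counts, but omits the acceptance-zone test of Appendix A (names are ours):

```python
import numpy as np

def simulate_image(mus, sigmas, N0=20, mean_dark=10, fov=512, rng=None):
    """Simulated kaleidoscopic image: Poisson(N0) photon arrivals from
    each Gaussian component (event plus mirror reflections, Eqs. 4-5),
    plus Poisson(mean_dark) dark counts uniform over the fov x fov
    sensor. Returns an (n_photons, 2) array of sensor coordinates."""
    rng = np.random.default_rng() if rng is None else rng
    parts = [rng.normal(mu, s, size=(rng.poisson(N0), 2))
             for mu, s in zip(mus, sigmas)]
    parts.append(rng.uniform(0.0, fov, size=(rng.poisson(mean_dark), 2)))
    return np.concatenate(parts, axis=0)
```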
We also test a prior method on the non-kaleidoscopic images that attempts to classify and remove dark counts and then directly solves for the maximum likelihood estimate of the event location without weighting photons [8]. More details of this method are in Appendix G. The algorithms that are applied to non-kaleidoscopic images estimate depth from defocus.

Performance metrics. At each location, we compute the mean 3D error (Euclidean distance) over all images and the spatial resolution. Spatial resolution is defined as 2.355σ_e (full width at half maximum), where σ_e is the standard deviation of one dimension of the estimates of an event location. We report the averages of the mean 3D error and the spatial resolution in each dimension, taken over all event locations, for each brightness level N0 ∈ {10, 20, 30} for the three different algorithms. Tab. 1 contains results for the kaleidoscopic and non-kaleidoscopic versions of the proposed algorithm and for the prior method with T_edge = 80 pixels (see Appendix G for details). The prior method's results using different T_edge values are reported in Supplementary Tab. 2.

7. Discussion

Simulated validation of theoretical image truncations. The mirror reflection images generated with ray tracing and a thin lens are truncated along the theoretically derived acceptance zones, shown in Fig. 3. This suggests the theoretical models for mirror reflection locations and image truncations are correct. In Fig. 3, the +x and −y mirror reflections are partially truncated, while the +y and −x mirror reflections are not truncated.

Experimental validation of kaleidoscopic image model. Various examples where the Gaussian components predicted by the algorithm coincide with the photon clusters of the event and mirror reflections are shown in Fig. 5. This demonstrates that mirror reflections are indeed being captured in accordance with the proposed model.
Examples include events occurring at high (Fig. 5a) and low (Fig. 5b) z_w-coordinates, mirror reflections lying outside the FOV (Fig. 5c,d), and mirror reflections that are completely truncated (Fig. 5e,f).

Dark counts in Fig. 5 are higher than the median dark count rate of 4 pixels per image without the gamma-ray source present. This could be due to several factors, including increased crosstalk from higher photon collection, fluorescence from events from other particles or gamma rays occurring before the beginning or at the end of an image's exposure time, imperfect mirror reflections, or internal reflections.

Experimental cross-validation of localization algorithm. The distances between the mean and individual estimated event locations over images with removed mirror reflections are small, demonstrating agreement in event location measurements among different images of the same event with removed mirror reflections. This agreement indicates the algorithm is in fact measuring the event's location and not spurious light sources. The distributions of the fraction of photon counts removed from an image when removing one and two mirror reflections are centered close to 1/5 and 2/5, indicating that photons from a mirror reflection are being classified correctly by the algorithm. These results would be unlikely if the algorithm were not measuring the event, or if images were of spurious light that does not originate from an event or follow the kaleidoscopic model. Further evidence that the algorithm is measuring events is seen in simulations where the ground truth is known.

Simulations. The results in Tab. 1 show that the kaleidoscopic design improves 3D localization performance compared to estimating depth from defocus and attains sub-millimeter resolution across all brightness levels tested.
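The error and resolution figures in Tab. 1 follow directly from the metric definitions in Sec. 6; a minimal sketch (function names are ours):

```python
import numpy as np

def mean_3d_error(estimates, truth):
    """Mean Euclidean distance between repeated location estimates
    (M x 3, mm) and the true event location (3,)."""
    return np.linalg.norm(estimates - truth, axis=1).mean()

def resolution_fwhm(estimates):
    """Per-axis spatial resolution: FWHM = 2.355 * stddev of the
    estimates of one event location over repeated images."""
    return 2.355 * estimates.std(axis=0)
```

Tab. 1 reports these two quantities averaged over all valid event locations for each brightness level.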
The kaleidoscopic design retains high accuracy and resolution even at lower brightness levels, demonstrating robustness to low photon counts and dark counts.

Limitations and future work. In this work, we only test one specific kaleidoscopic geometry with first-order reflections from one event. Future work may test different kaleidoscopic geometries and higher-order reflections. Developing a method to measure multiple events in one image would make it possible to construct a Compton camera or neutron scatter camera to localize the radiation source [28, 35].

Internal reflections at the scintillator's base surface could possibly appear in an image after reflecting off a mirrored surface. An internal reflection has the same effect as adding a mirror to the scintillator's base surface. The following reflection off the physical mirror can then be modeled as a second-order reflection. Since internal reflection occurs at incidence angles higher than the critical angle, the mirror reflection of an internal reflection will exist away from the first-order mirror reflections, in the x and y directions along the mirror's normal vector. The camera's FOV in the experiments is limited, so imaging mirror reflections that follow from internal reflections at the scintillator's base surface is infrequent. Hardware configurations with a larger camera FOV may capture internal reflections and have to account for them. Imaging internal reflections would increase light collection and image complexity.

Acknowledgments

This material is based upon work supported by the Department of Energy / National Nuclear Security Administration under Award Number(s) DE-NA0004196.

U.S. Department of Energy (Disclaimer): This work was prepared as an account of work sponsored by an agency of the United States Government.
Neither the United States Government nor any agency thereof, nor any of their employees, nor any of their contractors, subcontractors, or their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or any third party's use or the results of such use of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof or its contractors or subcontractors. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, its contractors or subcontractors.

References

[1] Robert Adams and Robert Zboray. Gamma radiography and tomography with a CCD camera and Co-60 source. Applied Radiation and Isotopes, 127:82–86, 2017.
[2] Byeongjoo Ahn, Ioannis Gkioulekas, and Aswin C Sankaranarayanan. Kaleidoscopic structured light. ACM Transactions on Graphics (TOG), 40(6):1–15, 2021.
[3] Byeongjoo Ahn, Michael De Zeeuw, Ioannis Gkioulekas, and Aswin C Sankaranarayanan. Neural kaleidoscopic space sculpting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4349–4358, 2023.
[4] Stuart Baker, Kristina Brown, Alden Curtis, Stephen S Lutz, Russell Howe, Robert Malone, Stephen Mitchell, Jeremy Danielson, Todd Haines, and Kris Kwiatkowski. Scintillator efficiency study with MeV x-rays. In Hard X-Ray, Gamma-Ray, and Neutron Detector Physics XVI, pages 78–87. SPIE, 2014.
[5] Swathi Lakshmi Balasubramanian and Ganapathy Krishnamurthi.
X-ray scintillator lens-coupled with CMOS camera for pre-clinical cardiac vascular imaging—a feasibility study. PLOS ONE, 17(2):e0262913, 2022.
[6] Shaun Bangay and Judith D Radloff. Kaleidoscope configurations for reflectance measurement. In Proceedings of the 3rd International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, pages 161–170, 2004.
[7] Jonathan S Beaumont, Matthew P Mellor, Mario Villa, and Malcolm J Joyce. High-intensity power-resolved radiation imaging of an operational nuclear reactor. Nature Communications, 6(1):8592, 2015.
[8] Alex Bocchieri, Edoardo Charbon, and Andreas Velten. Scintillation event imaging with a single photon avalanche diode camera. Communications Engineering, 3(1):135, 2024.
[9] Gabriele D'Amen, Michael Keach, Andrei Nomerotski, Peter Svihra, and Alessandro Tricoli. Novel imaging technique for α-particles using a fast optical camera. Journal of Instrumentation, 16(02):P02006, 2021.
[10] Till Dieminger, Saúl Alonso-Monsalve, Christoph Alt, Claudio Bruschini, Noemi Bührer, Edoardo Charbon, Kodai Kaneyasu, Tim Weber, and Davide Sgalaberna. An ultrafast plenoptic-camera system for high-resolution 3d particle tracking in unsegmented scintillators. arXiv preprint arXiv:2511.09442, 2025.
[11] Siyu Fan, Yinhu Xi, Yingying Guo, Yifeng Zhu, Shenao Liang, Haohao Zhang, and Nan Zhang. Kaleidoscopic imaging system for three-dimensional reconstruction of wear debris. Engineering Research Express, 7(4):045511, 2025.
[12] Mykhaylo Filipenko, Timur Iskhakov, Patrick Hufschmidt, Gisela Anton, Michael Campbell, Thomas Gleixner, Gerd Leuchs, Timo Tick, John Vallerga, Michael Wagenpfeil, et al. Three-dimensional photograph of electron tracks through a plastic scintillator. The European Physical Journal C, 74(11):3131, 2014.
[13] Tianqi Gao, Gabriele D'Amen, Sergey Burdin, Mohammad Alsulimane, Paul Campbell, Cinzia Da Via, Andrei Nomerotski, Adam Roberts, Peter Svihra, Konstantinos Mavrokoridis, et al. Novel imaging technique for thermal neutrons using a fast optical camera. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 1046:167604, 2023.
[14] Joshua Gluckman and Shree K Nayar. Planar catadioptric stereo: Geometry and calibration. In Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), pages 22–28. IEEE, 1999.
[15] Joshua Gluckman and Shree K Nayar. Rectified catadioptric stereo sensors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(2):224–236, 2002.
[16] Andrea Gonzalez-Montoro, Antonio J Gonzalez, Shirin Pourashraf, Robert S Miyaoka, Peter Bruyndonckx, Garry Chinn, Larry A Pierce, and Craig S Levin. Evolution of PET detectors and event positioning algorithms using monolithic scintillation crystals. IEEE Transactions on Radiation and Plasma Medical Sciences, 5(3):282–305, 2021.
[17] Alex Gustschin, Yiyong Han, Adrian Losko, Alexander Wolfertz, Daniel S Hussey, László Szentmiklósi, Zoltán Kis, Pavel Trtik, Pierre Boillat, Anders Kaestner, et al. Event-based high-resolution neutron image formation analysis using intensified CMOS cameras. Scientific Reports, 14(1):26941, 2024.
[18] Hiroki Hosokoshi, Jun Kataoka, Saku Mochizuki, Masaki Yoneyama, Soichiro Ito, Hiroaki Kiji, Fumiya Nishi, Shuji Miyamoto, and Tatsushi Shima. Development and performance verification of a 3-d position-sensitive compton camera for imaging MeV gamma rays. Scientific Reports, 9(1):18551, 2019.
[19] Daniel S Hussey, Jacob M LaManna, Elias Baltic, and David L Jacobson.
Neutron imaging detector with 2 µm spatial resolution based on event reconstruction of neutron capture in gadolinium oxysulfide scintillators. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 866:9–12, 2017.
[20] J Kataoka, A Kishimoto, T Nishiyama, T Fujita, K Takeuchi, T Kato, T Nakamori, S Ohsuka, S Nakamura, M Hirayanagi, et al. Handy compton camera using 3d position-sensitive scintillators coupled with large-area monolithic MPPC arrays. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 732:403–407, 2013.
[21] Ryo Kawahara, Meng-Yu Jennifer Kuo, and Shohei Nobuhara. Teleidoscopic imaging system for microscale 3d shape reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20813–20822, 2023.
[22] M Kawula, TM Binder, S Liprandi, R Viegas, Katia Parodi, and Peter G Thirolf. Sub-millimeter precise photon interaction position determination in large monolithic scintillators via convolutional neural network algorithms. Physics in Medicine & Biology, 66(13):135017, 2021.
[23] Hyounggun Lee, Taewoong Lee, and Wonho Lee. Development of a position-sensitive 4π compton camera based on a single segmented scintillator. IEEE Transactions on Nuclear Science, 67(12):2511–2522, 2020.
[24] AS Losko, Y Han, B Schillinger, A Tartaglione, M Morgano, M Strobl, J Long, AS Tremsin, and M Schulz. New perspectives for neutron imaging through advanced event-mode data acquisition. Scientific Reports, 11(1):21360, 2021.
[25] M Menichelli, Stefano Ansoldi, M Bari, M Basset, R Battiston, S Blasko, F Coren, E Fiori, G Giannini, D Iugovaz, et al. A scintillating fibres tracker detector for archaeological applications.
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 572(1):262–265, 2007.
[26] Hiroshi Mitsumoto, Shinichi Tamura, Kozo Okazaki, Naoki Kajimi, and Yutaka Fukui. 3-d reconstruction using mirror images based on a plane symmetry recovering method. IEEE Transactions on Pattern Analysis & Machine Intelligence, 14(09):941–946, 1992.
[27] Sameer A Nene and Shree K Nayar. Stereo with mirrors. In Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), pages 1087–1094. IEEE, 1998.
[28] Raj Kumar Parajuli, Makoto Sakai, Ramila Parajuli, and Mutsumi Tashiro. Development and applications of compton camera—a review. Sensors, 22(19):7374, 2022.
[29] Helena Pleinert, Eberhard Lehmann, and Sonja Körner. Design of a new CCD-camera neutron radiography detector. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 399(2-3):382–390, 1997.
[30] Ilya Reshetouski, Alkhazur Manakov, Hans-Peter Seidel, and Ivo Ihrke. Three-dimensional kaleidoscopic imaging. In CVPR 2011, pages 353–360. IEEE, 2011.
[31] Krysta Ryzewski, S Herringer, H Bilheux, Lakeisha Walker, B Sheldon, S Voisin, J-C Bilheux, and V Finocchiaro. Neutron imaging of archaeological bronzes at the oak ridge national laboratory. Physics Procedia, 43:343–351, 2013.
[32] V Schonfelder, R Diehl, GG Lichti, H Steinle, BN Swanenburg, AJM Deerenberg, H Aarts, J Lockwood, W Webber, J Macri, et al. The imaging compton telescope comptel on the gamma ray observatory. IEEE Transactions on Nuclear Science, 31(1):766–770, 1984.
[33] Kosuke Takahashi and Shohei Nobuhara. Structure of multiple mirror system from kaleidoscopic projections of single 3d point. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(9):5602–5617, 2021.
[34] Kai Vetter, Ross Barnowski, Andrew Haefner, Tenzing HY Joshi, Ryan Pavlovsky, and Brian J Quiter. Gamma-ray imaging for nuclear security and safety: Towards 3-d gamma-ray vision. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 878:159–168, 2018.
[35] Kyle Weinfurther, John Mattingly, Erik Brubaker, and John Steele. Model-based design evaluation of a compact, high-efficiency neutron scatter camera. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 883:115–135, 2018.
[36] Ruilin Xu, Mohit Gupta, and Shree K Nayar. Trapping light for time of flight. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6219–6227, 2018.
[37] Seiichi Yamamoto, Masao Yoshino, Kei Kamada, Ryuga Yajima, Akira Yoshikawa, Kohei Nakanishi, and Jun Kataoka. Development of an ultrahigh resolution real time alpha particle imaging system for observing the trajectories of alpha particles in a scintillator. Scientific Reports, 13(1):4955, 2023.

Kaleidoscopic Scintillation Event Imaging: Supplementary Material

A. Theoretical Image Truncation Derivation

An event's mirror reflection generated from mirror k is located at p_k, with an apparent location at p_k^(a). Each edge of the mirror may impose a truncation line on the camera sensor. A truncation line on the sensor, l_sensor, is determined as follows. Denote a plane, P_1, that contains p_k and the mirror's edge. If dealing with a second-order reflection or higher, the mirror's edge corresponds to the area where light can arrive on mirror k after being stopped at a previous mirror, which may be different from mirror k's physical edge. Denote the line that intersects P_1 and the scintillator's base surface as l_1. Compute the plane, P_trunc, that contains l_1 and p_k^(a).
Denote the side of P_trunc that faces away from the mirror using the normal vector n_trunc. Compute the intersection between P_trunc and the focal plane to obtain the truncation line at the focal plane, l_focal. Project n_trunc onto the focal plane, denoted as n_focal. Scale l_focal by the magnification, m = −S_2/S_1, to obtain l_sensor. Scale n_focal by m to obtain n_sensor.

The truncation side of l_sensor, where photons do not arrive, depends on which side of the focal plane p_k^(a) is located. If p_k^(a) is beyond the focal plane, away from the lens at a distance greater than S_1 from the lens, then the side of l_sensor pointed to by n_sensor is truncated and contains no photon arrivals (Fig. 6b). If p_k^(a) is between the focal plane and the lens, at a distance less than S_1 from the lens, then the side of l_sensor opposite n_sensor is truncated and contains no photon arrivals (Fig. 6d). The area on the sensor where photons cannot arrive is the union of the truncation sides of all truncation lines. We denote this area as the "truncation zone" and its complement as the "acceptance zone" (Fig. 6b). The event itself has no truncation zone on the sensor because its image is formed by light emitted directly to the camera without reflections.

B. Imaging Theory Simulation Details

Fig. 3 in the main paper is generated with the following configuration using ray tracing and a thin lens. The simulated scintillator has a 5.77 mm height, 20 mm base length, 120 degree opening angle, and index of refraction n = 1.5. A thin lens with 35 mm focal length and 25 mm diameter is placed S_1 = 45 mm away from the focal plane at the apparent location of the scintillator's apex. A 512 × 512 sensor with 18.6 µm pixel pitch is placed S_2 = 157.5 mm away from the lens. 100,000 photons are emitted isotropically from (−0.5, 0.75, 1) mm (world coordinates).
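As a quick consistency check on this configuration, the stated sensor distance follows from the thin-lens equation with the focal length and object distance given above:

```python
# Thin-lens check: 1/f = 1/S1 + 1/S2 with f = 35 mm and S1 = 45 mm
f, S1 = 35.0, 45.0
S2 = 1.0 / (1.0 / f - 1.0 / S1)
print(round(S2, 6))  # 157.5, matching the stated S2 = 157.5 mm
```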
C. Gaussian Mixture Model Formulation

The complete-data likelihood function, L, for an image with N photons, one event, and K mirror reflections is

$$L(\theta; \mathbf{t}, \mathbf{z}) = \prod_{i=1}^{N} \prod_{k=0}^{K} \left[ \pi_k\, \mathcal{N}(t_i; \mu_k, \sigma_k^2) \right]^{\mathbb{1}(z_i = k)} \tag{9}$$

where θ = (μ, σ, π) are the model parameters and π_k is the mixing weight for component k. t = (t_1, t_2, ..., t_N) are the 2D coordinates of each of the N photon arrivals on the sensor, and z = (z_1, z_2, ..., z_N) are the latent variables indicating which component each photon belongs to. We apply a density-based weighting scheme to photon samples to minimize the influence of sparsely distributed dark counts. The weighted complete-data likelihood function is

$$L_w(\theta; \mathbf{t}, \mathbf{z}) = \prod_{i=1}^{N} \prod_{k=0}^{K} \left[ \pi_k\, \mathcal{N}\!\left( t_i; \mu_k, \tfrac{1}{w_i}\sigma_k^2 \right) \right]^{\mathbb{1}(z_i = k)} \tag{10}$$

$$= \prod_{i=1}^{N} \prod_{k=0}^{K} \left[ \pi_k\, \frac{w_i}{2\pi\sigma_k^2} \exp\!\left( -\frac{w_i}{2\sigma_k^2} \| t_i - \mu_k \|_2^2 \right) \right]^{\mathbb{1}(z_i = k)}$$

where

$$w_i = \sum_{j \in S_i^q} \exp\!\left( -\frac{\| t_i - t_j \|_2^2}{\nu} \right) \tag{11}$$

is the weight assigned to photon i, S_i^q is the set of q nearest neighbors of photon i, and ν is a positive scalar. The expected value of the weighted complete-data log-likelihood, Q, is

$$Q = \mathbb{E}_{\mathbf{z}|\mathbf{t}}[\log L_w(\theta; \mathbf{t}, \mathbf{z})] \tag{12}$$

$$= \sum_i \sum_k r_{ik} \left[ \log(\pi_k) + \log(w_i) - \log(2\pi\sigma_k^2) - \frac{w_i}{2\sigma_k^2} \| t_i - \mu_k \|_2^2 \right]$$

where

$$r_{ik} = \mathbb{E}_{\mathbf{z}|\mathbf{t}}[\mathbb{1}(z_i = k)] = \frac{\pi_k\, \mathcal{N}(t_i; \mu_k, \sigma_k^2)}{\sum_{k'=0}^{K} \pi_{k'}\, \mathcal{N}(t_i; \mu_{k'}, \sigma_{k'}^2)} \tag{13}$$

gives the posterior distribution of z. r_{ik} is the probability that photon i comes from component k, given the current parameter values.

Figure 6. Mirror apertures and image truncations. An event emits light onto a mirror that reflects it into the camera. Some light from a mirror reflection cannot reach the sensor due to finite mirrors and defocus blur, forming a "truncation zone" on the sensor. The "acceptance zone" denotes the area where light from a mirror reflection can arrive on the sensor.
Example cases include the following: the mirror reflection is located within the focal plane (a,b,e) or beyond the focal plane (c,d); all light that forms the mirror reflection reaches the sensor (a,c); some light that would form the mirror reflection is stopped at the mirror's edge and truncated on the sensor (b,d); some light that would form a second-order mirror reflection is stopped at both mirrors' edges and truncated on the sensor (e). The mirror for the second reflection in (e) is illustrated along the light's path. Illustrations are drawn with scintillator index of refraction n = 1.

D. Experimental Parameter Values

In all experiments, we use the regularization term with λ = 10. We use q = 10 nearest neighbors and ν = 10 pixels for assigning photon weights. Up to ten possible event locations in the scintillator, equispaced over the scintillator's depth, are evaluated in the initialization procedure. During one M-step, we run gradient ascent for 1,000 steps with a step size of 1e-7. We run the EM algorithm until the distance between the estimated event locations of consecutive steps is less than 0.01 mm, or until 100 steps are taken. We clip all σ_k's to a minimum value of 10 pixels to account for imperfect focusing and a non-ideal point source. The gradient of σ_k is set to 0 if σ_k gets clipped to the minimum value. We set A = 41.7 mm, S_1 = 45 mm, S_2 = 72 mm, and a = 0.25. See Appendix E for how we calibrate these camera parameters. The focal plane is set to the scintillator's apparent depth at z_w = 2.75 mm.

E. Experimental Camera Parameter Calibration

We select the experimental image shown in Fig. 12, where the event is approximately centered and the mirror reflections are near the image's edges. The camera's field of view covers about 5.5 mm at the focal plane, and the +x mirror reflection's photon cluster is approximately centered at pixel coordinate (470, 256).
We approximate the observed mirror reflection's x-coordinate in world coordinates as 470/512 × 5.5/2 = 2.52 mm. We assign the event's location to be (0, 0, 2.9) mm. This event location and the scintillator's geometry produce a +x mirror reflection located at x = 2.52 mm, matching the observed mirror reflection. The event location is fixed, and the camera aperture diameter is set to A = 50/1.2 = 41.7 mm. The camera parameter values for S_1, S_2, and a are manually adjusted until the resulting Gaussian components (μ_k, σ_k) coincide with the event and mirror reflection photon clusters. Values are set so that approximately all photons in each photon cluster are contained within a radius of 2σ_k from component k's centroid (plotted in Fig. 12). Dark counts are ignored. The resulting values are S_1 = 45 mm, S_2 = 72 mm, and a = 0.25. The focal plane is set to the scintillator's apparent depth at z_w = 5.77 − 5.77/1.91 = 2.75 mm. The lens plane is set at z_w = 2.75 + 45 = 47.75 mm. The sensor plane is set at z_w = 47.75 + 72 = 119.75 mm.

F. Ablation Study: Regularization

We perform the same experiments as in Sec. 5 of the main paper on the same 4,379 experimental images containing at least 60 counts, except that we set λ = 0 (no regularization). The results are reported in Figs. 7 and 8. An elevated frequency of images where no photons are removed during mirror reflection removal is observed in Fig. 7. For these images, the algorithm is likely converging to an event location with an erroneously high z_w coordinate. This results in a Gaussian component with a large σ_0 that covers the photon clusters of multiple mirror reflections, as shown in Fig. 11a. The high frequency of short distances in Fig. 8 also indicates that mirror reflections are not being properly identified and removed, and that the algorithm is converging to the same estimate over multiple identical images.
Thus, we observe a decrease in localization performance without regularization.

Figure 7. Ablation study. Experimental fraction of counts in an image removed during mirror removals without regularization (λ = 0). (a) One mirror reflection removal. Median, mean, stdev: 0.10, 0.10, and 0.11. 8,972 images. (b) Two mirror reflection removals. Median, mean, stdev: 0.27, 0.21, and 0.19. 13,458 images.

Figure 8. Ablation study. Experimental agreement in event location measurements without regularization (λ = 0). Histogram of distances between the mean estimated event location and each image's estimated event location after mirror reflection removals. Median, mean, stdev: 0.03 mm, 0.39 mm, and 0.79 mm. 24,673 distances.

G. Denoising and Estimating Depth from Defocus (Prior Method)

Prior methods that estimate a 3D event location using depth from defocus have also attempted to remove dark counts using a denoising algorithm before solving for the location's maximum likelihood estimate [8]. The denoising algorithm consists of computing the minimum spanning tree among all photons in an image and removing edges longer than a chosen threshold, T_edge. The photons in the largest connected component by cardinality are classified as the scintillation photons, and the rest are discarded as dark counts. We test this method on the simulated non-kaleidoscopic images (described in Sec. 6 of the main paper) and report results in Tab. 2. We observe similar performance to the non-kaleidoscopic version of our proposed algorithm.

Average error and resolution (mm) ↓

N0  T_edge (pixels)  3D Error  x Res.  y Res.  z Res.
30  40               0.81      1.43    1.41    1.42
    80               0.64      1.04    1.02    1.04
    120              1.26      1.72    1.91    1.73
20  40               0.92      1.59    1.56    1.56
    80               0.68      1.05    1.03    1.06
    120              1.59      2.10    2.38    2.21
10  40               1.14      1.89    1.85    1.90
    80               0.80      1.20    1.13    1.14
    120              2.04      2.84    2.90    2.83

Table 2. Prior method, all simulation results.
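As we understand it, the prior method's denoising step [8] can be sketched as follows (a pure-NumPy sketch using Prim's algorithm; `denoise_mst` and the toy data are ours, not the original implementation, and distinct photon coordinates are assumed):

```python
import numpy as np

def denoise_mst(t, t_edge=80.0):
    """Build the Euclidean minimum spanning tree over all photons, cut
    edges longer than t_edge (pixels), and keep the largest connected
    component as the scintillation photons; the rest are dark counts."""
    n = len(t)
    d = np.linalg.norm(t[:, None] - t[None, :], axis=-1)
    # Prim's algorithm: grow the MST from photon 0, recording each added edge.
    in_tree = np.zeros(n, bool); in_tree[0] = True
    best = d[0].copy(); parent = np.zeros(n, int)
    edges = []
    for _ in range(n - 1):
        j = int(np.argmin(np.where(in_tree, np.inf, best)))
        edges.append((parent[j], j, best[j]))
        in_tree[j] = True
        closer = d[j] < best
        best[closer] = d[j][closer]; parent[closer] = j
    # Union-find over edges no longer than the threshold.
    root = list(range(n))
    def find(i):
        while root[i] != i:
            root[i] = root[root[i]]; i = root[i]
        return i
    for u, v, w in edges:
        if w <= t_edge:
            root[find(u)] = find(v)
    labels = np.array([find(i) for i in range(n)])
    counts = {r: int((labels == r).sum()) for r in set(labels)}
    keep = max(counts, key=counts.get)
    return t[labels == keep]

# toy image: a 6-photon cluster plus 3 far-away "dark counts"
pts = np.array([[0., 0.], [1., 0.], [2., 0.], [0., 1.], [1., 1.], [2., 1.],
                [500., 500.], [501., 500.], [502., 500.]])
kept = denoise_mst(pts)  # only the 6-photon cluster survives
```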
Estimating depth from defocus on non-kaleidoscopic images after attempting to remove dark counts, as in the prior method [8].

H. Additional Figures

Figure 9. Histogram of dark counts per experimental image. A median of 4 dark counts per image after zeroing hot pixels is observed out of 130,000 images taken in the dark with no gamma-ray source present.

Figure 10. Histogram of counts per experimental image with the gamma-ray source present. A median of 4 counts per image after zeroing hot pixels is observed out of 13,000,000 images taken with the gamma-ray source present. (a) The full histogram. (b) The histogram clipped in the y-axis.

Figure 11. Regularization example. An experimental image overlaid with the algorithm's estimated Gaussian components with (a) λ = 0 (no regularization) and (b) λ = 10 (regularization). Each dashed red circle is centered on the Gaussian component's mean. The inner and outer circles are one and two standard deviations in radius, respectively. Red and blue circles represent the event and mirror reflections, respectively. The number of counts in the image and the algorithm's estimated event location are shown in each image. Pixels with a photon are enlarged with a 3 × 3 filter for visualization purposes.

Figure 12. Selected experimental calibration image. The image is overlaid with the Gaussian components found after manually adjusting the camera parameters. Each dashed red circle is centered on the Gaussian component's mean. The inner and outer circles are one and two standard deviations in radius, respectively. Red and blue circles represent the event and mirror reflections, respectively. Pixels with a photon are enlarged with a 3 × 3 filter for visualization purposes.

Figure 13. Additional selected experimental images. Experimental images overlaid with the algorithm's estimated Gaussian components. Each dashed red circle is centered on the Gaussian component's mean.
The inner and outer circles are one and two standard deviations in radius, respectively. Red and blue circles represent the event and mirror reflections, respectively. Pixels with a photon are enlarged with a 3 × 3 filter for visualization purposes. The number of counts in the image and the algorithm's estimated event location are shown in each image.

Figure 14. Experimental agreement in event location measurements using regularization (λ = 10). Histogram of distances between the mean estimated event location and each image's estimated event location after mirror reflection removals. Median, mean, stdev: 0.10 mm, 0.17 mm, and 0.22 mm. 17,666 distances.

Figure 15. Experimental fraction of counts in an image removed during mirror removals using regularization (λ = 10). (a) One mirror reflection removal. Median, mean, stdev: 0.19, 0.18, and 0.07. 6,424 images. (b) Two mirror reflection removals. Median, mean, stdev: 0.38, 0.37, and 0.08. 9,636 images.

Figure 16. Experimental focus. A view of how the camera is focused on the scintillator in the experiments. The edges between the mirrors are visible.

I. Gradient Derivations

Consider a square pyramid kaleidoscopic scintillator with height h and index of refraction n. The camera's focal plane is located at the apparent depth of the scintillator's apex, at z_w = h − h/n (world coordinates). Denote an event's real location p_0 at (x_0, y_0, z_{w0}) in world coordinates. To transform from world coordinates (origin at pyramid apex) to camera coordinates (origin at lens), apply z_c = z_lens − z_w, where z_lens is in world coordinates. The event's mirror reflection k is located at p_k = (x_k, y_k, z_k) = T_k p_0. Following Eq. (14), α_{xy,k} denotes the coefficient in T_k that multiplies y_0 in x_k, and analogously for the other coefficients. k = 0 refers to the event itself, and not a mirror reflection. These derivations assume μ_k = [ (S_2/z_k) x_k, (S_2/z_k) y_k ].
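As a numerical illustration of this forward model (the mirror transform T_k followed by refraction, pinhole projection, and the defocus blur used below), the following sketch uses illustrative parameter values only; the function and variable names are ours:

```python
import numpy as np

# Illustrative forward model: T_k maps the event's world location p0 to its
# reflected location; refraction rescales the apparent depth; a pinhole
# projection gives the centroid mu_k and a thin-lens model gives sigma_k.
n = 1.5            # scintillator index of refraction (illustrative)
h = 5.77           # pyramid height (mm)
z_lens = 47.75     # lens plane, world coordinates (mm)
S1, S2 = 45.0, 72.0
a, A = 0.25, 41.7

def component(T_k, p0):
    x, y, zw = T_k @ p0                        # reflected location (k = 0: the event itself)
    z = z_lens - (h - (h - zw) / n)            # apparent depth in camera coordinates
    mu = np.array([S2 * x / z, S2 * y / z])    # pinhole centroid (sensor-plane mm)
    sigma = a * A * S2 / S1 * abs(S1 - z) / z  # defocus blur
    return mu, sigma

# the event itself uses the identity transform
mu0, sig0 = component(np.eye(3), np.array([-0.5, 0.75, 1.0]))
```

Dividing `mu` by the pixel pitch converts sensor-plane millimeters to pixel coordinates.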
In these derivations, we use camera coordinates (origin at the lens), denoted z_k instead of z_{ck} as in the main paper. The camera sees events and mirror reflections at their apparent depth rather than their real depth. z_{w0} is the z-coordinate of the event's real location in world coordinates, and z_k is the event or mirror reflection's apparent z-coordinate in camera coordinates.

I.1. Unweighted Likelihood

$$x_k = \alpha_{xx,k}\, x_0 + \alpha_{xy,k}\, y_0 + \alpha_{xz,k}\, z_{w0} \tag{14}$$

$$y_k = \alpha_{yx,k}\, x_0 + \alpha_{yy,k}\, y_0 + \alpha_{yz,k}\, z_{w0} \tag{15}$$

$$z_k = z_{\text{lens}} - \left( h - \frac{h - (\alpha_{zx,k}\, x_0 + \alpha_{zy,k}\, y_0 + \alpha_{zz,k}\, z_{w0})}{n} \right) \tag{16}$$

$$\mu_k = \left[ \frac{S_2}{z_k} x_k,\ \frac{S_2}{z_k} y_k \right] \tag{17}$$

$$\sigma_k = \frac{a A S_2}{S_1} \frac{|S_1 - z_k|}{z_k} \tag{18}$$

$$L(\theta; \mathbf{t}, \mathbf{z}) = \prod_{i=1}^{N} \prod_{k=0}^{K} \left[ \pi_k\, \mathcal{N}(t_i; \mu_k, \sigma_k^2) \right]^{\mathbb{1}(z_i = k)} \tag{19}$$

$$= \prod_{i=1}^{N} \prod_{k=0}^{K} \left[ \pi_k\, \frac{1}{2\pi\sigma_k^2} \exp\!\left( -\frac{1}{2\sigma_k^2} (t_i - \mu_k)^\top (t_i - \mu_k) \right) \right]^{\mathbb{1}(z_i = k)} \tag{20}$$

Expanding the quadratic form,

$$(t_i - \mu_k)^\top (t_i - \mu_k) = t_{ix}^2 - 2 t_{ix} S_2 \frac{x_k}{z_k} + S_2^2 \left( \frac{x_k}{z_k} \right)^2 + t_{iy}^2 - 2 t_{iy} S_2 \frac{y_k}{z_k} + S_2^2 \left( \frac{y_k}{z_k} \right)^2 \tag{21–22}$$

$$Q = \sum_i \sum_k r_{ik} \left[ \log \pi_k - \log(2\pi\sigma_k^2) - \frac{\sigma_k^{-2}}{2} (t_i - \mu_k)^\top (t_i - \mu_k) \right] \tag{23}$$

Substituting Eqs. (14)–(16),

$$\frac{x_k}{z_k} = \frac{\alpha_{xx,k}\, x_0 + \alpha_{xy,k}\, y_0 + \alpha_{xz,k}\, z_{w0}}{z_{\text{lens}} - \left( h - \frac{h - (\alpha_{zx,k} x_0 + \alpha_{zy,k} y_0 + \alpha_{zz,k} z_{w0})}{n} \right)} \tag{24}$$

and analogously for y_k/z_k with the α_{y·,k} coefficients in the numerator (25). Writing u_x = x_k, u_y = y_k, and v_z = z_k, the partial derivatives are

u_x'(x_0) = α_{xx,k},  u_x'(y_0) = α_{xy,k},  u_x'(z_{w0}) = α_{xz,k},
u_y'(x_0) = α_{yx,k},  u_y'(y_0) = α_{yy,k},  u_y'(z_{w0}) = α_{yz,k},
v_z'(x_0) = −α_{zx,k}/n,  v_z'(y_0) = −α_{zy,k}/n,  v_z'(z_{w0}) = −α_{zz,k}/n.

The quotient rule then gives

$$\frac{\partial}{\partial x_0}\!\left( \frac{x_k}{z_k} \right) = \frac{u_x'(x_0)\, v_z - u_x\, v_z'(x_0)}{v_z^2} = \frac{\alpha_{xx,k}\, z_k + \alpha_{zx,k}\, x_k / n}{z_k^2} \tag{26}$$

$$\frac{\partial}{\partial y_0}\!\left( \frac{x_k}{z_k} \right) = \frac{\alpha_{xy,k}\, z_k + \alpha_{zy,k}\, x_k / n}{z_k^2} \tag{27}$$

$$\frac{\partial}{\partial z_{w0}}\!\left( \frac{x_k}{z_k} \right) = \frac{\alpha_{xz,k}\, z_k + \alpha_{zz,k}\, x_k / n}{z_k^2} \tag{28}$$

$$\frac{\partial}{\partial x_0}\!\left( \frac{y_k}{z_k} \right) = \frac{\alpha_{yx,k}\, z_k + \alpha_{zx,k}\, y_k / n}{z_k^2} \tag{29}$$

$$\frac{\partial}{\partial y_0}\!\left( \frac{y_k}{z_k} \right) = \frac{\alpha_{yy,k}\, z_k + \alpha_{zy,k}\, y_k / n}{z_k^2} \tag{30}$$

$$\frac{\partial}{\partial z_{w0}}\!\left( \frac{y_k}{z_k} \right) = \frac{\alpha_{yz,k}\, z_k + \alpha_{zz,k}\, y_k / n}{z_k^2} \tag{31}$$

By the chain rule, for each parameter ξ ∈ {x_0, y_0, z_{w0}},

$$\frac{\partial}{\partial \xi} \left( \frac{x_k}{z_k} \right)^{\!2} = 2 \frac{x_k}{z_k} \frac{\partial}{\partial \xi}\!\left( \frac{x_k}{z_k} \right), \qquad \frac{\partial}{\partial \xi} \left( \frac{y_k}{z_k} \right)^{\!2} = 2 \frac{y_k}{z_k} \frac{\partial}{\partial \xi}\!\left( \frac{y_k}{z_k} \right). \tag{32–37}$$

For σ_k (Eq. (18), restated as Eq. (38)), let u = S_1 − z_k. Then

$$\frac{\partial \sigma_k}{\partial x_0} = \frac{a A S_2}{S_1} \frac{\partial}{\partial x_0} \frac{|S_1 - z_k|}{z_k} = \frac{a A S_2}{S_1} \cdot \frac{\alpha_{zx,k}\, \mathrm{sign}(u)\, z_k / n + \alpha_{zx,k}\, |u| / n}{z_k^2} \tag{39}$$

$$\frac{\partial \sigma_k}{\partial y_0} = \frac{a A S_2}{S_1} \cdot \frac{\alpha_{zy,k}\, \mathrm{sign}(u)\, z_k / n + \alpha_{zy,k}\, |u| / n}{z_k^2} \tag{40}$$

$$\frac{\partial \sigma_k}{\partial z_{w0}} = \frac{a A S_2}{S_1} \cdot \frac{\alpha_{zz,k}\, \mathrm{sign}(u)\, z_k / n + \alpha_{zz,k}\, |u| / n}{z_k^2} \tag{41}$$

Writing the quadratic form as

$$f(x_k, y_k, z_k) = t_{ix}^2 - 2 t_{ix} S_2 \frac{x_k}{z_k} + S_2^2 \left( \frac{x_k}{z_k} \right)^2 + t_{iy}^2 - 2 t_{iy} S_2 \frac{y_k}{z_k} + S_2^2 \left( \frac{y_k}{z_k} \right)^2 \tag{44}$$

so that Q = Σ_i Σ_k r_{ik} [ log π_k − log(2πσ_k²) − (σ_k^{−2}/2) f ] (Eqs. (42)–(43)), the derivatives of f are, for each ξ ∈ {x_0, y_0, z_{w0}},

$$\frac{\partial f}{\partial \xi} = -2 t_{ix} S_2 \frac{\partial}{\partial \xi}\!\left( \frac{x_k}{z_k} \right) + S_2^2 \frac{\partial}{\partial \xi} \left( \frac{x_k}{z_k} \right)^{\!2} - 2 t_{iy} S_2 \frac{\partial}{\partial \xi}\!\left( \frac{y_k}{z_k} \right) + S_2^2 \frac{\partial}{\partial \xi} \left( \frac{y_k}{z_k} \right)^{\!2} \tag{45–47}$$

and the gradient of Q follows:

$$\frac{\partial Q}{\partial \xi} = \sum_i \sum_k r_{ik} \left[ -2 \sigma_k^{-1} \frac{\partial \sigma_k}{\partial \xi} - \left( -\sigma_k^{-3} \frac{\partial \sigma_k}{\partial \xi} f + \frac{\sigma_k^{-2}}{2} \frac{\partial f}{\partial \xi} \right) \right] \tag{48–50}$$

I.2. Weighted Likelihood

w_i is the weight assigned to photon sample i:

$$w_i = \sum_{j \in S_i^q} \exp\!\left( -\frac{\| t_i - t_j \|_2^2}{\nu} \right)$$

where S_i^q is the set of q nearest neighbors of photon i, and ν is a positive scalar.

$$L(\theta; \mathbf{t}, \mathbf{z}) = \prod_{i=1}^{N} \prod_{k=0}^{K} \left[ \pi_k\, \mathcal{N}\!\left( t_i; \mu_k, \tfrac{1}{w_i} \sigma_k^2 \right) \right]^{\mathbb{1}(z_i = k)} \tag{51}$$

$$= \prod_{i=1}^{N} \prod_{k=0}^{K} \left[ \pi_k\, \frac{w_i}{2\pi\sigma_k^2} \exp\!\left( -\frac{w_i}{2\sigma_k^2} (t_i - \mu_k)^\top (t_i - \mu_k) \right) \right]^{\mathbb{1}(z_i = k)} \tag{52}$$

$$Q = \sum_i \sum_k r_{ik} \left[ \log(\pi_k) + \log(w_i) - \log(2\pi\sigma_k^2) - \frac{w_i \sigma_k^{-2}}{2} (t_i - \mu_k)^\top (t_i - \mu_k) \right] \tag{53}$$

The gradients gain a factor of w_i on the data term; for each ξ ∈ {x_0, y_0, z_{w0}},

$$\frac{\partial Q}{\partial \xi} = \sum_i \sum_k r_{ik} \left[ -2 \sigma_k^{-1} \frac{\partial \sigma_k}{\partial \xi} - w_i \left( -\sigma_k^{-3} \frac{\partial \sigma_k}{\partial \xi} f + \frac{\sigma_k^{-2}}{2} \frac{\partial f}{\partial \xi} \right) \right] \tag{54–56}$$

I.3. Regularized Objective

Regularized objective function R, where μ_{0k} is the centroid obtained from the initialization procedure corresponding to event/mirror reflection k.
$$R = Q - \lambda \sum_{k=0}^{K} \left\| \mu_k - \mu_k^0 \right\|_2^2 \tag{57}$$

$$g_k = \left\| \mu_k - \mu_k^0 \right\|_2^2 = (\mu_{kx} - \mu_{kx}^0)^2 + (\mu_{ky} - \mu_{ky}^0)^2 = \left( S_2 \frac{x_k}{z_k} - \mu_{kx}^0 \right)^2 + \left( S_2 \frac{y_k}{z_k} - \mu_{ky}^0 \right)^2 \tag{58}$$

$$\frac{\partial g_k}{\partial x_0} = 2 \left( S_2 \frac{x_k}{z_k} - \mu_{kx}^0 \right) S_2 \frac{\partial}{\partial x_0} \left( \frac{x_k}{z_k} \right) + 2 \left( S_2 \frac{y_k}{z_k} - \mu_{ky}^0 \right) S_2 \frac{\partial}{\partial x_0} \left( \frac{y_k}{z_k} \right) \tag{59}$$

$$\frac{\partial g_k}{\partial y_0} = 2 \left( S_2 \frac{x_k}{z_k} - \mu_{kx}^0 \right) S_2 \frac{\partial}{\partial y_0} \left( \frac{x_k}{z_k} \right) + 2 \left( S_2 \frac{y_k}{z_k} - \mu_{ky}^0 \right) S_2 \frac{\partial}{\partial y_0} \left( \frac{y_k}{z_k} \right) \tag{60}$$

$$\frac{\partial g_k}{\partial z_{w0}} = 2 \left( S_2 \frac{x_k}{z_k} - \mu_{kx}^0 \right) S_2 \frac{\partial}{\partial z_{w0}} \left( \frac{x_k}{z_k} \right) + 2 \left( S_2 \frac{y_k}{z_k} - \mu_{ky}^0 \right) S_2 \frac{\partial}{\partial z_{w0}} \left( \frac{y_k}{z_k} \right) \tag{61}$$

$$\frac{\partial R}{\partial x_0} = \frac{\partial Q}{\partial x_0} - \lambda \sum_{k=0}^{K} \frac{\partial g_k}{\partial x_0} \tag{62}$$

$$\frac{\partial R}{\partial y_0} = \frac{\partial Q}{\partial y_0} - \lambda \sum_{k=0}^{K} \frac{\partial g_k}{\partial y_0} \tag{63}$$

$$\frac{\partial R}{\partial z_{w0}} = \frac{\partial Q}{\partial z_{w0}} - \lambda \sum_{k=0}^{K} \frac{\partial g_k}{\partial z_{w0}} \tag{64}$$
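As a sanity check on the closed-form gradients above, the projection model (Eqs. 14–18) and the objective $Q$ (Eq. 23) can be evaluated numerically and the analytic derivatives compared against finite differences. The sketch below is a minimal NumPy illustration with placeholder values: the constants (`n`, `h`, `z_lens`, `S1`, `S2`, `aA`) and the random mirror coefficients `alpha` are hypothetical and not the paper's calibrated geometry, and the responsibilities `r` are drawn at random rather than produced by an E-step.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Placeholder geometry (hypothetical values, not the paper's calibration) ---
n, h, z_lens = 1.5, 10.0, 50.0   # refractive index, crystal height, lens z
S1, S2 = 40.0, 35.0              # focal-object and sensor distances
aA = 0.5                         # combined aperture term a*A in Eq. 18
K = 2                            # event plus K mirror reflections (k = 0..K)
# Random stand-ins for the per-reflection coefficients alpha_{..,k}
alpha = rng.normal(size=(K + 1, 3, 3))

def project(p):
    """Image-plane mean mu_k and blur sigma_k for each copy k (Eqs. 14-18)."""
    q = alpha @ p                          # row k: (x_k, y_k, alpha_z . p)
    xk, yk, sk = q[:, 0], q[:, 1], q[:, 2]
    zk = z_lens - (h - (h - sk) / n)       # apparent depth, Eq. 16
    mu = S2 / zk[:, None] * np.stack([xk, yk], axis=1)   # Eq. 17
    sigma = aA * S2 / S1 * np.abs(S1 - zk) / zk          # Eq. 18
    return mu, sigma

def Q_value(p, t, r, pi):
    """Expected complete-data log-likelihood Q of Eq. 23."""
    mu, sigma = project(p)
    d2 = ((t[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (t_i-mu_k)^T(t_i-mu_k)
    return np.sum(r * (np.log(pi) - np.log(2 * np.pi * sigma**2)
                       - d2 / (2 * sigma**2)))

# Synthetic photons and fixed responsibilities for the check
p0 = np.array([1.0, -0.5, 2.0])           # candidate (x_0, y_0, z_w0)
t = rng.normal(size=(50, 2))
r = rng.dirichlet(np.ones(K + 1), size=50)
pi = np.ones(K + 1) / (K + 1)

# Central finite difference of dQ/dx_0; the closed form assembled from
# Eqs. 26-50 should agree with this value at the same point.
eps = 1e-6
e = np.array([eps, 0.0, 0.0])
fd = (Q_value(p0 + e, t, r, pi) - Q_value(p0 - e, t, r, pi)) / (2 * eps)
print("finite-difference dQ/dx0:", fd)
```

The same check extends directly to the weighted objective (scale `d2` by `w_i` and add `log w_i`, Eq. 53) and to the regularizer $g_k$ (Eq. 58), since both reuse the same `project` map.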
