Standardized spectral and radiometric calibration of consumer cameras

Consumer cameras, particularly onboard smartphones and UAVs, are now commonly used as scientific instruments. However, their data processing pipelines are not optimized for quantitative radiometry and their calibration is more complex than that of sc…

Authors: Olivier Burggraaff, Norbert Schmidt, Jaime Zamorano

Standardized spectral and radiometric calibration of consumer cameras
S t a n d a rdi z ed s p e c t r a l a nd r ad io me tr ic c a l i b r a t i o n of c on su me r c a m e r a s O L I V I E R B U R G G R A A F F 1 , 2 , * , N O R B E R T S C H M I D T 3 , J A I M E Z A M O R A N O 4 , K L A A S P AU L Y 5 , S E R G I O P A S C U A L 4 , C A R L O S T A P I A 4 , E VA N G E L O S S P Y R A K O S 6 , A N D F R A N S S N I K 1 1 Leiden Obser v atory, Leiden U niv ersity , PO Bo x 9513, 2300 RA Leiden, The N ether lands 2 Institute of Envir onmental Sciences (CML), Leiden U niv ersity , PO Bo x 9518, 2300 RA Leiden, The N ether lands 3 DDQ Apps, W ebservices, Project Manag ement, Maastric ht, The Net herlands 4 Dept. Astr ofísica y CC. de la Atmósfera, IP ARC OS, Univ ersidad Complutense de Madrid, Madrid, 28040, Spain 5 VITO; Flemish Ins titute f or T ec hnological Resear c h, Mol, Belgium 6 Biological and Environmental Sciences, Sc hool of Natur al Sciences, Univ ersity of Stirling, S tirling, Unit ed Kingdom * burgg raaff@s tr w .leidenuniv .nl Abstract : Consumer cameras, par ticular ly onboard smar tphones and U A Vs, are no w commonly used as scientific instruments. Ho we ver , their data processing pipelines are not optimized f or quantitativ e radiometr y and their calibration is more complex than that of scientific cameras. The lack of a standardized calibration methodology limits the interoperability between devices and, in the ev er -changing market, ultimately the lif espan of projects using them. W e present a standardized methodology and database (SPECT A CLE) f or spectral and radiometric calibrations of consumer cameras, including linearity , bias variations, read-out noise, dark cur rent, ISO speed and gain, flat-field, and R GB spectral response. This includes golden standard g round-truth methods and do-it-y ourself methods suitable f or non-experts. Appl ying this methodology to sev en popular cameras, we f ound high linearity in RA W but not JPEG data, inter -pixel gain variations >400% cor related with larg e-scale bias and read-out noise patter ns, non-trivial ISO speed normalization functions, flat-field cor rection factors varying by up to 2.79 ov er the field of vie w , and both similar ities and differences in spectral response. Moreov er , these results differed wildly betw een camera models, highlighting the impor tance of standardization and a centralized database. © 2019 Optical Society of America 1. Introduction Consumer cameras hav e seen increasing scientific use in recent years. Their lo w cost makes them ideal for projects inv olving large scale deplo yment, autonomous monitor ing, or citizen science. Successful scientific applications include environmental monitoring [1 – 13], cosmic ra y detection [14], veg etation mapping [15 – 19], color science [9, 20 – 25], and biomedical applications [26 – 33]. Ho w ev er , the use of consumer cameras is made difficult b y limited softw are controls and camera specifications. Inter-calibration of multiple camera models is comple x and laborious and the market constantl y shifting, and for these reasons many projects are limited to only a fe w devices. These constraints se v erely affect both the data quality and the sustainability of projects using consumer cameras. Smartphones, in par ticular , hav e become a common tool f or research, thanks to their wide a vailability and f eatures such as wireless connectivity . Man y scientific applications (apps) using smar tphone cameras ha v e been dev eloped, across a variety of fields. A recent ex ample is HydroColor , a citizen science tool for measuring w ater quality , specifically turbidity and remote sensing reflectance R r s . These are derived from R GB color photographs using standard in v ersion algor ithms. R esults from this app agree well with prof essional standard equipment, with mean er rors in R r s and turbidity ≤ 26% compared to reference sensors. How ev er, due to softw are constraints, the app uses compressed JPEG data rather than ra w sensor data and assumes identical spectral responses f or all cameras. These f actors sev erel y limit the possible data quality . Ne vertheless, HydroColor has already seen significant adoption by the community , and future dev elopments ma y reduce the af orementioned limitations [2 – 4]. Another recent application of smar tphone cameras is bioluminescent-based analyte quantitation by smartphone (B A QS), a technique for the detection of bioluminescent bacteria. Using B A QS, flux intensities do wn to the pW scale can be detected on some smar tphone models; ho we ver , on others, software constraints and dark noise sev erely limit its sensitivity [34]. As a final e xample, Skandarajah et al. used smar tphone cameras with conv entional microscopes f or micron-scale imaging, f or e xample of stained blood samples. R esolutions comparable to scientific cameras w ere achie ved, but intensity and color measurements were limited b y a lack of camera control and f actors including nonlinear ity and white balance [32]. A full revie w of smar tphone science is outside the scope of this work, and we instead ref er the reader to a number of e xtensiv e revie ws b y other authors [35–42]. Smartphone spectroscopy is an activ e field of dev elopment [39, 43]. Many spectroscopic add-ons ha v e been dev eloped, including do-it-yourself models costing less than $10 at Public Lab ( https://publiclab.org/wiki/spectrometry ). One earl y smar tphone spectrome- ter w as iSPEX, a spectropolar imetric add-on f or iPhone devices used by >3000 citizen scientists to measure aerosol optical thic kness (A OT) in the Nether lands in 2013. iSPEX data w ere f ound to agree well with reference sensors, with a cor relation coefficient of r = 0 . 81 betw een A OT v alues observed with iSPEX and with the Moderate Resolution Imaging Spectroradiometer (MODIS) Aq ua and T erra satellites [10]. Ho we v er , the iSPEX data w ere limited in their polar imetric accuracy (absolute uncertainties in the degree of linear polar ization (DoLP) ≈ 0 . 03 ), pre v enting quantitativ e measurements of aerosol compositions and sizes [10]. This relativ ely large er ror stemmed from a lack of camera controls, such as the inability to fix the focus of the camera to a controlled and reproducible position. Fur thermore, the sustainability of iSPEX in the fas t-moving smartphone market was limited by its need f or device-specific calibrations. Consumer unmanned aerial v ehicles (U A Vs) with R GB cameras ha v e similarl y become common scientific instruments. They provide a lo w-cos t, high-resolution replacement for , or complement to, satellite and air plane imag ery , especially f or en vironmental monitoring [15 – 17, 44 – 46]. U A V data are increasingly being integrated with data from other platforms, such as satellites [47]. Ho we v er, f e w scientific consumer camera projects progress past a proof-of-concept on a handful of camera models, which often become obsolete within two y ears, par ticular ly in the constantl y shifting smar tphone market. This sev erely limits the sustainability of projects that require calibrations specific to each camera model. Difficulties in upscaling and future- proofing such calibrations are an oft cited constraint on the combination of multiple camera models [5, 10, 13, 14, 21, 40, 48]. Further complications are introduced by the lack of control ov er camera hardw are and softw are parameters suc h as focus and white balance [6, 10, 15, 32, 34, 49]. For e xample, the dominant smartphone operating systems, Android and iOS, only introduced suppor t f or unprocessed (RA W) imagery as recently as 2014 (Android 5.0 ‘Lollipop’) and 2016 (iOS 10). Previousl y , third-par ty dev elopers could only use JPEG data, which introduce a number of sy stematic er rors due to their lossy compression and bit-rate reduction [1, 2, 5, 10, 24, 32, 40, 49, 50]. Other common problems in consumer camera data include nonlinear ity and the g amma correction [1, 2, 9 – 12, 19, 22, 24, 28, 32, 50 – 55], electronic and ther mal noise [5, 14, 34, 46, 56 – 58], and highly variable (between camera models) spectral response functions which are not provided by manuf acturers [1, 2, 13, 21, 23, 24, 40, 42, 46, 51, 59]. These factors limit the accuracy of radiometr ic measurements done with consumer cameras b y introducing systematic er rors. Further more, the accuracy of color measurements and their con v ersion to standard measures, such as the CIE 1931 XYZ and CIELAB color spaces, is limited by distor tions in the obser v ed colors [20] and differences in spectral response functions [21, 23–25]. Extensiv e (spectro-)radiometr ic calibrations of consumer cameras are labor ious and require specialized equipment and are thus not commonly per f ormed [23, 54, 60]. A notable e xception is the spectral and absolute radiometr ic calibration of a Raspber ry Pi 3 V2 webcam by Pagnutti et al. [57], including calibrations of linear ity , exposure stability , thermal and electronic noise, flat-field, and spectral response. Using this absolute radiometr ic calibration, digital values can be conv er ted into SI units of radiance. Ho we ver , the authors noted the need to characterize a larg e number of these cameras before the results could be applied in g eneral. Moreov er , cer tain calibrations are device-dependent and would need to be done separately on each de vice. Spectral and radiometric calibrations of sev en cameras, including the Raspber ry Pi, are given in [51]. These calibrations include dark cur rent, flat-fielding, linear ity , and spectral characterization. Ho we ver , f or the five digicams included in this w ork, JPEG data were used, sev erely limiting the quality and usefulness of these calibrations, as descr ibed abo v e. Spectral characterizations are more commonly published since these are vital f or quantitative color anal ysis. Using various methods, the spectral responses of sev eral Canon [1, 19, 23, 24, 46, 51, 54, 61], Nikon [1, 13, 23, 24, 54, 59 – 62], Olympus [23, 24, 51], Panasonic [46], SeaLife [13], Sigma [60], and Son y [1, 23, 46, 51, 61] digital cameras (digicams), as well as a number of smartphones [2, 23], ha ve been measured. Direct comparisons between >2 different camera models are given in [1, 2, 23, 46, 51]. Common f eatures include the peak response wa velengths f or the R GB color filters, appro ximately 600, 520, and 470 nm, respectivel y , as w ell as a roughly Gaussian profile around the peak. Differences are found especially in the wings, notably the locations of secondar y peaks and near -infrared (NIR) and ultra violet (UV) cut-off wa velengths. These ma y cause significant differences in observed colors betw een cameras [21, 23], especially f or nar ro w-band sources. Camera calibrations in the literature are often limited to a small number of cameras or proper ties, either to nar ro w down the scope or because of limitations in time and equipment. Further more, calibration data are published in varying f ormats, locations, and quality , complicating their use b y others. S tandardized formats exis t, such as those f or vignetting, bias, and color cor rections described in Adobe ’ s digital negativ e (DNG) standard [63], but ha v e seen limited adoption. The European Machine Vision Association (EMV A) standard 1288 [64] f or characterization of cameras is e xtremely thorough, but has also seen limited adoption due to the high-end equipment required [54] and its scope simply being too broad for man y practical pur poses. Similarl y , standardized data sets or databases, f or e xample containing spectral response cur v es [23, 60], hav e been created but these are limited in scope and, again, adoption. T o our kno w ledge, no widely adopted standardized methodology or centralized database containing spectral and radiometr ic calibration data for consumer cameras has been created thus f ar . In this work, we present a standardized methodology for the calibration of consumer cameras and a database, SPECT A CLE (Standardised Photog raphic Equipment Calibration T echnique And CataLoguE), containing calibration data f or the most popular de vices. The calibration methodology is focused on simplicity and facilitating measurements b y non-experts and those lacking expensiv e equipment, similarl y to [54] but with a broader scope including software, optics, and sensor characteristics. The database is designed with openness and sustainability in mind, f ocusing on community contr ibutions. Further more, w e strive to f ollow pre viously e xisting standards, such as DNG [63] and EMV A 1288 [64], where practical. Our focus is on radiometr ic and photometr ic measurements but these calibration data can equall y be used f or color science purposes, in par ticular to conv er t betw een color spaces using the measured spectral response curves. W e stress that we hav e no financial nor commercial interests in consumer cameras, and an y comparison between devices is purel y scientific. The aim of our standardized methodology and the SPECT A CLE database is merely to simplify the use of data from consumer cameras, not to cast judgment on their quality . Sect. 2 contains an o verview of hardw are and software trends in consumer cameras. W e present the standardized calibration methodology in Sect. 3. Sect. 4 contains results from its application to sev eral popular cameras and a descr iption of the SPECT A CLE database. Finally , in Sect. 5 we compare our findings with previous work and discuss future perspectives. 2. T rends in consumer cameras Consumer cameras can be divided into f our categor ies, namely smar tphones, U A Vs, digicams (DSLR and mir ror less), and w ebcams. Despite ser ving very diverse pur poses, these cameras share common characteristics and can be calibrated with the same methods. CMOS-based sensors now dominate the consumer camera market [43]. These are often not produced in-house by camera manuf acturers, but acquired from external par ties, such as Sony and Samsung. Different cameras often use the same sensor , such as the Sony IMX298 which is used in 12 smar tphone models from 10 different manufacturers. Most color cameras use Ba y er filters, on-chip R GB filters arranged in a check erboard patter n, with tw o green pixels (G and G 2 ) f or ev er y red or blue one [65]. The spectral responses of these filters differ strongl y between cameras and are fur ther modified b y f ore-optics [21]. Alter nate pix elated filter arrangements exis t, intended for e xample to reduce aliasing, but with little adoption so far [66]. Data from the separate R GBG 2 pix els can be recombined through a process known as demosaicing to retriev e an image with inter polated R GB values f or each pix el. Many different schemes exis t for this [66], ranging from simple bilinear interpolation [20, 46, 57] to complex computational methods [67]. Consumer camera software often includes propr ietary demosaicing algorithms [19, 32] which may introduce comple x, unpredictable effects. Depending on their implementation, demosaicing schemes typically mix data from different filters and remov e their mutual independence, leading to undesirable cross-f eed effects [2, 57]. In an y case, the added data are fully synthetic and thus do not offer any new radiometric inf ormation. It is thus pref erable f or radiometric applications to treat the R GBG 2 imag es completely independentl y [19] and demosaic data for visualization pur poses [57] onl y . As discussed previousl y , the most commonly used digital file f or mats are JPEG (or JPG) and RA W . In both formats, data are sav ed on a pixel-b y-pix el basis in analog-digital units (ADU). ADU are alter natel y ref erred to as digital numbers (DN) in the literature, but in this work w e will use the ADU nomenclature. JPEG (ISO 10918) is based on lossy spatial compression and do wnsampling to 8-bit values, optimal for small file sizes while maintaining aesthetic qualities. Due to camera-specific processing and compression ar tef acts, JPEG images lose inf ormation and are not recommended f or quantitativ e analy sis [1, 2, 5, 10, 19, 24, 32, 40, 49, 50]. While standardizations exis t, such as the standard Red Green Blue (sRGB) color space and gamma curve [63], these are not strictly adhered to and cannot be assumed in data processing [68]. Con v ersely , RA W files contain relativ ely unprocessed sensor output, intended f or manual post- processing. One factor complicating the reduction of RA W data is their mosaiced nature, due to which they must be demosaiced or treated as multiple independent images, as discussed abo v e. Despite these complications, their unprocessed nature makes RA W data highly preferable f or scientific pur poses [2, 19, 21, 24, 40, 57]. A vailable camera controls generall y include f ocus, exposure time, ISO speed (sensitivity), and aperture. Focus and aper ture are chang ed by phy sical mo v ement of camera optics, though most w ebcams and smar tphones only allow a single, fix ed aperture. ISO speed is set b y changing the camera gain, through analog amplification or digital processing. Analog amplification in v olv es varying the gain of the CMOS amplifiers, which can be done on the lev el of individual pixels. Con v ersely , digital gain is implemented in post-processing b y simply re-scaling and inter polating measured digital values. Since ISO speed is a measure of the ov erall sensitivity of the camera, including fore-optics, each camera (and possibly each pixel) has a unique relation between ISO speed and gain. Finall y , e xposure time may be set by a phy sical shutter (common in digicams) or an electronic one (common in smar tphones). Other parameters like white balance onl y affect processed imagery and are not rele vant to RA W photography . Man y cameras include some built-in calibrations, most notably for nonlinear ity , dark current, and flat-fielding effects. Nonlinearity cor rections are typically based on previousl y measured correction cur v es [69]. Dark current cor rections (autodarking) are commonl y done using unilluminated exposures or permanently dark pixels around the sensor . Finally , flat-fielding (specifically vignetting) is typically cor rected using a pre-made cor rection map. A variety of methods f or generating such maps e xists, based f or e xample on computational methods using regular photographs [20, 70 – 73], simply av eraging man y e xposures [19], and simply imaging white paper [74]. These maps are typicall y parametr ized, for which various methods also e xist [19, 20, 57, 63, 71 – 74], the simplest being the cos 4 model, a combination of inv erse square falloff, Lambert’ s la w , and f oreshor tening [71]. Alter natel y , a pix el-b y-pix el map of vignetting cor rection coefficients ma y be used. Such maps may be device-specific or g eneralized f or a camera model. Notabl y , iOS-based smar tphones use the sev en-parameter parametrization descr ibed in the DNG standard [63] (see Sect. 3.8) while Android-based smar tphones use pix el-b y-pix el maps. 2.1. Smar tphones The smartphone market has become remarkably homogeneous in recent years, with virtually all models using the slate f orm factor , f eaturing a larg e touch screen, f e w buttons, and a camera on either side of the device. The most popular smar tphones are all iOS- or Android-based. Both these operating sys tems no w suppor t RA W photography using A dobe’ s DNG standard [63], though not on all devices. Hardware proper ties are rarely released by manufacturers, and are instead often pro vided b y revie wers through disassembly of the smartphone. Smartphone cameras aim to reproduce the human ey e and thus hav e similar optical proper ties [2]. Sensors, most commonl y from the Son y Exmor ser ies, are compact with 12–16 megapix els and a diagonal of 5–8 mm. Some de vices focus on high-resolution imagery with many pixels, while others are optimized f or lo w-light conditions, with fe w er but larger pixels. Smartphones now increasingly ha v e multiple rear cameras. These secondary cameras offer f eatures such as different fixed f ocal lengths and higher sensitivity , f or e xample with a different lens or a monochromatic sensor . All rear cameras are typically placed in a cluster at the top r ight or top center of the smar tphone. 3. Methods In this section we descr ibe the standardized methods f or the calibration of consumer cameras. W e dev eloped a custom data processing pipeline, implemented in Python scr ipts a vailable on GitHub ( https://github.com/monocle- h2020/camera_calibration ) and iOS and Android apps ( https://github.com/monocle- h2020/spectacle_android and https://github.com/monocle- h2020/spectacle_ios ). Sect. 3.1 descr ibes the e xperimental setups and data processing used in calibration measure- ments. The methods used to characterize and calibrate the camera responses are giv en in Sects. 3.2–3.9. Finall y Sect. 3.10 descr ibes how consumer camera data are con verted into relative radiometric units using the pre viously descr ibed calibration measurements. These units pro vide a constant scale, independent of e xposure parameters and individual device characteristics, f or each camera model, a constant factor K per model a wa y from absolute radiometric units (W m − 2 sr − 1 ). Absolute radiometr ic calibration is outside the scope of this w ork. 3.1. Experimental setup This section descr ibes the setups used in our golden standard ground-tr uth e xper iments. Descrip- tions of do-it-yourself (DIY) calibration methods are giv en in the relev ant sections. All images from all cameras w ere taken in RA W f ormat; for the linear ity measurements, simultaneous RA W and JPEG images were taken f or compar ison. As discussed in Sect. 2, demosaicing schemes introduce synthetic data and undesirable cross-feed effects. For this reason, in our data reduction the RA W images were split into separate R GBG 2 images which were analyzed individually [19]. Multiple imag es w ere taken and stac ked f or each measurement to improv e the signal-to-noise ratio (SNR). On smartphones, the af orementioned iOS and Android apps w ere used to control the camera and automatically take multiple exposures. Exposure settings, including ISO speeds and exposure times w ere obtained from camera controls where possible, since EXIF metadata values f or these were f ound (Sect. 4.1) to be unreliable. The setup f or measuring linearity , ISO-gain relations, and inter-pix el gain variations on smartphones is shown in Fig. 1. A halogen light source (OceanOptics HL -2000-LL) was used, specified by the manufacturer to be stable to 0.15% peak -to-peak and drift <0.3% per hour after a warm-up of 10 minutes. Its light was fed into an optical fiber (Thorlabs M25L02) and collimated using two lenses (Thorlabs A C254-030-A with f = 30 mm and A C508-200-A with f = 200 mm). T w o linear polar izers (both Thorlabs LPVISE100-A, with an extinction ratio ≥ 495 from 400-700 nm), the first rotatable and the second fixed, were used to attenuate the light beam entering an integrating sphere (Thorlabs IS200). Using Malus’ s law ( I = I 0 cos 2 θ ), the rotation angle between the polarizers could be used to calculate the attenuation. A calibration detector was not necessary since all experiments done with this setup inv olv e relativ e measurements only . Malus ’ s law was first fitted to a ser ies of e xposures ov er the entire rotation range to deter mine the ref erence angle. The rotation angle of the first polarizer could be deter mined visually up to 2 ° precision, giving a typical uncer tainty on the attenuated intensity of 2.5%. Finall y , smartphones w ere placed on top of the integrating sphere, flush agains t the view -por t. The farthest possible f ocus was used (infinity on Android devices, an arbitrar y number on iOS). All e xperiments done with this setup inv olv ed analy sis on the individual pixel and (broad-band) filter lev el, without an y spatial av eraging. Because of this, differences in illumination due to spectral dependencies in the polarizer throughput or the integrating sphere output did not affect an y of the experiments. Light source Fiber Collimator Linear polarizers Integrating sphere Smartphone Fig. 1. Setup used to measure linear ity , ISO-gain relations, and inter -pix el gain variations on smartphones. The first linear polar izer w as rotatable, the second fixed. Smartphones were placed with their camera flush agains t the vie w-port at the top of the integrating sphere. The linear polar izers can be replaced with alter nate methods for attenuation, such as neutral density filters. Attenuation can also be replaced completely by varying exposure times instead, though phy sical attenuation may be more precise [57]. The integrating sphere ma y be replaced b y another diffuse sur f ace, such as a Spectralon targ et. If sufficiently wide, the light beam may also be shone directly onto the sensor; such a setup w as used for digicams, with the digicam in place of the collimator in Fig. 1 at a sufficient distance to completely illuminate the sensor . This was done to simplify the alignment process since our digicams had large phy sical CMOS sizes. Since all measurements were done on the individual pix el lev el, the y were not affected by the added differences in illumination. Bias, read-out noise, and dark cur rent were measured on all devices b y placing the camera flush agains t a flat sur f ace (such as a table), pointing do wn, in a dark room. The setups f or flat-fielding and spectral characterization are descr ibed in Sects. 3.8 and 3.9, respectiv el y . 3.2. General proper ties General hardw are and softw are properties were retr ie ved from official specifications and commu- nity re view s. A surve y across these pro vided an o v erview of basic phy sical and optical parameters of cameras. On Android smar tphones, the Camera2 API pro vides ample inf ormation on such parameters, facilitating automatic data collection using an app. The retr ie v ed de vice proper ties included the camera type, manufacturer , product code and internal identifiers, release y ear , the number of cameras (f or smartphones), camera module identifiers, and CMOS sensor models. Sensor proper ties included phy sical size, pix el pitch, resolution, or ientation with respect to the device, color filter pattern, and bit depth. Camera optic properties included f ocal length, f -number , neutral density filters (f or high-end smar tphones), and a vignetting model if av ailable. Finall y , software and fir m ware proper ties included suppor ted softw are versions, RA W and JPEG suppor t, estimated bias value (see Sect. 3.4), ISO speed range, e xposure time range, and the activ e part of the sensor (accounting f or dark pixels, see Sect. 3.5). 3.3. Linearity Sensor linear ity w as quantified by measuring the camera response to varying e xposures, either by attenuating a light source or b y v arying the exposure time, as discussed in Sect. 3.1. W e used the setup shown in Fig. 1 with tw o linear polar izers to attenuate the light f or smar tphones, since e xposure times on those are not completely tr ustw or th y (Sect. 4.1). Con v ersely , f or digicams, e xposure times are reliable [54, 60] and thus were used instead of phy sical attenuation to simplify the setup. A third method, varying the phy sical aperture, changes the distribution of light on the sensor [71] and thus cannot be used to measure linear ity . T wo common types of nonlinear ity e xist, either across the entire intensity range or only at high intensities. The f or mer is common in JPEG imag ery due to the gamma cor rection [19, 32] while the latter is expected in both JPEG and RA W data. W e onl y in v estig ated the f ormer since it has the larg est impact on data quality , as descr ibed in Sect. 1. N onlinearity at high intensities is easily negated by discarding data abo v e a threshold value; w e use a threshold of ≥ 95% of the maximum digital value. The linearity of each pixel was expressed through the Pearson cor relation coefficient r , a measure of the linear correlation between intensity and camera response. Pixels were analyzed individually to negate differences in illumination and vignetting effects (Sect. 3.8). Simulated responses of a perf ectly linear camera with a mean er ror of 5% in the incoming intensity simulating, f or e xample, errors in e xposure parameters or polarizer alignment in the setup descr ibed in Sect. 3.1, as well as Poisson noise ( σ N = √ N ) and 10 ADU read noise in the response were analyzed. This included simulated measurements at 15 different exposures, a v eraged o v er 10 images per e xposure. These simulated data resulted in a mean v alue of r = 0 . 996 ± 0 . 002 and a lo wer 0.1 percentile P 0 . 1 ( r ) = 0 . 985 . T o account f or unf oreseen measurement er rors, we set the cut-off for linear ity at r ≥ 0 . 980 . A dditionally , the JPEG data were compared to sR GB-like profiles to determine whether gamma in v ersion [9] is possible. The sRGB-lik e profiles are descr ibed b y Eq. (1) , with J C the JPEG response (0–255) in band C , n a nor malization factor , γ the gamma correction factor and I the incoming intensity in arbitrar y units. The JPEG response of each pixel was individually fit to Eq. (1) with n and γ as free parameters. Additionall y , profiles with standard γ values (2.2 and 2.4 [9]) were fit to the JPEG data (with n free) to determine the accuracy of these standards. J C = 255 × ( 12 . 92 n I if n I < 0 . 0031308 1 . 055 ( n I ) 1 / γ − 0 . 055 otherwise (1) 3.4. Bias & read-out noise Bias (or ‘black lev el’) and read-out noise (R ON) w ere measured by stac king short dark exposures. The bias and RON in individual pix els are given by the mean and v ariance, respectivel y , of their values in each stac k. Many (>50) images per stack are required to distinguish bias variations from R ON. T emporal variations were probed b y repeating this process sev eral times. While EXIF metadata often contain a bias value, this is only an estimate and should be validated b y measurement. 3.5. Dark current Dark cur rent (thermal noise) was measured b y taking dark e xposures with different lengths and fitting a linear relation between e xposure time and camera response to deter mine the dark cur rent in ADU s − 1 . For cameras that hav e autodarking (see Sect. 2), the residual dark cur rent was characterized instead. Depending on the autodark precision, the e xposure-response relation may be non-linear in this case. 3.6. ISO speed The relation betw een camera sensitivity and ISO speed was measured b y taking identically e xposed images at different ISO speeds. These were bias-corrected and pixel v alues w ere divided b y those at the low est ISO speed. A relation between ISO speed and nor malization factor w as then fitted. Lik e the linear ity measurements (Sect. 3.3), this was done individuall y per pixel to negate illumination differences and vignetting effects. This relation ma y be an y combination of linear and constant functions, depending on the implementation of ISO speed ratings. Linear relations cor respond to analog gain, while digital gain may result in linear or constant relations, as described in Sect. 2. 3.7. Gain variations Inter -pixel and inter -filter gain variations were characterized using the mean-variance method [75], which exploits the Poissonian nature of photo-electrons in a sensor. W e applied this method to individual pixels rather than a veraging ov er the sensor, to measure inter-pix el v ariations and remo ve the need f or flat-fielding prior to this calibration. The response of a digital camera to incoming light is given by Eq. (2) , with M the mean response in ADU, I the exposure in photo-electrons, D the dark current in e − , B the bias in ADU , and G the gain in ADU/e − . Both I and D are integrated ov er the exposure time. M = I G + D G + B (2) The variance in the response of a pixel is a combination of shot noise on the photo-electrons and dark cur rent, and read noise. The shot noise f ollo ws a P oissonian distribution with a standard deviation σ I = √ I and thus a variance V I = I . The total variance in the response is expressed in Eq. (3), with V the variance in ADU 2 and RO N the read noise in ADU. V = I G 2 + DG 2 + RO N 2 (3) After cor recting for bias and dark cur rent, and assuming DG 2 is negligible, a linear relation betw een mean and variance is f ound, shown in Eq. (4). V = G M cor + RO N 2 (4) Equation (4) was fitted to mean and variance values from sev eral image stacks taken under different illumination conditions. Within each stack, all images were e xposed identicall y , while the illumination varied between stacks. A large amount of data (>10 stac ks of >50 images each) was necessary to constrain the fitted gain v alues sufficiently (typical relativ e er rors in individual pix els <15%). ISO normalization functions der iv ed in Sect. 3.6 ma y be used to extrapolate measured values to different ISO speeds. 3.8. Flat-field correction Flat-fielding w as performed b y imaging a unif or m light source. Unlik e telescopes, most consumer cameras hav e fields-of-view (FoV) too larg e to use the twilight sky for this. Instead, a larg e integrating sphere was used to create an isotropic light field, as descr ibed in [57]. W e used a LabSphere HELIOS USLR -D12L -NMNN lit by three halogen lamps with a specified luminance unif or mity of ± 1.0%, sequentiall y placing each camera before its aperture. An y significant chromatic differences in the flat-field response w ere measured automatically , since all filters were exposed simultaneously . The R GBG 2 images were split out and each normalized to their maximum value, then recombined and smoothed with a Gaussian filter ( σ = 10 pixels); both the individual R GBG 2 images and the recombined imag e w ere analyzed. Since vignetting, often the dominant flat-field component, is caused by the camera aper ture, the flat-field response changes and must be measured again when v arying the aper ture [71]. Vignetting can be parametr ized in a number of different wa ys, as discussed in Sect. 2. For consistency , we used the DNG sev en-parameter ( k 0 . . . k 4 , ˆ c x , ˆ c y ) model, also used internally in iOS smar tphones, f or the flat-field cor rection factor g ( x , y ) , expressed in Eq. (5) , with r the normalized Euclidean distance from pixel ( x , y ) to the optical center ( ˆ c x , ˆ c y ) . g ( x , y ) = 1 + k 0 r 2 + k 1 r 4 + k 2 r 6 + k 3 r 8 + k 4 r 10 (5) Three simpler, alternate methods were also tested. The first inv olv ed imaging an ov ercast sky , the second imaging the sun with a piece of paper taped onto the camera as a diffuser similarl y to the Hukseflux Pyranometer app ( http://www.hukseflux.com/product/ pyranometer- app ). For the final method, the camera, again with a paper diffuser, was held flush agains t a computer monitor displa ying a white screen, some what similarly to [54]. In all three cases, the camera was dithered and rotated 360 ◦ during measurements to av erage out anisotropies. Data from all three methods were processed in the same wa y as the integ rating sphere data, to compare their efficacy . 3.9. Spectral response The spectral response of a camera, which is a product of the individual spectral responses of its f ore-optics, filters, and sensor , was measured in two w a ys. The first method, using a monochromator , is simple processing-wise as the data are simply a ser ies of imag es at different wa v elengths with known intensities [46, 57, 62]. It also allow s f or the measurement of inter -pixel variations in spectral response. The second, a spectrometer add-on such as iSPEX [10], is more accessible than monochromators but its spectral data are more difficult to calibrate and process. W e used a double monochromator (OL 750-M-D) at the NERC Field Spectroscop y F acility to scan a w a v elength range of 390-700 nm. This wa v elength rang e was chosen because no significant response was f ound outside it on an y of the test cameras. The effective spectral resolution (half bandwidth) of the monochromator was 4 nm, calculated from the grating (1200 groov es/mm) and slits (2.5 mm entrance/exit and 5.0 mm central slit) used. The wa velength range w as critically sampled at 2 nm inter v als. A laser -dr iv en light source (Energetiq EQ-99X) was used, and its spectral output calibrated using a silicon photodiode (Gooch & Housego OL DH-300C with a Hamamatsu S1337-1010BQ sensor). The system w as NIST -traceably calibrated in 2012 and is descr ibed in more detail in [46]. Spectral characterization was also done using a modified (remo ving polar izers and retarders) iSPEX add-on [10]. iSPEX has a slit consisting of tw o par ts, one 0.4 mm (‘broad’) and the other 0.17 mm (‘nar ro w’) wide and a 1000 g roo v es/mm transmission g rating f oil (Edmund Optics #52-116). Using this f oil, a similar spectrometer can be built f or any other camera. The reflection of sunlight on a piece of white paper w as measured using the iSPEX on an iPhone SE. iSPEX projects a spectr um onto the sensor, so the pixel responses must be cor rected for bias, dark cur rent, and flat-field to obtain a quantitativ e spectrum. The 436.6, 544.5, and 611.6 nm spectral lines of a commercial fluorescent lamp were used f or the wa v elength calibration, fitting a quadratic relation between pixel position and wa velength. A stra y light cor rection was done b y subtracting the mean pixel value per column abo ve and belo w the spectr um from the nar ro w and broad slit spectra, respectivel y . T wo theoretical ref erence spectra w ere used to nor malize the obser v ed spectra, namel y a 5777 K black body (approximating the Sun) and a diffuse solar irradiance spectr um generated using the Simple Model for the Atmospheric Radiativ e T ransfer of Sunshine (SMAR TS2) [76, 77] and smoothed to the 5 nm resolution of narrow -slit iSPEX spectra. For the latter , the location and time of the iSPEX measurements as well as the built-in urban aerosol and ground albedo models were used instead of default parameters. The models differed significantly (RMS 34%) due to the diffuse sky ir radiance factored into the SMAR TS2 model. Finall y , the obser v ed spectra were cor rected for the transmission of the iSPEX optics, determined by measur ing the zero-order transmission using a halog en lamp and spectrometer (OceanOptics HL -2000-LL and USB2000+, respectivel y). Instead of the sun, a previousl y calibrated commercial lamp ma y be used. For e xample, the LICA -UCM database ( https://guaix.fis.ucm.es/lamps_spectra ) contains spectra of common commercial lamps which can be used as standard light sources f or spectral response measurements [78]. This method has the advantag e of independence from weather conditions and higher reproducibility compared to solar measurements. Combined with the ne w v ersion of iSPEX w e are cur rentl y de v eloping, featuring a univ ersal smar tphone hardware interface, this enables v olunteer measurements of smar tphone camera spectral responses. The spectral curves R C ( λ ) thus derived were nor malized to the global maximum transmission in all bands and used f or calibration of spectral measurements and in the radiometric cor rection of imaging data (Sect. 3.10) to calculate effectiv e spectral bandwidths Λ C . These are defined as Λ C = ∫ C R 0 C ( λ ) d λ , with R 0 C ( λ ) the spectral response R C ( λ ) normalized to the maximum in band C [57, 79]. This integral was calculated using the composite trapezoid method, implemented in the NumPy function numpy.trapz . 3.10. Relative r adiometr ic calibration The calibrations descr ibed in the pre vious section are used to conv ert digital values to radiance. Follo wing the methods described in [57, 79 – 81], a digital v alue M (in ADU) in band C (R GBG 2 f or Bay er filters) can be conv er ted to effectiv e radiance L C , in units of W m − 2 sr − 1 . Since absolute radiometric calibration is outside the scope of this work, w e instead deter mined the relative effective radiance L 0 C = L C / K , in relative radiometric units (RR U) m − 2 sr − 1 , with K an e xtra factor accounting for the absolute quantum efficiency and transmission of the lens. Measuring these requires a previousl y calibrated light source with a known radiance. The expression f or conv er ting M to L 0 C is given in Eq. (6) . The adv antage of the piece-wise calibration given in Eq. (6) o ver a black -box approach containing all calibration components is its adaptability when a small subset of parameters are chang ed, such as due to fir m ware updates or manufacturing chang es. This wa y , calibration data can be re-used rather than requir ing a full re-calibration with ev er y chang e. L 0 C = h c 1 A d Λ C g  4 ( f # ) 2 π τ N  ( M − B − D τ ) (6) Firs t, the bias B (ADU; Sect. 3.4) and dark cur rent D τ (with D in ADU s − 1 and τ the exposure time in seconds; Sect. 3.5) are subtracted. Linear ization of digital values [81] is not necessar y since we only used sufficiently linear pixels ( r ≥ 0 . 980 ; Sect. 3.3). Ne xt, the image is cor rected f or the exposure parameters, dividing by the e xposure time τ , ISO speed nor malization factor N (Sect. 3.6), and aperture, appro ximated as π / 4 ( f # ) 2 , with f # the f -number of the camera [57]. This approximation causes a systematic error of 4% at f / 2 . 0 [57]; f or fix ed-aper ture systems like smar tphones, this error is not relev ant. For sys tems with adjustable aper tures, an e xact solution ma y be preferable if operating at very low f -numbers. These cor rections yield a response in normalized ADU s − 1 sr − 1 . The third step is the flat-field cor rection. The response is multiplied by the flat-field cor rection g (unitless; Sect. 3.8). The flat-fielding methods used here account for both optical and electronic variations in sensitivity , so a separate cor rection f or inter -pixel gain variations (Sect. 3.7) is not necessary . Since absolute transmission and quantum efficiency w ere not measured, this step yields a response in relativ e counts s − 1 sr − 1 , propor tional to the number of photo-electrons s − 1 sr − 1 . Ne xt, sensor properties are cor rected for . The response is divided by the pixel size A d (m 2 ; Sect. 3.2) to giv e a response in relativ e counts s − 1 m − 2 sr − 1 . It is then divided by the effective spectral bandwidth of band C , Λ C = ∫ C R 0 C ( λ ) d λ (Sect. 3.9). Finall y , the result is conv erted to a relative radiance by multiplying by a f actor h c , with h Planck’ s constant and c the speed of light. This yields L 0 C in RR U m − 2 sr − 1 . For specific applications, Eq. (6) ma y be simplified or adjusted. For e xample, inter -pix el bias and dark cur rent v ar iations are typically negligible in bright conditions. In those cases, B and D ma y be approximated b y constants, and inter -pixel variations incor porated in the er ror budget. For spectroscopic applications, a relative spectral radiance L 0 C , λ in RR U m − 2 sr − 1 nm − 1 is measured, which is not av eraged ov er band C . In this case, the energy per photon is simply h c / λ and onl y the transmission at wa velength λ , R C ( λ ) is rele vant; fur thermore, the result must be divided by the wa velength co verag e of each pixel ∆ λ . This is expressed in Eq. (7). L 0 C , λ = h c λ 1 A d R C ( λ ) ∆ λ g  4 ( f # ) 2 π τ N  ( M λ − B − D τ ) (7) 4. Results The methodology descr ibed in Sect. 3 was applied to three iOS smartphones (Apple iPhone SE, 6S, and 7 Plus), tw o Android devices (Samsung Galaxy S6 and S8), one digicam (Nikon D5300), and one U A V camera (D JI Phantom Pro 4). This section contains an o verview of results from these v ar ious calibration steps. Results f or all devices are included in the SPECT A CLE database further descr ibed in Sect. 4.9. 4.1. General proper ties General hardw are and software properties were retr ie ved from the surve y descr ibed in Sect. 3.2, with a specific f ocus on smar tphones using the previousl y descr ibed Android app. Little variation was f ound in these general proper ties, especially f or smar tphones. F or e xample, vir tuall y all main cameras on smar tphones hav e aper tures of f / 2 . 4 – f / 1 . 5 , f ocal lengths of 3.8–4.5 mm, and sensors of 3.4–6.7 × 2.7–4.7 mm, giving fields-of-view (FoVs) of 60–75 ◦ × 45–55 ◦ . It was found from test images that EXIF metadata from some cameras are inaccurate. For e xample, the iPhone SE can use unrounded exposure times of 1 / 3 . 0 s and 1 / 3 . 9 s but records both as simply 1 / 3 s in metadata. Assuming the recorded exposure time of 1 / 3 s for a real e xposure of 1 / 3 . 9 s would lead to photometr ic er rors up to 30%. T o counteract this, exposure parameters such as ISO speed and exposure time should be recorded separatel y from default EXIF metadata, for e xample with custom EXIF tags or e xtra files. 4.2. Linearity The linear ity of two smar tphones (iPhone SE and Galaxy S8) and one digicam (Nik on D5300) was measured using the methods descr ibed in Sect. 3.3 and the setup described in Sect. 3.1 and shown in Fig. 1. The smar tphones were analyzed using rotating linear polar izers while the D5300 was analyzed b y varying exposure times. Simultaneous RA W and JPEG images w ere taken on each device (using the Fine JPEG setting on the D5300) to compare their responses. JPEG images were taken with a fixed white balance. The P earson r coefficients of the RA W and JPEG responses of all pixels were calculated and their histograms are shown in Fig. 2. The JPEG responses of all pixels in all cameras w ere w ell belo w the linearity threshold ( r ≥ 0 . 980 ), sho wing again that JPEG data are highly nonlinear . Con v ersely , nearl y all RA W responses w ere w ell within the bounds for linear ity , with 99.9% of r values ≥ 0.997 (iPhone SE), ≥ 0.996 (Galaxy S8), and ≥ 0.999 (D5300). The Galaxy S8 was the only camera with RA W responses having r < 0 . 980 , though only in 56 pixels. 1 0 2 1 0 4 1 0 6 iPhone SE 1 0 2 1 0 4 1 0 6 Galaxy S8 0.86 0.88 0.90 0.92 0.94 0.96 0.98 1.00 P e a r s o n r 1 0 2 1 0 4 1 0 6 D5300 Fig. 2. Histogram of Pearson r coefficients f or RA W (black, all filters combined) and JPEG (red/green/blue) responses. The r ≥ 0 . 980 cut-off is sho wn with a dashed black line. The respectiv e cameras are listed ne xt to the vertical axis. Note the logarithmic v er tical scale. The JPEG and RA W responses of individual pixels in the iPhone SE and Galaxy S8 cameras are shown in Fig. 3. The JPEG responses are visibl y nonlinear ( r = 0 . 956 , 0 . 918 ) while the RA W responses are linear within measurement er rors ( r = 0 . 999 , 0 . 998 ). Further more, the dynamic range of the JPEG data is much smaller than that of the RA W data. These differences highlight the advantages of RA W data. Finall y , Fig. 4 show s the best-fitting γ f or the JPEG response per pix el as w ell as the accuracy of two standard values ( γ = 2 . 2 and 2 . 4 , expressed in RMS relative difference ( 1 − data / fit ). Larg e inter -pixel, inter -filter, and inter -device differences in best-fitting γ e xist, indicating an sR GB gamma inv ersion with a single γ value is not possible. Further more, the γ = 2 . 2 and 2 . 4 models are both clearl y v er y inaccurate f or all cameras. For the γ = 2 . 2 and 2 . 4 cases respectivel y , 99.9% of pixels had RMS relative differences between obser v ations and the sR GB model of >7% and >10% (iPhone SE), >13% and >15% (Galaxy S8), and >19% and >21% (Nikon D5300). 4.3. Bias & read noise Bias and read noise variations in f our smar tphone cameras (iPhone SE and 7 Plus, Galaxy S6 and S8), one digicam (Nikon D5300), and one U A V camera (Phantom Pro 4) were analyzed using the methods from Sect. 3.4. Bias values in all cameras deviated sys tematically from the EXIF values by <1 ADU on a v erage, with standard deviations also <1 ADU. How ev er , large outliers were f ound, such as 0 50 100 150 200 250 JPEG value iPhone SE r J P E G = 0 . 9 5 6 r D N G = 0 . 9 9 9 0 50 100 150 200 250 JPEG value Galaxy S8 r J P E G = 0 . 9 1 8 r D N G = 0 . 9 9 8 0.0 0.2 0.4 0.6 0.8 1.0 Relative incident intensity 5.0 2.5 0.0 2.5 Norm. res. (JPEG, %) 0.0 0.2 0.4 0.6 0.8 1.0 Relative incident intensity 5 0 5 Norm. res. (JPEG, %) 0 1000 2000 3000 4000 RAW value 5.0 2.5 0.0 2.5 Norm. res. (RAW, %) 0 250 500 750 1000 RAW value 5 0 5 Norm. res. (RAW, %) Fig. 3. JPEG (blue, left vertical axis) and RA W (black, right axis) response of a single B pix el in the iPhone SE (left) and Galaxy S8 (right) rear cameras, under varying incident intensities. Each point represents the mean of a stac k of 10 images at the same exposure. V er tical er ror bars are smaller than the dot size. The black and blue lines represent the best-fitting linear (RA W) and sR GB-like (JPEG) profiles, respectiv ely . The low er row sho ws the residuals, normalized to the dynamic range. some pixels in our Galaxy S6 which e v en saturated in bias frames. Phantom Pro 4 data are scaled up from 12-bit (its native bit depth) to 16-bit, increasing the observed bias variations. Scaled do wn to 12 bits, its bias variations are similar to those in the other cameras. T ypical obser v ed R ON values were distributed similarl y to inter -pixel bias variations. The smartphones and D5300 show R ON distributions consisting of one or tw o main components <3 ADU , which cor relate with inter -pixel gain v ar iations (Sect. 4.6), and a long but shallo w tail to wards R ON values >20 ADU . As with the bias variations abov e, the Phantom Pro 4 show ed a comparativ ely high mean R ON (14 ADU at ISO speed 100) in 16-bit (scaled-up) data but a comparable value (1.8 ADU) when scaled do wn to its nativ e bit depth of 12 bits. Larg e-scale patter ns in inter-pix el and inter -filter bias and R ON variations were obser v ed in sev eral cameras, most prominently in the smar tphones. Figure 5 show s the R ON per pix el in the sensors of tw o iPhone SE de vices. The R ON and bias patter ns on each device are strongly correlated, suggesting a common or igin. The RMS difference in bias between these tw o devices was 0.31 ADU , larg er than the standard deviation on either device (0.24 and 0.21 ADU). The larg e-scale patter ns persisted ov er time scales of months, indicating that the y are sys tematic. Both bias variations and RON decreased with ISO speed when nor malized (Sect. 3.6). This ma y be a result of better amplifier or ADC per f or mance at a higher gain. Similarl y , larg e-scale patterns such as those in Fig. 5 become less distinct at high ISO speeds. Either a map of mean bias per pixel at a giv en ISO speed B ( x , y , I SO ) or a mean value B is used in Eq. (6) . For lo w-light applications such as dark -sky measurements [6] or spectroscopy , a detailed map is necessar y since a single ‘bad’ pixel with an abnormally high output may cause a significant systematic er ror . Being manufacturing def ects, bad pix els are in different locations ev en on two cameras of the same model, and thus a map is required f or each de vice. Conv ersely , f or br ight conditions, the bias variations are not significant and thus a mean v alue can be used. Similarl y , R ON values can be incor porated in the er ror budget separatel y f or individual pixels or using the RMS value as an ensemble estimate. 0 200000 400000 600000 iPhone SE = 2 . 2 = 2 . 4 0 50000 100000 150000 Galaxy S8 1.75 2.00 2.25 2.50 B e s t f i t 0 200000 400000 600000 D5300 0 10 20 30 RMS diff. (%) 0 10 20 30 RMS diff. (%) Fig. 4. Histogram of bes t-fitting γ and RMS relativ e difference between JPEG data and fit (f or models with γ = 2 . 2 and 2 . 4 ) in the R GB bands. 4.4. Dark current The methods described in Sect. 3.5 were applied to two smar tphones (iPhone SE and Galaxy S8) to measure their dark cur rent proper ties. Both cameras hav e built-in dark current calibrations (autodark; see Sect. 2). Measurements were done at room temperature, with shor t breaks betw een differently e xposed stac ks to prev ent o v erheating the sensor . How ev er , sensor temperatures were not obtained from the camera software. A separate data set consisting of 96 images taken with 4 seconds between each on the iPhone SE, during which the entire device palpably warmed up, was analyzed to identify ther mal effects. Pearson r correlations between response and time stamps (as a pro xy f or temperature) w ere calculated f or the individual pixels. These r values w ere w ell-described by a nor mal distribution with µ = 0 . 00 and σ = 0 . 10 , indicating that no strong relation exis ts betw een temperature and residual dark cur rent. How ev er, w e note that again no direct sensor temperatures could be obtained. In both cameras, a small residual (positive or neg ativ e) dark cur rent signal was observ ed. Most pix els in both cameras had little dark cur rent (RMS <2 ADU s − 1 , 99.9th percentile of absolute values <6 ADU s − 1 ), though notable outliers were found, such as >300 pixels in our Galaxy S8 with dark cur rent >50 ADU s − 1 . The residual dark cur rent decreased at higher ISO speeds, similar to R ON and bias variations (Sect. 4.3), but show ed no larg e-scale patter ns. These results show that autodarking accurately cor rects most pixels, but is inadequate f or outliers. Since autodarking is built into camera chips, it cannot be disabled. For outliers and in lo w-light conditions, it should be augmented with a manual dark cur rent cor rection. As with bias variations, the dark current map D ( x , y , I SO ) is used in Eq. (6) f or lo w-light conditions, but an appro ximation is adequate f or br ight conditions. For autodarked cameras like the ones tested here, a mean value of D = 0 ADU s − 1 is assumed, and the RMS variation incor porated into the 2.00 2.25 2.50 2.00 2.25 2.50 Read noise (ADU) 2.00 2.25 2.50 2.00 2.25 2.50 Fig. 5. Read-out noise per pix el of two iPhone SE devices (top and bottom) at ISO speed 23, in the R GBG 2 filters from left to r ight. Darker colors cor respond to low er read-out noise. A two-dimensional Gaussian filter ( σ = 5 pixels) has been applied to better visualize larg e-scale variations. The G imag e sho ws similar patter ns to Fig. 7. error budget. Outliers ma y be masked in either case. 4.5. ISO speed The normalization of data at different ISO speeds was measured using the methods from Sect. 3.6 on two smar tphones (iPhone SE and Galaxy S8) and one digicam (Nik on D5300). The measured and best-fit normalization cur v es are depicted in Fig. 6. The Nikon D5300 and Galaxy S8 w ere best fit with a single linear relation, while the iPhone SE cur v e is clipped at ISO 184. This clipping is not due to image saturation, as none of the pixels in any imag e reached saturation. The linear part of the iPhone SE relation passes through the origin, while the Nik on D5300 and Galaxy S8 cur v es do not, instead sho wing significant (>5%) sys tematic er rors when using the simplest mathematical model (zero offset and slope 1/minimum ISO speed). 0 500 1000 1500 2000 ISO speed 0 5 10 15 20 25 30 Normalization iPhone SE Galaxy S8 D5300 Fig. 6. ISO speed normalization for the iPhone SE, Samsung Galaxy S8, and Nikon D5300. Dots indicate means of image stacks divided b y the mean value per pix el at the lo w est ISO speed. Lines indicate the best-fitting relationships. The clipping of the iPhone SE cur v e likel y cor responds to a transition from purel y analog to purely digital gain. How ev er , data from the Camera2 API on the Galaxy S8 indicated that it too uses digital gain, at ISO speeds >640. This sugges ts that there are different implementations of gain f or RA W photog raph y . The large obser v ed differences in ISO speed normalization can lead to significant systematic errors when combining data taken at different ISO speeds, if not adequately calibrated. Data are normalized by dividing b y N , as e xpressed in Eq. (6). 4.6. Gain The methods from Sect. 3.7 w ere used to characterize inter -pix el g ain v ariations in tw o smartphones (iPhone SE and Galaxy S8). Significant inter -pixel variations were obser v ed, as shown in Fig. 7 f or the G pixels in both cameras. Since the measurement protocol is applied on the individual pixel lev el, the obser v ed variations are only due to differences in gain, rather than external factors such as vignetting effects. The iPhone SE show ed small v ar iations, with higher gain values at the edges and lo wer values in the center . This patter n is similar to that seen in Fig. 5, suggesting a common or igin. Con v ersely , on the Galaxy S8 a concentr ic pattern with a v ery wide rang e is clearl y visible, lik ely intended as a first-order vignetting correction. Both show ed similar ranges in gain (0.58–2.74 and 0.59–2.45 ADU/e − , respectivel y), though on the iPhone SE most variations w ere on small scales and thus are not visible in the smoothed image. iPhone SE (ISO 88) Galaxy S8 (ISO 200) 1.70 1.75 1.80 1.85 1.90 1.95 G a i n ( A D U / e ) 0.8 1.0 1.2 1.4 1.6 1.8 2.0 2.2 G a i n ( A D U / e ) Fig. 7. Gain values of G pix els in the iPhone SE (left; ISO speed 88) and Galaxy S8 (r ight; ISO speed 200) sensors. Darker colors indicate low er gain values. A two-dimensional Gaussian filter ( σ = 5 ) has been applied to better visualize larg e-scale fluctuations. The iPhone SE patterns are similar to the read noise shown in F ig. 5. Histograms of gain values f or both cameras are shown in Fig. 8. Inter -filter differences are small in the Galaxy S8 but obvious in the iPhone SE. In particular, the R, G, and B distr ibutions in the latter clearl y ha v e different mean values and widths (means and standard deviations of 1 . 97 ± 0 . 24 , 1 . 78 ± 0 . 29 , and 1 . 73 ± 0 . 30 ADU/e − , respectivel y). Fur thermore, the G distr ibution is bimodal while both others are unimodal; no significant differences between the G and G 2 gain distributions w ere f ound, so this is not the cause f or the obser v ed bimodality . The obser v ed gain variations are significant and pro vide insights into camera design and fabrication, specifically the or igins of some of the phenomena seen in flat-field data (Sect. 4.7). Ho we v er, they are not necessar y in the radiometric calibration of data, since our flat-field correction method (Sects. 3.8 and 4.7) accounts for these inter -pixel gain variations as well as all other differences in sensitivity , such as vignetting, as discussed in Sect. 3.10. 0.5 1.5 iPhone SE (ISO 88) Galaxy S8 (ISO 200) 0.5 1.5 Frequency 0.5 1.0 1.5 2.0 2.5 G a i n ( A D U / e ) 0.5 1.5 0.5 1.0 1.5 2.0 2.5 G a i n ( A D U / e ) Fig. 8. Histogram of gain v alues in the R (top), G and G 2 (middle), and B pix els (bottom) in the iPhone SE (left; ISO speed 88) and Galaxy S8 (right; ISO speed 200) sensors. The v ertical axes w ere normalized to account for the different amounts of pix els. 4.7. Flat-field correction Three smartphones (iPhone SE and 6S, and Galaxy S8) and one U A V (DJI Phantom Pro 4) w ere flat-fielded using an integ rating sphere as described in Sect. 3.8. An aper ture of f / 2 . 8 w as used f or the Phantom Pro 4, and on each device the maximum f ocus was used. 300 imag es w ere taken with the iPhone SE and Galaxy S8, 224 with the Phantom Pro 4, and 30 with the iPhone 6S. The latter w as flat-fielded using a different set-up, with a Ne wpor t 819D-SF-4 integ rating sphere and taking only 30 images as this was sufficient f or an SNR >3 in >99% of its pix els. Significant vignetting w as f ound in all cameras. The observed cor rection factors of the iPhone SE, the best-fitting model, and residuals betw een the tw o are shown in F ig. 9. The smooth pattern sugges ts optical vignetting is the main flat-field component; the same is tr ue in the iPhone 6S and Galaxy S8. The Phantom Pro 4 data show ed an additional steep cut-off near the cor ners, sugges ting mechanical vignetting. T o counteract the latter , the outermost 250 pix els on all sides of the images from all cameras w ere remo ved pr ior to fur ther analy sis. Cor rection f actors up to 2.42 (iPhone SE), 2.03 (iPhone 6S), 1.43 (Galaxy S8), and 2.79 (Phantom Pro 4) were obser v ed. No significant chromatic differences were f ound, so the recombined data w ere used instead of separate RGBG 2 data. 1.2 1.6 2.0 2.4 g ( O b s e r v e d ) 1.2 1.6 2.0 2.4 g ( B e s t f i t ) 0.02 0.00 0.02 Residual Fig. 9. Flat-field cor rection f actor g f or the iPhone SE camera. From left to r ight: observed values (inv erse of obser v ed relative sensitivity), bes t-fitting DNG model, and residuals. As seen in Fig. 9, the DNG model fitted the data w ell with only small residuals remaining. The RMS of the residuals, nor malized to the unsmoothed observed values, was 1.5% (iPhone SE), 1.4% (Galaxy S8), 3.1% (iPhone 6S), and 2.0% (Phantom Pro 4). These differences drop to ≤ 0.7% on all cameras when using the spatially smoothed data, implying that they are mostl y due to pixel-b y-pix el variations and noise in the obser v ations. These small residuals sho w that the DNG model is an adequate approximation f or most applications; a pixel-b y-pix el map per device is necessar y only if sub-percent precision is required. Estimated er rors in the model w ere <0.01 on the polynomial coefficients and < 10 − 5 on the optical center (in relativ e coordinates) f or all cameras. Anomalous dots can be seen throughout the difference image in Fig. 9, possibly due to dust par ticles or inter -pixel gain variations (Sect. 4.6). Since iOS also uses the DNG model for its inter nal vignetting cor rection, a direct compar ison betw een correction models for the iPhone SE was made. The RMS relative residual between our smoothed data and the internal model was 5.9%, more than 10 times that of our model (0.5%). While the iOS model is symmetr ic ( ˆ c x = ˆ c y = 0 . 5 ), ours had a slight offset ( ˆ c x = 0 . 494226 ( 1 ) and ˆ c y = 0 . 503718 ( 2 ) ). The polynomial coefficients all differed by >400 σ , with σ the standard er ror on our model derived b y the fitting routine. Finall y , the RMS difference between the models per pixel was 5.7%. The three alter nate methods described in Sect. 3.8 were tested on the Galaxy S8. 40 images of the o v ercast sky were taken, as w ell as 40 of the sun and 50 of a monitor with a paper diffuser . The Galaxy S8 was used because its integrating sphere data show a larg e asymmetr y ( ˆ c x = 0 . 449391 ( 5 ) , ˆ c y = 0 . 426436 ( 9 ) ), pro viding a simple comparison metr ic. The RMS difference between the smoothed data from the integrating sphere and alter nativ e methods relativ e to the sphere data w ere 4%, 4%, and 5%, respectivel y . The best-fitting optical centers of all three data sets differed significantly both from the sphere data and from each other ( ˆ c x = 0 . 53447 ( 1 ) , 0 . 501989 ( 4 ) , 0 . 490794 ( 4 ) and ˆ c y = 0 . 38837 ( 2 ) , 0 . 449426 ( 7 ) , 0 . 477590 ( 7 ) , f or the sky , sun, and monitor methods, respectiv ely). This causes a typical sy stematic er ror on the order of 5% in all three cases. Finally , six replicate measurement sets (50 images each) w ere taken using the monitor method to assess the effects of nonuniformities in the paper diffusers, generating a cor rection model for each set. The typical variation, expressed as the RMS of the standard de viation per pixel relativ e to the mean value per pix el, w as 3%, smaller than the typical de viations betw een the do-it-y ourself methods and g round tr uth data. The effect of paper nonunif or mities thus does not significantl y impact the quality of do-it-yourself data. The flat-field cor rection is incor porated in the radiometric cor rection e xpressed in Eq. (6) as the factor g = g ( x , y ) . For cameras with a fixed aper ture, such as smar tphones, one measurement is sufficient; other wise, g varies with aper ture. This cor rects f or the sy stematic er ror induced by flat-fielding effects but pix els at the edges still receiv e fe wer photons than those in the center . The former cor respondingl y hav e a smaller SNR due to shot noise, scaling as S N R ∝ g − 1 / 2 . Theref ore, objects of interest are pref erably imaged near the optical center of the camera. 4.8. Spectral response T wo smar tphones (iPhone SE and Galaxy S8) and one U A V (D JI Phantom Pro 4) w ere spectrall y calibrated using a monochromator , and the iPhone SE using iSPEX, as described in Sect. 3.9. Figure 10 sho ws the nor malized spectral response cur v es der iv ed from the monochromator data, calibrated to the spectral throughput of the monochromator and spectral ir radiance of the light source. This calibration was done b y measur ing its output under the same conditions as during the measurements, using a pre-calibrated silicon photodiode. Parts of the spectra were measured with different exposure settings and monochromator filters; these w ere first calibrated and then nor malized and av eraged on ov erlapping sections. The peak response wa velengths and effectiv e bandwidths of the R GBG 2 filters in the different cameras are giv en in T able 1. Some similarities and differences between the cameras are immediately obvious from Fig. 10 and T able 1. Notabl y , no significant differences between G and G 2 w ere f ound in an y camera (RMS differences ≤ 0.003); the different peak wa velength f or the Phantom Pro 4 is likel y due to 400 450 500 550 600 650 700 Wavelength (nm) 0.0 0.2 0.4 0.6 0.8 1.0 Relative sensitivity iPhone SE Phantom Pro 4 Galaxy S8 Fig. 10. Spectral response cur v es of the iPhone SE, Galaxy S8, and Phantom Pro 4, der iv ed from monochromator data. The responses are nor malized to the global maximum per camera, giving relativ e sensitivities. G is the av erage of the G and G 2 responses o v er the wa velength axis, since no significant differences w ere f ound. RMS er rors are ≤ 0.005. Camera λ P , R Λ R λ P , G Λ G λ P , G 2 Λ G 2 λ P , B Λ B iPhone SE 596 72 524 110 524 109 458 93 Galaxy S8 594 73 524 109 524 109 468 117 Phantom Pro 4 594 65 524 115 532 116 468 94 T able 1. Peak response w av elength λ P , C and effectiv e spectral bandwidth Λ C of each filter in the three cameras, derived from monoc hromator measurements. All values are in nm. noise. The peak response wa velengths are v ery similar or ev en identical between cameras, as are the effectiv e bandwidths, with two notable ex ceptions. The Galaxy S8 B filter is significantly broader than the others, with a comparativel y high response at λ > 500 nm. Con v ersely , the Phantom Pro 4 has a relativel y nar ro w R filters due to its NIR cut-off around 670 nm rather than 680 nm. Moreo v er , the R filters in all three cameras show a secondar y peak around 535 nm and nearl y identical responses betw een 570–650 nm. The spectral response cur v es measured with iSPEX, sho wn in Fig. 11, were similar to those derived from the monochromator data but sho wed small though significant sys tematic differences. No significant differences were f ound between nar ro w- and broad-slit spectra, so these were a v eraged. RMS differences betw een iSPEX - and monochromator -derived responses were 0.04, 0.02, and 0.02 (SMAR TS2 normalization) and 0.12, 0.11, and 0.10 (black -body normalization), in R GB respectivel y . The black -body under-es timated the ir radiance <500 nm and o ver -estimated it >500 nm compared to the SMAR TS2 model, resulting in larg e deviations in the retriev ed spectral response. The RMS difference between the monochromator -der iv ed and black -body-normalized iSPEX -derived spectral responses could be reduced to 0.05, 0.11, and 0.04 b y multiplying each filter with an empirical constant. How ev er , sys tematic differences >0.2 remained in the G filter at wa v elengths of 500-600 nm. Con versel y , the SMAR TS2-nor malized iSPEX -der iv ed spectral responses only show ed a significant sys tematic difference compared to monochromator data at wa v elengths >650 nm, the or igins of which are unclear . The observed differences betw een devices ha ve important implications for R GB color measure- ments and spectroscop y , for ex ample f or color measurements as discussed in Sect. 1. The 400 450 500 550 600 650 700 Wavelength (nm) 0.0 0.2 0.4 0.6 0.8 1.0 Relative sensitivity Monochromator iSPEX (SMARTS2) iSPEX (black-body) Fig. 11. Comparison of the iPhone SE spectral response curves measured with the monochromator and iSPEX. iSPEX data are nor malized using a 5777 K black -body and a SMAR TS2 model, as described in Sect. 3.9. effectiv e spectral bandwidths are incorporated into the radiometr ic calibration of imaging data as described in Sect. 3.10. Further more, smar tphone spectrometers naturally require calibration f or the spectral response of the camera, as expressed in Eq. (7). 4.9. SPECT A CLE database T o f acilitate the use of consumer cameras in scientific projects and improv e future compatibility , w e ha v e created the SPECT A CLE (Standardised Photographic Equipment Calibration T echnique And CataLoguE) database. It includes the calibration data required for radiometric cor rections (Sect. 3.10), for the most popular cameras. The data are giv en in standardized f ormats, split into three categories (device, camera, and software) to minimize the amount of data required. For e xample, tw o devices using the same camera module hav e the same spectral response cur v es and flat-field response, while softw are parameters such as bias and ISO speed settings vary . The f ormer can thus be combined while keeping the latter separate. Since the proper ties of a camera ma y chang e with fir m ware updates or chang es in manuf actur ing, database entries may be split according to device version, rather than assuming devices of the same model are clones. Finall y , giv en calibration data for multiple identical devices, statistics on variations within a camera model may be included. The open design of the SPECT A CLE database, based on the Parse platf or m, allo ws any one to use or contribute data, par ticularl y using the calibration apps w e ha v e dev eloped. Submitted data are cur rentl y curated b y the authors to ensure their quality . As the database gro ws, community curation or automated curation based on outlier analysis may become preferable. SPECT A CLE can be accessed at http://spectacle.ddq.nl/ . 5. Discussion & conclusions In this work, w e ha ve presented a standardized calibration methodology for the most impor tant factors limiting the quality of consumer camera data, the first to our know ledge. Further more, w e ha ve dev eloped the SPECT A CLE database, containing calibration data f or the most popular devices. The standardized methodology and the SPECT A CLE database ha v e the potential to impro ve the sustainability of projects using these cameras, by simplifying their calibration and the use of multiple camera models. The main difference between our approach and those in much of the literature is the use of RA W data. Software constraints pre viously f orced the use of JPEG data, which are compressed and hea vily processed, introducing systematic effects that negativ ely affect the data quality and are difficult to calibrate [2, 5, 9, 10, 24, 32, 40, 49, 50]. The desire to use RA W data has been expressed widely in the literature [2, 9, 19, 21, 24, 40, 42, 57], and their superior ity is clearly demonstrated b y the highl y linear response and larger dynamic range f ound in Sect. 4.2. The former is especially notable since nonlinearity and the associated gamma correction are among the most cited problems of JPEG data [1, 2, 9 – 12, 19, 22, 24, 28, 32, 50 – 53, 55]. While JPEG nonlinearity cor rections e xist, either full y empirical or based on the sRGB standard [9, 32, 51], the wide (1.7–2.6) v ariations in gamma and larg e (>30%) deviations from sR GB profiles sho wn in Sect. 4.2 and Fig. 4 indicate that these are inaccurate and difficult to generalize. The highly linear nature of RA W data was pre viously demonstrated in [54, 57, 60] and may be a result of inter nal linear ity cor rections in the CMOS chip [69]. Fur thermore, RA W data are not affected by white balance, a color cor rection in JPEG processing which sev erely affects colorimetr ic measurements, is difficult to calibrate, and differs strongly betw een measurements and cameras [1, 4, 8 – 10, 13, 32, 40, 42, 73]. This variable gamma cor rection and white balance make it impossible to in vert the JPEG algor ithm and recov er RA W data. How ev er, RA W data are no panacea, since they still require fur ther calibrations. Fur thermore, not all consumer cameras suppor t RA W imagery , especially lo w-end smartphones; hence the low adoption rate in literature until now . S till, w e consider the linear ity , larg er dynamic range, and lack of unknown post-processing affecting the data w or th relying on RA W data, especially in a market trending tow ards broader support. Inter -pixel and inter -device bias v ariations and read noise were f ound to be small in general ( σ <1 for bias variations, mean R ON <3 ADU), though with larg e outliers (Sect. 4.3). These distributions are similar to those f ound in sev eral smar tphones [14] and a Raspber ry Pi camera [57], though neither work distinguishes between bias variations, read noise, and dark cur rent. The larg e-scale patter ns seen in Fig. 5 were not found in the literature. Their cause is unclear , though correlations with inter -pixel gain variations (Sect. 4.6) sugg est a common or igin. Ultimately , since both phenomena are small, f or most applications these patter ns are merely a cur iosity and an estimate in the error budget and masking of outliers is sufficient for fur ther radiometric calibrations (Sect. 3.10). While dark cur rent has been implicated in the literature as a major noise source [5, 14, 34, 46, 56 – 58], the results presented in Sect. 4.4 indicate that it is actuall y typically quite minor . The RMS dark cur rent in the iPhone SE and Galaxy S8 (<2 ADU s − 1 ) is similar to v alues f ound in [5, 51, 56, 58], while w e f ound larg er outliers, such as >300 pix els with >50 ADU s − 1 in our Galaxy S8. Similarl y to [58], no significant relationship was found betw een temperature and residual dark cur rent, though this experiment should be repeated under more controlled conditions and using inter nal sensor temperatures to dra w strong conclusions. In general, a quantitativ e comparison with the literature is difficult, since those studies used JPEG data, not RA W . While our sample of tw o cameras is insufficient to draw broad conclusions, these results sugges t that dark cur rent is less impor tant than previousl y thought. As discussed in Sect. 4.4 and similarly to the aforementioned bias and RON v ar iations, e xtensive characterization of the dark cur rent in individual pixels is necessar y for lo w-light applications and spectroscop y as these are significantly affected by a fe w ‘bad’ pix els. Conv ersely , f or br ight-light conditions the dark response is typically negligible and an ensemble estimate in the er ror budget and masking of outliers are sufficient. ISO speed normalization is typically done by simply dividing digital v alues by the ISO speed [2, 61], but the results presented in Sect. 4.5 and Fig. 6 contradict the validity of this method. This discrepancy was also identified in [62]. Obser v ed relations differ significantl y from the naïv e linear model in shape, offset and slope. F or e xample, differences betw een the tw o models of >5% were f ound in the Galaxy S8. More e xtremely , the e xpected and obser v ed normalization factor at ISO speed 1840 on the iPhone SE differ by a factor of 10. Moreov er , Android documentation suggests that more complex cur v es with mixed analog and digital gain ma y also be in use. Thus, to prev ent similar systematic er rors, either a single ISO speed per device must be used or these relations must be calibrated. Significant inter -pixel gain variations w ere f ound in Sect. 4.6, as sho wn in Figs. 7 and 8. The Galaxy S8 show ed a strong radial patter n, likel y intended as a first-order vignetting cor rection; this was not seen in the iPhone SE. Conv ersely , gain values in the latter differed significantly betw een color filters. This ma y be a color cor rection called analog white balance, which is described in the DNG s tandard [63]; how ev er , in this case it is not clear why significant inter-pix el variations e xist. No previous discussion of such variations in gain in a consumer camera was f ound in the literature. T ypically , an equal gain in all pixels is assumed in absolute radiometr ic calibrations [57, 62] but the variations f ound here cast doubt on the generality of this assumption. Strong flat-field effects were f ound in Sect. 4.7, with cor rection factors up to 2.79. Similarl y larg e cor rection factors hav e been found f or other cameras, f or instance appro ximately 2.8 in a Canon EOS 400D [19] and 650D [20], 4 in a Raspber ry Pi camera [57], 1.8 in a Canon EOS 10D [71], and 1.5 in a Nikon E775 [72]. It should be noted that vignetting is highly aperture-dependent and thus these cor rection factors will chang e with varying aper tures [71]. Interestingl y , w e did not find the larg e chromatic differences described in [19, 57]. Notabl y , the Galaxy S8 sho w ed a much weaker vignetting effect ( g m a x = 1 . 43 ) than the other cameras ( g m a x > 2 ), likel y due to the af orementioned inter -pix el gain variations. These may also explain the strong asymmetry ( ˆ c x = 0 . 449391 ( 5 ) , ˆ c y = 0 . 426436 ( 9 ) ) seen in the Galaxy S8, due to the main symmetr ical component having been cor rected already . The 7-parameter vignetting model descr ibed in the DN G standard [63] fits our data v er y w ell (RMSE ≤ 3.1% f or raw data, ≤ 0.7% f or smoothed data), without significant sys tematic differences. Since the typical difference between observed and modeled cor rections is small, pix el-b y-pix el flat-fielding is necessar y only f or applications requir ing sub-percent precision. For those, a flat-field map would be made f or each individual device, rather than using the same map f or multiple devices of the same model. Flat-field measurements of the latter could be used to quantify typical variations in flat-field response among identical devices and fur ther deter mine when pixel-b y-pix el or modeled flat-field cor rections are pref erable. The DNG model is also used for inter nal vignetting correction in iOS. While this cor rection is sometimes considered a major advantag e of JPEG data ov er RA W data, the inter nal model of the iPhone SE was sho wn to be significantly less accurate (RMSE = 5 . 9 %) than one based on our data (RMSE = 0 . 5 %), contradicting this notion. Moreov er , residual vignetting effects up to 15% hav e been obser v ed in JPEG data [51]. A comparison to the inter nal correction data in Android smar tphones, consisting of pixel-b y-pix el look -up tables, has not yet been done since these data are relativ ely difficult to access. Finall y , three simpler alter nativ e flat-fielding methods w ere tested, namel y imaging the sky , the sun, and a computer monitor, as descr ibed in Sect. 3.8. Applied on the Galaxy S8, data from these methods differed from the integ rating sphere data by ≤ 5% RMS. These errors mainly result from a difference in the location of the optical center . The cause of these discrepancies is unclear, though insufficientl y isotropic light sources are an obvious e xplanation. Ne vertheless, the RMS difference of ≤ 5% is small compared to the ov erall flat-field cor rection of up to 179% and better than the inter nal cor rection of the iPhone (RMS 5.9%). These methods thus ser v e as a useful first estimate for the flat-field cor rection in the absence of integ rating sphere data. As discussed in Sect. 2, many fur ther alter nativ e flat-fielding methods exis t [19, 20, 70–74]. Our data ma y be useful as a ground tr uth f or a thorough compar ison of such methods akin to [20, 54]. The spectral responses f ound in Sect. 4.8 and shown in Fig. 10 ag ree w ell with those f ound in the literature [1, 2, 13, 19, 23, 24, 46, 51, 54, 57, 59 – 62], with the R GB cur v es centered around 600, 520, and 470 nm, respectivel y . Notably , the strong secondar y peaks seen in [2, 51] w ere not f ound in our data and may be JPEG ar tef acts. Differences are mainly f ound in the wings, such as the NIR cut-offs [19, 46] and har monics. The comparativ ely high response of the Galaxy S8 B filter at wa velengths >500 nm is also seen in the Nokia N900 [23] and Sony A7SII [1], and to a lesser extent the Galaxy S5 [2], but is otherwise uncommon. The earl y NIR cut-off of the Phantom Pro 4 appears to be similarl y uncommon but not unique [1, 2, 23, 46]. These differences again sho w the impor tance of spectral characterization f or normalizing smar tphone spectrometer data. Further more, the significant variations show that the common assumption of sR GB responses [9, 22] does not hold, as has been sugg ested pre viously [21], and characterization of the spectral response is necessar y to conv er t obser v ed colors to color spaces such as CIE 1931 XYZ or CIELAB [23, 25]. Ho w ev er , color measurements still depend on the incident light spectrum [25]; hyperspectral measurements, f or e xample with iSPEX [10], and characterization of common light sources [1, 78] ma y pro vide valuable additional inf ormation. Finall y , while no significant response was f ound at wa velengths <390 or >700 nm on our test cameras, it may be w or thwhile in the future and the SPECT A CLE database to use a spectral rang e of 380-780 nm to f ollow color imetric standards [25, 52, 56]. Spectral response measurements done with the iSPEX smar tphone spectrometer [10] ag reed w ell (RMS differences ≤ 0.04) with the monochromator measurements (Sect. 4.8 and Fig. 11). The only sys tematic difference w as an under -estimation at wa velengths >650 nm, though it is unclear what causes this. The good ag reement sho ws that iSPEX measurements are an adequate replacement f or monochromator data if the latter are not av ailable. This will be especially useful with the new iSPEX w e are dev eloping, which will also f eature univ ersal smar tphone hardware interface. One do wnside of this method is that it requires an accurate solar reference spectr um. W e used one generated with SMAR TS2 [76, 77]; this model matches observed solar spectra v er y well but it is not very por table or user -friendly for non-e xper t users. A 5777 K black -body appro ximation was also used but reproduced the SMAR TS2 spectrum poorl y (RMSE of 34%) and accurate spectral response cur v es could not be retr ie v ed this wa y . A more portable model or set of standard spectra could improv e the user-friendliness of this calibration method. Further alter nativ e methods for spectral response characterization include those based on multispectral measurements using computational methods to enhance their resolution [25, 68, 82, 83] or those using a linear variable edg e filter [24]. Ho we ver , the former are not sufficiently accurate [60] while the latter is not necessarily more accessible than a monochromator . Our data ma y be used as a g round-truth for testing other methods akin to [60] but with the adv antage of smartphones being more accessible than the cameras used therein. Finall y , we ha v e created the SPECT A CLE database containing the calibration data descr ibed abo ve. The aim of this database is to f acilitate the use of consumer cameras in scientific projects b y reducing the labor required f or calibration. Data sets containing spectral responses [23, 60] and e xtensive calibrations of single cameras [57] hav e been published bef ore but to our kno wledg e SPECT A CLE is the first comprehensiv e, centralized spectral and radiometric calibration database. It is designed with community par ticipation in mind, rel ying on volunteer contr ibutions to become and remain complete in the rapidly ev ol ving camera market. This will require a cr itical mass of users to maintain it, which is easier if more accessible calibration methods, like those discussed pre viously , can be used. W e ha ve kick -started this process with the calibrations done in this paper and will continue this while dev eloping iSPEX. Though e xtensiv e, our calibration methodology is not complete. The two most prominent missing components are geometric distortions and absolute radiometr ic calibration. The former are a well-kno wn phenomenon with a larg e impact on image quality but relativel y simple to measure and cor rect [11, 16 – 18, 45, 74]. A parametr ic model f or distortion is giv en in the DNG standard [63] and a comparison between measured distortions and the inter nal cor rection models of different cameras, similar to that done in Sect. 4.7 for vignetting corrections, may be used to determine the accuracy of the latter . Absolute radiometr ic calibration is e xtremely valuable for quantitativ e measurements, as described in Sect. 3.10. In pr inciple, our methods and calibration data contain most of the information required f or this, bar a constant K . Absolute radiometric calibration of consumer cameras has been demonstrated bef ore, notably in the Raspber ry Pi camera [57], and Nikon D300 and Canon 40D [62], though onl y f or a small number of devices. Another notable example is the Hukseflux Pyranometer app (Sect. 3.8) f or measurements of solar ir radiance, though it is intended f or education and enter tainment rather than scientific measurements. Finall y , most of our calibrations were done on a single device, and differences betw een devices ma y e xist, as sho wn in Fig. 5. Calibration of multiple devices per camera model would allo w the character ization of these differences and the associated er rors when using multiple devices. A dditionally , differences may be introduced by chang es in manufacturing or camera software. Character ization of different generations of the same model camera will be necessary to characterize these, which ma y result in separate entries in the SPECT A CLE database f or each camera version being necessary . Ho we ver , the modular design of the SPECT A CLE database makes it simple to e xtend. The simple, standardized calibration methods described in this work and the SPECT A CLE database hav e the potential to greatly improv e the data quality and sustainability of future scientific projects using consumer cameras. Funding European Commission Hor izon 2020 program (grant nr . 776480, MONOCLE and grant nr . 824603, A CTION) Ackno wledgments The authors wish to thank Molly and Chr is MacLellan of the NER C Field Spectroscopy Facility f or experimental help and inv aluable insights in the flat-field and spectral response measurements. Figure 1 was drawn using draw .io . Data analy sis and visualization were done using the AstroPy , ExifRead, Matplotlib, NumPy , RawPy , and SciPy librar ies f or Python. Finall y , the authors wish to thank the tw o anonymous revie wers f or their thorough and constr uctiv e re vie ws. This project has received funding from the European Union ’ s Horizon 2020 research and innov ation programme under grant agreement No 776480. References 1. A. Sánchez de Miguel, C. C. K yba, M. Aubé, J. Zamorano, N. Cardiel, C. T apia, J. Bennie, and K. J. Gas ton, “Colour remote sensing of the impact of artificial light at night (I): The potential of the International Space Station and other DSLR -based platf or ms, ” Remote. Sens. En viron. 224 , 92–103 (2019). 2. T . Leeuw and E. Boss, “The HydroColor app: Abo ve w ater measurements of remote sensing reflectance and turbidity using a smartphone camera, ” Sensors 18 , 256 (2018). 3. Y . Y ang, L. L. Cow en, and M. Cos ta, “Is ocean reflectance acquired by citizen scientists robust for science applications?” Remote. Sens. 10 , 835 (2018). 4. J. B. Gallagher and C. H. Chuan, “Chlorophy ll a and turbidity distributions: Applicability of using a smar tphone app across two contrasting bay s, ” J. Coast. Res. 34 , 1236–1243 (2018). 5. D. P . Igoe, A. V . Parisi, A. Amar, N. J. Downs, and J. T urner, “Atmospheric total ozone column e valuation with a smartphone image sensor, ” Int. J. Remote. Sens. 39 , 2766–2783 (2018). 6. A. Hänel, T . Posch, S. J. Ribas, M. Aubé, D. Duriscoe, A. Jecho w , Z. K ollath, D. E. Lolkema, C. Moore, N. Schmidt, H. Spoelstra, G. W uchterl, and C. C. K yba, “Measuring night sky br ightness: methods and challenges, ” J. Quant. Spectrosc. Radiat. T ransf. 205 , 278–290 (2018). 7. F . Cai, W . Lu, W . Shi, and S. He, “A mobile device-based imaging spectrometer for environmental monitoring b y attaching a lightw eight small module to a commercial digital camera, ” Sci. Reports 7 , 15602 (2017). 8. A. Fr iedrichs, J. A. Busch, H. J. van der W oerd, and O. Zielinski, “SmartFluo: A method and affordable adapter to measure chlorophy ll a fluorescence with smar tphones, ” Sensors 17 , 678 (2017). 9. S. Nov oa, M. R. W er nand, and H. J. v an der W oerd, “W AC ODI: A generic algor ithm to deriv e the intrinsic color of natural waters from digital images, ” Limnol. Oceanogr. Methods 13 , 697–711 (2015). 10. F . Snik, J. H. H. Rietjens, A. Apitule y , H. V olten, B. Mijling, A. Di Noia, S. Heikamp, R. C. Heinsbroek, O. P . Hasekamp, J. M. Smit, J. V onk, D. M. Stam, G. v an Har ten, J. de Boer, C. U. Keller , and . ISPEX citizen scientists, “Mapping atmospher ic aerosols with a citizen science network of smar tphone spectropolarimeters,” Geoph ys. Res. Lett. 41 , 7351–7358 (2014). 11. S. Sumriddetchkajorn, K. Chaitav on, and Y . Intarav anne, “Mobile-platform based colorimeter for monitor ing chlorine concentration in w ater, ” Sensors Actuators B: Chem. 191 , 561–566 (2014). 12. A. Kreuter , C. Emde, and M. Blumthaler , “Measuring the influence of aerosols and albedo on sky polarization, ” Atmospheric Res. 98 , 363–367 (2010). 13. L. Goddijn-Murphy , D. Dailloux, M. White, and D. G. Bow ers, “Fundamentals of in situ digital camera methodology f or water quality monitoring of coast and ocean,” Sensors 9 , 5825–5843 (2009). 14. D. Whiteson, M. Mulhearn, C. Shimmin, K. Cranmer , K. Brodie, and D. Bur ns, “Searching f or ultra-high energy cosmic rays with smar tphones, ” Astropar t. Ph ys. 79 , 1–9 (2016). 15. M. Ruwaimana, B. Satyanara yana, V . Otero, A. M. Muslim, M. Syafiq A., S. Ibrahim, D. Ra ymaekers, N. K oedam, and F . Dahdouh-Guebas, “The advantag es of using drones o ver space-borne imag ery in the mapping of mang ro ve f orests, ” PLOS ONE 13 , e0200288 (2018). 16. J. Rasmussen, G. Ntakos, J. Nielsen, J. Svensgaard, R. N. Poulsen, and S. Christensen, “Are v egetation indices derived from consumer -grade cameras mounted on U A Vs sufficientl y reliable for assessing experimental plots?” Eur. J. Agron. 74 , 75–92 (2016). 17. K. Flynn and S. Chapra, “Remote sensing of submerg ed aquatic vegetation in a shallo w non-turbid river using an unmanned aer ial vehicle, ” Remote. Sens. 6 , 12815–12836 (2014). 18. X. Liang, A. Jaakk ola, Y . W ang, J. Hyyppä, E. Honkav aara, J. Liu, and H. Kaar tinen, “The use of a hand-held camera f or individual tree 3D mapping in f orest sample plots, ” Remote. Sens. 6 , 6587–6603 (2014). 19. V . Lebour geois, A. Bégué, S. Labbé, B. Malla van, L. Prév ot, and B. R oux, “Can commercial digital cameras be used as multispectral sensors? A crop monitoring tes t, ” Sensors 8 , 7300–7322 (2008). 20. A. Kordecki, A. Bal, and H. Palus, “A study of vignetting cor rection methods in camera colorimetric calibration, ” in Proc. SPIE 10341, v ol. 10341 (Inter national Society for Optics and Photonics, 2017), p. 103410X. 21. R. Nguyen, D. K. Prasad, and M. S. Bro wn, “Ra w-to-ra w: Mapping betw een image sensor color responses, ” in The IEEE Confer ence on Computer Vision and P attern Recognition (CVPR), (2014), pp. 3398–3405. 22. R. Char rière, M. Héber t, A. T rémeau, and N . Destouches, “Color calibration of an R GB camera mounted in front of a microscope with strong color distor tion, ” Appl. Opt. 52 , 5262–5271 (2013). 23. J. Jiang, D. Liu, J. Gu, and S. Susstrunk, “What is the space of spectral sensitivity functions f or digital color cameras?” in 2013 IEEE W orkshop on Applications of Computer V ision (WA CV), (IEEE, 2013), pp. 168–179. 24. D. L. Bongiorno, M. Br y son, D. G. Dansereau, and S. B. Williams, “Spectral characterization of COTS RGB cameras using a linear v ariable edge filter, ” in Proc. SPIE 8660, Digital Photogr aphy IX, N. Sampat and S. Battiato, eds. (International Society for Optics and Photonics, 2013), p. 86600N. 25. V . Cheung, C. Li, J. Hardeberg, D. Connah, and S. W estland, “Character ization of tr ic hromatic color cameras by using a ne w multispectral imaging technique, ” J. Opt. Soc. Am. A 22 , 1231–1240 (2005). 26. A. Khedekar, B. Dev arajan, K. Ramasamy , V . Muthukkar uppan, and U. Kim, “Smar tphone-based application impro ves the detection of retinoblastoma, ” Ey e (2019). 27. R. G. Mannino, D. R. Myers, E. A. T yburski, C. Caruso, J. Boudreaux, T . Leong, G. D. Clifford, and W . A. Lam, “Smartphone app for non-inv asive detection of anemia using onl y patient-sourced photos, ” N at. Commun. 9 , 4924 (2018). 28. H. Ding, C. Chen, S. Qi, C. Han, and C. Y ue, “Smar tphone-based spectrometer with high spectral accuracy for mHealth application,” Sensors Actuators A: Ph ys. 274 , 94–100 (2018). 29. L.-J. W ang, Y .-C. Chang, R. Sun, and L. Li, “A multichannel smar tphone optical biosensor f or high-throughput point-of-care diagnostics, ” Biosens. Bioelectron. 87 , 686–692 (2017). 30. K. D. Long, E. V . W oodbur n, H. M. Le, U. K. Shah, S. S. Lumetta, and B. T . Cunningham, “Multimode smartphone biosensing: the transmission, reflection, and intensity spectral (TRI)-analyzer, ” Lab on a Chip 17 , 3246–3257 (2017). 31. A. Lam and Y . Kuno, “Robus t heart rate measurement from video using select random patches, ” in The IEEE International Confer ence on Computer Vision (ICCV), (2015), pp. 3640–3648. 32. A. Skandara jah, C. D. R eber, N . A. Switz, D. A. Fletcher , and A. J. Kabla, “Quantitativ e imaging with a mobile phone microscope,” PLOS ONE 9 , e96906 (2014). 33. Z. J. Smith, K. Chu, A. R. Espenson, M. Rahimzadeh, A. Gryshuk, M. Molinaro, D. M. Dwyre, S. Lane, D. Matthe ws, and S. W achsmann-Hogiu, “Cell-phone-based platform for biomedical device dev elopment and education applications, ” PLOS ONE 6 , e17150 (2011). 34. H. Kim, Y . Jung, I.-J. Doh, R. A. Lozano-Mahecha, B. Appleg ate, and E. Bae, “Smar tphone-based lo w light detection f or bioluminescence application, ” Sci. Reports 7 , 40203 (2017). 35. M. Grossi, “A sensor -centr ic sur v ey on the development of smartphone measurement and sensing systems, ” Measurement 135 , 572–592 (2019). 36. A. A. Zaidan, B. B. Zaidan, O. S. Albahri, M. A. Alsalem, A. S. Albahr i, Q. M. Y as, and M. Hashim, “A re view on smartphone skin cancer diagnosis apps in evaluation and benchmarking: coherent tax onomy , open issues and recommendation pathway solution, ” Heal. T echnol. 8 , 223–238 (2018). 37. S. Kanchi, M. I. Sabela, P . S. Mdluli, Inamuddin, and K. Bisetty , “Smartphone based bioanalytical and diagnosis applications: A review, ” Biosens. Bioelectron. 102 , 136–149 (2018). 38. X. Huang, D. X u, J. Chen, J. Liu, Y . Li, J. Song, X. Ma, and J. Guo, “Smartphone-based analytical biosensors, ” Analy st 143 , 5339–5351 (2018). 39. A. J. S. McGonigle, T . C. Wilk es, T . D. Pering, J. R. W illmott, J. M. Cook, F . M. Mims, and A. V . P ar isi, “Smartphone spectrometers, ” Sensors 18 , 1–15 (2018). 40. G. Rateni, P . Dario, and F . Cavallo, “Smartphone-based f ood diagnostic technologies: A revie w, ” Sensors 17 , 1453 (2017). 41. F. Li, Y . Bao, D. W ang, W . W ang, and L. Niu, “Smartphones f or sensing, ” Sci. Bull. 61 , 190–201 (2016). 42. K. E. McCrack en and J.- Y . Y oon, “Recent approaches f or optical smar tphone sensing in resource-limited settings: a brief re view, ” Anal. Methods 8 , 6591–6601 (2016). 43. R. A. Crocombe, “P or table Spectroscop y, ” Appl. Spectrosc. 72 , 1701–1751 (2018). 44. S. Manfreda, M. McCabe, P . Miller, R. Lucas, V . Pa juelo Madrigal, G. Mallinis, E. Ben Dor, D. Helman, L. Estes, G. Ciraolo, J. Müllerová, F . T auro, M. de Lima, J. de Lima, A. Maltese, F . Frances, K. Cay lor , M. K ohv , M. Perks, G. Ruiz-Pérez, Z. Su, G. Vico, and B. T oth, “On the use of unmanned aerial sys tems for environmental monitor ing, ” Remote. Sens. 10 , 641 (2018). 45. F . T auro, M. Porfir i, and S. Grimaldi, “Surface flo w measurements from drones, ” J. Hydrol. 540 , 240–245 (2016). 46. E. Ber ra, S. Gibson-Poole, A. MacAr thur , R. Gaulton, and A. Hamilton, “Estimation of the spectral sensitivity functions of un-modified and modified commercial off-the-shelf digital cameras to enable their use as a multispectral imaging sy stem for U A Vs,” in International Conf erence on Unmanned Aerial V ehicles in Geomatics, (2015), pp. 207–214. 47. P . C. Gray , J. T . Ridge, S. K. Poulin, A. C. Se ymour, A. M. Schwantes, J. J. Swenson, and D. W . Johnston, “Integrating drone imagery into high resolution satellite remote sensing assessments of estuarine environments, ” R emote. Sens. 10 , 1257 (2018). 48. S. Levin, S. Krishnan, S. Ra jkumar, N. Halery , and P . Balkunde, “Monitoring of fluoride in w ater samples using a smartphone,” Sci. The T otal. En viron. 551-552 , 101–107 (2016). 49. C. Zhang, G. Cheng, P . Edw ards, M.-D. Zhou, S. Zheng, and Z. Liu, “G-Fresnel smar tphone spectrometer, ” Lab on a Chip 16 , 246–250 (2016). 50. A. Kreuter , M. Zanger l, M. Schwarzmann, and M. Blumthaler , “All-sky imaging: a simple, versatile sy stem for atmospheric research, ” Appl. Opt. 48 , 1091–1097 (2009). 51. C. A. Cobur n, A. M. Smith, G. S. Logie, and P . Kennedy , “Radiometric and spectral compar ison of inexpensiv e camera systems used f or remote sensing, ” Int. J. R emote. Sens. 39 , 4869–4890 (2018). 52. Y . Q. Xu, J. Hua, Z. Gong, W . Zhao, Z. Q. Zhang, C. Y . Xie, Z. T . Chen, and J. F . Chen, “Visible light communication using dual camera on one smar tphone, ” Opt. Express 26 , 34609–34621 (2018). 53. Y . W ang, M. M. A. Zeinhom, M. Y ang, R. Sun, S. W ang, J. N. Smith, C. Timc halk, L. Li, Y . Lin, and D. Du, “A 3D-printed, portable, optical-sensing platform for smar tphones capable of detecting the herbicide 2,4-dichloropheno xyacetic acid, ” Anal. Chem. 89 , 9339–9346 (2017). 54. A. Manako v , “Evaluation of computational radiometr ic and spectral sensor calibration tec hniques, ” in Proc. SPIE 9896, vol. 9896 (International Society f or Optics and Photonics, 2016), p. 98960O. 55. A. Cescatti, “Indirect estimates of canopy gap fraction based on the linear con version of hemispherical photographs: Methodology and comparison with standard thresholding tec hniques, ” A gr ic. For. Meteorol. 143 , 1–12 (2007). 56. J. T urner, A. V . Parisi, D. P . Igoe, and A. Amar , “Detection of ultraviolet B radiation with inter nal smartphone sensors, ” Instrumentation Sci. & T echnol. 45 , 618–638 (2017). 57. M. Pagnutti, R. E. Ry an, G. Cazena vette, M. Gold, R. Harlan, E. Leggett, and J. Pagnutti, “Laying the foundation to use Raspber ry Pi 3 V2 camera module imagery for scientific and engineering purposes, ” J. Electron. Imaging 26 , 013014 (2017). 58. D. P . Igoe, A. V . Parisi, and B. Carter, “A method f or deter mining the dark response f or scientific imaging with smartphones,” Instrumentation Sci. T echnol. 42 , 586–592 (2014). 59. D. Jav oršek, T . Jer man, and A. Ja voršek, “Comparison of two digital cameras based on spectral data estimation obtained with tw o methods, ” A cta Pol ytech. Hungarica 12 , 183–197 (2015). 60. M. M. Darrodi, G. Finla yson, T . Goodman, and M. Mac kiewicz, “Ref erence data set for camera spectral sensitivity estimation, ” J. Opt. Soc. Am. A 32 , 381–391 (2015). 61. H. Zhao, R. Ka wakami, R. T . T an, and K. Ikeuchi, “Estimating basis functions f or spectral sensitivity of digital cameras, ” in Meeting on Imag e Recognition and Unders tanding (MIRU), (2009). 62. F . Sig er nes, M. Dyr land, N. P eters, D. A. Lorentzen, T . Svenøe, K. Heia, S. Chernouss, C. S. Deehr, and M. Kosc h, “The absolute sensitivity of digital colour cameras, ” Opt. Express 17 , 20211–20220 (2009). 63. Adobe Systems Incor porated, “Digital Negativ e (DNG) specification, version 1.4.0.0, ” T ech. rep. (2012). 64. European Machine V ision Association, “EMV A standard 1288: Standard for characterization of image sensors and cameras, ” T ech. rep. (2016). 65. B. E. Bay er, “Color imaging ar ra y, ” (1975). 66. C.-H. Lin, K.-L. Chung, and C.- W . Y u, “No vel chroma subsampling strategy based on mathematical optimization for compressing mosaic videos with arbitrary R GB color filter array s in H.264/A V C and HEV C, ” IEEE Transactions on Circuits Syst. f or V ideo T echnol. 26 , 1722–1733 (2016). 67. D. Menon, S. Andr iani, and G. Cal vagno, “Demosaicing with directional filter ing and a posteriori decision, ” IEEE T ransactions on Image Process. 16 , 132–141 (2007). 68. G. Finlay son, M. M. Darrodi, and M. Mackie wicz, “Rank -based camera spectral sensitivity estimation, ” J. Opt. Soc. Am. A 33 , 589–599 (2016). 69. H. Zhang and E. Sanchez-Sinencio, “Linearization techniques for CMOS low noise amplifiers: A tutorial, ” IEEE T ransactions on Circuits Sy st. I: R egul. Pap. 58 , 22–36 (2011). 70. V . D. Silva, V . Chesnoko v , and D. Larkin, “A nov el adaptive shading correction algor ithm f or camera systems, ” Electron. Imaging 2016 , 1–5 (2016). 71. D. B. Goldman, “Vignette and exposure calibration and compensation, ” IEEE Transactions on Pattern Analy sis Mac h. Intell. 32 , 2276–2288 (2010). 72. Y . Zheng, S. Lin, C. Kambhamettu, J. Y u, and S. Bing Kang, “Single-image vignetting cor rection, ” IEEE T ransactions on Pattern Anal ysis Mach. Intell. 31 , 2243–2256 (2009). 73. S. J. Kim and M. Pollef ey s, “R obust radiometric calibration and vignetting cor rection, ” IEEE T ransactions on Pattern Analy sis Mach. Intell. 30 , 562–576 (2008). 74. W . Y u, Y . Chung, and J. Soh, “Vignetting distortion correction method f or high quality digital imaging, ” in Proceedings of the 17t h International Confer ence on P attern Recognition, 2004. ICPR 2004. , (IEEE, 2004), pp. 666–669. 75. I. S. McLean, Electronic imaging in astronomy : detect ors and ins trumentation (Springer - V er lag Ber lin Heidelberg, 2008). 76. C. A. Gue ymard, “Parameterized transmittance model for direct beam and circumsolar spectral ir radiance, ” Sol. Energy 71 , 325–346 (2001). 77. C. A. Gueymard, “SMAR TS2, A Simple Model of the Atmospheric Radiativ e T ransfer of Sunshine: Algor ithms and performance assessment,” T ech. rep., Florida Solar Energy Center/Univ ersity of Central Flor ida (1995). 78. C. E. T apia A yuga and J. Zamorano, “LICA AstroCalc, a softw are to analyze the impact of artificial light: Extracting parameters from the spectra of street and indoor lamps, ” J. Quant. Spectrosc. Radiat. T ransf. 214 , 33–38 (2018). 79. J. R. Schott, Remo te sensing: the imag e chain approach (Oxf ord U niversity Press, 2007), 2nd ed. 80. U . Beisl, “Absolute spectroradiometric calibration of the ADS40 sensor, ” in Proc. of the ISPRS Commission I Symposium "Fr om Sensors to Imag er y", P aris – Marne-la- V allée, (2006). 81. S. W . Bro wn, T . C. Larason, C. Habauzit, G. P . Eppeldauer, Y . Ohno, and K. R. Lykk e, “Absolute radiometric calibration of digital imaging sys tems, ” in Proc. SPIE 4306, vol. 4306 M. M. Blouke, J. Canosa, and N. Sampat, eds. (International Society for Optics and Photonics, 2001). 82. S. Chaji, A. P our reza, H. P our reza, and M. R ouhani, “Estimation of the camera spectral sensitivity function using neural lear ning and architecture, ” J. Opt. Soc. Am. A 35 , 850–858 (2018). 83. P .-K. Y ang, “Deter mining the spectral responsivity from relative measurements b y using multicolor light-emitting diodes as probing light sources, ” Optik 126 , 3088–3092 (2015).

Original Paper

Loading high-quality paper...

Comments & Academic Discussion

Loading comments...

Leave a Comment