Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals

Manuel J. A. Eugster 1*†, Tuukka Ruotsalo 1*, Michiel M. Spapé 1*, Oswald Barral 2, Niklas Ravaja 1,3, Giulio Jacucci 1,2, Samuel Kaski 1,2†

1 Helsinki Institute for Information Technology HIIT, Department of Computer Science, Aalto University, Finland.
2 Helsinki Institute for Information Technology HIIT, Department of Computer Science, University of Helsinki, Finland.
3 Helsinki Institute for Information Technology HIIT, Department of Social Research, University of Helsinki, Finland.

* equal contribution
† corresponding authors

Abstract

Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user's interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual's search intent was modeled and successfully used for retrieving new, relevant documents from the whole English Wikipedia corpus. The results show that the users' interests towards digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction, and may be applied across diverse information-intensive applications.

1 Introduction

Documents on the World Wide Web, and seemingly countless other information sources available in a variety of on-line services, have become a central resource in our day-to-day decisions. As our capabilities are limited in finding relevant information from large collections, computational recommender systems have been introduced to alleviate information overload [12]. To predict our future needs and intentions, recommender systems rely on the history of observations about our interests [37]. Unfortunately, people are reluctant to provide explicit feedback to recommender systems [18]. As a consequence, acquiring information about user intents has become a major bottleneck to recommendation performance, and sources of information about the individual's interests have been limited to the implicit monitoring of online behavior, such as which documents they read, which videos they watch, or for which items they shop [18]. An intriguing alternative is to monitor the brain activity of an individual; that could mitigate the cognitive load involved in expressing intentions and enable the direct inference of information about relevance.

Figure 1: The user reads text from the English Wikipedia while the event-related potentials (ERPs) are recorded using electroencephalography (EEG). A classifier is trained to distinguish the relevant from the irrelevant words by using the ERPs associated with each word in the text. An intent model uses the relevance estimates as input and then infers the user's search intent. The intent model is used to retrieve new information from the English Wikipedia.
To utilize brain signals, we introduce a brain-relevance paradigm for information filtering. The paradigm is based on the hypothesis that relevance feedback on individual words, estimated from brain activity during a reading task, can be utilized to automatically recommend a set of documents relevant to the user's topic of interest (see Figure 1 for an illustration). Following the brain-relevance paradigm, we introduce the first end-to-end methodology for performing fully automatic information filtering by using only the associated brain activity (Figure 1). The methodology is based on predicting and modeling the user's informational intents [34] using brain signals and the associated text corpus statistics, and recommending new and unseen information using the estimated intent model.

We demonstrate the effectiveness of the methodology with brain signals naturally evoked during a text reading task. That is, unlike standard active brain-computer interface (BCI) practices, the method used here does not require the user to perform additional, explicit tasks (such as the mental counting of relevant words) that have been previously shown to enhance the signal-to-noise ratio [45]. Instead, the methodology relies solely on the detection of the neural activity patterns associated with relevance, so that applications benefit from truly implicit and passive measurements.

The data from experiments, in which electroencephalography (EEG) was recorded from 15 participants while they were reading texts, show that the recommendation of new relevant information can be significantly improved using brain signals when compared to a random baseline. The result suggests that relevance can be predicted from brain signals that are naturally evoked when users read, and that these signals can be utilized in recommending new information from the Web as a part of our everyday information-seeking activities.

2 Brain-relevance paradigm for information filtering

We propose a new paradigm for information filtering based on brain activity associated with relevance. The brain-relevance paradigm is based on the following four hypotheses, evaluated empirically in this paper:

H1: Brain activity associated with relevant words is different from brain activity associated with irrelevant words.

H2: Words can be inferred to be relevant or irrelevant based on the associated brain activity.

H3: Words inferred to be relevant are more informative for document retrieval than are words inferred to be irrelevant.

H4: Relevant documents can be recommended based on the inferred relevant and informative words.

The following two sections provide the cognitive neuroscience and the information science motivations as well as existing foundations of the brain-relevance paradigm.

Cognitive neuroscience motivation. Event-related potentials (ERPs) are obtained by synchronizing electrical potentials from EEG to the onset ("time-locked") of sensory or motoric events. The last 50 years of psychophysiology have demonstrated beyond a reasonable doubt that ERPs have a neural origin, that mental events can reliably elicit them, and that the measurement of their timing, scalp distribution ("topography"), and amplitude can be invaluable in providing information on normal [19] and neuropathological functioning [29].
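The paper's own recording and preprocessing pipeline is specified in the SI; purely as an illustration of the time-locking idea, the following minimal sketch extracts word-locked ERP epochs with MNE-Python. The tooling choice is an assumption, and the file name, trigger channel, event codes, and the 100 µV rejection threshold are hypothetical.

```python
# Minimal sketch of time-locking EEG to word onsets (MNE-Python assumed).
import mne

raw = mne.io.read_raw_fif("reading_task_raw.fif", preload=True)  # hypothetical file
events = mne.find_events(raw, stim_channel="STI 014")            # word-onset triggers

epochs = mne.Epochs(
    raw, events, event_id={"relevant": 1, "irrelevant": 2},  # hypothetical codes
    tmin=-0.25, tmax=1.0,        # window around each word onset (cf. Figure 3b)
    baseline=(None, 0),          # baseline-correct on the pre-stimulus interval
    reject=dict(eeg=100e-6),     # drop epochs ruined by blinks or other noise
    preload=True,
)

# Condition averages; their difference is what scalp maps like Figure 3a show.
evoked_rel = epochs["relevant"].average()
evoked_irr = epochs["irrelevant"].average()
difference = mne.combine_evoked([evoked_rel, evoked_irr], weights=[1, -1])
```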
Mentally controlling interfaces through measured ERPs has, to date, principally relied on the P300. The P300 is a distinct, positive potential that occurs at least 300 ms after stimulus onset and is traditionally obtained via so-called oddball paradigms. Sutton et al. [40] presented a fast series of simple stimuli with infrequently occurring deviants (e.g., 1 in 6 tones having a high pitch) and discovered that these rare "oddballs" would on average trigger a positivity compared to the standard stimuli. Later experiments showed that the degree to which the stimulus provided new information [41] and was task-relevant [39] amplified the P300, whereas repetitive, unattended [15] or easily processed [8] stimuli could remove the P300 entirely.

For the language domain, the onset of words normally evokes a negativity at ca. 400 ms which has been attributed to semantic processing [21]. This N400 was first observed as a type of "semantic oddball," since the closing word in a sentence such as "I like my coffee with milk and torpedoes" is semantically improbable, but would amplify the N400 rather than cause a P300. However, if a rare syntactic violation occurs in a sentence ("I likes my coffee [..]"), the deviant word once again evokes a positivity, but now at 600 ms [11]. As this P600 shows similarities to the P300 in polarity and topography, it started the ongoing debate as to whether it is a language-specific "syntactic positive shift" or a delayed P300 [22, 26, 35]. Finally, research on memory has identified a late positive component (LPC) at a latency similar to the P600. The LPC has been related to semantic priming and is particularly strong in tasks where an explicit judgement on whether a word is old or new is to be made [32]. Consequently, it is often associated with mnemonic operations such as recollection [27]. In the present context, relevant words could cue recollection of the user's intent, thereby amplifying the LPC.

Although the P300/P600 and N400 are often described as contrasting effects, this is not necessarily the case in predicting term relevance. That is, if an odd, task-relevant stimulus yields a P300 or P600 and a semantically irrelevant stimulus an N400, it follows that the total amount of positivity between an estimated 300 and 700 ms may indicate the summed total semantic task relevance. This was indeed found by Kotchoubey and Lang [20], who showed that semantically relevant oddballs (animal names) that were randomly intermixed amongst words from four other categories evoked a P300-like response for semantic relevance (but at ca. 600 ms). Likewise, our previous work on inferring term relevance from event-related potentials [9] showed that a search category elicited either P300s/P600s in response to relevant words or N400s evoked by semantically irrelevant terms.

Information science motivation. Relevance estimation aims to quantify how well the retrieved information meets the information need of the user. Computational methods are used in estimating statistical relevance measures based on word occurrences in a document collection.
Figure 2: Extract from one experiment to illustrate a reading task with subsequent document retrieval: (a) Our data acquisition setup with one participant wearing an EEG cap with embedded electrodes. (b) Sample text with the first two sentences from the Wikipedia document "Atom" (relevant document) and the document "Money" (irrelevant document). The color of the words shows the explicit relevance judgments by the user (red: relevant; blue: irrelevant). The crossed-out words were lost because of too much noise in the EEG (e.g., because of eye blinks). The framed words "matter" and "atomic" were the top words predicted to be relevant by the EEG-based classifier. Colors and markings were not shown to the user. (c) The top-10 retrieved documents, based on the predicted relevant words, are highly related to the relevant topic "Atom":

Rank  Title
1     Atom
2     Timeline of atomic and subatomic physics
3     Neutron
4     Timeline of quantum mechanics
5     Electron
6     Timeline of physical chemistry
7     History of physics
8     Proton
9     History of chemistry
10    Beta decay

These measures are used in many information retrieval applications, such as Web search engines, recommender systems, and digital libraries. One of the most well-known statistical measures of word informativeness or word importance is tf-idf [17]. The foundation of tf-idf is that low- and medium-frequency words have a higher discriminating power at the level of the document collection, in particular when they have high frequency in an individual document [17].

For example, the word "nucleus" has a low frequency at the collection level but a higher frequency in a document about atoms (i.e., the "Atom" document) and therefore is considered to discriminate this document better than, for example, the word "the," which has a high frequency at both the collection and document levels. Search and recommendation systems use word-importance statistics to produce a ranked list of documents that match the word list encoding the user's search intent. The words in documents can be indexed with weights encoding their importance, and ranking models then compute a relevance score for each document in the document collection and rank the documents according to the relevance scores. For example, if the words "the" and "nucleus" are encoding the user's intent, then a ranking model could estimate that the document "Atom" should be ranked higher because it has a high importance value for the word "nucleus" compared to, for example, the document "Nuclear Magnetic Resonance," which also contains the words "nucleus" and "the," but with lower importance.
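As an illustration of this weighting, the following minimal sketch computes tf-idf scores with scikit-learn over a toy two-document collection paraphrasing the Figure 2 sample sentences; scikit-learn's tf-idf variant stands in for the paper's exact formulation.

```python
# Toy tf-idf example: "nucleus" discriminates the "Atom" document, "the" does not.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = {
    "Atom": "the atom is a basic unit of matter that consists of a dense "
            "central nucleus surrounded by a cloud of negatively charged electrons",
    "Money": "money is any item or verifiable record that is generally accepted "
             "as payment for goods and services and the repayment of debts",
}

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs.values())  # rows: documents, columns: words
vocab = vectorizer.vocabulary_

atom_row = tfidf[0].toarray().ravel()
print(atom_row[vocab["nucleus"]])  # higher: frequent here, rare in the collection
print(atom_row[vocab["the"]])      # lower: occurs in every document, low idf
```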
In summary, word relevance is determined by the user given the user's search intent. Word informativeness is determined by the search system given the document collection. Words that are both relevant and informative are words that discriminate relevant documents from irrelevant documents and are needed to recommend meaningful documents. In addition to the brain-activity findings related to the semantic oddball (introduced in the cognitive neuroscience motivation), recent findings in quantifying brain activity associated with language also suggest a connection between the word class and frequency of a word and the corresponding brain activity. It has been shown that brain activity is different for different word classes in language [23] and that high-frequency words elicit different activity than low-frequency words [14].

3 Methodology

During the experiment, we recorded the EEG signals of 15 participants while each participant performed a set of eight reading tasks. Experimental details are provided in SI Neural-Activity Recording Experiment.

Reading task. The text content read by the user consisted of two documents at a time. Each document was chosen from a list of 30 candidate documents, and each document was selected from a different topical area. For example, the documents "Atom," "Money," and "Michael Jackson" were part of the list; SI Table 1 provides a detailed list of the documents. One document represented the relevant topic, the other one an irrelevant topic. The user chose the relevant topic herself in the beginning of the experiment. The user read the first six sentences from each document: first the first sentence from both documents, then the second sentence from both documents, and so on. The obtained term-relevance feedback (predicted from brain signals) was then used to retrieve further documents relevant to the user-chosen topic of interest, from among the four million documents available in the database.
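Schematically, this retrieval step can be pictured as scoring every document in the collection by the summed importance of the words predicted relevant, as in the sketch below. This is an illustrative reading of the pipeline in Figure 1, not the paper's intent model (which is specified in the SI); all names are placeholders.

```python
# Schematic retrieval: score documents by the tf-idf weights of predicted words.
import numpy as np

def rank_documents(predicted_relevant, tfidf_matrix, vocab, titles, k=10):
    """Return the k documents whose tf-idf vectors best match the query words."""
    query = np.zeros(tfidf_matrix.shape[1])
    for word in predicted_relevant:
        if word in vocab:                 # skip out-of-vocabulary words
            query[vocab[word]] = 1.0
    scores = tfidf_matrix @ query         # dot product: summed word importances
    order = np.argsort(scores)[::-1][:k]
    return [(titles[i], float(scores[i])) for i in order]

# e.g. with the classifier's top words from Figure 2:
# rank_documents(["matter", "atomic"], tfidf, vocab, titles)
```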
In order for the task to be representative of natural reading, no simplifications were done on the text content. In particular, this implies that the sentences have different numbers of words, and word length ranges from very short to very long. Figure 2 illustrates one reading task consisting of the relevant document "Atom" and the irrelevant document "Money" with subsequent document retrieval.

Figure 3: Grand average results over all participants and reading tasks based on explicit relevance judgments: (a) Grand-average-based topographic scalp plots of relevant minus irrelevant ERPs from [250, 350] ms, [350, 450] ms, [450, 550] ms, and [550, 650] ms after word onset. (b) Grand average event-related potential at the Pz channel of relevant (red curve) and irrelevant (blue curve) terms. The gray vertical lines show the word onset events. (c) Term frequency-inverse document frequency (tf-idf) values of relevant (red box plot) and irrelevant (blue box plot) words. The median of relevant words is 5.00, and that of irrelevant words is 1.46. The difference is significant (Wilcoxon test, V = 49680192, p < 0.0001).
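The comparison in Figure 3c can be sketched in a few lines. Here scipy's Mann-Whitney U test (a two-sample Wilcoxon rank test) and synthetic tf-idf values with roughly the reported medians stand in for the paper's exact test variant and data.

```python
# Rank test on tf-idf values of relevant vs. irrelevant words (synthetic data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Log-normal toy samples with medians near the reported 5.00 and 1.46.
tfidf_relevant = rng.lognormal(mean=1.6, sigma=0.6, size=500)
tfidf_irrelevant = rng.lognormal(mean=0.38, sigma=0.6, size=2000)

stat, p = mannwhitneyu(tfidf_relevant, tfidf_irrelevant, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.3g}")
```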
Data analysis. To associate brain activity with relevance, we computed the neural correlates of relevant and irrelevant words for all participants. A participant-specific single-trial prediction model [2] was computed for each participant, and the performance was evaluated on a left-out reading task (leave-one-task-out, a cross-validation scheme). This procedure matches the example of the task illustrated in Figure 2, consisting of the following steps: (1) users perform a new reading task; (2) relevance predictions are made for each word based on a model that was trained on observations collected during previous reading tasks; and (3) documents are retrieved using the relevance predictions for the present reading task. We present results for the (H1) neural correlates of relevance, (H2) term-relevance prediction, (H3) relation between relevance prediction and word importance, and (H4) document recommendation. Results are presented both for individual users and as grand averages. Technical details are in SI Data Analysis Details.

Evaluation. To quantify the significance and the effect sizes of the brain feedback-based prediction performances, we compared them against performances from prediction models learned from randomized feedback. By comparing against this baseline, we are able to operate with natural and hence non-balanced texts. Standard permutation tests [10] were applied for significance testing. We used the area under the ROC curve (AUC) to quantify the performance of the classifiers. AUC is a widely used and sensible measure even under the class imbalances of our scenario, and it is a comprehensive measure for comparison against the prediction models based on randomized feedback. From the perspective of document recommendations, it is more important to predict relevant words than to predict irrelevant words. To quantify this, we measured the precision (SI Appendix, Equation 1). To demonstrate the influence of a positive predicted word on the document retrieval problem, we additionally measured the tf-idf-weighted precision (SI Appendix, Equation 2). From the user perspective, the quality of the recommended documents is important. To quantify this, we used cumulative information gain, which measures the sum of the graded relevance values of the returned documents (SI Appendix, Equation 7). AUC and precision are based on participant-specific relevance judgments, and cumulative information gain is based on external topic-level expert judgments. Details on the concrete definition of the evaluation measurements and the assessment process are available in the SI Appendix.
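The following sketch shows the shape of this evaluation scheme: leave-one-task-out AUC for one participant, and a permutation test against shuffled (randomized) feedback. The logistic-regression classifier is a placeholder (the paper's single-trial model follows [2]), and the features X, labels y, and per-word task identifiers are assumed given.

```python
# Leave-one-task-out AUC with a randomized-feedback permutation baseline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

def loto_auc(X, y, task_ids):
    """Mean AUC over left-out reading tasks for one participant."""
    aucs = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=task_ids):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
    return float(np.mean(aucs))

def permutation_p_value(X, y, task_ids, n_iter=1000, seed=0):
    """Share of label permutations (randomized feedback) reaching the observed AUC."""
    rng = np.random.default_rng(seed)
    observed = loto_auc(X, y, task_ids)
    null = [loto_auc(X, rng.permutation(y), task_ids) for _ in range(n_iter)]
    return (1 + sum(a >= observed for a in null)) / (n_iter + 1)
```

The n_iter=1000 default mirrors the 1000-iteration within-participant permutation tests reported below.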
4 Results

Neural correlates of relevance. Grand-average-based ERP results show that brain activity associated with relevant words is different from brain activity associated with irrelevant words (H1), over all participants and all reading tasks. The topographic scalp plots in Figure 3a show the spatial interpolation of relevant ERPs minus irrelevant ERPs over all electrodes from 300 ms to 600 ms after a word was shown on screen. The topography of the difference showed an initial fronto-central positivity at 300 ms, relative to the onset of the word on the screen, followed by a centro-parietal positivity from 400 to 600 ms. The maximal effect of relevance can be clearly observed in Figure 3b, with −0.24 µV for relevant words and −1.06 µV for irrelevant words at 367 ms over Pz. Following the negativity, a late positivity can be observed for both types of words, which reaches a local maximum at a latency of around 600 ms, suggesting a possible P600 or LPC.

Figure 4: Overall prediction and retrieval performances: (a) Overall classification performance on new data, measured by the difference in AUC between a classifier learned using explicit relevance feedback and one learned using randomized feedback. The difference is significantly greater than zero (p < 0.0005). The figure shows that the prediction models are able to find structure significantly discriminating between relevant and irrelevant brain signal patterns. (b) Overall retrieval performance characterized as the difference in cumulative gain (based on expert judgments) between documents retrieved based on brain-based feedback and randomized feedback (normalized with the maximum information gain that would be possible to achieve when retrieving the best top-30 documents). Brain feedback is significantly better than randomized feedback (p < 0.003).

For descriptive purposes, we tested the difference between the relevant and irrelevant words for the well-known P300, N400, and P600 ERP components, at the latencies given in the existing literature. There was no significant difference in the early P3 interval ([250, 350] ms, paired t-test, T(14) = 1.75, p = 0.10), which suggests that the system does not rely on the mere visual resemblance between relevant words and the intent category. However, irrelevant words elicited a negativity compared to relevant words in the N400 window ([350, 500] ms, T(14) = 2.27, p = 0.04). Moreover, relevant words were found to significantly elicit a positivity compared to irrelevant words in the P600 interval ([500, 850] ms, paired t-test, T(14) = 4.99, p = 0.0002). For the purpose of the subsequent term-relevance prediction, this result verified our approach of computing the temporal features for the ERP classification within the range of 200 ms to 950 ms (this range was determined based on the pilot experiments). SI Figure 1 shows the remaining scalp plots for other time intervals, and SI Figure 2 shows the grand-average-based ERP curves for all channels.
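The exact feature construction is described in the SI. As a minimal sketch of one common construction of such temporal features (mean amplitude per channel in consecutive bins between 200 and 950 ms; the 50 ms bin width is an assumption for illustration):

```python
# Window-mean ERP features for single-trial classification.
import numpy as np

def erp_features(epochs, sfreq, t0=-0.25, window=(0.20, 0.95), bin_ms=50):
    """Mean-amplitude features per channel in consecutive bins inside `window`.

    epochs: array (n_words, n_channels, n_samples), time-locked to word onset;
    t0 is the time of the first sample relative to onset, in seconds.
    """
    start = int(round((window[0] - t0) * sfreq))
    stop = int(round((window[1] - t0) * sfreq))
    step = int(round(bin_ms / 1000 * sfreq))
    bins = [epochs[:, :, s:s + step].mean(axis=2)   # mean amplitude in one bin
            for s in range(start, stop, step)]
    return np.concatenate(bins, axis=1)             # (n_words, n_channels * n_bins)

# e.g. 32 channels sampled at 200 Hz: 15 bins of 50 ms -> 480 features per word
```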
Term-relevance prediction. Across participants and reading tasks, the classification of brain signals by models learned from earlier explicit feedback shows significantly better results than with models learned from randomized feedback (Figure 4a; p < 0.0005, Wilcoxon test, V = 118).

Figure 5: Comprehensive term-relevance prediction performance on the participant level: classification performance for all participants (TRPB#), measured as AUC for participant-specific models on left-out reading tasks. The horizontal dashed line indicates the performance of a model learned using randomized feedback. Asterisks indicate models with significantly better AUC (p < 0.05; exact p-values are listed in SI Table 3).

This implies that the prediction models are able to extract and utilize structure in the signals significantly better than chance, and that words can be inferred to be relevant or irrelevant based on the associated brain activity (H2). Figure 5 shows the classification performance in terms of AUC for each participant. For 13 out of the 15 participants, the term-relevance prediction models perform significantly better than does a prediction model learned based on randomized feedback (hence having AUC = 0.5; p < 0.05, within-participant permutation tests with 1000 iterations). For two participants, the predictions were essentially random, and they were excluded from the rest of the analyses. It is well known that BCI control does not work for a non-negligible portion of participants (ca. 15-30% [42]), and the reported results should be interpreted as being valid for the population of users, which can be rapidly screened by using the system on pre-defined tasks.

Relevance for document retrieval. For our final goal, the retrieval and recommendation of documents, it is important to be able to detect words that are both relevant and informative (measured by the tf-idf) in discriminating between relevant and irrelevant documents in the full collection. Figure 6 visualizes the relationship between the predicted relevance probability of words and their tf-idf values. Relevant words (according to the user's own judgement afterwards) are predicted as being more relevant than irrelevant words, but also, their tf-idf values are greater (H3). The figure further indicates that the tf-idf dimension explains more of the difference than does the predicted relevance.

In terms of an information retrieval application, the precision of the prediction models is the most important measure.
For document retrieval, the influences of positive predicted words on the search results are not equal but depend on how informative each word is; the tf-idf-weighted precision takes this into account by weighting each positively predicted word with its tf-idf value.

Figure 6: The predicted relevance probability of words plotted against their tf-idf values, for relevant and irrelevant words.
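The exact definitions of the two measures are in the SI Appendix (Equations 1-2, not reproduced here); one plausible formulation, with the tf-idf weighting making informative words count for more, is:

```python
# Plausible reading of precision and tf-idf-weighted precision over words.
import numpy as np

def precision(y_true, y_pred):
    """Fraction of positively predicted words that are truly relevant."""
    positive = y_pred == 1
    return float(y_true[positive].mean()) if positive.any() else 0.0

def tfidf_weighted_precision(y_true, y_pred, tfidf):
    """As above, but each predicted word counts with its tf-idf weight."""
    positive = y_pred == 1
    if not positive.any():
        return 0.0
    return float((tfidf[positive] * y_true[positive]).sum() / tfidf[positive].sum())
```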
● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 
● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 
● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 
rather dependent on the word-specific tf-idf values within each individual document. For example, a true positive predicted word can still have very low impact on the search result if its tf-idf value is low in the relevant documents. Similarly, a false positive predicted word can have only a low impact on the search result if its tf-idf value is low.

[Figure 6: Relevance prediction versus tf-idf value. Two-dimensional kernel density estimate (blue contours) for relevant (top) and irrelevant (bottom) words, computed with an axis-aligned bivariate normal kernel. The mass of relevant words (REL) lies much closer to the top-right corner (high probability of being relevant and high tf-idf value) than the mass of irrelevant words (IRR). The gray points in the background are the observed words.]

Figure 7 visualizes the mean precision of the prediction models from the perspective of the retrieval problem. It shows the mean precision for each of the 13 participants over all of the reading tasks (based on binarizing the predicted probabilities with the threshold 0.5). In addition, it shows what is actually crucial: the precision weighted with the tf-idf values from the relevant document is, in all cases except one, much higher than the precision weighted with the tf-idf values from the irrelevant document.

In conclusion, the results in Figure 7 explain why the prediction models are useful for document retrieval and recommendation even though their unweighted precision is limited. In detail, our prediction models tend to predict true positive words with higher tf-idf values and false positive words with lower tf-idf values. This means that our prediction models tend to predict words that the user would judge to be relevant, and which are also discriminative in terms of the user's search intent.

Document recommendation. The final step is to use the relevant words predicted from brain signals for document retrieval and recommendation, and to evaluate the cumulative gain. Figure 4b shows that, across the participants and reading tasks, document retrieval performance based on brain feedback is significantly better than randomized feedback (top-30 documents, p < 0.003, Wilcoxon test, V = 3153). Therefore, relevant documents can be recommended based on the inferred relevant and informative words (H4).
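To make the tf-idf weighting of precision used above (and shown in Figure 7 below) concrete, the following is a minimal sketch; the function and the example words and values are our own illustration, not the authors' code. The formal definition is equation (2) in SI Term Relevance Prediction.

    def weighted_precision(predicted, truth, tfidf):
        """tf-idf-weighted precision following SI eq. (2):
        (w_tp * tp) / (w_tp * tp + w_fp * fp), where tp and fp are the
        true- and false-positive counts and w_tp, w_fp are the sums of
        the corresponding tf-idf values in one document (0 if absent)."""
        tps = [w for w in predicted if w in truth]
        fps = [w for w in predicted if w not in truth]
        w_tp = sum(tfidf.get(w, 0.0) for w in tps)
        w_fp = sum(tfidf.get(w, 0.0) for w in fps)
        num = w_tp * len(tps)
        den = num + w_fp * len(fps)
        return num / den if den else 0.0

    # A false positive ("the") with a near-zero tf-idf value barely lowers
    # the score, while a false positive with a high value would:
    score = weighted_precision({"neutron", "the"}, {"neutron", "atom"},
                               {"neutron": 7.1, "atom": 6.3, "the": 0.01})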
[Figure 7: Relevance prediction weighted for retrieval on the participant level. Mean precision per participant (black bars). For the document retrieval problem, the influence of a positively predicted word depends on its document-specific tf-idf value; therefore, a false positive can have a smaller effect than a true positive. The red and blue bars illustrate this effect: the red bars show the precision weighted with the tf-idf values of the relevant document, the blue bars the precision weighted with the tf-idf values of the irrelevant topic.]

Figure 8 shows the document retrieval performance for each participant in terms of mean information gain. Based on the expert scoring, the scale for the mean information gain runs from 0 (irrelevant) to 3 (highly relevant). The visualization shows, for each participant, the mean information gain over all reading tasks based on brain feedback (blue bars) and randomized feedback (purple bars). For 10 participants, the brain feedback results in significantly greater information gain (p < 0.05; two-sided Wilcoxon test). SI Figure 3 also shows the visualizations for the top 10 and top 20 retrieved documents; in both cases, the same significant results hold, except for one participant (TRPB113).

[Figure 8: Retrieval performance of each participant. Average cumulative information gain (on a scale of 0-3) based on the top-30 retrieved documents for the participant. Asterisks indicate a significantly better pooled information gain for brain feedback than for random feedback retrieval, based on 1000 iterations (p < 0.05; exact p-values are listed in SI Table 4).]
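For illustration, the per-participant brain-versus-random comparison behind Figure 8 and SI Table 4 can be run along the following lines. This is a sketch with placeholder scores, not the authors' analysis script; we use SciPy's Mann-Whitney U test, the two-sample form of the Wilcoxon test, on the assumption that the two score lists are unpaired.

    import numpy as np
    from scipy.stats import mannwhitneyu  # two-sample Wilcoxon rank-sum test

    rng = np.random.default_rng(0)
    # Placeholder expert scores (0-3) for the top-30 documents of each of the
    # eight reading tasks, pooled per participant; real values come from the study.
    brain_scores = rng.integers(0, 4, size=240)
    random_scores = rng.integers(0, 3, size=240)

    # Two-sided test: do the document scores under brain feedback differ from
    # the scores under random feedback?
    stat, p = mannwhitneyu(brain_scores, random_scores, alternative="two-sided")
    print(f"W = {stat:.1f}, p = {p:.4f}")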
5 Discussion

By combining insights from information science and cognitive neuroscience, we proposed the brain-relevance paradigm to construct maximally natural interfaces for information filtering: the user just reads, brain activity is monitored, and new information is recommended. To our knowledge, this is the first end-to-end demonstration that the recommendation of new information is possible just by monitoring the brain activity of a reading user.

The brain-relevance paradigm for information filtering is based on four hypotheses empirically demonstrated in this paper. We showed that (H1) there is a difference in brain activity associated with relevant versus irrelevant words; (H2) there is a difference in the importance of words depending on their relevance to the user's search intent; (H3) it is possible to detect relevant and informative words based on brain activity; and (H4) it is possible to recommend relevant documents based on the detected relevant and informative words.

From a cognitive neuroscience point of view, it is known that specific ERPs can be particularly associated with relevance. In cognitive science, early P300s have been related to task relevance. In psycholinguistics, N400s are commonly associated with semantic processes [21], as semantically incongruent words amplify the component whereas semantic relevance reduces it. Late positivity has been related to semantic, task-relevant stimuli [20], in particular when characterized either as a delayed P3 response, due to the assessment of the relevance of language, or as an LPC, due to mnemonic operations and semantic judgments. In line with these findings, our grand averages indicate that the ERP at a latency of 500-850 ms is most likely the best predictor of perceiving words that are semantically related to a user's search intent. The present data do not allow a dissociation among the P300, N400, or P600 as the most likely neural candidate for evoking the observed effect. Indeed, the method is based on the assumption that task relevance and semantic relevance both contribute positively to the inference of relevance when aiming to ultimately predict a user's search intent without requiring an additional task from the user.

While our results use real data and are also valid beyond the particular experimental setup, our methodology is limited to experimental setups in which it is possible to control strong noise, such as noise due to physical movements, which is known to cause confounding artifacts in the EEG signal. Another limitation is that the comparison setup in our studies considers only two topics at a time, one being relevant and the other irrelevant. While this is a solid experimental design and can rule out many confounding factors, it may not be valid in more realistic scenarios in which users choose among a variety of topics during their information-seeking activities. Furthermore, the presented term-relevance prediction is based on a traditional set of event-related potentials to demonstrate the feasibility of the methodology. However, it is possible that more advanced feature extraction could improve the solution further, for example by computing phase-synchronization statistics in the delta and theta frequencies, which have recently been shown to be sensitive to the detection of relevant lexical information [4].

Despite these limitations, our work is the first to address an end-to-end methodology for performing fully automatic information filtering by using only the associated brain activity. Our experiments demonstrate that our method works without any requirement of a background task or artificially evoked event-related potentials; the users just read text, and new information is recommended. Our findings can enable systems that analyze relevance directly from individuals' brain signals, naturally elicited as a part of everyday information-seeking activities.

6 Materials

The SI Appendix provides extensive details on all technical aspects. SI Database describes the selection process and the criteria for the pool of candidate documents. SI Neural-Activity Recording Experiment provides the experimental details, i.e., the participant recruiting, the procedure and design of the EEG recording experiment, the apparatus and stimuli definition, and details on the pilot experiments. SI Data Analysis Details describes the general prediction evaluation setup, the EEG cleaning and preparation, and the EEG feature engineering.
SI Term Relevance Prediction gives a description of the Linear Discriminant Analysis (LDA) method used for the prediction models, and specifics on the evaluation measures for prediction. SI Intent Modeling-based Recommendation gives details on the intent estimation model based on the LinRel algorithm, and specifics on the evaluation measures for document retrieval.

7 Acknowledgments

We thank Khalil Klouche for designing Figure 1. This work has been partly supported by the Academy of Finland (278090; Multivire, 255725; 268999; 292334; 294238; and the Finnish Centre of Excellence in Computational Inference Research COIN) and MindSee (FP7 ICT; Grant Agreement 611570).

Supplemental Information

8 SI Database

The database used in the experiment was the English Wikipedia provided by Wikimedia (database dump of 2014/07/07 [43]). For the experiment, our search engine indexed all articles except special pages such as disambiguation pages. The references, notes, and external links were removed from the text of the articles. The final database contained over 4 million articles.

The pool of candidate documents read by the participants during the experiment consisted of 30 documents. The criteria for choosing a document were that (1) the document should describe a topic of general interest, and (2) the first six sentences of the introduction of the document should provide a sufficient description of the topic. The final pool of documents fulfilling these criteria is listed in SI Table 1.

9 SI Relevance Judgments of Words and Documents

In order to measure the relevance prediction performance, "ground truth" in the form of relevance judgments for individual words is needed for a specific reading task, on both the relevant and the irrelevant document. The binary relevance judgment "relevant" or "irrelevant" for a word was provided by each participant during the experiment (see SI Neural Activity Recording Experiment). This allowed us to capture the subjective nature of perceived relevance. In addition, for each document, the word class of each word was defined (nouns, verbs, adjectives, etc.), and three experts judged each word as being "relevant" or "irrelevant" for the given document.

In order to measure the document retrieval performance, "ground truth" relevance judgments of the retrieved documents given the relevant topic of the reading task are needed. For each of the 30 documents in the pool, three experts judged all documents that were retrieved in any experiment (brain feedback-based and random feedback-based), resulting in a pool of 13971 retrieved documents. The experts assessed all the documents according to the following criterion: "Would you be satisfied in having this document in the search result list of documents after examining document x? If yes, how satisfied from 1 to 3; if no, 0." The mean Cohen's Kappa [6] indicated substantial agreement between the experts, Kappa = 0.72.
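For intuition, a mean pairwise Cohen's Kappa of this kind can be computed as in the sketch below. The labels are placeholders, and since [6] describes the weighted variant of kappa, the exact agreement computation used in the paper may differ.

    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score

    # Placeholder 0-3 satisfaction scores from three experts for the same documents.
    expert_scores = {
        "A": [3, 0, 2, 1, 0, 3],
        "B": [3, 0, 1, 1, 0, 2],
        "C": [2, 0, 2, 1, 1, 3],
    }

    # Mean (unweighted) Cohen's Kappa over all expert pairs.
    kappas = [cohen_kappa_score(expert_scores[a], expert_scores[b])
              for a, b in combinations(expert_scores, 2)]
    print(sum(kappas) / len(kappas))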
10 SI Neural Activity Recording Experiment

We recorded the electroencephalography (EEG) signals of 17 participants while each participant performed a set of eight reading tasks. The following sections provide the experimental details.

10.1 Participants

Participants were volunteers recruited from the universities of the Helsinki metropolitan area in Finland. They were selected only if they were right-handed, had no self-reported neuropathological history, and were deemed to have sufficient fluency in English. Handedness was assessed using the Edinburgh Handedness Inventory [25, 7], and English fluency using the Cambridge English "Test your English – Adult Learners" online test [5]. Seventeen participants were recruited for the experiment. The data of two participants were discarded due to technical issues. Of the fifteen remaining, 8 were female and 7 male. Their English fluency was assessed as high (Mean = 23.53, SD = 1.23), and their handedness as right-handed (Mean = 87.35, SD = 12.13). They were fully briefed as to the nature and purpose of the study prior to the experiment. Furthermore, and in accordance with the Declaration of Helsinki, they signed informed consents and were instructed on their rights as participants, including the right to withdraw from the experiment at any time without fear of negative consequences. They received two movie tickets as compensation for their participation.

10.2 Procedure and Design

Following the initial briefing, the task was explained to the participants in more detail while the EEG equipment was set up. They then received a short training task with two sample topics. When the participants indicated their complete understanding of the task, the experiment commenced. Participants completed eight experimental blocks, each consisting of a single reading task with two topics drawn randomly (without replacement) from the pool of 30 candidate documents. At the beginning of the block, they were asked to freely choose which of the two documents described the relevant topic and which the irrelevant one. Every block comprised six trials, each consisting of one sentence from the relevant and one sentence from the irrelevant document, with the presentation order of the sentences randomized between blocks. Each trial consisted of the sequential presentation of words (the word stream), two validity sub-tasks, and an explicit word-relevance judgment task (judging the words as "relevant" or "irrelevant").

Every trial started with a warning signal (the words "Starting trial"), followed by the presentation of the mask. An initial sentence separator was shown before the word stream. The word stream consisted of the sequential presentation of each word in the first sentence, followed by a sentence separator, the words in the second sentence, and a final sentence separator. Every word and sentence separator was presented for exactly 699 ms (SD = 0.3 ms). Punctuation marks were not shown. Masking effects were countered to some extent by the frame resizing, which keeps the level of foveal stimulation constant. Prior tests suggested that people had more difficulty reading with than without short masks between the bursts, so as a consequence we removed them. It is possible that these masking effects would be much more significant with strong "flashing", as would be the case with very short stimulus durations. Here, the words appearing at a slow rate of ca. 700 ms per word made reading very easy.
Following the word stream, two extra sub-tasks were presented to validate that the participants had remembered their chosen topic and that they had paid attention to both sentences. First, they were asked to type in the name of the relevant topic in order to ascertain that they had not forgotten it. Then, a recall task was presented to prevent the participants from selectively concentrating on one of the two sentences. One of the sentences was selected randomly and presented in full on the screen, with one of the nouns or verbs substituted by question marks. Participants were asked to type in the word missing from the sentence. They were then presented with feedback in points regarding their performance on these two tasks, as a motivational instrument (similar to [38]).

Then, in the final part of the trial, the participants were asked to explicitly rate the relevance of all words from the relevant topic. All words were shown in one (if the sentence comprised fewer than 35 words) or two columns on the screen. A cursor was presented next to each word, indicating a two-alternative forced-choice decision: pressing the left arrow key on the keyboard rated the word as irrelevant, and pressing the right arrow key rated it as relevant. Participants were instructed prior to the experiment that they should not re-interpret the relevance of the words and should instead make a decision based on their previous viewing of the sentence. To facilitate this, they had a maximum of 2 s to respond to each word, after which the cursor moved to the next word in the sentence. After the last word was rated, the trial was completed, with the next trial starting after an inter-trial interval of ca. 1 s, unless it was the last trial in the block.

After completing a block, participants were requested to write freely about their chosen, relevant topic; this task was defined to keep the participant engaged. Finally, they filled out a questionnaire with two items for both topics, one regarding their interest ("how interesting do you find topic x") and one regarding their knowledge ("how much do you know about topic x"), using a 9-point rating scale (1: not at all – 9: extremely so). Three self-timed breaks with a minimum of one minute evenly split the blocks into four parts. The experiment, excluding preparation and instruction, lasted approximately one hour.

10.3 Apparatus and Stimuli

Words were presented in an 18-point Lucida Console black typeface at the center of a 19" LCD screen. They were shown against a silver (RGB 82%, 82%, 82%) background in the middle of a 300 × 100 pixel pattern mask. The mask was a black rectangle with a grid-like pattern, with an opening to show the word. This was used to control the degree to which word length affected the light reaching the eyes (i.e., to make sure longer words were not tantamount to more black pixels on the screen). Sentence separators were word-like character repetitions consisting of 4 to 9 numbers (3333333) or other non-alphabetic characters (&&&&&&), which were designed to mimic the same early visual activity as words without evoking psycholinguistic processing.

The screen was positioned approximately 60 cm from the participants and ran at a resolution of 1680 × 1050 and a refresh rate of 60 Hz. Stimulus presentation, timing, and EEG synchronization were controlled using E-Prime 2 Professional 2.0.10.353 on a PC running Windows XP SP3.
EEG was recorded from 32 Ag/AgCl electrodes, positioned on standardized, equidistant electrode sites of the 10-20 system (using EasyCap elastic caps, EasyCap GmbH, Herrsching, Germany) via a QuickAmp amplifier (BrainProducts GmbH, Gilching, Germany) running at 200 Hz. Additionally, the electro-oculogram for vertical eye movements (and eye blinks) and horizontal eye movements was recorded using bipolar electrodes positioned, respectively, 2 cm superior/inferior to the right pupil and 1 cm lateral to the outer canthi of both eyes.

10.4 Pilot experiments

Preliminary versions of the final experimental procedure and design were piloted with four separate participants. In these experiments, we tested and evaluated, for example, the stimulus duration, the explicit feedback task, and the points system. The data of these pilot experiments were not used in the final analysis, except that some basic parameter estimations for the final feature-engineering process were based on cross-validation experiments on these data (e.g., the number of feature windows).

11 SI Data Analysis Details

The evaluation setup for prediction and retrieval followed the general block structure defined by the experimental design. We applied a participant-specific, leave-one-block-out learning and evaluation strategy. The individual prediction models are single-trial prediction models [2]. We report averaged prediction and retrieval performance unless otherwise noted.

In detail, for a given participant, B = {1, ..., 8} blocks with explicit term-relevance judgments provided by the participant were available. In order to retrieve a brain-feedback-based list of relevant documents for a specific block b, two steps were executed. First, to obtain a term-relevance prediction model for the given block b, a classification model f_b was trained using the data from the remaining blocks {B \ b}. The prediction performance of f_b was then evaluated on the left-out block b. Second, to retrieve the set of documents for block b, the set of terms predicted to be relevant by the classifier f_b with a probability higher than 0.5 was used. The retrieval performance was evaluated against the expert judgments of document relevance for the relevant topic of block b.

As a baseline comparison, we evaluated the brain-feedback-based performances against random-feedback-based performances. The random-feedback scenario corresponds to standard permutation tests and results in permutation-based p-values [10]. The following sections give concrete details on the methodology used.

11.1 EEG cleaning and preparation

The EEG signals were cleaned and prepared following standard BCI guidelines [3]. During recording, a hardware low-pass filter at 1000 Hz was applied. The continuous EEG recordings were filtered with a 35 Hz FIR low-pass filter and a 0.5 Hz high-pass filter. The signal was then divided into epochs ranging from −250 ms to 1000 ms relative to the onset of each stimulus. Baseline correction was performed on each epoch using the pre-stimulus period. A simple heuristic was applied to reject invalid channels and epochs: first, invalid epochs were estimated based on the epochs' variances (< 0.5 µV) and the max-min criterion (40 µV). A channel was removed if the number of invalid epochs was higher than 10% of all available epochs. After removing all invalid channels, invalid epochs were estimated again and removed.
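In code, this two-step rejection heuristic reads roughly as follows; this is our sketch with the thresholds quoted above, not the published pipeline, and epochs is assumed to be an n_epochs × n_channels × n_samples array in microvolts.

    import numpy as np

    def reject_invalid(epochs, var_thresh=0.5, ptp_thresh=40.0, bad_frac=0.10):
        """Two-step channel/epoch rejection as described above.

        epochs : array of shape (n_epochs, n_channels, n_samples), in microvolts.
        Returns the cleaned array plus the indices of kept channels and epochs.
        """
        # Per-epoch, per-channel validity: suspiciously flat (low variance)
        # or exceeding the max-min (peak-to-peak) criterion.
        bad = ((epochs.var(axis=-1) < var_thresh) |
               (np.ptp(epochs, axis=-1) > ptp_thresh))

        # 1) Drop a channel if more than 10% of its epochs are invalid.
        keep_ch = np.where(bad.mean(axis=0) <= bad_frac)[0]
        epochs = epochs[:, keep_ch, :]

        # 2) Re-estimate invalid epochs on the remaining channels and drop
        #    every epoch that is invalid on any channel.
        bad = ((epochs.var(axis=-1) < var_thresh) |
               (np.ptp(epochs, axis=-1) > ptp_thresh)).any(axis=1)
        keep_ep = np.where(~bad)[0]
        return epochs[keep_ep], keep_ch, keep_ep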
This data cleaning was carried out in order to eliminate noise and potential confounds from common artifacts such as eye movements and blinks, as well as artifacts caused by loose electrodes or a cap that did not fit perfectly. Table 2 shows the statistics of the cleaning process for each participant.

11.2 Feature engineering

Event-related potentials are characterized by their temporal evolution and the corresponding spatial potential distributions. We followed standard feature-engineering procedures to create spatio-temporal ERP features for classification [3]. For each epoch, the raw EEG data (after basic cleaning) were available as the spatio-temporal matrix X_{m×t0}, with m channels and t0 sampled time points. For each epoch, the time was divided into t = 7 equidistant windows between 250 ms and 950 ms after the stimulus onset. The number of windows was chosen based on data recorded during the pilot experiments. For each channel, the potential values within one window were averaged, resulting in the spatio-temporal matrix X_{m×t}. The final feature representation of one epoch was the concatenation of all columns into one vector X_{m·t}. For a specific block b with n epochs, the full spatio-temporal feature matrix used for classification was X_{n×m·t}. Note that the number of channels m and the number of epochs n were participant-specific, as they depended on the EEG cleaning and preparation procedure. Table 2 shows the concrete numbers for each participant.

12 SI Term Relevance Prediction

We developed term-relevance prediction models within the framework of the linear EEG model [28] and single-trial ERP classification [3]. In detail, we utilized Linear Discriminant Analysis (LDA; see [13]) and learned linear binary classifiers, which we used to predict class-membership probabilities. The assumption of the method is that the observations X have been drawn from two multivariate normal distributions N(µ_k, Σ), one for the class of "relevant" observations and the other for the class of "irrelevant" observations. For the estimation of the models, we used shrinkage LDA, a covariance-regularized LDA with a shrinkage parameter selected by the analytical solution developed by Schäfer and Strimmer [36]. The choice of this simple method was based on its many existing successful applications in the BCI community [3]. In addition, one major reason is its robustness against class imbalance [44], an obvious situation in the proposed paradigm (see also Table 2 for the relevance class distribution per participant).

12.1 Leave-one-block-out evaluation

For each participant, we trained a set of eight classifiers. The classifier f_b for block b was trained with the epochs from the other blocks, i.e., with the spatio-temporal feature matrix X_{n×m·t} built from the blocks {B \ b}. The classifier f_b was evaluated on the epochs from block b, i.e., on the feature matrix of block b. The performance measures of interest were the area under the ROC curve (AUC), precision, and tf-idf-weighted precision.

The AUC is defined as the area under the ROC curve, which links the true positive rate to the false positive rate. A perfect model has an AUC of 1, and a random model has an AUC of 0.5.
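As a concrete reading of Sections 11.2 and 12-12.1, the sketch below builds the windowed spatio-temporal features and runs the leave-one-block-out evaluation with scikit-learn's shrinkage LDA. It is a minimal illustration, not the authors' code: the "auto" (Ledoit-Wolf) shrinkage stands in for the Schäfer-Strimmer estimator used in the paper, and the sampling parameters are the ones quoted above.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import roc_auc_score

    FS, T0 = 200, -0.250  # sampling rate (Hz) and epoch start relative to onset (s)

    def window_features(epochs, t_start=0.250, t_end=0.950, n_windows=7):
        """Average each channel within 7 equidistant windows (250-950 ms)
        and concatenate the window means into one feature vector per epoch."""
        edges = np.linspace(t_start, t_end, n_windows + 1)
        idx = np.round((edges - T0) * FS).astype(int)
        feats = [epochs[:, :, a:b].mean(axis=-1)
                 for a, b in zip(idx[:-1], idx[1:])]
        return np.concatenate(feats, axis=1)  # shape: (n_epochs, m * t)

    def leave_one_block_out_auc(X_blocks, y_blocks):
        """One shrinkage-LDA classifier per held-out block; returns the AUCs."""
        aucs = []
        for b in range(len(X_blocks)):
            train = [i for i in range(len(X_blocks)) if i != b]
            clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
            clf.fit(np.vstack([X_blocks[i] for i in train]),
                    np.concatenate([y_blocks[i] for i in train]))
            p_rel = clf.predict_proba(X_blocks[b])[:, 1]  # P("relevant")
            aucs.append(roc_auc_score(y_blocks[b], p_rel))
        return aucs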
The AUC is a global quality measure of the classification model. This measure was chosen because it allowed us to correctly evaluate the models in the existing class-imbalance scenario and because it is a comprehensive measure for comparison against the random-feedback models. Precision is defined as

    precision = tp / (tp + fp),    (1)

where tp is the number of true positives (i.e., relevant words predicted to be relevant) and fp is the number of false positives (i.e., irrelevant words predicted to be relevant). This measure was chosen because we want a high precision (i.e., many correct relevant words) for the document retrieval step. Weighted precision is defined as

    weighted precision = (w_tp · tp) / (w_tp · tp + w_fp · fp),    (2)

where w_tp is the sum of the term frequency-inverse document frequency (tf-idf) values of the true positive words, and w_fp is the sum of the tf-idf values of the false positive predicted words. In our case, the tf-idf values come either from the relevant document or from the irrelevant document. For a positively predicted word that is not available in a document, the tf-idf value is set to 0. This reflects that such a word has no influence on the document retrieval.

12.2 Random feedback evaluation

For a given block b, a classifier was trained and evaluated on data with permuted relevance judgments. If executed for a large number of permutations, this random-feedback strategy is a permutation test, resulting in a permutation-based p-value [24]. The null hypothesis of the test assumes that the brain data and the relevance judgments are independent. A small p-value indicates that the classifier is able to find a significant structure discriminating "relevant" from "irrelevant" brain-signal patterns. For each block, k = 1000 permutations were performed, meaning that the smallest possible p-value is 0.001 [10].

13 SI Intent Modeling-based Recommendation

We developed an intent estimation model to predict how relevant each term the user read is to the topic of interest. This model was then used to retrieve new documents from the database. The motivation for the intent model is that the predictions of the term-relevance model can indicate the relevance to a topical intent, but the individual words for which the predictions are drawn may not represent the whole topic. For example, the words "matter" and "neutrons" are related to the topic "Atom," but would not alone be sufficient search terms to retrieve information about the topic "Atom." Therefore, these words are used as positive feedback for the intent model, which predicts that, for example, the words "atom," "atomic," and "nucleus" are also relevant for the user given the positively predicted words "matter" and "neutrons." We call the resulting model the intent model of the user [33].

13.1 Document representation

The documents and words are modeled as a term-document matrix K with i terms and j documents. The term vector k_i indicates the weight of a stemmed word for each of the documents. The words are stemmed using the English Porter stemmer [31], and the stemmed words are referred to as terms. Before stemming, English stop words appearing in the Apache Lucene 4.10 stop word list (https://lucene.apache.org/) were removed. We used tf-idf weighting to account for the frequency and specificity of each term [17].
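A toy version of this document representation can be sketched as follows. This is our illustration only: NLTK's PorterStemmer and scikit-learn's built-in English stop-word list stand in for the paper's search-engine pipeline and the Lucene stop-word list, whose exact contents differ.

    from nltk.stem import PorterStemmer
    from sklearn.feature_extraction.text import TfidfVectorizer

    stemmer = PorterStemmer()
    base = TfidfVectorizer(stop_words="english").build_analyzer()

    def stem_analyzer(doc):
        # Tokenize and stop-word-filter with the default analyzer,
        # then Porter-stem each surviving token so rows of K are terms.
        return [stemmer.stem(tok) for tok in base(doc)]

    docs = ["The atom consists of a nucleus of protons and neutrons ...",
            "A volcano is a rupture in the crust of a planetary body ..."]

    vectorizer = TfidfVectorizer(analyzer=stem_analyzer)
    K = vectorizer.fit_transform(docs).T  # terms x documents, tf-idf weighted
    terms = vectorizer.get_feature_names_out()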
13.2 Intent model

The intent model estimates a weight for each term based on the input from the term-relevance prediction classifier. The feedback from the term-relevance predictions is denoted as r_i ∈ [0, 1] for a subset of terms indexed by i. We assume that the term-relevance prediction r_i of a term k_i is a random variable with expected value E[r_i] = k_i · w, such that the expected weight is a linear function of the terms. The unknown weight vector w is essentially the representation of the user's intent and determines the relevance of terms.

To estimate w, we utilize the LinRel algorithm [1], which learns a linear regression model of the form r = wK. LinRel allows control of the uncertainty related to the term-weight estimates. The choice of this method was based on its robustness against suboptimal input, which is the case for the potentially noisy predictions of the term-relevance prediction model.

LinRel computes a regularized regression weight vector for each term k_i in K:

    a_i = k_i (K^T K + λI)^{-1} K^T,    (3)

where I is the identity matrix, λ is a regularization parameter set to 0.5, and all factors except k_i on the right-hand side are shared for all keywords. Then, for each keyword, the final relevance score w_i at the current iteration is computed by taking into account the feedback obtained so far:

    w_i = a_i · s_t + (c/2) ||a_i||,    (4)

where s_t is the vector of term-relevance predictions obtained, a_i is the weight vector of a single keyword i in the data K, ||a_i|| is the L2 norm of the weight vector, and the constant c adjusts the influence of the history (we used c = 2 to give equal weight to exploration and exploitation). It can be shown that this procedure is equivalent to estimating the upper confidence bound in a linear regression problem [1].
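Equations (3)-(4) translate directly into a few lines of numpy. The sketch below is our own illustration: K is small and dense here (the real system operates on a sparse term-document matrix), and s_t is assumed to hold the classifier's relevance probabilities for terms with feedback and zeros elsewhere.

    import numpy as np

    def linrel_scores(K, s_t, lam=0.5, c=2.0):
        """Upper-confidence-style term scores, eqs. (3)-(4).

        K   : terms x documents tf-idf matrix (rows are the vectors k_i)
        s_t : term-relevance predictions in [0, 1] gathered so far
        """
        d = K.shape[1]
        # Shared regularized factor (K^T K + lambda I)^-1 K^T from eq. (3).
        shared = np.linalg.solve(K.T @ K + lam * np.eye(d), K.T)
        A = K @ shared  # row i is a_i = k_i (K^T K + lambda I)^-1 K^T
        # Eq. (4): exploitation term a_i . s_t plus exploration bonus.
        return A @ s_t + (c / 2.0) * np.linalg.norm(A, axis=1)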
Similarly to the relev ance prediction, this random strategy is also a p erm utation test. A small p -v alue indicates that the recommendation system is able to gain more relev an t do cumen ts based on the brain in- put than with the random input. F ollowing the ev aluation setup of term-relev ance prediction, k = 1000 permutations w ere p erformed for eac h block. 13.6 P erformance measures The recommendation p erformance was ev aluated using Cumulative information gain (CG) [16]. The cumulativ e information gain is defined simply as the sum of the rel- ev ance scores assigned by the exp erts for the do cumen ts that were ranked in the top-30 do cumen ts by the retriev al system in resp onse to the input. F ormally , C G = i =1 X 30 r el i , (7) where r el i is the relev ance score of the i th do cumen t in the ranked list. This measure w as chosen b ecause it al- lo ws graded relev ance assessments: some do cumen ts may b e highly relev an t and some do cumen ts ma y be marginally relev an t. The cum ulative gain may b e different for differ- en t topics: some topics may hav e man y highly relev an t do cumen ts, and some ma y ha ve only a few. P articipant p -v alue TRPB101 0.0030 TRPB102 0.0010 TRPB103 0.0010 TRPB105 0.0090 TRPB106 0.0010 TRPB107 0.0010 TRPB109 0.0010 TRPB110 0.8541 TRPB111 0.0010 TRPB112 0.0010 TRPB113 0.0040 TRPB114 0.0010 TRPB115 0.0050 TRPB116 0.0010 TRPB117 0.1439 T able 3: T est statistics for the tests results sho wn in Figure 5. F or eac h participant a p erm utation test with 1000 iterations was executed. In each iteration, the rel- ev ance judgments were p erm utated. The p -v alue is then based on the num ber of times the randomized classifica- tion is better than the brain feedbac k-based classification with resp ect to the A UC v alues. P articipant W p -v alue TRPB101 112811.5000 0.0009 TRPB102 111581.0000 0.9573 TRPB103 99082.5000 0.0001 TRPB105 104242.0000 0.4398 TRPB106 119899.0000 0.0000 TRPB107 94403.5000 0.9789 TRPB109 138593.5000 0.0000 TRPB111 136092.5000 0.0000 TRPB112 135740.0000 0.0000 TRPB113 127788.5000 0.0052 TRPB114 166286.0000 0.0000 TRPB115 110133.5000 0.0031 TRPB116 165508.0000 0.0000 T able 4: T est statistics for the tests results sho wn in Fig- ure 7. F or each participant a tw o-sided Wilco xon test was executed b et ween the brain feedback-based retriev ed do cu- men t scores and the random feedback-retriev ed do cumen t scores. 
Table 3: Test statistics for the test results shown in Figure 5. For each participant, a permutation test with 1000 iterations was executed; in each iteration, the relevance judgments were permuted. The p-value is based on the number of times the randomized classification is better than the brain feedback-based classification with respect to the AUC values.

    Participant   p-value
    TRPB101       0.0030
    TRPB102       0.0010
    TRPB103       0.0010
    TRPB105       0.0090
    TRPB106       0.0010
    TRPB107       0.0010
    TRPB109       0.0010
    TRPB110       0.8541
    TRPB111       0.0010
    TRPB112       0.0010
    TRPB113       0.0040
    TRPB114       0.0010
    TRPB115       0.0050
    TRPB116       0.0010
    TRPB117       0.1439

Table 4: Test statistics for the test results shown in Figure 7. For each participant, a two-sided Wilcoxon test was executed between the brain feedback-based and the random feedback-based retrieved-document scores.

    Participant   W             p-value
    TRPB101       112811.5000   0.0009
    TRPB102       111581.0000   0.9573
    TRPB103        99082.5000   0.0001
    TRPB105       104242.0000   0.4398
    TRPB106       119899.0000   0.0000
    TRPB107        94403.5000   0.9789
    TRPB109       138593.5000   0.0000
    TRPB111       136092.5000   0.0000
    TRPB112       135740.0000   0.0000
    TRPB113       127788.5000   0.0052
    TRPB114       166286.0000   0.0000
    TRPB115       110133.5000   0.0031
    TRPB116       165508.0000   0.0000

Table 1: Description of the 30 documents used in the experiment. The first two columns show how often the document, when presented, was chosen as the relevant or the irrelevant topic. The third column shows the number of retrieved documents for a given document, pooled over all experiments. The fourth and fifth columns show how many of the retrieved documents were judged by the experts to be relevant or irrelevant given the topic. The sixth column shows the sum of the relevance scores of the top-30 documents. The seventh column shows the sum of all relevance scores.

    Document              #Rel  #Irr  #Retrieved  #RelDocs  #IrrDocs  Top-30 Score  Max Score
    Association football     5     4         470        34       436            52         56
    Atom                     7     1         461        56       405            68         94
    Automobile               5     4         477        40       437            36         46
    Bank                     4     4         537        47       490            47         64
    Bicycle                  2     5         380        35       345            53         58
    Bill Clinton             2     5         306        55       251            48         73
    Brain                    5     0         257        46       211            47         63
    Cat                      7     4         545        55       490            66         91
    Communism                3     4         428        43       385            47         60
    Euro                     2     3         288        41       247            63         74
    India                    6     0         468       120       348            90        244
    Learning                 4     5         497        81       416            70        125
    Machine Learning         6     2         491        51       440            89        118
    Michael Jackson          3     5         517        54       463            90        147
    Money                    4     5         478       123       355            90        249
    Ocean                    5     5         426        79       347            90        167
    Painting                 3     8         617        51       566            90        136
    Plato                    3     3         337        94       243            90        185
    Politics                 5     6         588       172       416            90        337
    Rome                     3     5         474        62       412            90        150
    Savanna                  1     6         471        41       430            47         58
    Schizophrenia            6     3         484        51       433            69         90
    School                   2     6         414        30       384            51         51
    Society                  5     6         688        79       609            53        102
    Star                     4     5         524        98       426            64        132
    Telephone                2     3         354        44       310            60         74
    Time                     6     2         525        56       469            59         85
    Volcano                  4     3         442        76       366            60        106
    Wife                     2     5         450        66       384            49         85
    Wine                     4     3         577        89       488            90        183
    Total                  120   120       13971      1969     12002          2008       3503

Table 2: Description of the EEG recordings. The first two columns show the number of recorded and the number of accepted channels after cleaning per participant. The third column shows the number of blocks recorded for each participant. The fourth and fifth columns show the number of recorded and the number of accepted epochs after cleaning per participant. The sixth and seventh columns show the number of relevant and irrelevant epochs.

    Participant  #RecChannels  #AccChannels  #Blocks  #RecEpochs  #AccEpochs  #RelEpochs  #IrrEpochs
    TRPB101                32            26        8        1941        1376         153        1223
    TRPB102                32            26        8        1961        1659         193        1466
    TRPB103                32            11        8        1936        1521         242        1279
    TRPB105                32            30        8        1986        1521         198        1323
    TRPB106                32            29        8        1959        1486         215        1271
    TRPB107                32            20        8        1960        1566         245        1321
    TRPB109                32            30        8        1869        1622         315        1307
    TRPB110                32            20        8        1958        1021         103         918
    TRPB111                32            31        8        1818        1045         170         875
    TRPB112                32            30        8        2026        1588         268        1320
    TRPB113                32            26        8        1939        1422         195        1227
    TRPB114                32            26        8        1944        1226         204        1022
    TRPB115                32            30        8        1896        1441         211        1230
    TRPB116                32            28        8        1981        1662         242        1420
    TRPB117                32            16        8        1906        1364         326        1038

[Figure 9: Grand average-based topographic scalp plots of relevant words (top) and irrelevant words (bottom) for different time windows.]

[Figure 10: Grand-average event-related potentials at all channels for relevant (red curves) and irrelevant (blue curves) terms. The gray vertical lines show the word onset events.]
[Figure 11: Average cumulative information gain (on a scale of 0-3) based on the top-10 (left) and the top-20 (right) retrieved documents for the participant. Asterisks indicate a significantly better pooled information gain based on brain feedback than randomized feedback retrieval, based on 1000 iterations.]

References

[1] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3:397-422, 2003.
[2] Benjamin Blankertz, Gabriel Curio, and Klaus-Robert Müller. Classifying single trial EEG: Towards brain computer interfacing. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 157-164. MIT Press, 2002.
[3] Benjamin Blankertz, Steven Lemm, Matthias Treder, Stefan Haufe, and Klaus-Robert Müller. Single-trial analysis and classification of ERP components: A tutorial. NeuroImage, 56(2):814-825, 2011.
[4] Enzo Brunetti, Pedro E. Maldonado, and Francisco Aboitiz. Phase synchronization of delta and theta oscillations increases during the detection of relevant lexical information. Frontiers in Psychology, 4(308), 2013.
[5] Cambridge English Language Assessment. Test your English – Adult Learners, 2014. http://www.cambridgeenglish.org/test-your-english/adult-learners/.
[6] Jacob Cohen. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220, 1968.
[7] M. S. Cohen. Handedness questionnaire, 2014. http://www.brainmapping.org/shared/Edinburgh.php.
[8] Emanuel Donchin and Michael G. H. Coles. Is the P300 component a manifestation of context updating? Behavioral and Brain Sciences, 11:357-374, 1988.
[9] Manuel J. A. Eugster, Tuukka Ruotsalo, Michiel M. Spapé, Ilkka Kosunen, Oswald Barral, Niklas Ravaja, Giulio Jacucci, and Samuel Kaski. Predicting term-relevance from brain signals. In Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '14, pages 425-434. ACM, 2014.
[10] Phillip I. Good. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. Springer, 2nd edition, 2000.
[11] P. Hagoort, C. Brown, and J. Groothusen. The syntactic positive shift (SPS) as an ERP measure of syntactic processing. Language and Cognitive Processes, 8(4):439-483, 1993.
[12] Uri Hanani, Bracha Shapira, and Peretz Shoval. Information filtering: Overview of issues, research and systems. User Modeling and User-Adapted Interaction, 11(3):203-259, 2001.
[13] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2nd edition, 2009.
References

[1] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res., 3:397–422, March 2003. ISSN 1532-4435.

[2] Benjamin Blankertz, Gabriel Curio, and Klaus-Robert Müller. Classifying single trial EEG: Towards brain computer interfacing. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 157–164. MIT Press, 2002.

[3] Benjamin Blankertz, Steven Lemm, Matthias Treder, Stefan Haufe, and Klaus-Robert Müller. Single-trial analysis and classification of ERP components – A tutorial. NeuroImage, 56(2):814–825, 2011. doi: 10.1016/j.neuroimage.2010.06.048.

[4] Enzo Brunetti, Pedro E. Maldonado, and Francisco Aboitiz. Phase synchronization of delta and theta oscillations increase during the detection of relevant lexical information. Frontiers in Psychology, 4(308), 2013. doi: 10.3389/fpsyg.2013.00308.

[5] Cambridge English Language Assessment. Test your English – Adult Learners, 2014. http://www.cambridgeenglish.org/test-your-english/adult-learners/.

[6] Jacob Cohen. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213–220, 1968. doi: 10.1037/h0026256.

[7] M. S. Cohen. Handedness questionnaire, 2014. http://www.brainmapping.org/shared/Edinburgh.php.

[8] Emanuel Donchin and Michael G. H. Coles. Is the P300 component a manifestation of context updating? Behavioral and Brain Sciences, 11:357–374, 1988. ISSN 1469-1825. doi: 10.1017/S0140525X00058027.

[9] Manuel J. A. Eugster, Tuukka Ruotsalo, Michiel M. Spapé, Ilkka Kosunen, Oswald Barral, Niklas Ravaja, Giulio Jacucci, and Samuel Kaski. Predicting term-relevance from brain signals. In Proceedings of the 37th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '14, pages 425–434, New York, NY, USA, 2014. ACM. ISBN 978-1-4503-2257-7. doi: 10.1145/2600428.2609594.

[10] Phillip I. Good. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses. Springer, 2nd edition, 2000. ISBN 038798898X. doi: 10.1007/978-1-4757-3235-1.

[11] Peter Hagoort, Colin Brown, and Jolanda Groothusen. The syntactic positive shift (SPS) as an ERP measure of syntactic processing. Language and Cognitive Processes, 8(4):439–483, 1993.

[12] Uri Hanani, Bracha Shapira, and Peretz Shoval. Information filtering: Overview of issues, research and systems. User Modeling and User-Adapted Interaction, 11(3):203–259, August 2001. ISSN 0924-1868. doi: 10.1023/A:1011196000674.

[13] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2nd edition, 2009.

[14] O. Hauk and F. Pulvermüller. Effects of word length and frequency on the human event-related potential. Clinical Neurophysiology, 115(5):1090–1103, 2004. ISSN 1388-2457. doi: 10.1016/j.clinph.2003.12.020.

[15] Steven A. Hillyard and Marta Kutas. Electrophysiology of cognitive processing. Annual Review of Psychology, 34(1):33–61, 1983.

[16] Kalervo Järvelin and Jaana Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst., 20(4):422–446, October 2002. ISSN 1046-8188. doi: 10.1145/582415.582418.

[17] Karen Sparck Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11–21, 1972.

[18] Diane Kelly and Jaime Teevan. Implicit feedback for inferring user preference: A bibliography. SIGIR Forum, 37(2):18–28, September 2003. ISSN 0163-5840. doi: 10.1145/959258.959260.

[19] Albert Kok. Event-related-potential (ERP) reflections of mental resources: A review and synthesis. Biological Psychology, 45(1):19–56, 1997. doi: 10.1016/S0301-0511(96)05221-0.

[20] Boris Kotchoubey and Simone Lang. Event-related potentials in an auditory semantic oddball task in humans. Neuroscience Letters, 310(2):93–96, 2001. doi: 10.1016/S0304-3940(01)02057-2.

[21] M. Kutas and S. A. Hillyard. Brain potentials during reading reflect word expectancy and semantic association. Nature, 307(5947):161–163, 1984.

[22] T. F. Münte, H. J. Heinze, M. Matzke, B. M. Wieringa, and S. Johannes. Brain potentials and syntactic violations revisited: No evidence for specificity of the syntactic positive shift. Neuropsychologia, 36(3):217–226, 1998.

[23] Thomas F. Münte, Bernardina M. Wieringa, Helga Weyerts, Andras Szentkuti, Mike Matzke, and Sönke Johannes. Differences in brain potentials to open and closed class words: Class and frequency effects. Neuropsychologia, 39(1):91–102, 2001. ISSN 0028-3932. doi: 10.1016/S0028-3932(00)00095-6.

[24] Markus Ojala and Gemma C. Garriga. Permutation tests for studying classifier performance. Journal of Machine Learning Research, 11:1833–1863, 2010.

[25] R. C. Oldfield. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1):97–113, 1971.

[26] L. Osterhout, R. McKinnon, M. Bersick, and V. Corey. On the language specificity of the brain response to syntactic anomalies: Is the syntactic positive shift a member of the P300 family? Journal of Cognitive Neuroscience, 8(6):507–526, 1996.

[27] Ken A. Paller and Marta Kutas. Brain potentials during memory retrieval provide neurophysiological support for the distinction between conscious recollection and priming. Journal of Cognitive Neuroscience, 4(4):375–392, 1992. doi: 10.1162/jocn.1992.4.4.375.

[28] Lucas C. Parra, Clay D. Spence, Adam D. Gerson, and Paul Sajda. Recipes for the linear analysis of EEG. NeuroImage, 28:326–341, 2005.

[29] Adolf Pfefferbaum, Brant G. Wenegrat, Judith M. Ford, Walton T. Roth, and Bert S. Kopell. Clinical application of the P3 component of event-related potentials. II. Dementia, depression and schizophrenia. Electroencephalography and Clinical Neurophysiology/Evoked Potentials Section, 59(2):104–124, 1984. doi: 10.1016/0168-5597(84)90027-3.
[30] Jay M. Ponte and W. Bruce Croft. A language modeling approach to information retrieval. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98, pages 275–281, New York, NY, USA, 1998. ACM. ISBN 1-58113-015-5. doi: 10.1145/290941.291008.

[31] M. F. Porter. An algorithm for suffix stripping. In Karen Sparck Jones and Peter Willett, editors, Readings in Information Retrieval, pages 313–316. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1997. ISBN 1-55860-454-5.

[32] Michael D. Rugg, Ruth E. Mark, Peter Walla, Astrid M. Schloerscheidt, Claire S. Birch, and Kevin Allan. Dissociation of the neural correlates of implicit and explicit memory. Nature, 392:595–598, 1998. doi: 10.1038/33396.

[33] Tuukka Ruotsalo, Jaakko Peltonen, Manuel J. A. Eugster, Dorota Glowacka, Ksenia Konyushkova, Kumaripaba Athukorala, Ilkka Kosunen, Aki Reijonen, Petri Myllymäki, Giulio Jacucci, and Samuel Kaski. Directing exploratory search with interactive intent modeling. In Proceedings of the 22nd ACM International Conference on Information and Knowledge Management, CIKM '13, pages 1759–1764, New York, NY, USA, 2013. ACM. ISBN 978-1-4503-2263-8. doi: 10.1145/2505515.2505644.

[34] Tuukka Ruotsalo, Giulio Jacucci, Petri Myllymäki, and Samuel Kaski. Interactive intent modeling: Information discovery beyond search. Commun. ACM, 58(1):86–92, December 2015. ISSN 0001-0782. doi: 10.1145/2656334.

[35] J. Sassenhagen, M. Schlesewsky, and I. Bornkessel-Schlesewsky. The P600-as-P3 hypothesis revisited: Single-trial analyses reveal that the late EEG positivity following linguistically deviant material is reaction time aligned. Brain and Language, 137:29–39, 2014.

[36] Juliane Schäfer and Korbinian Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4(1), 2005. doi: 10.2202/1544-6115.1175.

[37] Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar, and David M. Pennock. Methods and metrics for cold-start recommendations. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '02, pages 253–260, New York, NY, USA, 2002. ACM. ISBN 1-58113-561-0. doi: 10.1145/564376.564421.

[38] Michiel M. Spapé, Guido P. H. Band, and Bernhard Hommel. Compatibility-sequence effects in the Simon task reflect episodic retrieval but not conflict adaptation: Evidence from LRP and N2. Biological Psychology, 88(1):116–123, 2011. doi: 10.1016/j.biopsycho.2011.07.001.

[39] Kenneth C. Squires, Emanuel Donchin, Ronald I. Herning, and Gregory McCarthy. On the influence of task relevance and stimulus probability on event-related-potential components. Electroencephalography and Clinical Neurophysiology, 42(1):1–14, 1977. doi: 10.1016/0013-4694(77)90146-8.

[40] S. Sutton, M. Braren, J. Zubin, and E. R. John. Evoked-potential correlates of stimulus uncertainty. Science, 150:1187–1188, 1965.

[41] Samuel Sutton, Patricia Tueting, Joseph Zubin, and E. R. John. Information delivery and the sensory evoked potential. Science, 155(3768):1436–1439, 1967. doi: 10.1126/science.155.3768.1436.
[42] Carmen Vidaurre and Benjamin Blankertz. Towards a cure for BCI illiteracy. Brain Topography, 23(2):194–198, 2010. doi: 10.1007/s10548-009-0121-6.

[43] Wikimedia. Wikimedia downloads, 2014. https://dumps.wikimedia.org/.

[44] Jing-Hao Xue and D. Michael Titterington. Do unbalanced data have a negative effect on LDA? Pattern Recognition, 41(5):1558–1571, 2008. doi: 10.1016/j.patcog.2007.11.008.

[45] Thorsten O. Zander, Christian Kothe, Sabine Jatzev, and Matti Gaertner. Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction, chapter Enhancing Human-Computer Interaction with Input from Active and Passive Brain-Computer Interfaces, pages 181–199. Springer London, London, 2010. ISBN 978-1-84996-272-8. doi: 10.1007/978-1-84996-272-8_11.

[46] Chengxiang Zhai and John Lafferty. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '01, pages 334–342, New York, NY, USA, 2001. ACM. ISBN 1-58113-331-6. doi: 10.1145/383952.384019.
