The Computational Structure of Unintentional Meaning

Mark K. Ho (mho@princeton.edu)
Department of Psychology, Princeton University, Princeton, NJ 08540

Joanna Korman* (jkorman@mitre.org)
The MITRE Corporation, Bedford, MA 01730

Thomas L. Griffiths (tomg@princeton.edu)
Department of Psychology, Princeton University, Princeton, NJ 08540

Abstract

Speech-acts can have literal meaning as well as pragmatic meaning, but these both involve consequences typically intended by a speaker. Speech-acts can also have unintentional meaning, in which what is conveyed goes above and beyond what was intended. Here, we present a Bayesian analysis of how, to a listener, the meaning of an utterance can significantly differ from a speaker's intended meaning. Our model emphasizes how comprehending the intentional and unintentional meaning of speech-acts requires listeners to engage in sophisticated model-based perspective-taking and reasoning about the history of the state of the world, each other's actions, and each other's observations. To test our model, we have human participants make judgments about vignettes where speakers make utterances that could be interpreted as intentional insults or unintentional faux pas. In elucidating the mechanics of speech-acts with unintentional meanings, our account provides insight into how communication both functions and malfunctions.

Keywords: Bayesian modeling, social cognition, common ground, speech-act theory, faux pas, theory of mind

Introduction

People sometimes communicate things that they did not intend or expect. Consider the following vignette, adapted from Baron-Cohen et al. (1999):

Curtains
Paul had just moved into a new apartment. Paul went shopping and bought some new curtains for his bedroom. After he returned from shopping and had put up the new curtains in the bedroom, his best friend, Lisa, came over.
Paul gave her a tour of the apartment and asked, "How do you like my bedroom?" "Those curtains are horrible," Lisa said. "I hope you're going to get some new ones!"

Clearly, Lisa committed a social blunder or faux pas with her remark. What happened here? When Lisa says, "Those curtains look horrible," she is merely stating her private aesthetic experience of the curtains. The literal meaning is straightforward: The curtains look bad. And the intended or expected meaning of her utterance is largely captured by this literal meaning. However, to Paul, the utterance means more. Specifically, what Lisa is really saying is that he chose horrible curtains. Of course, Lisa did not "really" say that Paul's choice in curtains was horrible; she had no intention of conveying such an idea. Paul might even realize this. Nonetheless, the remark stings. Why? Lisa and Paul each possess a piece of a puzzle, and when put together, the pieces entail that Paul has awful taste in curtains. At the outset, neither one knew that they each had a piece of a puzzle. But once Lisa makes her remark, she inadvertently completes the puzzle, at least from Paul's perspective.

Standard models of communication (Grice, 1957; Sperber & Wilson, 1986) tend to focus on how people use language successfully. For example, people can imply more than they literally mean (Carston, 2002), convey subtle distinctions via metaphor (Tendahl & Gibbs Jr, 2008), and manage their own and others' public face using politeness (Brown & Levinson, 1987; Yoon, Frank, Tessler, & Goodman, 2018). But things do not always go smoothly, as Paul and Lisa's situation indicates.

* The author's affiliation with The MITRE Corporation is provided for identification purposes only, and is not intended to convey or imply MITRE's concurrence with, or support for, the positions, opinions, or viewpoints expressed by the author.
Sometimes people find themselves having inadvertently stepped on conversational landmines, meaning things that they never anticipated meaning. Notably, because such situations present complex dilemmas of mutual perspective-taking against a backdrop of divergent knowledge, they can serve as advanced tests of theory of mind (Baron-Cohen et al., 1999; Zalla, Sav, Stopin, Ahade, & Leboyer, 2009; Korman, Zalla, & Malle, 2017). But how do people reason about such dilemmas? And how can this be understood computationally? Disentangling unintentional meaning can shed light on how communication works in a broader social context as well as inform the design of artificial intelligences that interact with people.

Here, we develop a rational, cognitive account of interpreting unintentional speech-acts that builds on existing Bayesian models of language (e.g., Rational Speech Act [RSA] models [Goodman & Frank, 2016]). To do this, we analyze the general epistemic structure of social interactions such as the one described above and model listeners engaging in model-based perspective-taking. In particular, our model explains how the same utterance could be interpreted as either an (unintentional) faux pas or an intentional insult depending on the context of a listener and speaker's interaction. We then test several model predictions in an experiment with human participants. In the following sections, we outline our computational model, experimental results, and their implications.

A Bayesian Account of Unintentional Meaning

During social interactions, people reason about the world as well as each other's perspective on the world (Brown-Schmidt & Heller, 2018). Thus, our account has two components, which we formulate as probabilistic models. First, we specify a world model that captures common-sense relationships between world states, actions, and events.
Second, we define agent models of a speaker and listener reasoning about the world and one another.

World Model

We model the interaction as a partially observable stochastic game (POSG), a generalization of Markov Decision Processes (MDPs) to multiple agents with private observations (Kuhn, 1953). Formally, a world model is a tuple W = ⟨I, S, A, Z, T⟩ where:

• I is a set of n agents indexed 1, ..., n;
• S is a set of possible states of the world, where each state s ∈ S is an assignment to k variables, s = (x_0, x_1, ..., x_k);
• A = ∏_{i ∈ I} A^i is the set of joint actions, i.e., every combination of each agent i's actions, A^i (including utterances);
• Z = ∏_{i ∈ I} Z^i is the set of joint private observations, which is every possible combination of each individual agent i's private observation set, Z^i; and
• T = P(z, s' | s, a) is a transition function representing the probability of a joint observation z and next state s' given that a previous state s ∈ S and joint action a ∈ A was taken.

In Curtains, the initial state, s_0, includes Paul with the old curtains in the apartment and Lisa elsewhere. There is also a latent state feature of interest: whether Paul has good or bad taste. At t = 0, Paul's action, a_0^Paul, is choosing new curtains, while Lisa's action, a_0^Lisa, is going to the apartment. The joint action, a_0 = (a_0^Paul, a_0^Lisa), results in a new state, s_1, with them both in the apartment, the curtains either good or bad, and Paul's taste. Paul's observation, z_0^Paul, but not Lisa's, z_0^Lisa, includes Paul having put up the curtains. These relationships between world states (e.g., Paul and Lisa's locations), actions (e.g., Lisa walking to Paul's apartment), and observations (e.g., Paul observing himself put up the curtains) are formally encoded in the transition function T.
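The POSG tuple above can be made concrete with a small sketch. The following Python code (illustrative only; the paper's implementation is in WebPPL, and all names and the deterministic dynamics here are invented for the Curtains example) encodes a state as a variable assignment, joint actions over two agents, and a transition function that returns a distribution over next states paired with each agent's private observations:

```python
from itertools import product
from typing import Dict, Tuple

AGENTS = ("Paul", "Lisa")

# A state assigns values to a few variables: locations, curtain quality, Paul's taste.
State = Tuple[str, str, str, str]  # (paul_loc, lisa_loc, curtains, paul_taste)

# Each agent's action set A^i (utterances would be included here as well).
ACTIONS = {
    "Paul": ("choose-curtains", "wait"),
    "Lisa": ("go-to-apartment", "wait"),
}

def joint_actions():
    """The joint action set A = A^Paul x A^Lisa."""
    return list(product(ACTIONS["Paul"], ACTIONS["Lisa"]))

def transition(s: State, a: Tuple[str, str]) -> Dict[tuple, float]:
    """A deterministic slice of T = P(z, s' | s, a): returns a distribution
    over (next state, joint private observations) pairs."""
    paul_loc, lisa_loc, curtains, taste = s
    a_paul, a_lisa = a
    if a_paul == "choose-curtains":
        # Toy dynamics: bad taste makes the new curtains look bad.
        curtains = "good" if taste == "good" else "bad"
    if a_lisa == "go-to-apartment":
        lisa_loc = "apartment"
    s_next = (paul_loc, lisa_loc, curtains, taste)
    # Private observations: Paul sees his own act; Lisa only sees what she is co-located with.
    z_paul = ("put-up-curtains",) if a_paul == "choose-curtains" else ()
    z_lisa = ("curtains-" + curtains,) if lisa_loc == "apartment" else ()
    return {(s_next, (z_paul, z_lisa)): 1.0}

s0 = ("apartment", "elsewhere", "old", "bad")
print(transition(s0, ("choose-curtains", "go-to-apartment")))
```

Note how the asymmetry of private observations, central to the Curtains scenario, falls directly out of the co-location rule in the transition function.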
The sequence of states, joint actions, and observations resulting from such interactions constitutes the history up to a point t, h_t = (s_0, a_0, z_0, ..., s_{t-1}, a_{t-1}, z_{t-1}, s_t).

Agent Models

Agents are modeled as Bayesian decision-makers (Bernardo & Smith, 1994) who can reason about the world and other agents as well as take actions, including making utterances.

Interactive belief state. Agents' beliefs are probability distributions over variables that represent aspects of the current state, previous states, or each other's beliefs. The configuration of these first- and higher-order, recursive beliefs constitutes their interactive belief state (Gmytrasiewicz & Doshi, 2005). We refer to an agent i's beliefs as b^i. For example, if we denote Paul's taste as the variable T_Paul, then Paul's belief that his taste is good is b^Paul(T_Paul = Good). Higher-order beliefs can also be represented. For instance, we can calculate Paul's expectation of Lisa's belief in his taste as E_{b^Paul}[b^Lisa](T_Paul) = Σ_{b^Lisa} b^Paul(b^Lisa(T_Paul)).

An agent i's beliefs are a function of their prior, model of the world, model of other agents, and observation history up to time t, z_t^i. Note that z_t^i can include observations that are completely private to i (e.g., Lisa's personal aesthetic experience) as well as public actions and utterances (e.g., Lisa's remark to Paul). Thus, we denote Paul's belief about his taste at a time t as b_t^Paul(T_Paul) = b^Paul(T_Paul | z_t^Paul). Given a sequence of observations, z_t^i, posterior beliefs about a variable X are updated via Bayes' rule:

    b(X | z_t^i) ∝ b(z_t^i | X) b(X)                      (1)
                 = Σ_{h_t} b(z_t^i | h_t) b(h_t, X)       (2)

The capacity to reason about higher-order beliefs (e.g., Paul's beliefs about Lisa's belief in his taste), along with Equation 2, expresses agents' joint inferences about events and model-based perspective-taking.
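The update in Equations 1-2 amounts to enumerating candidate histories, weighting each by the likelihood of the agent's private observations, and marginalizing onto the variable of interest. A minimal Python sketch (the toy history space and likelihood below are invented for illustration, not the paper's actual model):

```python
def posterior(histories, likelihood, observed, variable_of):
    """b(X | z) proportional to sum_h b(z | h) b(h, X)  (Equations 1-2)."""
    weights = {}
    for h, prior in histories.items():
        w = prior * likelihood(observed, h)  # b(z | h) b(h)
        x = variable_of(h)                   # project history onto X
        weights[x] = weights.get(x, 0.0) + w
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

# Toy histories: (who chose the curtains, Paul's taste), with prior probabilities.
histories = {
    ("paul-chose", "good"): 0.45,
    ("paul-chose", "bad"): 0.05,
    ("came-with-apartment", "good"): 0.45,
    ("came-with-apartment", "bad"): 0.05,
}

def likelihood(obs, h):
    # Paul's private observation of himself putting up the curtains is only
    # consistent with histories in which he chose them.
    return 1.0 if (obs == "saw-self-choose") == (h[0] == "paul-chose") else 0.0

# Paul's belief about his taste, given his private observation history.
b_paul = posterior(histories, likelihood, "saw-self-choose", variable_of=lambda h: h[1])
print(b_paul)
```

Conditioning on a private observation prunes inconsistent histories; the posterior over taste here is still driven by the prior, which is exactly why a later utterance can carry so much unanticipated information.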
Speaker Model

Speakers have beliefs and goals. When choosing what to say, they may have beliefs and goals with respect to the listener's beliefs and goals. In our example, Lisa may care about being informative about how she sees the curtains, but may also think Paul cares about having good taste in curtains and care whether she hurts his feelings. Following previous work (e.g., Franke, 2009), we model speakers as reasoning about changes in belief states. Here, we are interested in how a speaker can intend to mean one thing but inadvertently mean another. Thus, we distinguish between state variables that the speaker wants to be informative about, X_Info (e.g., how Lisa sees the curtains), and evaluative variables, X_Eval, that the listener wants to take on a specific value x_Eval* (e.g., Paul's taste being good). The speaker then cares about the changes in those quantities. Formally:

    Δ_t^{L-Info} = b_{t+1}^L(X_Info = x_Info) − b_t^L(X_Info = x_Info),    (3)

where x_Info is given by h_t; and,

    Δ_t^{L-Eval} = b_{t+1}^L(X_Eval = x_Eval*) − b_t^L(X_Eval = x_Eval*).    (4)

A speaker who is interested in what the listener thinks about X_Info and X_Eval will, at a minimum, anticipate how their utterances will influence Δ_t^{L-Info} and Δ_t^{L-Eval}. A speaker would then have a reward function defined as:

    R^S(a_t^S, z_{t+1}^L) = θ_{L-Info} Δ_t^{L-Info} + θ_{L-Eval} Δ_t^{L-Eval},    (5)

where the θ terms correspond to how the speaker values certain outcomes in the listener's mental state. For instance, if θ_{L-Eval} < 0, the speaker wants to insult the listener.

Given Equation 5, a speaker can choose utterances based on expected future utility/rewards (or value [Sutton & Barto, 1998]), where the expectation is taken with respect to the speaker's beliefs, b_t^S. That is, given observations z_t^S, the value of a_t^S is V^S(a_t^S; z_t^S) = E_{b_t^S}[R^S(a_t^S, z_{t+1}^L)], and an action is chosen using a Luce choice rule (Luce, 1959).

Figure 1: Model and example of unintentional meaning. (a) Influence diagram with state, action, and observation dependencies. Circles correspond to world state (e.g., s_t) and observation (e.g., z_t^i) variables; squares correspond to agent action variables (including utterances) (e.g., a_t^i). (b) Event sequence in Curtains (top) and speaker observation history (bottom). Lisa does not observe Paul choose the curtains. Only Lisa experiences whether the curtains look good or bad and comments on this experience. (c) Diagram of the interactive belief state over time in Curtains. Lisa believes that Paul did not choose the curtains and that he has good taste. Paul actually chose the curtains and does not have good taste. Lisa does not like the curtains. Paul initially believes that Lisa liked the curtains and that he has good taste. Lisa's remark leads Paul to realize that Lisa did not like the curtains, and that he does not have good taste.

Listener Inference

Our goal is to characterize how a listener's interpretation of an utterance can differ from a speaker's intended meaning, which requires specifying listener inferences. We start with a simple listener that understands the literal meanings of words when spoken. Following previous models (Franke, 2009; Goodman & Frank, 2016), the literal meaning of an utterance a^S is determined by its truth-functional denotation, which maps histories to Boolean truth values, ⟦a^S⟧ : h_t ↦ y, y ∈ {True, False}.
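The speaker's decision rule combining Equation 5 with a Luce (softmax) choice can be sketched in a few lines. This is a hedged Python illustration, not the paper's WebPPL code: the anticipated Δ values attached to each utterance are invented stand-ins for the expectations E_{b_S}[Δ] a speaker would compute from their beliefs.

```python
import math

def speaker_reward(delta_info, delta_eval, theta_info=1.0, theta_eval=1.0):
    """Equation 5: a weighted sum of anticipated changes in the listener's beliefs."""
    return theta_info * delta_info + theta_eval * delta_eval

def luce_choice(values, temperature=1.0):
    """Luce/softmax choice rule: P(u) proportional to exp(V(u) / temperature)."""
    exps = {u: math.exp(v / temperature) for u, v in values.items()}
    z = sum(exps.values())
    return {u: e / z for u, e in exps.items()}

# Hypothetical anticipated effects of each utterance, from the speaker's view.
# In the diverging history, the speaker expects informativeness but no offense.
anticipated = {
    "curtains-look-bad": (0.8, 0.0),
    "curtains-look-good": (-0.2, 0.0),
    "say-nothing": (0.0, 0.0),
}

values = {u: speaker_reward(di, de) for u, (di, de) in anticipated.items()}
policy = luce_choice(values)
print(policy)
```

Under these stand-in expectations, the honest but offensive remark is the most probable utterance precisely because the speaker does not anticipate any negative evaluative effect.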
A literal listener's model of speaker utterances is:

    b(a^S | h_t) ∝ 1 − ε    if ⟦a^S⟧(h_t) = True
                   ε        otherwise

where ε is a small probability of a^S being said even if it happens to be false.

We can also posit a more sophisticated listener who, rather than assuming utterances literally reflect reality, reasons about how a speaker's beliefs and goals mediate their use of language. This type of listener draws inferences based on an intentional model of a speaker that tracks the quantities in Equations 3 and 4 and maximizes expected rewards. These inferences, however, occur while the listener is also reasoning about the actual sequence of events h_t, making it possible to draw inferences based on utterances that the speaker did not anticipate.

Model Simulations

In the original Curtains scenario, Lisa was not present when Paul put up the curtains. As a result, Lisa's comment ("Those curtains are horrible") is interpreted in a diverging observation history context. But what if Lisa had been present when Paul put up the curtains and made the same utterance? Given a shared observation history, Lisa's utterance is still offensive, but now Lisa has all the information needed to realize it would be offensive. Put simply, in the diverging history context, the utterance is a faux pas, whereas in the shared history context, it is an intentional insult.

Figure 2: (a) Model predictions. The model predicts that the listener's change in belief in the evaluative variable (Δ_t^{L-Eval}) is equally negative in the diverging and shared history scenarios. However, whether the speaker anticipated the offensiveness of their comment differs between the two scenarios, as do the listener's beliefs about the speaker's anticipation. (b) Judgments from all participants by question. Responses were normalized depending on whether response scales were valenced (Q1), likelihood (Q2-Q7), or qualitative (Q8). (c) Judgments from participants who correctly identified whether the speaker knew the listener modified the object. *: p < .05, **: p < .01, ***: p < .001.

In this section, we discuss how our model can be used to make these intuitive predictions precise and explain how they arise from agents' interactions and model-based perspective-taking within a shared environment. We implemented our model in WebPPL (Goodman & Stuhlmüller, 2014), a programming language that can express stochastic processes like POSGs as well as Bayesian inference.

Generative Model

To model a scenario like Curtains, we define agents, objects, and features assigned to them. These are the curtains, which have a location (inside Paul's apartment); the speaker (Lisa), who has a location (inside or outside Paul's apartment) and a perception of the curtains (good or bad); and the listener (Paul), who has a location (inside or outside) and an ability to choose curtains (high or low). Additionally, the listener can either act on the curtains or not, while the speaker can enter the apartment and make an utterance about the curtains ("the curtains look good", "the curtains look bad", or say nothing). The truth-conditional semantics of the utterances map onto world features in a standard manner, and we set ε = .05. Observations depend on whether agents and objects are co-located and are defined as subsets of state and action variables.
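The literal listener described above, with its truth-functional semantics softened by ε, can be inverted via Bayes' rule to recover a posterior over histories given an utterance. A small Python sketch (the two-history world and the denotations here are illustrative stand-ins, not the paper's generative model):

```python
EPS = 0.05  # probability of an utterance being made even when false

def utterance_likelihood(utterance, history, semantics, eps=EPS):
    """b(a_S | h): 1 - eps if the denotation [[a_S]](h) is True, else eps."""
    return 1.0 - eps if semantics[utterance](history) else eps

def literal_listener(utterance, prior, semantics):
    """Posterior over histories, inverting the utterance likelihood."""
    scores = {h: p * utterance_likelihood(utterance, h, semantics)
              for h, p in prior.items()}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# Two candidate histories and truth-functional denotations for two utterances.
prior = {"curtains-good": 0.5, "curtains-bad": 0.5}
semantics = {
    "look-bad": lambda h: h == "curtains-bad",
    "look-good": lambda h: h == "curtains-good",
}

posterior_after_remark = literal_listener("look-bad", prior, semantics)
print(posterior_after_remark)
```

With ε = .05 and a uniform prior, hearing "look-bad" shifts the listener to 0.95 confidence that the curtains are bad; the sophisticated listener layers the intentional speaker model on top of exactly this kind of inversion.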
For instance, if Paul and Lisa are both inside the house and Paul modifies the curtains, they both observe that Paul acted on the curtains, but only Lisa directly knows whether they look good to her. Finally, we define a state and action prior for both agents such that the listener's ability is initially high (p = 0.90), the speaker's perception of the object is initially random (p = 0.50), and the listener has a low probability of modifying the object (p = 0.05).

Model Predictions

Given the generative model, we can provide scenarios and calculate aspects of the resulting interactive belief state (the listener and speaker's beliefs about the world and each other's beliefs). In particular, we compare the results of a shared history with those of a diverging history. In the shared history, the speaker and listener are both present when the listener modifies the object, whereas in the diverging history, the speaker is not present when the listener acts on the object. Otherwise, the two scenarios are the same, and the speaker comments on the curtains being bad.

Figure 2a displays the results of the simulation when given each of the two histories. In both histories, the listener learns that their ability when modifying the object, X_Eval, is low (i.e., Δ_t^{L-Eval} < 0). They also learn about the informative variable (i.e., Δ_t^{L-Info} > 0). However, the resulting interactive belief states differ in important ways. For example, in the diverging history, although the listener concludes that the evaluative variable is low, the speaker thinks the evaluative variable is high. Relatedly, the speaker thinks the utterance was informative (E_{b^S}[Δ^{L-Info}] > 0) but not offensive (E_{b^S}[Δ^{L-Eval}] = 0). Moreover, the listener knows the speaker believes that their comment was expected to be informative and not offensive.
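The qualitative contrast between the two conditions can be distilled into a toy calculation. This sketch is a deliberately simplified caricature of the simulation (point-mass updates rather than full posteriors, with the prior p = 0.90 on high ability taken from the text): the listener's belief change about their ability is identical in both conditions, while the speaker's anticipated effect depends on whether she observed the listener act.

```python
def listener_delta_eval(prior_ability_high=0.90):
    """Delta^{L-Eval}: after hearing "looks bad" about an object he knows he
    modified, the listener concludes his ability is low (toy point-mass update)."""
    posterior_ability_high = 0.0
    return posterior_ability_high - prior_ability_high

def speaker_expected_delta_eval(speaker_saw_listener_act):
    """E_{b_S}[Delta^{L-Eval}]: the speaker anticipates the listener's inference
    only if she observed the listener act on the object (shared history)."""
    return listener_delta_eval() if speaker_saw_listener_act else 0.0

for condition, saw in [("shared", True), ("diverging", False)]:
    print(condition, listener_delta_eval(), speaker_expected_delta_eval(saw))
```

The remark hurts equally in both conditions, but only in the shared history does the speaker see it coming, which is the model's signature of insult versus faux pas.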
In the shared history, this is not the case: The listener and speaker both believe the evaluative variable is low, and they both know the resulting informational and evaluative effects. Because they were both present when the listener modified the object, they share expectations about the utterance's meaning.

Put intuitively, whereas the shared history leads to an expected insult, the diverging history leads to a faux pas. Our model explains this difference in terms of differential transformations of the listener and speaker's interactive belief state.

Experiment

Our model explains how different observation histories result in interactive belief states, which can produce unintentional meaning. To test whether this accurately describes people's capacity to reason about unintentional meaning, we had people read vignettes that described scenarios involving shared or diverging observation histories. The underlying logical structure of all the vignettes mirrored that of Curtains, and so the model predictions described in the previous section apply to all of them. Participants then provided judgments corresponding to predicted differences in listener/speaker beliefs. The study's main hypotheses were preregistered on the Open Science Framework platform (https://osf.io/84wqn). Overall, we find that our model captures key qualitative features of people's inferences.

Materials

We developed a set of vignettes that included interactions in different contexts as well as different histories of interaction. Each vignette involved a listener (e.g., Paul) who could potentially interact with an object (e.g., curtains) as well as a speaker (e.g., Lisa) who makes an utterance about their negative aesthetic experience of the object (e.g., "The curtains look horrible"). In the shared history versions of the vignettes, the two agents were described as being both present when the listener acted on an object.
In the diverging history versions of the vignettes, the speaker was not present when the listener interacted with the object. Each vignette involved one of five contexts: Curtain, Story-Prize, Wine-bottle, Cupcakes, and Parking. Thus there were a total of ten items (Diverging/Shared history × 5 contexts). All items used in the experiment are available on the primary author's website.

Procedure

One hundred participants were recruited via MTurk to participate in our experiment using PsiTurk (Gureckis et al., 2016). Each participant read one of the ten context-history items, and then answered the following questions in order:

• Q1: At this point, how does [listener] feel about their ability to [perform the activity]? [6-point scale ranging from "Very Bad" to "Very Good" with no neutral option]
• Q2: [Listener] thinks that [speaker] expected that their remark would make them feel [feeling].
• Q3: [Listener] thinks that in making the remark, [speaker] wanted to make them feel [feeling].
• Q4: [Listener] thinks that [speaker] thinks that [the listener acted on the object].
• Q5: [Speaker] knew that [the listener acted on the object].
• Q6: In making the remark, [speaker] expected [listener] to feel [feeling].
• Q7: In making the remark, [speaker] wanted [listener] to feel [feeling].
• Q8: How awkward is this situation? [5-point scale ranging from "Not at all" to "Extremely"]

The values for [listener], [speaker], and [the activity/object] were specified parametrically based on the context, while the value for [feeling] was filled in based on the answer to the first question. The response scale for Questions 2-7 was a six-point scale ranging from "Definitely Not" to "Definitely", with no neutral point. We included Question 8 because previous work studying faux pas has focused on this question (Zalla et al., 2009). Participants were also given free-response boxes to elaborate on their interpretation of the situation and answered demographic questions.

Question    β       S.E.   df     t       p
Q1         -0.06    0.07   94.0   -0.77
Q2          0.15    0.06   94.0    2.65   **
Q3          0.15    0.06   94.0    2.50   *
Q4          0.18    0.06   94.0    2.78   **
Q5          0.25    0.06   94.0    4.34   ***
Q6          0.14    0.06   94.0    2.53   *
Q7          0.15    0.06   94.0    2.64   **
Q8          0.04    0.05   94.0    0.78

Table 1: Tests for the Diverging/Shared history factor. *: p < .05, **: p < .01, ***: p < .001.
Experimental Results

Manipulation check. To assess whether the Diverging/Shared history manipulation worked, we examined responses to Q5 (whether the speaker knew the listener acted on the object). A comparison in which the responses were coded as Yes or No (i.e., above or below the middle of the response scale) showed that it was effective (χ²(1) = 7.92, p < .01). However, a number of participants (15 of 50 in Shared; 20 of 50 in Diverging) did not pass this manipulation check and gave opposite answers than implied by the stories. Whether their responses are included does not affect our qualitative results, and in our analyses we use the full data set. Figure 2c plots the results for those who passed this check.

Figure 3: Judgment correlations (Pearson's r).

Judgment differences. Responses paralleled the model predictions for the Shared versus Diverging history versions of the vignettes (Figure 2b). For each judgment, we fit mixed-effects linear models with context intercepts as a random effect and history as a fixed effect. Table 1 shows tests of significance on the Diverging/Shared history parameters. Judgments about the listener's feelings (Q1) were negative and not significantly different, indicating that people perceived the psychological impact (at least with respect to ability) of the utterance as roughly equivalent. In contrast, questions about the interactive belief state (the listener and speaker's beliefs about the world and each other's beliefs; Q2-Q7) differed as predicted by the model. In particular, participants thought that the speaker neither expected that their utterance would hurt the listener's feelings, nor wanted to do so. Participants judged that the listener recognized this as well.
Judgment correlations. Judgments among questions about higher-order mental states were strongly correlated, while those between the higher-order mental states and the listener's action were weaker (Figure 3). Specifically, judgments about speaker mental states (Q6, Q7) and listener beliefs about speaker mental states (Q2, Q3) were all highly correlated (all r ∈ [0.77, 0.91], p < .001). In contrast, questions about knowledge of the object being modified (Q4, Q5) were only moderately correlated with those about anticipated effects (Q2, Q3, Q6, Q7) (all r ∈ [0.48, 0.64], p < .001).

Discussion

People's actions can have unexpected consequences, and speech-acts are no different. To understand unintentional meaning, though, we need to characterize how a communicative act can lead to unanticipated epistemic consequences. Sometimes, a listener can learn something from an utterance that a speaker did not intend to convey or may not even believe (e.g., as in Curtains). Here, we have presented a Bayesian model and experiments testing how people reason about scenarios involving unintentional speech acts. Specifically, our account treats speech-acts as actions taken by a speaker that influence a shared interactive belief state: the beliefs each agent has about the world and each other's beliefs. In doing so, we can capture the inferences that underlie unintentional meaning.

The current work raises important empirical and theoretical questions about how people reason about interactive beliefs and unintentional meaning. For instance, our experiments focus on third-party judgments about how a listener interprets the unintended meanings of utterances, but further work would be needed to assess how listeners do this (e.g., when the victim of an offhand comment) or even how speakers can recognize this (e.g., realizing one has put their foot in their mouth).
Additionally, we have presented a Bayesian account of unintentional meaning in which agents reason about a large but finite set of possible histories of interaction. In everyday conversation, the space of possible histories can be much larger or even infinite. It is thus an open question how people can approximate the recursive inferences needed to make sense of unintentional meaning.

A rigorous characterization of unintentional meaning can deepen our understanding of how communication works in a broader social context. For example, attempts to build common ground through shared experience (Clark & Marshall, 1981; McKinley, Brown-Schmidt, & Benjamin, 2017) or manage face with polite speech (Brown & Levinson, 1987; Yoon et al., 2018) could be understood, in part, as strategies for forestalling unintentional meaning. And given that intentionality plays a key role in judgments of blame (Baird & Astington, 2004), phenomena like plausible deniability could be understood as people leveraging the possibility of unintentional meaning to covertly accomplish communicative goals (Pinker, Nowak, & Lee, 2008). Although further investigation is needed to test the extent to which people can track and influence interactive belief states (as well as how artificial agents can do so), this work provides a point of departure for computationally investigating these social and cognitive aspects of communication.

Acknowledgments

This material is based upon work supported by the NSF under Grant No. 1544924.

References

Baird, J. A., & Astington, J. W. (2004). The role of mental state understanding in the development of moral cognition and moral action. New Directions for Child and Adolescent Development, 2004(103), 37-49. doi: 10.1002/cd.96

Baron-Cohen, S., O'Riordan, M., Stone, V., Jones, R., & Plaisted, K. (1999). Recognition of faux pas by normally developing children and children with Asperger syndrome or high-functioning autism. Journal of Autism and Developmental Disorders, 29(5), 407-418. doi: 10.1023/A:1023035012436

Bernardo, J. M., & Smith, A. F. (1994). Bayesian theory. John Wiley & Sons.

Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage (Vol. 4). Cambridge University Press.

Brown-Schmidt, S., & Heller, D. (2018). Perspective-taking during conversation. In G. Gaskell & S. A. Rueschemeyer (Eds.), Oxford handbook of psycholinguistics (2nd ed.). Oxford: Oxford University Press.

Carston, R. (2002). Thoughts and utterances: The pragmatics of explicit communication. Blackwell Publishing.

Clark, H. H., & Marshall, C. R. (1981). Definite reference and mutual knowledge. In Elements of discourse understanding.

Franke, M. (2009). Signal to act: Game theory in pragmatics. Institute for Logic, Language and Computation.

Gmytrasiewicz, P. J., & Doshi, P. (2005). A framework for sequential planning in multi-agent settings. Journal of Artificial Intelligence Research, 24, 49-79.

Goodman, N. D., & Frank, M. C. (2016). Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences, 20(11), 818-829.

Goodman, N. D., & Stuhlmüller, A. (2014). The design and implementation of probabilistic programming languages. http://dippl.org. (Accessed: 2018-9-12)

Grice, H. P. (1957). Meaning. The Philosophical Review, 66(3), 377-388.

Gureckis, T. M., Martin, J., McDonnell, J., Rich, A. S., Markant, D., Coenen, A., ... Chan, P. (2016). psiTurk: An open-source framework for conducting replicable behavioral experiments online. Behavior Research Methods, 48(3), 829-842.

Korman, J., Zalla, T., & Malle, B. F. (2017). Action understanding in high-functioning autism: The faux pas task revisited. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. J. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (pp. 2451-2456). Austin, TX: Cognitive Science Society.

Kuhn, H. (1953). Extensive games and the problem of information. In H. Kuhn & A. Tucker (Eds.), Contributions to the theory of games II (pp. 193-216). Princeton University Press.

Luce, R. D. (1959). On the possible psychophysical laws. Psychological Review, 66(2), 81.

McKinley, G., Brown-Schmidt, S., & Benjamin, A. (2017). Memory for conversation and the development of common ground. Memory and Cognition, 45(8), 1281-1294. doi: 10.3758/s13421-017-0730-3

Pinker, S., Nowak, M. A., & Lee, J. J. (2008). The logic of indirect speech. Proceedings of the National Academy of Sciences, 105(3), 833-838. doi: 10.1073/pnas.0707192105

Sperber, D., & Wilson, D. (1986). Relevance: Communication and cognition. Cambridge, MA: Harvard University Press.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.

Tendahl, M., & Gibbs Jr, R. W. (2008). Complementary perspectives on metaphor: Cognitive linguistics and relevance theory. Journal of Pragmatics, 40(11), 1823-1864.

Yoon, E. J., Frank, M. C., Tessler, M. H., & Goodman, N. D. (2018). Polite speech emerges from competing social goals. PsyArXiv. Retrieved from psyarxiv.com/67ne8. doi: 10.31234/osf.io/67ne8

Zalla, T., Sav, A.-M., Stopin, A., Ahade, S., & Leboyer, M. (2009). Faux pas detection and intentional action in Asperger syndrome: A replication on a French sample. Journal of Autism and Developmental Disorders, 39(2), 373-382. doi: 10.1007/s10803-008-0634-y
