The meaning-frequency law in Zipfian optimization models of communication
According to Zipf’s meaning-frequency law, words that are more frequent tend to have more meanings. Here it is shown that a linear dependency between the frequency of a form and its number of meanings is found in a family of models of Zipf’s law for word frequencies. This is evidence for a weak version of the meaning-frequency law. Interestingly, that weak law (a) is not an inevitable property of the assumptions of the family and (b) is found at least in the narrow regime where those models exhibit Zipf’s law for word frequencies.
💡 Research Summary
The paper investigates the meaning‑frequency law – the empirical observation that more frequent words tend to have more meanings – within a family of Zipfian optimization models of communication. Building on the information‑theoretic framework introduced by Ferrer i Cancho and colleagues, the authors represent the lexicon as a bipartite network linking a set of forms (words) F to a set of meanings M through a binary adjacency matrix A(i,j). The degree k_i of a form i corresponds to the number of meanings it encodes, while the degree l_j of a meaning j indicates how many forms can express it.
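The bipartite representation above can be sketched in a few lines of NumPy. The toy adjacency matrix below is purely illustrative (not taken from the paper); it shows how the form degrees k_i and meaning degrees l_j fall out as row and column sums of A:

```python
import numpy as np

# Hypothetical toy lexicon: 3 forms x 4 meanings.
# A[i, j] = 1 iff form i is linked to meaning j.
A = np.array([
    [1, 1, 0, 0],   # form 0 encodes meanings 0 and 1
    [0, 1, 1, 0],   # form 1 encodes meanings 1 and 2
    [0, 0, 0, 1],   # form 2 encodes meaning 3 only
])

k = A.sum(axis=1)  # k_i: number of meanings of form i (form degrees)
l = A.sum(axis=0)  # l_j: number of forms that can express meaning j

print(k)  # [2 2 1]
print(l)  # [1 2 1 1]
```

Note that meaning 1 is ambiguous in the opposite direction: two forms can express it (l_1 = 2), which is the kind of synonymy the listener-side cost penalizes.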
Communication cost is modeled as a weighted sum of speaker effort and listener effort: C = α H(F) + (1−α) H(M|F), where H(F) is the entropy of the form distribution (the speaker’s production cost) and H(M|F) is the conditional entropy of meanings given forms (the listener’s decoding cost). The parameter α ∈ [0, 1] controls the trade-off between the two costs.