Polysemy
Michel Bréal and Stephen Ullmann, both pioneers in the modern scientific study of semantics, believed that polysemy is an emblem of civilized societies (Bréal 1900; Ullmann 1963): in such societies, developments in culture and technology prompt the extension of the meanings of existing lexemes to fill the denotational gaps created by real-world invention and innovation. The study of polysemy has expanded considerably in the last two decades or so, as language scholars examine the very nature of lexical meaning, trying to determine which modulations of meaning can be ascribed to context alone, and which can be meaningfully identified as context-independent sense variation. The recent increase in interest in polysemy follows naturally from the resurgence of interest in lexical semantics in general, which in turn results from the lexicalization of the study of syntax and of the syntax/semantics interface. The growth of natural language processing has prompted interest among computer scientists, in addition to the longer-standing interest of linguists, philosophers, and psychologists. Moreover, linguistic theorists have moved to reconceptualize the lexicon as a structured inventory of generalizations (semantic, as well as phonological and morphological), not just a compendium of idiosyncrasies; this contrasts with the earlier generative-grammar distinction between ‘grammar’ and lexicon. As with other types of lexical generalization, one topic in polysemy research has been whether relationships between senses can be stated in terms of general principles, rather than as the idiosyncratic, generally culture-based developments characteristic of the links between senses studied in the previous century and the early part of this one.