Susan Fitzmaurice at DH & Conceptual Change event (photo: Mikko Tolonen)

A manifesto for studying conceptual change (Manifesto pt. 1 of 3)

As those who follow our Twitter account will know, Linguistic DNA’s principal investigator, Susan Fitzmaurice, was among the invited speakers at the recent symposium on Digital Humanities & Conceptual Change (organised by Mikko Tolonen, at the University of Helsinki). It was an opportunity to set out the distinctive approach being taken by our project and the theoretical understanding of concepts that underpins it. What follows is the first of three blog posts based on extracts from the paper, aka the Linguistic DNA ‘manifesto’. Susan writes:

Linguistic DNA’s goal is to understand the ways in which the concepts (or paradigmatic terms) that define modernity emerge in the universe of Early Modern discourse. The methodology we are committed to developing, testing and using, i.e. the bottom-up querying of a universe of printed discourse in English, demands that we take a fresh look at the notion of a concept and its content. So how will we operationalise a concept, and how will we recognise a concept in the data?

Defining the content of a concept from above

Historians and semanticists alike tend to start by identifying a set of key concepts and pursue their investigation by using a paradigmatic approach. For semanticists, this entails identifying a ‘concept’ in onomasiological terms as a bundle of (near-)synonyms that refer to aspects of the semantic space occupied by a concept in order to chart conceptual change in different periods and variation in different lects.

Historians, too, have identified key concepts through keywords or paradigmatic terms, which they then explore through historiography and the inspection of historical documents, seeking the evidence that underpins the emergence of particular terms and the forces and circumstances in which these change (Reinhart Koselleck’s Begriffsgeschichte or Quentin Skinner’s competing discourses). Semanticists and historians alike tend to approach concepts in a primarily semasiological way, for example, Anna Wierzbicka (2010) focuses on the history of evidence, and Naomi Tadmor (1996) uses ‘kin’ as a starting point for exploring concepts based on the meanings of particular words.

Philosophers of science, who are interested in the nature of conceptual change as driven or motivated by scientific inquiry and technological advances, may see concepts and conceptual change differently. For example, Ingo Brigandt (2010) argues that a scientific concept consists of a definition, its ‘inferential role’ or ‘reference potential’, and the epistemic goal pursued by the term’s use, in order to account for the rationality of semantic change in a concept. So the change in the meaning of ‘gene’, from the classical gene of the 1910s and 1920s (concerned with inheritance) to the molecular gene of the 1960s and 1970s (concerned with characteristics), can be shown to be motivated by the changing nature of the explanatory task required of the term ‘gene’. In such a case, the goal is to explain the way in which the scientific task changes the meaning associated with the terms, rather than exploring the change itself. Thus Brigandt tries to make it explicit that

‘apart from a changing meaning (inferential role) [the concept also has] an epistemic goal which is tied to a concept’s use and which is the property setting the standards for which changes in meaning are rational’ (2010: 24).

His understanding of the pragmatics-driven structure of a concept is a useful basis for the construction of conceptual change as involving polysemy through the processes of invited inference and conversational implicature (cf. Traugott & Dasher, 2002; Fitzmaurice, 2015).

In text-mining and information retrieval work in biomedical language processing, as reported in Genome Biology, concept recognition is used to extract information about gene names from the literature. William Baumgartner et al. (2008) argue that

‘Concepts differ from character strings in that they are grounded in well-defined knowledge resources. Concept recognition provides the key piece of information missing from a string of text—an unambiguous semantic representation of what the characters denote’ (2008: S4).

Admittedly, this is a very narrow definition, but given the range of different forms and expressions that a gene or protein might have in the text, the notion of concept recognition needs to go well beyond the character string and ‘identification of mentions in text’. So they developed ‘mention regularization’ procedures and disambiguation techniques as a basis for concept recognition involving ‘the more complex task of identifying and extracting protein interaction relations’ (Baumgartner et al. 2008: S7-15).

In LDNA, we are interested in investigating what people (in particular periods) would have considered to be emerging and important cultural and political concepts in their own time by exploring their texts. This task involves not identifying a set of concepts in advance and mining the literature of the period to ascertain the impact made by those concepts, but rather querying the literature to see what emerges as important. Therefore, our approach is neither semasiological, whereby we track the progress and historical fortunes of a particular term, such as marriage, democracy or evidence, nor onomasiological, whereby we inspect the paradigmatic content of a more abstract, yet given, notion such as TRUTH or POLITY. We have to take a further step back, to consider the kind of analysis that precedes the implementation of either a semasiological or an onomasiological study of the lexical material we might construct as a concept (e.g. as indicated by a keyword).

See the next post in this Manifesto series.

Distributional Semantics II: What does distribution tell us about semantic relations?


In a previous post, I outlined a range of meanings that have been discussed in conjunction with distributional analysis. The Linguistic DNA team is assessing what exactly it can determine about semantics based on distributional analysis: from encyclopaedic meaning to specific semantic relations. In my opinion, the idea that distributional data indicates ‘semantics’ has generally been a relatively vague one: what exactly about ‘semantics’ is indicated? In this post, I’d like to clarify what distribution can tell us about semantic relations in particular, including synonymy, hyponymy, and co-hyponymy.

In the Natural Language Processing (NLP) sphere, numerous studies have tested the effectiveness of distributional data in identifying semantic relations. Turney and Pantel (2010) provide a useful survey of such studies, many of which involve machine learning, and computer performance on synonymy tests including those found on English language exams. Examples of success on synonymy tests have employed windows of anything from +/-4 words up to +/-150 words, but such studies tend not to test various approaches against each other, and they rarely dissect the notion of synonymy, much less co-hyponymy or other semantic relations.
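To make the underlying method concrete, here is a minimal sketch of the kind of window-based distributional analysis these studies evaluate: each word is represented by a vector of counts of the words occurring within +/-n tokens of it, and candidate synonyms are ranked by the cosine similarity between those vectors. The toy corpus, window size, and function names are illustrative assumptions, not the cited authors' implementations, and real studies add weighting schemes (e.g. pointwise mutual information) and far larger corpora.

```python
from collections import Counter, defaultdict
import math

def cooccurrence_vectors(tokens, window):
    """Build a co-occurrence vector for each word type, counting
    every word that appears within +/-window tokens of it."""
    vectors = defaultdict(Counter)
    for i, word in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[word][tokens[j]] += 1
    return vectors

def cosine(v1, v2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

tokens = "the crow sat in the tree the robin sat in the tree".split()
vectors = cooccurrence_vectors(tokens, window=2)
# crow and robin occupy interchangeable slots, so their context
# vectors are more similar to each other than to tree's
print(cosine(vectors["crow"], vectors["robin"]))
```

Even on this toy corpus, crow and robin come out as distributionally similar because they share contexts, which is exactly the property the synonymy tests above exploit, and exactly why co-hyponyms (rather than true synonyms) are also retrieved.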

Only a few studies have tested distributional methods as indicators of specific semantic relations. The Quantitative Lexicology and Variational Linguistics (QLVL) team at KU Leuven has addressed this problem in several papers. For example, Peirsman et al. (2007) looked at evidence for synonymy, hyponymy, and co-hyponymy in proximity data for Dutch. (A hyponym is a word whose meaning is a member of a larger category – for example, a crow and a robin are both types of bird, so crow and robin are both hyponyms of bird, and crow and robin are co-hyponyms of each other, but they are not synonyms of each other). Peirsman et al. looked at raw proximity measures as well as proximity measures that incorporate syntactic dependency information. Their findings demonstrate that in Dutch, synonymy and hyponymy are more readily indicated by proximity analyses that include syntactic dependency. On the other hand, they show that co-hyponymy is most effectively evidenced by raw proximity measures that do not include syntactic information. This is a startling result, with fascinating implications for linguistic theory. Why should ignoring syntactic information provide better measures of co-hyponymy? Might English be similar? How about Early Modern English?

I think it is important to note that in Peirsman et al. (ibid.), only 6.3% of words that share similar distributional characteristics with a given word, or node, are synonyms of that node, and 4.0% are hyponyms of that node. Put differently, about 94% of words identified by distributional analysis aren’t synonyms, and around 70% of the words elicited in these measures are not semantically related to the node at all. Experienced corpus semanticists will not be surprised by this. But what happens to the majority of words, which aren’t related in any clear way? A computer algorithm will output all significant co-occurrences. Often, the co-occurrences that are not intuitively meaningful are quietly ignored by the researcher. It seems to me that if we are going to ignore such outputs, we must do so explicitly and with complete transparency. But this raises bigger questions: if we trust our methods, why should we ignore counterintuitive outputs? Or are these methods valuable simply as reproducible heuristics? I would argue that we should be transparent about our perspective on our own methods.
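The point about transparency can be made concrete in code: a distributional method returns a full ranked list of neighbours, and any decision to set some of them aside is a separate, documentable step rather than a silent deletion. The helper below, with invented scores and an invented keep-list, is a hypothetical sketch of that discipline, not any of the cited authors' procedures.

```python
def report_neighbours(scored, keep=None):
    """Sort candidate neighbours by similarity, marking (rather than
    silently dropping) the ones the researcher chooses to set aside."""
    ranked = sorted(scored.items(), key=lambda kv: -kv[1])
    return [(word, score, "kept" if keep is None or word in keep else "set aside")
            for word, score in ranked]

# Invented similarity scores for a node word, for illustration only
candidates = {"robin": 0.86, "bird": 0.71, "field": 0.55}
for word, score, status in report_neighbours(candidates, keep={"robin", "bird"}):
    print(f"{word}\t{score:.2f}\t{status}")
```

Publishing the full ranked output alongside the filtered one makes the analysis reproducible, and it forces the researcher to state a criterion for each exclusion.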

Also from QLVL, Heylen et al. (2008a) test which types of syntactic dependency relations are most effective at indicating synonymy in Dutch nouns, and find that Subject and Object relations most consistently indicate synonymy, but that adjective modification can give the best (though less consistent) indication of synonymy. In fact, adjective modification alone can be even better than a combined method using adjective modification and Subject/Object relations. Again, the findings are startling and fascinating: why would the consideration of Subject/Object relations actually hinder the effective use of adjective modification as evidence of synonymy? The answer is not entirely clear. In a comparable study, Van der Plas and Bouma (2005) found Direct Object relations and adjective modification to be the most effective relations in identifying synonymy in Dutch. Unlike Heylen et al. (2008a), however, Van der Plas and Bouma found that combining dependency relations improved synonym identification.

Is proximity data more effective in determining the semantics of highly frequent words? Heylen et al. (2008b) showed that in Dutch, high frequency nouns are more likely to collocate within +/-3 words with nouns that have a close semantic similarity, in particular synonyms and hyponyms. Low frequency nouns are less likely to do so. In addition, in Dutch, syntactic information is the best route to identifying synonymy and hyponymy overall, but raw proximity information is in fact slightly better at retrieving synonyms for medium-frequency nouns. These results, then, elaborate on the finding in Peirsman et al. (2007; above).

How about word class? Peirsman et al. (2008) suggest, among other things, that in Dutch, a window of +/-2 words best identifies semantic similarity for nouns, while +/-4 to 7 words is most effective for verbs.

For Linguistic DNA, it is important to know exactly what we can and can’t expect to determine based on distributional analysis. We plan to employ distributional analysis using a range of proximity windows as well as syntactic information. The team will continue to report on this question as we move forward.

*Castle Arenberg, in the photo above, is part of KU Leuven, home of QLVL and many of the studies cited in this post. (Credit: Juhanson. Licence: CC BY-SA 3.0.)

References

Heylen, Kris; Peirsman, Yves; Geeraerts, Dirk. 2008a. Automatic synonymy extraction: A Comparison of Syntactic Context Models. In Verberne, Suzan; van Halteren, Hans; Coppen, Peter-Arno (eds), Computational linguistics in the Netherlands 2007. Amsterdam: Rodopi, 101-16.

Heylen, Kris; Peirsman, Yves; Geeraerts, Dirk; Speelman, Dirk. 2008b. Modelling word similarity: An evaluation of automatic synonymy extraction algorithms. In: Calzolari, Nicoletta; Choukri, Khalid; Maegaard, Bente; Mariani, Joseph; Odijk, Jan; Piperidis, Stelios; Tapias, Daniel (eds), Proceedings of the Sixth International Conference on Language Resources and Evaluation. Marrakech: European Language Resources Association, 3243-49.

Peirsman, Yves; Heylen, Kris; Speelman, Dirk. 2007. Finding semantically related words in Dutch. Co-occurrences versus syntactic contexts. In Proceedings of the 2007 Workshop on Contextual Information in Semantic Space Models: Beyond Words and Documents, 9-16.

Peirsman, Yves; Heylen, Kris; Geeraerts, Dirk. 2008. Size matters: tight and loose context definitions in English word space models. In Proceedings of the ESSLLI Workshop on Distributional Lexical Semantics, 34-41.

Turney, Peter D.; Pantel, Patrick. 2010. From Frequency to Meaning: Vector Space Models of Semantics. Journal of Artificial Intelligence Research 37, 141-188.

van der Plas, Lonneke; Bouma, Gosse. 2005. Syntactic Contexts for finding Semantically Similar Words. In Proceedings of CLIN 04.