Distributional Semantics II: What does distribution tell us about semantic relations?

In a previous post, I outlined a range of meanings that have been discussed in conjunction with distributional analysis. The Linguistic DNA team is assessing what exactly it can determine about semantics based on distributional analysis: from encyclopaedic meaning to specific semantic relations. In my opinion, the claim that distributional data indicates ‘semantics’ has often been left vague: what exactly about ‘semantics’ is indicated? In this post, I’d like to clarify what distribution can tell us about semantic relations in particular, including synonymy, hyponymy, and co-hyponymy.

In the Natural Language Processing (NLP) sphere, numerous studies have tested the effectiveness of distributional data in identifying semantic relations. Turney and Pantel (2010) provide a useful survey of such studies, many of which involve machine learning and computer performance on synonymy tests, including those found on English language exams. Successful approaches to synonymy tests have employed windows of anything from +/-4 words up to +/-150 words, but these studies tend not to test the various approaches against each other, and they rarely dissect the notion of synonymy, much less co-hyponymy or other semantic relations.
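To make the window-based approach concrete, here is a minimal sketch of the technique behind such synonymy tests: build a co-occurrence vector for each word from a fixed window of neighbouring words, then rank candidate synonyms by cosine similarity. The toy corpus and candidate words are invented for illustration; the studies surveyed by Turney and Pantel (2010) work from corpora of millions of words.

```python
from collections import Counter, defaultdict
from math import sqrt

# Toy corpus, invented for illustration only.
corpus = ("the crow perched on a branch of the old tree "
          "a robin perched on the fence near the tree "
          "the raven sat on a branch of the tall tree").split()

def window_vectors(tokens, window=4):
    """Count, for each word, the words occurring within +/-`window` tokens."""
    vecs = defaultdict(Counter)
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vecs[word][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(n * v[k] for k, n in u.items() if k in v)
    norm = sqrt(sum(n * n for n in u.values())) * sqrt(sum(n * n for n in v.values()))
    return dot / norm if norm else 0.0

vecs = window_vectors(corpus, window=4)
# A TOEFL-style question: which candidate is distributionally closest to 'crow'?
for candidate in ("robin", "raven", "fence"):
    print(candidate, round(cosine(vecs["crow"], vecs[candidate]), 3))
```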

Only a few studies have tested distributional methods as indicators of specific semantic relations. The Quantitative Lexicology and Variational Linguistics (QLVL) team at KU Leuven has addressed this problem in several papers. For example, Peirsman et al. (2007) looked at evidence for synonymy, hyponymy, and co-hyponymy in proximity data for Dutch. (A hyponym is a word whose meaning is a member of a larger category – for example, a crow and a robin are both types of bird, so crow and robin are both hyponyms of bird, and crow and robin are co-hyponyms of each other, but they are not synonyms of each other). Peirsman et al. compared raw proximity measures with proximity measures that incorporate syntactic dependency information. Their findings demonstrate that in Dutch, synonymy and hyponymy are more readily indicated by proximity analyses that include syntactic dependency. On the other hand, co-hyponymy is most effectively evidenced by raw proximity measures that do not include syntactic information. This is a startling result, with fascinating implications for linguistic theory. Why should ignoring syntactic information provide better measures of co-hyponymy? Might English be similar? How about Early Modern English?
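The contrast between the two context definitions can be sketched as follows. The toy Dutch sentence and the dependency contexts are hypothetical stand-ins for parser output, not data from Peirsman et al. (2007).

```python
from collections import Counter

# One toy Dutch sentence; the node under study is 'kraai' (crow).
sentence = "de zwarte kraai zat in de hoge boom".split()
node = "kraai"
i = sentence.index(node)

# Raw proximity: every word within +/-2 positions, with no syntactic typing.
raw_contexts = Counter(sentence[max(0, i - 2):i] + sentence[i + 1:i + 3])

# Syntactic: only words in a dependency relation with the node, typed by that
# relation, so 'zwarte' as an adjective modifier is a different context from
# 'zat' as a verb head. These triples are invented, hypothetical parser output.
syntactic_contexts = Counter({("adj_mod", "zwarte"): 1, ("subj_of", "zat"): 1})

print(raw_contexts)        # includes function words like 'de' and 'in'
print(syntactic_contexts)  # smaller, but each context carries its relation
```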

I think it is important to note that in Peirsman et al. (ibid.), only 6.3% of words that share similar distributional characteristics with a given word, or node, are synonyms of that node, and 4.0% are hyponyms of that node. Put differently, about 94% of words identified by distributional analysis aren’t synonyms, and around 70% of the words elicited in these measures are not semantically related to the node at all. Experienced corpus semanticists will not be surprised by this. But what happens to the majority of words, which aren’t related in any clear way? A computer algorithm will output all significant co-occurrences. Often, the co-occurrences that are not intuitively meaningful are quietly ignored by the researcher. It seems to me that if we are going to ignore such outputs, we must do so explicitly and with complete transparency. But this raises bigger questions: if we trust our methods, why should we ignore counterintuitive outputs? Or are these methods valuable simply as reproducible heuristics? I would argue that we should be transparent about our perspective on our own methods.
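One way to make that transparency concrete is to report every association the model finds above chance, rather than a hand-picked subset. Here is a minimal sketch using PPMI (positive pointwise mutual information), a weighting widely used in this literature; the co-occurrence counts are invented for illustration.

```python
from collections import Counter
from math import log2

# Invented (node, context) co-occurrence counts; in practice these would
# come from window counts like those sketched earlier.
cooc = Counter({("crow", "perched"): 4, ("crow", "tree"): 6,
                ("robin", "perched"): 3, ("robin", "fence"): 5,
                ("crow", "the"): 40, ("robin", "the"): 35})

total = sum(cooc.values())
node_totals, ctx_totals = Counter(), Counter()
for (node, ctx), n in cooc.items():
    node_totals[node] += n
    ctx_totals[ctx] += n

def ppmi(node, ctx):
    """Positive PMI: how strongly a context is associated with a node beyond chance."""
    p_joint = cooc[(node, ctx)] / total
    if p_joint == 0:
        return 0.0
    p_node, p_ctx = node_totals[node] / total, ctx_totals[ctx] / total
    return max(0.0, log2(p_joint / (p_node * p_ctx)))

# Report the full ranked list, counterintuitive associations included,
# rather than quietly dropping whatever looks unmotivated.
for node, ctx in sorted(cooc, key=lambda pair: -ppmi(*pair)):
    print(node, ctx, round(ppmi(node, ctx), 2))
```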

Also from QLVL, Heylen et al. (2008a) test which types of syntactic dependency relation are most effective at indicating synonymy in Dutch nouns, and find that Subject and Object relations most consistently indicate synonymy, but that adjective modification can give the best (though less consistent) indication of synonymy. In fact, adjective modification alone can outperform a combined method using adjective modification and Subject/Object relations together. Again, the findings are startling and fascinating: why would the consideration of Subject/Object relations actually hinder the effective use of adjective modification as evidence of synonymy? The answer is not entirely clear. In a comparable study, Van der Plas and Bouma (2005) found Direct Object relations and adjective modification to be the most effective relations for identifying synonymy in Dutch. Unlike Heylen et al. (2008a), however, Van der Plas and Bouma (2005) found that combining dependency relations improved synonym identification.
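A per-relation comparison of this kind can be sketched as follows: build one vector space per context definition, so that adjective modification, Subject/Object relations, and their combination can each be evaluated separately against a gold-standard synonym list. The Dutch dependency triples below are hypothetical, invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical (dependent, relation, head) triples standing in for the
# output of a dependency parser over a Dutch corpus.
triples = [
    ("huis", "adj_mod", "groot"), ("woning", "adj_mod", "groot"),
    ("huis", "obj", "bouwen"),    ("woning", "obj", "bouwen"),
    ("huis", "subj", "staan"),    ("woning", "adj_mod", "ruim"),
]

def space(relations):
    """Build a vector space from the chosen dependency relations only."""
    vecs = defaultdict(Counter)
    for dep, rel, head in triples:
        if rel in relations:
            vecs[dep][(rel, head)] += 1
    return vecs

adj_space      = space({"adj_mod"})
subj_obj_space = space({"subj", "obj"})
combined_space = space({"adj_mod", "subj", "obj"})
print(adj_space["huis"], combined_space["huis"], sep="\n")
```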

Is proximity data more effective in determining the semantics of highly frequent words? Heylen et al. (2008b) showed that in Dutch, high-frequency nouns are more likely to collocate within +/-3 words with nouns of close semantic similarity, in particular synonyms and hyponyms; low-frequency nouns are less likely to do so. In addition, in Dutch, syntactic information is the best route to identifying synonymy and hyponymy overall, but raw proximity information is in fact slightly better at retrieving synonyms for medium-frequency nouns. This finding, then, elaborates on the finding of Peirsman et al. (2007, above).
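An evaluation stratified by frequency band might look like the following sketch. The neighbour lists, gold-standard synonyms, frequency counts, and band cut-offs are all invented stand-ins, not values from Heylen et al. (2008b).

```python
from collections import defaultdict

# Invented stand-ins: nearest-neighbour lists from some distributional
# model, a gold-standard synonym lexicon, and corpus frequencies.
neighbours = {"huis": ["woning", "gebouw"], "stulp": ["dak", "gras"]}
gold       = {"huis": {"woning"}, "stulp": {"huisje", "woning"}}
freqs      = {"huis": 9500, "stulp": 40}

def band(freq):
    """Assign a noun to a frequency band (cut-offs are arbitrary here)."""
    return "high" if freq >= 1000 else "medium" if freq >= 100 else "low"

# For each band, how many nouns have a gold synonym among their neighbours?
hits = defaultdict(list)
for noun, nbrs in neighbours.items():
    hits[band(freqs[noun])].append(any(n in gold[noun] for n in nbrs))

for b, results in sorted(hits.items()):
    print(b, f"{sum(results)}/{len(results)} nouns with a synonym retrieved")
```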

How about word class? Peirsman et al. (2008) suggest, among other things, that in English word space models, a window of +/-2 words best identifies semantic similarity for nouns, while +/-4 to +/-7 words is most effective for verbs.
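Acting on that suggestion is straightforward: parameterise the window size by part of speech, as in this small sketch (the tag names and the fallback value are assumptions for illustration).

```python
# Window size by part of speech, following the suggestion in Peirsman et al.
# (2008): +/-2 for nouns, up to +/-7 for verbs. Tag names and the default
# are assumptions, not values prescribed by the paper.
WINDOW_BY_POS = {"NOUN": 2, "VERB": 7}

def window_for(pos, default=4):
    return WINDOW_BY_POS.get(pos, default)

print(window_for("NOUN"), window_for("VERB"), window_for("ADJ"))
```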

For Linguistic DNA, it is important to know exactly what we can and can’t expect to determine based on distributional analysis. We plan to employ distributional analysis using a range of proximity windows as well as syntactic information. The team will continue to report on this question as we move forward.

*Castle Arenberg, in the photo above, is part of KU Leuven, home of QLVL and many of the studies cited in this post. (Credit: Juhanson. Licence: CC BY-SA 3.0.)

References

Heylen, Kris; Peirsman, Yves; Geeraerts, Dirk. 2008a. Automatic synonymy extraction: A comparison of syntactic context models. In Verberne, Suzan; van Halteren, Hans; Coppen, Peter-Arno (eds), Computational Linguistics in the Netherlands 2007. Amsterdam: Rodopi, 101-16.

Heylen, Kris; Peirsman, Yves; Geeraerts, Dirk; Speelman, Dirk. 2008b. Modelling word similarity: An evaluation of automatic synonymy extraction algorithms. In Calzolari, Nicoletta; Choukri, Khalid; Maegaard, Bente; Mariani, Joseph; Odijk, Jan; Piperidis, Stelios; Tapias, Daniel (eds), Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008). Marrakech: European Language Resources Association, 3243-49.

Peirsman, Yves; Heylen, Kris; Speelman, Dirk. 2007. Finding semantically related words in Dutch. Co-occurrences versus syntactic contexts. In Proceedings of the 2007 Workshop on Contextual Information in Semantic Space Models: Beyond Words and Documents, 9-16.

Peirsman, Yves; Heylen, Kris; Geeraerts, Dirk. 2008. Size matters: tight and loose context definitions in English word space models. In Proceedings of the ESSLLI Workshop on Distributional Lexical Semantics, 34-41.

Turney, Peter D.; Pantel, Patrick. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37, 141-188.

van der Plas, Lonneke; Bouma, Gosse. 2005. Syntactic contexts for finding semantically similar words. In Proceedings of CLIN 04.