Proximity Data

Background

The Linguistic DNA project will be interrogating cleaned-up EEBO and ECCO data in various ways, to get at its lexical semantic and conceptual content. But how do we get semantic and conceptual information from textual data? In keeping with the original project proposal, we begin with an analysis of ‘proximity data’. What is proximity data, what does it tell us, and how can we measure it?

What is proximity?

Proximity relates to co-occurrence between terms in language. So, what is a term and what does it mean to co-occur?

A term may be:

  • a single word, a pair of words (or bigram), or a string of three or more words in order (an n-gram);
  • a grammatical construction whose ‘slots’ can be filled with appropriate words (e.g. ‘NOUN of NOUN’, ‘ADJECTIVE as NOUN’, or even ‘VERB MODIFIER DIRECT OBJECT’);
  • a phrase with lexical wild cards such as ‘very ___ ideas’.

Co-occurrence can then be defined as the presence of two or more terms within a given set of data, or in a given relationship. For example, we might be interested in the co-occurrence of two single words like Lord and law: In which texts do those terms co-occur? How close is one to the other? Or, we might be interested in the co-occurrence of a single word with a grammatical pattern: In which texts is see followed by a subordinate clause?
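The first of these questions can be sketched computationally. The following is a minimal illustration (the sample texts, the tokenisation by whitespace, and the window size of five tokens are all invented for demonstration, not drawn from EEBO or ECCO): it checks whether two single-word terms co-occur within a given distance of each other in a text.

```python
def cooccur_in_text(text, term_a, term_b, window=5):
    """Return True if term_a and term_b occur within `window` tokens
    of each other anywhere in the text (case-insensitive)."""
    tokens = text.lower().split()
    positions_a = [i for i, t in enumerate(tokens) if t == term_a]
    positions_b = [i for i, t in enumerate(tokens) if t == term_b]
    # Compare every occurrence of term_a with every occurrence of term_b.
    return any(abs(i - j) <= window
               for i in positions_a for j in positions_b)

# Invented sample texts, for illustration only.
texts = {
    "sermon": "the lord gave the law unto his people",
    "treatise": "the law of nature binds all men",
}
for name, text in texts.items():
    print(name, cooccur_in_text(text, "lord", "law"))
```

Real early modern data would of course require far more careful tokenisation and spelling normalisation than `split()` and `lower()` provide, but the underlying logic of a windowed co-occurrence query is as simple as this.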

How do we investigate proximity?

We can ask a few different things about the distance between terms that co-occur. For example, we can inquire: ‘What terms occur within a given distance of term a (e.g. Lord)?’ Or, we can ask: ‘How far is term a (e.g. Lord) from term b (e.g. law)?’ Put differently, we can measure co-occurrence by selecting a starting point term (a node) and a distance from that starting point, and seeing what terms occur within that distance. Alternatively, we can select multiple nodes as starting points and measure the distance between them in use. We can also combine these two methods: we can first ask what words occur within a given distance of term a, and then take pairs of words from the resulting list and ask just how closely they occur to each other.
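Both methods described above can be sketched in a few lines. In this illustration (the sample sentence and the window of three tokens are invented), `collocates` answers ‘What terms occur within a given distance of term a?’, and `min_distance` answers ‘How far is term a from term b?’:

```python
from collections import Counter

def collocates(tokens, node, window=3):
    """Count the terms occurring within `window` tokens of each
    occurrence of `node`."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            # Count every token in the window except the node itself.
            counts.update(t for j, t in enumerate(tokens[lo:hi], start=lo)
                          if j != i)
    return counts

def min_distance(tokens, term_a, term_b):
    """Return the smallest token distance between any occurrence of
    term_a and any occurrence of term_b, or None if either is absent."""
    pos_a = [i for i, t in enumerate(tokens) if t == term_a]
    pos_b = [i for i, t in enumerate(tokens) if t == term_b]
    if not pos_a or not pos_b:
        return None
    return min(abs(i - j) for i in pos_a for j in pos_b)

tokens = "the lord is my shepherd the lord gave the law".split()
print(collocates(tokens, "lord").most_common(3))
print(min_distance(tokens, "lord", "law"))
```

The combined method then follows naturally: take the most frequent collocates of the node, pair them up, and feed each pair back into `min_distance`.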

Finally, we can ask: ‘What occurs in a given relationship to term a?’ Such questions can be syntactic (‘What are the Direct Objects and Subjects of term a (e.g. see)?’) or related to Parts of Speech (POS) (‘What noun occurs most frequently after term a (e.g. see)?’). We can also, in principle, ask about semantic relationships: ‘What is the Agent or Patient, Instrument or Theme related to term a?’ A syntactic approach is employed by the commercially developed Sketch Engine software, and also, in various ways, in the Behavioural Profiling technique used by Stefan Gries (2012) and in the collostructional approach used by Anatol Stefanowitsch and Gries (2008) and by Martin Hilpert (2012). This approach requires either satisfactory automated syntactic parsing or manual syntactic parsing, both of which seem to be impossible with EEBO because of the scale and variation documented previously. A POS approach is more viable with EEBO, but still difficult.
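To make the POS approach concrete, here is a toy sketch of the ‘What noun occurs most frequently after term a?’ query. It assumes the corpus has already been POS-tagged; the tagged tokens below are invented for illustration, and real tagging of early modern English would itself be a substantial problem:

```python
from collections import Counter

# Invented POS-tagged tokens, standing in for a tagged corpus.
tagged = [("i", "PRON"), ("see", "VERB"), ("the", "DET"), ("light", "NOUN"),
          ("they", "PRON"), ("see", "VERB"), ("a", "DET"), ("light", "NOUN"),
          ("we", "PRON"), ("see", "VERB"), ("danger", "NOUN")]

def nouns_after(tagged, verb):
    """Count the first noun following each occurrence of `verb`."""
    counts = Counter()
    for i, (word, tag) in enumerate(tagged):
        if word == verb and tag == "VERB":
            for w2, t2 in tagged[i + 1:]:
                if t2 == "NOUN":
                    counts[w2] += 1
                    break  # only the first noun after this occurrence
    return counts

print(nouns_after(tagged, "see"))
```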

An alternative to syntactic and POS approaches is pair-pattern matrices: rather than investigating co-occurrence within grammatical relationships, we can investigate co-occurrence within given lexical structures such as ‘a cut(s) b’, ‘a work(s) with b’, etc. This has been explored in machine learning and artificial intelligence research (Turney and Pantel 2010).
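A pair-pattern matrix can be sketched as follows. In this toy illustration (the patterns, sentences, and data structure are invented for demonstration, not taken from Turney and Pantel), rows are word pairs (a, b) and columns are lexical patterns, with each cell holding a frequency:

```python
import re
from collections import Counter

# Lexical patterns whose slots X and Y are filled by single words.
patterns = {
    "X cuts Y": re.compile(r"\b(\w+) cuts (\w+)\b"),
    "X works with Y": re.compile(r"\b(\w+) works with (\w+)\b"),
}

# Invented sample sentences.
corpus = [
    "the mason cuts stone",
    "the mason works with stone",
    "the carpenter works with wood",
]

# The matrix maps ((a, b), pattern_name) to a frequency count.
matrix = Counter()
for sentence in corpus:
    for name, pat in patterns.items():
        for a, b in pat.findall(sentence):
            matrix[((a, b), name)] += 1

for key, freq in sorted(matrix.items()):
    print(key, freq)
```

In vector-space terms, each word pair’s row of pattern frequencies becomes a vector, and pairs with similar vectors (here, for instance, pairs that both ‘work with’ their partner) are taken to stand in similar relations.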

What does proximity data tell us?

Proximity data represents a relatively data-driven approach to corpus semantics (and to semantic analysis in Natural Language Processing [NLP], artificial intelligence, data science, and other fields). In linguistics, the use of proximity data in this way is based upon the idea that words occurring together or in similar contexts are likely to share a similar meaning or occupy a similar conceptual field. This is known as a contextual theory of meaning, and in its early stages the theory was developed in particular by J. R. Firth, Michael Halliday, and John Sinclair (cf. Stubbs 1996; Oakey 2009). Sinclair pioneered the application of the theory in lexicography, with the Collins COBUILD Dictionary. That dictionary designed its entries around the most frequent collocational patterns for each dictionary headword, as evidenced by corpus data. In addition to lexicographical applications, proximity data are now used to study lexical semantics; to automatically identify Parts of Speech; to generate computer models of linguistic meaning in NLP and artificial intelligence studies; as well as to engineer text search tools, summarise texts, identify text topics, and even analyse writers’ ‘sentiment’ (cf. Manning and Schuetze 2001, Chapter 5).

But there is a crucial epistemological question that arises here. At its most basic level, co-occurrence data in corpora tell us directly about language use and usage. What is the link between corpus data showing lexical usage, on the one hand, and lexical semantics or conceptual fields, on the other? That is a question that will preoccupy Linguistic DNA as it evolves – and a question we will continue to address on the blog.

Works Cited

Gries, Stefan Th. 2012. Behavioral profiles: A fine-grained and quantitative approach in corpus-based lexical semantics. In Gary Libben, Gonia Jarema and Chris Westbury (eds), Methodological and analytic frontiers in lexical research. Amsterdam: John Benjamins Publishing Company. 57-80.

Hilpert, Martin. 2012. Diachronic collostructional analysis meets the noun phrase. In Terttu Nevalainen and Elizabeth Closs Traugott (eds), The Oxford Handbook of the History of English. Oxford: Oxford University Press. 233-44.

Manning, Christopher and Hinrich Schuetze. 2001. Foundations of statistical natural language processing. Cambridge, MA: MIT Press.

Oakey, David. 2009. Fixed collocational patterns in isolexical and isotextual versions of a corpus. In Paul Baker (ed.), Contemporary corpus linguistics. London: Continuum. 140-58.

Stefanowitsch, Anatol and Stefan Th. Gries. 2008. Channel and constructional meaning: A collostructional case study. In Gitte Kristiansen and René Dirven (eds), Cognitive Sociolinguistics: Language variation, cultural models, social systems. Berlin: Mouton de Gruyter. 129-52.

Stubbs, Michael. 1996. Text and corpus analysis. Oxford: Blackwell.

Turney, Peter D. and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research 37, 141-88.