KCL • CCH • Minor programme • AV1000 • Text-analysis
The following is an attempt briefly to sketch a methodology for elementary text-analysis, with particular emphasis on how to approach a text one does not know well. It is essentially an abstraction of the practice illustrated in the exercises of the following pages under this topic. Here no particular tool for the activity is presumed, nor are particularly sophisticated tools in view. All of what follows can be done with conceptually quite simple ones, such as Monoconc.
Throughout “text-analysis” should be taken to mean “the analysis of text with the aid of algorithmic techniques”.
An algorithm may be defined as a step-by-step procedure capable of being run on a computer—i.e., an unambiguous and completely stated description of what the computer is to do. It can be expressed by a computer program but need not be; often the specifics of how an algorithm is implemented in a particular programming language would obscure the essentials. Text-analytic methods cover a spectrum between the completely algorithmic and the exploratory: in exploratory work we do not have a specific goal or procedure to follow but instead we look for leads. Most work mixes approaches from various points in the spectrum: we may make a word frequency list by algorithmic methods, but the results always need to be interpreted and investigated further, usually by much less algorithmic means.
Text-analysis may be divided into several kinds, usually practiced at different places along the algorithmic–exploratory spectrum.
There are two reasons why one might legitimately be using text-analytic techniques on a text one does not know well. First, corpora of use in the humanities are approaching, and some have already passed, the point at which a human being could read through their contents in a lifetime, especially given how late in life that person might begin his or her reading; furthermore, some of these corpora are not intended for normal reading at all, such as the non-literary collections assembled for historical or linguistic purposes. Second, and more importantly, text-analysis is fundamentally different from manual methods and so reveals aspects of even well-known texts that one is likely not to have considered before. To the degree that these texts are made new by the change in perspective, understanding of them will be aided by text-analytic techniques.
The first reason, that corpora tend to be too large, can be put in more positive terms: a good command of these techniques makes it practical for the ignorant but intelligent person to profit from materials outside his or her own field, and so tends to foster interdisciplinary research.
We assume, then, the application of the most elementary of these kinds, concording, with some use of frequency lists, to unseen texts.
Nevertheless the place to begin is with whatever you know about the given body of text (known as the corpus). It is unlikely that you will know absolutely nothing at all about it, but in any case read around in it briefly, picking up what you can. Consider, for example, who wrote or compiled it, when, under what circumstances, for whom, and to what end.
In other words, the seemingly disembodied electronic text has several contexts essential to a full understanding of it. The more of that understanding you have the better, though because the focus here is on technique, the point is not to dwell on acquiring knowledge of the contexts, only to get what you can quickly.
The methodology outlined here is like a fishing expedition: you go at the text with a quiet, open mind, having little or no idea what you are going to catch. If you are after something in particular, then of course it is a different kind of activity. Even in a focused enquiry, however, software allows you to ask certain kinds of questions so easily and get answers back so quickly that curiosity is given a much freer rein; you can afford to play, ask even apparently improbable questions, and so raise the chances that you will be surprised by an important result you had little reason to expect. Thus a certain amount of fishing is recommended even for the focused questioner.
High-frequency words. A quite crude but useful technique is to look through a list of the most frequent word-forms for anything that is unusual or particularly characteristic of the text in question. Frequency of word-forms is only roughly related to what a text says, but it is related, and so is useful to work with.
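For those who want to see the mechanics, here is a minimal sketch in Python of how such a frequency list might be computed. It is an illustration only, not Monoconc's method; the file name corpus.txt and the crude tokenisation are assumptions.

    import re
    from collections import Counter

    # Read a hypothetical corpus file and fold case, so that "Know"
    # and "know" count as the same word-form.
    with open("corpus.txt", encoding="utf-8") as f:
        text = f.read().lower()

    # Tokenise crudely into word-forms: runs of letters and apostrophes.
    word_forms = re.findall(r"[a-z']+", text)

    # List the most frequent word-forms for inspection.
    for form, count in Counter(word_forms).most_common(50):
        print(f"{count:6}  {form}")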
Two examples, common to both the Simpson and Stephen corpora, spring to mind: the verb “know” and the first-person singular pronoun “I”. (Note, in the comparison study outlined in Corpus analysis of meaning, how so little information says so much about both corpora and draws a contrastive parallel between the two men.)
There are of course severe limitations on what you can do with a frequency list, especially if you are interested in words (dictionary headwords, such as “know” or “I”) rather than word-forms (such as “knows” or “knew”, or “me” or “we”), and more severe still if you are focused on ideas (such as cognition or the self) rather than words. If the former, then you need to find all the inflected forms of the word and combine their frequencies. If the latter, you need to find all the relevant synonyms and combine the frequencies of all the inflected forms; even then, since ideas are only tangentially related to words, the result would be incomplete. Very often, however, the raw frequency list will prove useful enough.
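Again by way of illustration, a sketch of combining the frequencies of inflected forms under their headwords; the hand-made mapping below is a stand-in for the fuller lemma list or lemmatiser a real study would need.

    import re
    from collections import Counter

    # Hand-made, illustrative mapping from headwords to their forms.
    LEMMAS = {
        "know": ["know", "knows", "knew", "known", "knowing"],
        "i":    ["i", "me", "my", "mine", "myself"],
    }

    with open("corpus.txt", encoding="utf-8") as f:
        counts = Counter(re.findall(r"[a-z']+", f.read().lower()))

    # Sum the word-form frequencies under each headword.
    for headword, forms in LEMMAS.items():
        total = sum(counts[form] for form in forms)
        print(headword, total)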
Collocations. A somewhat more sophisticated tool for relating word-forms to meaning generates information on what words tend to be found together, either contiguously, such as “I didn't know that”, or within a specified proximity or span, e.g. “black” within 5 words of “bag”. The idea here is that repeated collocations are more reliable indicators of meaning than repetitions of single word-forms. See Sinclair 1991 (chapter 8) for a full discussion.
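A sketch of the contiguous case, listing the most frequent bigrams; the same assumptions about file name and tokenisation apply as in the earlier sketches.

    import re
    from collections import Counter

    with open("corpus.txt", encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())

    # Count each adjacent pair of word-forms (bigram) in the corpus.
    bigrams = Counter(zip(tokens, tokens[1:]))

    for (first, second), count in bigrams.most_common(30):
        print(f"{count:6}  {first} {second}")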
The program Monoconc and others will generate lists of collocations ordered by frequency, so that you can identify recurring phrases and associations of words quickly. If you wish to study collocations over a wider span than the program permits, you can do so by following these steps (sketched in code below):
1. generate a concordance of your target word, setting the context as wide as the program allows;
2. save the concordance output as a plain-text file;
3. open that file as a new corpus and generate a word-frequency list from it.
This listing will thus give you the frequencies of the collocates of your target word.
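The following sketch automates those steps directly: it gathers every context of a target word within a chosen span and makes a frequency list of the word-forms found there. The target word and span are illustrative choices, not defaults of any particular program.

    import re
    from collections import Counter

    TARGET = "know"   # hypothetical target word
    SPAN = 10         # words of context on each side

    with open("corpus.txt", encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())

    # For each occurrence of the target, count the word-forms found
    # within SPAN words to its left and right.
    collocates = Counter()
    for i, token in enumerate(tokens):
        if token == TARGET:
            left = tokens[max(0, i - SPAN):i]
            right = tokens[i + 1:i + 1 + SPAN]
            collocates.update(left + right)

    for form, count in collocates.most_common(30):
        print(f"{count:6}  {form}")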
A government document, for example, will tend to show quite high frequencies of standard phrases; in a literary work, even two occurrences of a phrase may be highly significant. The Monoconc-style listing of collocations within a span is of course less bound to literal repetition: it will group together, for example, instances of the collocation of “don't” and “know” in the phrases “I don't know” and “I don't even know”.
Classicists will be interested in the collocation tools implemented by the Perseus Project; see in particular the Greek and Latin context search tools.
Concording. The essential idea behind the concordance, especially the KWIC (keyword-in-context) format, is to direct your attention to the immediate linguistic environment of the specified word. Hence when you find a potentially interesting word, often the next step is to run a concordance on it, then look down the concordance listing to see what patterns you can spot. With Monoconc, generating collocation statistics will often immediately follow.
A KWIC is made considerably more useful by the ability to sort an on-screen listing according to the words to the left and right of the target word; Monoconc offers this ability, and the same can be done with other concordance software. Such sorting tends to bring out the patterns, since repetitions are grouped together.
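A sketch of such a sorted KWIC display, ordering the lines on the words to the right of the target so that repeated patterns fall together; the fixed-width layout only roughly imitates a concordancer's output.

    import re

    TARGET = "know"   # hypothetical target word
    WIDTH = 5         # words of context shown on each side

    with open("corpus.txt", encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())

    # Collect (right-context, left-context) pairs for each occurrence.
    lines = []
    for i, token in enumerate(tokens):
        if token == TARGET:
            left = " ".join(tokens[max(0, i - WIDTH):i])
            right = " ".join(tokens[i + 1:i + 1 + WIDTH])
            lines.append((right, left))

    # Sorting on the right context groups the repetitions together.
    for right, left in sorted(lines):
        print(f"{left:>40}  {TARGET.upper()}  {right}")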
Since current KWIC software deals only with word-forms rather than words, you will often also need to concord the inflected forms. In English many of these can be caught by use of the appropriate wildcards, but not all. An example is “go”, “went” and “gone”; another is “I”, “me”, “my”, “mine”, “we”, “us”, “our(s)”, all forms of the first-person personal pronoun.
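A sketch of the wildcard approach, with an explicit list for the irregular forms no wildcard can catch; note that a pattern such as “know*” also over-catches (e.g. “knowledge”), which a real study would need to filter out.

    import re

    with open("corpus.txt", encoding="utf-8") as f:
        tokens = re.findall(r"[a-z']+", f.read().lower())

    # "know*" rendered as a regular expression; this also matches
    # forms such as "knowledge", which may need filtering.
    pattern = re.compile(r"know[a-z]*")
    irregular = {"knew"}  # a form the wildcard cannot catch

    hits = [t for t in tokens if pattern.fullmatch(t) or t in irregular]
    print(len(hits), "occurrences of forms of 'know'")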
Synonyms, of course, are entirely your task to identify, but doing so is made considerably easier than it might be by the tendency in many writers and speakers to emphasise an idea by using a number of synonyms together or near each other. Thus the text can itself help you to build a reasonable list for further concording. Compiling such a list is a recursive activity: in the beginning, each new synonym will tend to turn up others; when the law of diminishing returns asserts itself, it is time to stop. The result we may call a “fixed vocabulary”, to which can be added the contiguous collocations you have identified. All together these represent a translation of an idea, as it were, into data.
A fixed vocabulary can then be used to turn up passages in the text for study, as is commonly done in “content analysis”. If you know the text well, then a very interesting further question to ask is: when does this vocabulary fail to identify passages in which the targeted idea clearly or arguably occurs, and why? Some very interesting findings can result from pursuing this question.
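A sketch of putting a fixed vocabulary to work, pulling out a rough passage around each occurrence of each item; the vocabulary shown is, of course, merely illustrative.

    import re

    # Illustrative fixed vocabulary: word-forms plus one contiguous
    # collocation, all to be sought in the text.
    VOCABULARY = ["know", "knew", "known", "i don't know"]
    CHARS = 60  # characters of context to show on each side of a hit

    with open("corpus.txt", encoding="utf-8") as f:
        text = re.sub(r"\s+", " ", f.read().lower())

    for item in VOCABULARY:
        # \b anchors the match at word boundaries, so "know" does not
        # match inside "knowledge".
        for m in re.finditer(r"\b" + re.escape(item) + r"\b", text):
            start, end = max(0, m.start() - CHARS), m.end() + CHARS
            print("..." + text[start:end] + "...")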
revised November 2007