9 – Can we compute History?

Historians are not the only ones who have been tempted to apply statistics to history. Fields like Quantitative History are past their heyday and out of fashion today. In their place, concepts such as Complexity Theory and Computer Simulations, and fields like Computational Linguistics, have emerged. What do they have to offer historians? Is this a fad, or can they teach us something new about history? And finally: do historians need to know how to code?


In class




Tool of the week

4 thoughts on “9 – Can we compute History?”

  1. 1. Ted Underwood raises a key concern with computational analysis that looks at word frequency, through tools such as Google n-grams, to determine change over time. Namely, he argues that scholars risk imposing their own constructions of ideas, such as morality, onto the period that they are studying. Underwood notes that Ryan Heuser and Long Le-Khac have instead used this kind of analysis to first determine time-specific word associations and then work from there. Although this is certainly a problem with computational analysis, can’t we identify it as a broader problem that historians generally have to confront?

    2. Underwood also questions whether or not using fiction as a source accurately reflects changing ideas over time. Decline in a specific genre, he suggests, might tell us something about literary culture, but perhaps not so much about broader social realities. How does this differ from historians who, through close reading, engage with literary texts as historical sources? Doesn’t this simply suggest another way in which digital history raises traditional methodological questions?

    3. In thinking through the differences between “digital” history and “computational” history, I find myself back at the beginning of the class, where we asked: is this a field? a method? a theory? an approach? Is a computational approach to history as flexible as a digital one?

    4. And how do the answers to these questions change within a field (?) like Cliodynamics? Peter Turchin, in his work on empire, seems to think that the only history worth doing is mathematical: “What is needed is a systematic application of the scientific method to history: verbal theories should be translated into mathematical models, precise predictions derived, and then rigorously tested on empirical material. In short, history needs to become an analytical, predictive science.” This feels eerily similar to the positivist approach of 19th-century historians like Ranke, who, once they had answered a historical question, could move on until all of the questions were neatly explained. Where is the room for dissent or re-interpretation in such an approach? And how might such an approach supersede other perspectives or epistemologies of history?

  2. 1. Stylometry, as mentioned in the article, appears to focus on high-frequency words with little meaning. While this seems like a great way to obtain unbiased data, how do other parts of language provide information (such as Underwood’s “loud mouthed Achilles”)? Will quantitative research methods not work for words that have multiple meanings, or have appropriate research methods simply not yet been developed for handling more interpretable writing?

    2. Even with large sets of data, it appears that it can be difficult to draw specific conclusions without an understanding of the limitations. How is error calculated in quantitative history, and to what degree can researchers trust the accuracy of the information? Could new analyses hold the potential to overturn accepted ideas (such as the fall of Rome)?

    3. Would having skills in mathematical reasoning, coding, etc., eliminate competition in the field of history? Would “history” transform closer to a STEM field rather than remain a humanities field?

    4. The n-gram viewer is a very versatile tool that allows for many different examinations of text over time. How dependent will the humanities become on tools like these? Will research opportunities for historical scholars become limited if the demand for Google’s tools goes up?
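The stylometric approach raised in the comment above, counting high-frequency, low-content “function” words rather than meaningful vocabulary, can be sketched in a few lines of Python. This is only an illustrative sketch: the word list and the per-1,000-token normalization are assumptions of this example, not a description of any tool from the readings.

```python
from collections import Counter
import re

# A small, illustrative set of function words. Real stylometric studies
# typically use dozens to hundreds of such words, chosen empirically.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "is"]

def function_word_profile(text):
    """Relative frequency of each function word, per 1,000 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens)
    return {w: 1000 * counts[w] / total for w in FUNCTION_WORDS}

sample = ("The fall of the empire, and the rise of a new order, "
          "is the story that historians tell.")
profile = function_word_profile(sample)
```

Comparing such profiles across texts (e.g. with a distance measure) is the basic move behind authorship attribution: the intuition is that writers use function words unconsciously, so their relative frequencies are hard to fake.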

  3. 1. In the end, did the article by the poindexters at Harvard actually deliver any insight into issues that weren’t already known? Is this what “culturomics” (ugh) should be for? Confirmatory studies?

    2. The aforementioned poindexters point out that the typical approach to humanistic inquiry “rarely enables precise measurement of the underlying phenomena.” Should that even be a goal? Why are some people so ready to accept quantification as an inherently worthy end?

    3. In a review of Matthew Jockers’s “Macroanalysis” in the LA Review of Books (http://lareviewofbooks.org/review/an-impossible-number-of-books), Matthew Wilkens states that humanists are “very rarely trained in dealing with large-scale data.” Why don’t we classify the vast international system of libraries and archives, which humanists are trained to navigate, as “large-scale data”?

    4. Regardless of their underlying scholastic merit, what do discussions about these techniques of processing big data say about the need for efficiency in scholarship? So often, the need for big data analysis is framed as necessary due to the large amount of time and energy it takes to ask questions about the past. Is “efficiency” in that regard desirable?

    sorry, these made me cranky

  4. 1. Regarding “Quantitative Analysis,” I have seen many articles and people saying that interpretation is still important in the digital age, but they tend not to show how they interpreted their materials or what they did to interpret them. Is this because of the limitations of the publication style (blogs, short articles, Twitter, videos)? Or am I overlooking them?

    2. Quantitative analysis is not exactly a new thing. So why do digital quantitative methods tend to contextualize their work less within the long tradition of quantitative methods, and instead emphasize their newness?

    3. Many of the articles deal with language and words. How do we apply the discussion here to non-written materials?

    4. Google Ngram is a very new thing to me, but Ted Underwood’s post points out what makes me uneasy about digital quantitative analysis. Do historians use these tools to learn what is unseen? To what extent do historians’ investments in these tools affect the research outcome?
