Cliometrics: Or, What Historians Can Tell Us about Metrics

Measurement and quantification have become the guiding lights of our age. Numbers are becoming the principal means by which we make sense of our lives and our world. Wonderfully, or so it may seem, we can put a numerical value on almost every aspect of our experience: our health, our happiness, our success, all can be rated and compared with the health, happiness and success of others. How liberating it is to work out whether we are happy without relying on such messy and imprecise things as the nuances of feelings and subjective experience! I may feel happy, but am I happy? Best check it according to one of the many ‘scientific’ ways of quantifying it. Even better, I can check my life satisfaction against the national average on David Cameron’s happiness index (happily funded to the tune of £2m per year in these times of austerity). It may even boost my happiness rating to discover that I’m happier than most other people…

It is increasingly hard to resist this brave new world—the numbers insidiously work their way into every area of our lives. How is my writing going? I’d better check the stats on views, visitors and referrers to my blog. Was my teaching last term successful? Let’s look at the average ratings the students gave the course. Am I popular? The number of friends on social media will answer that. Am I in good shape? Best work out my Body Mass Index. Is this meal healthy? I can cross-reference its calories and sodium content with recommended daily averages. Which of these books should I buy as a Christmas present? I’ll let the average reviewer ratings help me decide. Is it worth reading this piece of research? Let’s check the ‘impact factor’ of the journal it has been published in… oh, that’s quite a low number, so best not bother to read it then. How absurd is all this measuring of feeling and quantifying of quality? Well, it rates highly on my recently devised and extremely scientific Absurdity Factor.

It’s not that the formulae, numbers and statistics are bad in themselves: they are simply pieces of information about which we need to exercise critical judgment, evaluating what they are and are not worth. It is the growing tendency to dispense with evaluation that is the problem: doubting and forgoing our ability to make subjective judgments, we instead treat the numbers and ratings as if they were reliable, scientific truths. In the process—and here it gets really serious—careers and lives are destroyed. Few employees are free from the use of performance indicators, the metrics that are used to measure whether someone is doing their job well. As I’ve discussed elsewhere, higher education has become obsessed with them: the quality of scholarly research is judged not by reading it but by metrics; the performance, futures and lives of academics hinge on a set of numbers that are hopeless at assessing such things as the quality of teaching and research, but which are beloved of university managers, heads of department and HR departments as the definitive guide to whom to hire and whom to fire. The ‘successful’ academic of the future is likely to be the one who has swallowed the notion that quality is no more than a number and that there are ways to ‘game’ the metrics in order to achieve the required number—such as having your research published in a journal that is part of a citations cartel.

Both the value and the limitations of quantification are familiar to most historians. In particular, quantitative methods have become an important part of the armoury of the social historian. It would be inconceivable for a history department not to teach social history today, but this has only been the case since the 1960s. Before then social history was a niche, poorly valued area. In part this was because of prevailing attitudes among historians (‘history is about great leaders and high politics; who wants to know about the dreary lives of common people?’), but it was also because there were real difficulties in researching the subject—there was no shortage of useful data for many periods, but there was a lack of adequate tools for making meaningful interpretations about past societies. For the historian interested in early modern English society, for example, plenty of records and documents exist (parish registers, wills, inventories, accounts, court records, manorial records, diaries, letters, etc.), but for those sources to provide more than isolated snapshots of social life would require time, labour and resources far in excess of those available to any single historian. But then along came computers and databases, and with them the birth of cliometrics.

Cliometrics (a coinage joining Clio, the ancient muse of history, with metrics, the system or standard of measuring things) involved applying to history the statistical, mathematical and quantitative models that had been developed by economists. Large datasets (a population census, for example) could be fed into a computer and then interrogated, something no individual historian could have done before the advent of digital technology. The impact on historiography was huge: whole new areas of the past could be opened up to investigation, and general hypotheses could be framed and tested within minutes rather than reached only after years of painstaking and gradual accumulation of evidence. Historians became excited about the possibilities: assemble a body of data, feed it into a computer, ask the right question, and an answer will be provided in the time it takes to make a cup of tea. Even better, it was thought, the answers would be scientific. The distinguished French historian Emmanuel Le Roy Ladurie claimed that ‘history that is not quantifiable cannot claim to be scientific’ and envisaged a future in which all historians ‘will have to be able to programme a computer in order to survive’ (quoted in Richard J. Evans, In Defence of History (London: Granta, 1997), p. 39).
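To make the idea concrete, here is a minimal sketch, assuming Python and the pandas library, of the kind of interrogation a cliometrician might run. The file name and column layout stand in for a hypothetical transcribed parish register; they are illustrative assumptions, not a real dataset or any historian’s actual method.

```python
# A minimal, hypothetical sketch of a cliometric query: tabulating baptisms
# and burials per decade from a transcribed parish register. The file name
# and columns ("year", "event") are invented for illustration only.
import pandas as pd

# Each row records one register entry, e.g. year=1587, event="burial".
records = pd.read_csv("parish_register.csv")

# Group entries into decades and count baptisms against burials.
records["decade"] = (records["year"] // 10) * 10
summary = records.groupby(["decade", "event"]).size().unstack(fill_value=0)

# A surplus of burials over baptisms in a decade may flag a mortality
# crisis (plague, famine) worth pursuing in the qualitative sources.
summary["natural_change"] = summary.get("baptism", 0) - summary.get("burial", 0)
print(summary)
```

A question that would once have taken a lifetime in the archives (‘did burials outstrip baptisms in the crisis decades?’) becomes, once the data has been assembled and transcribed, a matter of minutes: exactly the promise that so excited Le Roy Ladurie’s generation.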

One of the earliest and most impressive applications of cliometrics stemmed from the Cambridge Group for the History of Population and Social Structure, founded in 1964 by Peter Laslett and Tony Wrigley. Using English parish records as a dataset (one of the legacies of Thomas Cromwell’s administrative reforms was the commencement of regular parish record-keeping from 1538), the outstanding achievement of the Cambridge Group was Wrigley and R.S. Schofield’s The Population History of England 1541–1871: A Reconstruction (Cambridge: Cambridge University Press, 1981). In addition to presenting huge amounts of data for pre-industrial England about birth, marriage and death rates, population size and growth, mortality crises, and much else besides, their work demolished various myths and assumptions about the past. For example, they conclusively proved that the nuclear (rather than extended) family was the overwhelming norm, and that most couples had no more than two surviving children (it was only the wealthy who tended to have large broods), rendering pointless the surprisingly common assumption that the social and family conditions of the developing world are comparable with those of the pre-industrial world. For historians of early modern social and family life, Wrigley and Schofield’s research is one of the fundamental starting points for inquiry.

A table from Wrigley and Schofield’s Population History of England

However, a good historian (as scientifically defined according to my recently devised Good Historian Factor) would not treat the cliometrics of Wrigley and Schofield as the end point—unlike the policy makers and managers, who do see metrics as the end point. The historian would understand that it is one thing to quantify family structure or life expectancy, quite another to assess the quality of family life or the effects of high mortality rates on emotions and thought. In order to do the latter it is necessary to look beyond the numbers and do some old-fashioned source evaluation: the historian would need to engage in critical analysis of diaries, letters and other texts, to assess what images and artefacts tell us, and to think broadly with concepts, methods and theories. What results is not a number but an interpretation, and (much to the dismay of the policy makers and managers) not one that is scientific or definitive but one that is open to questions, challenges, discussion and debate.

Some of the dangers of placing too much faith in cliometrics can be seen in Time on the Cross: The Economics of American Negro Slavery (New York, 1974), an attempt by two economic historians, Robert Fogel and Stanley Engerman, to apply quantitative analysis to the history of American slavery. The work was in two volumes, the first presenting the historical analysis, the second the quantitative data. Based on the data the authors reached several controversial conclusions: they argued that southern slavery was more efficient than free agriculture in the north, that the material conditions of slaves were comparable with those of free labourers, that the economy in the south grew more rapidly than that in the north, and that slaves were rewarded for their labour more highly than had previously been thought. Although some critics questioned the quality of the data used by Fogel and Engerman, most acknowledged that the quantitative methodology was broadly sound. What was unsound was the failure to interpret the information qualitatively. The supposedly ‘objective’ analysis of American slavery, with its hard data pointing to growth within the southern economy and to work conditions comparable to those of free labourers, ends up presenting slavery in a benign light—however clearly and genuinely the authors themselves were opposed to any justification of slavery. A much better historical approach would have been to place more emphasis on the social and political context of slavery, and to assess its psychological and cultural aspects. For example, the authors presented the statistical finding that slaves were whipped 0.7 times per year on average. On its own such a finding might suggest that the slave economy was anything but unremittingly brutal, and maybe was not so bad after all. But what that figure (and Fogel and Engerman) fails to tell us is what the whip, and the threat of the whip, meant psychologically to the slave. More significant to the slave’s experience of life than the 0.7 whippings per year was, most likely, the constant fear of the whip and the lack of freedom to escape that fear. Thoughts, feelings and mental states are impossible to quantify—but they are surely essential to an historical understanding of slavery.

The policy makers and managers are clearly not historians (or else they have a dismally low Good Historian Factor). If they were, they would see metrics as interesting and often useful information (much like all information, in other words), but limited in what it tells us; they would appreciate that metrics can be distorted by insufficient or manipulated data; they would see how essential it is that metrics form only one part (and probably a small part) of how we understand something, and that for metrics to be of any use there must be qualitative interpretation; they would recognize that to judge the quality of research (or anything else) solely by quantitative approaches rates a high number on my recently devised, objective and scientific Stupidity Factor.

2 thoughts on “Cliometrics: Or, What Historians Can Tell Us about Metrics”

  1. I’ve often been told that “numbers don’t lie,” but how woefully untrue that statement really is! Data can in fact be manipulated to bolster any argument. And as you mention, in many cases it is but one part in understanding the “whole” of an issue. I truly enjoy your writing. It gives one furiously to think.

    1. Thank you for your kind words!
      Behind it all is the fact that any system of measurement is something that humans have designed. The problem is when the system of measurement, and the numbers and data that result from it, are presented as if they were some form of higher, objective truth, or as if they told us something that isn’t in the data. For example: the number of citations tells us simply that, how many times an article has been cited (notwithstanding that this can be manipulated…). The problem is then to move from that piece of information, which may be of some use in its own way, to the much bigger claim that this data tells us about the quality of the research. As I mentioned elsewhere, it would be like saying that the number of record sales tells us about the quality of a piece of music. And that would mean that One Direction make better music than Sonic Youth, which is evidently not the case…
