The Festive Freelancer

Merry Christmas! (Billy Bob Thornton as Bad Santa)

Spare a thought for the many freelancers and self-employed at Christmas. A good friend of mine whose entire career has been spent in self-employment has always been unfailingly curmudgeonly at this time of year. I had never understood his loathing of Christmas. Sure, there are numerous annoyances (which do seem to accumulate as one gets older), but on the whole I always liked the idea of an extended and shared holiday season—a chance to catch up with friends, drink too much, party, read, sleep, the last of these probably being its greatest boon. What is not to like? Well, for a lot of freelancers, as I’m beginning to learn, plenty.

The biggest problem is that income dries up. Of course this doesn’t happen to all freelancers—it’s a good time of year to be a freelance Father Christmas, for example, so that’s an idea for a future revenue stream (but only if I could do it in the style of Billy Bob Thornton in Terry Zwigoff’s fine festive movie, Bad Santa). But for a lot of us, from about the middle of December until the first week of January, work becomes at best a trickle. Companies are more focused on planning the office party and the festive shutdown than they are on contracting writers, designers, developers and artists (note to self: freelance party planning as a possible business idea). Certainly the self-employed plumber stands a chance of landing a lucrative festive job as somewhere a boiler gives up the ghost on Christmas Eve, but the self-employed private tutor is holding out little hope that anyone will be making an urgent request for an A-level history lesson on Boxing Day (another note to self: HND in plumbing as idea for 2015?).

So the days pass without any money coming in—but a lot of it going out. That’s one of the biggest downsides to freelancing: the salaried employee not only gets to take guilt-free holiday, but is even paid to do so; for the freelancer, however, time off work is time not earning money, and when that time is spent drinking Buck’s Fizz, eating Quality Street and buying novelty Christmas socks as gifts then it involves spending the money that is not being earned. So what used to be the good things about the festive season—puffing on cigars and drinking lots of whisky being highlights—now turn out to be not so great after all; and all the bad things—engineering works on the railways, revellers with reindeer hats throwing up in the street, and, of course, the indescribably horrible experience of having to do a lot of shopping—just seem even worse.

Another downside to freelancing is that sickness, like holidays, is not paid for. And at some point during the winter I’m bound to catch a cold. Now is therefore the best time of year to do so, getting it out of the way while I won’t be earning money anyway. So that’s what I’d like for Christmas: a common cold, nothing too serious (I’m not expecting anyone to splash out on it), but quality enough to ensure I won’t be getting another one for many months, and ideal for re-gifting. Glad tidings of great joy indeed!


Cliometrics: Or, What Historians Can Tell Us about Metrics

Measurement and quantification have become the guiding lights of our age. Numbers are becoming the principal means by which we make sense of our lives and our world. Wonderfully, or so it may seem, we can put a numerical value on almost every aspect of our experience: our health, our happiness, our success, all can be rated and compared with the health, happiness and success of others. How liberating it is to work out whether we are happy without relying on such messy and imprecise things as the nuances of feelings and subjective experience! I may feel happy, but am I happy? Best check it according to one of the many ‘scientific’ ways of quantifying it. Even better, I can check my life satisfaction against the national average on David Cameron’s happiness index (happily funded to the tune of £2m per year in these times of austerity). It may even boost my happiness rating to discover that I’m happier than most other people…

It is increasingly hard to resist this brave new world—the numbers insidiously work their way into every area of our lives. How is my writing going? I’d better check the stats on views and visitors and referrers to my blog. Was my teaching last term successful? Let’s look at the average ratings the students gave the course. Am I popular? The number of friends on social media will answer that. Am I in good shape? Best work out my Body Mass Index. Is this meal healthy? I can cross-reference its calories and sodium content with recommended daily averages. Which of these books should I buy as a Christmas present? I’ll let the average reviewer ratings help me decide. Is it worth reading this piece of research? Let’s check the ‘impact factor’ of the journal it has been published in… oh, that’s quite a low number, so best not bother to read it then. How absurd is all this measuring of feeling and quantifying of quality? Well, it rates highly on my recently-devised and extremely scientific Absurdity Factor.

It’s not that the formulae, numbers and statistics in themselves are bad: they are simply pieces of information about which we need to exercise critical judgment, to make evaluations as to their worth or not. It is the growing tendency to dispense with evaluation that is the problem: doubting and foregoing our ability to make subjective judgments, we instead treat the numbers and ratings as if they are reliable, scientific truths. In the process—and here it gets really serious—careers and lives are destroyed. Few employees are free from the use of performance indicators, the metrics that are used to measure whether someone is doing their job well. As I’ve discussed elsewhere, higher education has become obsessed with them: the quality of scholarly research is judged not on reading it but on metrics; the performance, futures and lives of academics hinge on a set of numbers that are hopeless at assessing such things as the quality of teaching and research but which are beloved of university managers, heads of department and HR departments as the definitive guide identifying whom to hire and whom to fire. The ‘successful’ academic of the future is likely to be the one who has swallowed the notion that quality is no more than a number and that there are ways to ‘game’ the metrics in order to achieve the required number—such as having your research published in a journal which is part of a citations cartel.

Both the value and the limitations of quantification are familiar to most historians. In particular, quantitative methods have become an important part of the armoury of the social historian. It would be inconceivable for a history department not to teach social history, but this has only been the case since the 1960s. Before then social history was a niche, poorly-valued area. In part this was because of prevailing attitudes among historians (‘history is about great leaders and high politics; who wants to know about the dreary lives of common people?’), but it was also because there were real difficulties researching the subject—there was no absence of useful data about many periods but there was a lack of adequate tools to make meaningful interpretations about past societies. For the historian interested in early modern English society, for example, plenty of records and documents exist (parish registers, wills, inventories, accounts, court records, manorial records, diaries, letters, etc.), but for those sources to provide more than isolated snapshots of social life would require time, labour and resources far in excess of that available to any single historian. But then along came computers and databases, and with them the birth of cliometrics.

Cliometrics (a coinage joining the ancient muse of history, Clio, with metrics, the system or standard of measuring things) involved applying to history statistical, mathematical and quantitative models that had been developed by economists. Large datasets (a population census is an example) could be fed into a computer and then interrogated, something no individual historian could have done before the advent of digital technology. The impact on historiography was huge: whole new areas of the past could be opened up to investigation, and general hypotheses could be framed and tested within minutes rather than reached only after years of painstaking and gradual accumulation of evidence. Historians became excited about the possibilities: assemble a body of data, feed it into a computer, ask the right question, and an answer will be provided in the time it takes to make a cup of tea. Even better, it was thought, the answers would be scientific. The distinguished French historian, Emmanuel Le Roy Ladurie, claimed that ‘history that is not quantifiable cannot claim to be scientific’ and envisaged a future in which all historians ‘will have to be able to programme a computer in order to survive’ (as quoted in Richard J. Evans, In Defence of History (London: Granta, 1997), p. 39).

One of the earliest and most impressive applications of cliometrics stemmed from the Cambridge Group for the History of Population and Social Structure, founded in 1964 by Peter Laslett and Tony Wrigley. Using English parish records (one of the legacies of Thomas Cromwell’s administrative reforms was the commencement of regular parish record-keeping from 1538) as a dataset, the outstanding achievement of the Cambridge Group was Wrigley and R.S. Schofield’s The Population History of England 1541-1871: A Reconstruction (Cambridge: Cambridge University Press, 1981). In addition to presenting huge amounts of data for pre-industrial England about birth, marriage and death rates, population size and growth, mortality crises, and much else besides, their work demolished various myths and assumptions about the past. For example, they conclusively proved that the nuclear (rather than extended) family was the overwhelming norm, and that most couples had no more than two surviving children (it was only the wealthy who tended to have large broods), rendering as pointless the surprisingly common assumption that the social and family conditions of the developing world are comparable with those of the pre-industrial world. For historians of early modern social and family life Wrigley and Schofield’s research is one of the fundamental starting points for inquiry.

A table from Wrigley and Schofield’s Population History of England

However, a good historian (as scientifically defined according to my recently-devised Good Historian Factor) would not consider the cliometrics of Wrigley and Schofield as the end point—unlike the policy makers and managers who see metrics as the end point. The historian would understand that it is one thing to quantify family structure or life expectancy, quite another to assess the quality of family life or the effects on emotions and thought of high mortality rates. In order to do the latter it is necessary to look beyond the numbers and do some old-fashioned source evaluation: the historian would need to engage in critical analysis of diaries, letters and other texts, to assess what images and artefacts tell us, and to think broadly with concepts, methods and theories. What results is not a number but an interpretation, and (much to the dismay of the policy makers and managers) not one that is scientific or definitive but one that is open to questions, challenges, discussion and debate.

Some of the dangers of placing too much faith in cliometrics can be seen in Time on the Cross: The Economics of American Negro Slavery (New York, 1974), an attempt to apply a quantitative analysis to the history of American slavery by two economic historians, Robert Fogel and Stanley Engerman. The work was in two volumes, the first presenting historical analysis, the second the quantitative data. Based on the data the authors reached several controversial conclusions: they argued that southern slavery was more efficient than free agriculture in the north, that the material conditions of slaves were comparable with those of free labourers, that the economy in the south grew more rapidly than that in the north, and that slaves were rewarded for their labour more highly than had previously been thought. Although some critics questioned the quality of the data used by Fogel and Engerman, most acknowledged that the quantitative methodology was broadly sound. What was unsound was the failure to present the information in a qualitative way. The supposedly ‘objective’ analysis of American slavery, with its hard data pointing to growth within the southern economy and to work conditions comparable to those of free labourers, ends up presenting slavery in a benign light—however much the authors themselves were clearly and genuinely opposed to any justification of slavery. A much better historical approach would have been to place more emphasis on the social and political context of slavery, and to assess its psychological and cultural aspects. For example, the authors presented the statistical finding that slaves were whipped 0.7 times per year on average. On its own such a finding might suggest that the slave economy was anything but unremittingly brutal, and maybe was not so bad after all. But what that figure (and Fogel and Engerman) fails to tell us is what the whip, and the threat of the whip, meant psychologically to the slave.
A more significant impact on the life experience of the slave than the 0.7 whippings per year was more likely the fear of the whip and the lack of freedom to escape this fear. Thoughts, feelings, mental states are impossible to quantify—but they are surely essential to an historical understanding of slavery.

The policy makers and managers are clearly not historians (or else they have a dismally low Good Historian Factor). If they were, then they would see metrics as interesting and often useful information (pretty much like all information, therefore), but also limited in what it tells us; they would appreciate that metrics can be distorted by insufficient or manipulated data; they would see how essential it is that metrics is only one part (and probably a small part) of how to understand something, and for metrics to be of any use there needs to be qualitative interpretation; they would recognize that to judge the quality of research (or anything else) solely by using quantitative approaches rates a high number on my recently-devised, objective and scientific Stupidity Factor.

Bullying, Metrics, and the Death of Professor Stefan Grimm

On 25 September 2014, Stefan Grimm, professor of toxicology in the Faculty of Medicine at Imperial College London, was found dead in his home. He was 51. An inquest into his death has been opened and, while no official cause has yet been given, it would appear that he committed suicide. One reason to suspect suicide is that an unusual thing happened on 21 October, nearly four weeks after Grimm’s death. An email with the subject heading ‘How Professors are treated at Imperial College’ was sent from Grimm’s account to about forty internal addresses at the college. It would appear that Grimm had pre-set his account to send this email after his death; nothing has so far suggested that it is anything other than genuine. The email presents a dispiriting and disturbing insight into the state of modern British academia.

Included with Grimm’s message were two emails sent to Grimm by Martin Wilkins, professor of clinical pharmacology and head of the division of experimental medicine at Imperial. All these emails have subsequently been leaked and have now become public knowledge; the Times Higher Education has published them in full alongside an article on Grimm’s death. There has also been extensive commentary in other publications as well as on blogs (notably by David Colquhoun, emeritus professor of pharmacology at University College London).

The essence of the exchange and the circumstances outlined in the emails is as follows. Grimm, an active and successful researcher with over seventy publications to his name, a large number of grant applications, and significant research funding to his credit, was informally and humiliatingly told by Wilkins that he would be sacked. Wilkins’ emails to Grimm confirm that steps were being taken that would in all likelihood lead to Grimm’s dismissal. With barely disguised insensitivity, Wilkins explained to Grimm ‘that you are struggling to fulfil the metrics of a Professorial post at Imperial College’, and that, unless Grimm’s performance improved, formal disciplinary procedures would be initiated. It is hard not to share Grimm’s bemusement that none of his various publications or research activity seemed to count in the eyes of the college. The final straw seems to have come when Grimm was informed by Wilkins that he would no longer be able to supervise a PhD student who had been accepted by the college and wished to work under Grimm. As Grimm wrote in his email: ‘He [the prospective PhD student] waited so long to work in our group and I will never be able to tell him that this should not now happen. What these guys [Wilkins and Gavin Screaton, then head of medicine at Imperial] don’t know is that they destroy lives. Well, they certainly destroyed mine.’

Anyone who has worked in academia will understand Grimm’s sentiments. This is not a career one falls into for want of better alternatives; it takes years of study, often combined with straitened financial circumstances and self-sacrifice, to acquire the experience, skills and knowledge necessary to work in academia. Why do this? Because of a passion for knowledge and a dedication to furthering it through research and teaching; because academics care intensely about what they do and about its importance. There are times when research goes spectacularly well, but the nature of research is that there are also fallow periods, times when dead-ends are reached and new approaches need to be taken, times when patient, slow groundwork is being established that takes time to yield results. Part of the point of the university is to provide the institutional setting in which teaching and research can be nurtured—in which the commitment, hard work, and ups and downs of the life dedicated to academia will be understood, appreciated, respected and supported. Increasingly, however, universities regard their academic staff as little more than expendable items on a profit/loss balance sheet. Once that mentality has set in among university management, it does not take long for the type of shabby, undermining and humiliating treatment that appears to have been meted out to Grimm to become the rule rather than the exception.

Much of the comment on Grimm’s death and the circumstances surrounding it has focused on two things: the culture of academic bullying; and the absurdity of metrics. There is no doubt that Wilkins emerges from the exchange as a bully (or perhaps as the bullying henchman of Screaton, possibly ‘only following orders’); his approach to management and interpersonal relations comes across as arrogant, callous and deliberately humiliating. Some of the blogs and online commentary suggest that Wilkins is far from unique, and that a culture of bullying is rife not only at Imperial College but across academia. As Colquhoun notes on his blog, there has been a strikingly high number of university staff taking their employers to employment tribunals, and vastly more who have signed gagging orders preventing them from speaking out about their employers—evidence at the very least of widespread problems in employer-employee relations across academia.

The days of collegiality when management might be expected to support their academic staff are fast disappearing. As Grimm notes in his final email, Imperial (although for Imperial almost any university in the UK could be substituted here) ‘is not a university anymore but a business with [a] very few up in the hierarchy… profiteering [while] the rest of us are milked for money’. The culture of university management increasingly sees both academics and students as little more than sources of potential profit. The language used in universities gives it away: academics are expected to think about ‘branding’ and ‘marketization’; business plans and strategies are the new models for how to run an academic department; departments have business managers these days. Universities were originally centres of learning, teaching and research with managerial and bureaucratic structures designed to support that core function; but increasingly learning, teaching and research have assumed the new role of supporting the managerial and bureaucratic corporations that universities have become.

The problem with running universities as corporate businesses is that much of the activity of academics does not fit into a business model. Learning and teaching, for example, are hard to quantify since they do not generate any obvious profits, and thus tend not to be highly valued by management. Student recruitment and retention are seen as important, but not as goods in themselves, rather because high levels of recruitment and retention lead to increased income. Nor does much research sit easily with a business culture. In the older collegial culture it was understood that research needed to be nurtured; researchers often needed time and patience, and they needed support even if their field, however intrinsically important, was not high profile or likely to attract large amounts of funding. Quality, above all, was the key aim. In the current climate productivity and ‘impact’ are the only things that matter. Those academics able to churn out a steady stream of articles are favoured over those whose output is good but may have fallow periods when they need patiently to develop their research without the unremitting and constant pressure of having to publish at regular intervals. Moreover, much research, by its very nature, is an investigation the outcome of which is unclear or uncertain. But modern university managers have little time for this; they want to know even before the research has begun that it will have a significant impact—not on scholarship but on wider society outside. Much valuable research struggles to find a wide audience, yet is important for its long-term contributions to knowledge and understanding; modern university management has minimal interest in such work since it does not fit with their focus on the relentless pursuit of profit. The system favours those researchers who choose obviously high-profile topics, but of such a nature that neither breadth nor depth will get in the way of rapid production. 
The aim, it would seem, is to turn universities into research factories, academics into research machines. Academics who resist the bleak prospect of becoming nothing other than an efficient, productive research machine are marked for redundancy.

University managers will object by saying that they care greatly about quality of research, that in fact all sorts of measures have been designed to assess quality. These are the metrics, the means (it is supposed) by which the performance of an individual or an organization can be measured. Metrics tend to be highly complex—and absurd. (For those interested in why they are absurd, see Colquhoun’s discussion of them here and here.) It would seem obvious to most people that in order to assess the quality of research it might be a good idea actually to read that research. In the increasingly Kafkaesque world of the modern university, however, judgments are made about research not by reading it but according to baroque and opaque performance indicators. Formulae, spreadsheets and number-crunching have replaced old-fashioned concepts of reading and thinking about something in order to consider whether it is valuable or not. How many citations a piece of research has received, where in a journal a piece of research appears, what numerical rating has been assigned to the ‘impact factor’ of a journal, what numerical value has been assigned to the position a researcher’s name appears in the list of authors of the research—out of all these comes an overall numerical value which rates the quality of the research. It is the brave new world in which managers believe they have discovered the secret of quantifying quality without having to think about or understand what it is they are attempting to quantify. It would be like trying to assess the quality of music not by listening to the music itself but by working out a formula which factors in chart success, size of record label and writing credits to generate a (spuriously) scientific number representing quality.
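To make the absurdity concrete, here is a purely hypothetical sketch of the kind of composite scoring described above: raw indicators (citations, journal impact factor, author position) are weighted and summed into a single ‘quality’ number, with no reading involved at any point. The weights, caps and function names are my own invention, not any real university’s formula.

```python
# Toy illustration of a composite "research quality" score of the kind
# described above. The weights and inputs are entirely hypothetical --
# no real institution uses exactly this formula.

def research_score(citations, journal_impact_factor, author_position, num_authors):
    """Combine raw bibliometric indicators into a single spurious 'quality' number."""
    # More citations -> higher score, capped at 100 to normalize to 0.0-1.0.
    citation_part = min(citations, 100) / 100
    # Journal prestige proxied by impact factor, capped at 30.
    journal_part = min(journal_impact_factor, 30) / 30
    # First or last author counts for more than a middle position.
    if author_position == 1 or author_position == num_authors:
        position_part = 1.0
    else:
        position_part = 0.5
    # Arbitrary weights -- the point is that no one ever reads the research.
    return round(0.5 * citation_part + 0.3 * journal_part + 0.2 * position_part, 3)

# A well-cited middle-author paper in a modest journal versus a barely-cited
# first-author paper in a "prestigious" one: the formula happily ranks them
# without knowing anything about what either paper says.
print(research_score(citations=80, journal_impact_factor=3, author_position=3, num_authors=6))
print(research_score(citations=5, journal_impact_factor=25, author_position=1, num_authors=2))
```

The music analogy in the paragraph above works the same way: swap in chart position, label size and writing credits for citations and impact factor, and the formula is equally indifferent to what it is supposedly measuring.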

While hardly bearing comparison with the experience of Stefan Grimm, a former colleague (an academic in the humanities) told me about his own dealings with the new university culture. When asked whether he had any research to submit to the recent Research Excellence Framework (REF) he suggested some articles written over the previous few years. He considered them to be good contributions to scholarship, but of course it was for others to judge; one way they might assess their quality would be to read them. However, his research was immediately dismissed out of hand, without being read, as being unsuitable for the REF: one article because it was co-authored (so much for encouraging collaboration in the humanities!), another because it was an essay in an edited book (that the book was edited by some of the leading scholars in their field meant nothing), a third because it was not in a prestigious enough journal, and a fourth because it was a review article, and again not in a journal with a sufficient international reputation (that the review was intended to make a useful contribution to a broad research area made no difference). Clearly he was a poorly-performing academic by the criteria of the university, notwithstanding the long hours he committed to the job and his extensive and, as was evident from feedback from both students and colleagues, successful teaching and administrative roles. His approach to research and academic work did not fit the REF-model and the current values of university management; thus it was made clear to him that, unless he started complying with the system, he had no realistic future in academia. Despite his dedication and contribution to his university, he has unsurprisingly become disillusioned enough to wonder whether academia is an environment he wants to be in any longer.

The modern values of university management are such that a university will abandon plans for a new building to house a Human Rights Centre of worldwide reputation, replacing it instead with a business school; it will attempt to close down the history of art department; it will suggest putting the Latin American collection up for sale; it will not renew the visiting post of a Nobel laureate; and it will lose a renowned writer and chair of the Man Booker International Prize because it is not prepared to accommodate her roles (the prestige and reputation of which clearly mean nothing to the managers) with the rigid and constantly-monitored targets devised by management. All this at the University of Essex (as recounted by Marina Warner in the London Review of Books, volume 36 number 17, 11 September 2014, pp. 42-3).

It is hardly surprising that such a culture fosters bullying on the part of managers, and stress, anxiety and insecurity among academic staff. Some will argue that this is a recipe for ‘success’: Imperial College is ranked, after all, among the top few universities in the world (using, of course, ranking systems based on yet more absurd metrics). Others may wonder whether the price to be paid for this ‘success’ is worth it: the important research that does not get done because it does not fit the current business model; the excellent teachers who are dispensed with because their work does not fit with the performance metrics; the students who are squeezed for every penny, and the unsavoury scramble for international students who bring in the highest fees; the rewards of long and dedicated service in academia coming in the form of intimidation, humiliation and mass sackings; the human suffering of depression, stress and anxiety among academics that comes in the wake of the managerial culture; and, possibly, the death of Stefan Grimm.