In defence of school absenteeism

Prior to 2013, schools had discretion to allow term-time absences for pupils. Regulations introduced in 2013 put a stop to this, allowing absences only in exceptional circumstances (such as to attend a funeral), and stipulating fines for families who did not comply. This system of penalties is now likely to be thrown into confusion by the decision of a court to overturn a £120 fine imposed on a father for taking his daughter on a week’s holiday during school term. The appeal against the penalty hinged on the question of what constitutes ‘regular’ attendance. The law simply states that a child must attend school ‘regularly’ without offering a precise definition of the term. In this case, the father pointed out that his daughter’s attendance had been 94% the previous year and was 100% at the point of going on holiday. Clearly—and the court felt so too—such numbers might be thought to fall within the definition of ‘regularly’; equally clearly, ‘regularly’ is a sufficiently vague term to make most fines for non-attendance unreasonable.

But the issues go deeper than legal argument about the definition of a word. In particular, it is worth considering who is responsible for a child’s education. Many people suppose that the state assumes responsibility. Legally, however, this is not the case. Education is compulsory, but the primary responsibility for ensuring that a child receives an education falls to parents. Section 7 of the 1996 Education Act states that it is the ‘duty of parents to secure education of children of compulsory school age’, that this duty involves arranging education suitable to the ‘age, ability and aptitude’ and ‘to any special educational needs’ of the child, and that this may be achieved ‘either by regular attendance at school or otherwise’.

The legal situation can be summarized in this way: parents have the responsibility for ensuring the education of their child, and they may choose to fulfil this responsibility by sending their child to school, but they are not obliged to do so. The phrase ‘or otherwise’ has generally been understood to cover home education—and, indeed, it is the basis for the excellent home education resources and campaigning group, Education Otherwise (whose slogan ‘Education is compulsory, school is optional’ summarizes the legal situation more succinctly than I have done).

The law here seems eminently reasonable: parents may well choose to delegate their child’s education to a school, but to compel them to do so would be an unwarrantable intrusion by the state on the freedom of parents to decide on the best interests of their child. Most parents are better placed than the impersonal state to understand the needs of their child, and there may be excellent reasons why they deem formal schooling unsuitable to those needs. Not all parents believe that corralling children into a classroom to be drilled for hours in preparation for an endless round of tests and exams comes close to representing an ‘education’.*

Both legally and morally, therefore, there are question marks over the power of the state to fine parents for pupil absences. As long as the parents are ensuring that education is being provided, then it would seem that there is compliance with the law. Even if we accept that delegating education to the state involves entering some form of agreement to ensure school attendance, it is not clear that occasional absences are in breach of this agreement. As the recent appeal case confirms, occasional absences do not breach the requirement for ‘regular’ attendance.

Of course, it may be objected that such absences do in fact breach the duty of parents to provide an education for their child. Many suppose that unless a child is in school, that child is not receiving an education—and they will particularly complain about a child being on holiday instead of in school. But such objections stem from an excessively narrow idea of what education is or should be about. Education is not solely about passing exams nor should it be exclusively concerned with preparation for the workplace. These may constitute important parts of it, but above all education is about preparing children for life. A child will learn far more, and far more important things, from a week on holiday (about, for example, a different culture, a different place, different activities, family relations, and so on) than from a week in the classroom. The notion that a few days of lessons are a more important life experience than a holiday is evidence of the skewed values of society today. (Indeed, a child will almost certainly learn more about life from occasional truancy than from the lessons that he or she has skipped.)

Consider the case of the couple who have been taken to court because they took their two young children to India to visit their extremely sick grandfather. There are compelling personal reasons why such an absence might reasonably have been granted, none of which seems to have counted for anything against the bludgeoning approach of the state. But beyond that, it could be argued that the educational value of the trip far surpassed anything that might have been learnt in a week or two of school. Not only would it have entailed the experience of a different culture, but it would have involved (unfortunately, of course) an engagement with such important parts of life as illness, ageing and mortality. Only the most blinkered educationalist would maintain that a few literacy and numeracy classes have more educational value than that.


The Department for Education claims that even one week of missed classes each year will damage a child’s GCSE results, with a lasting effect on life chances. But this is simple scaremongering; reassuringly for those children who are occasionally ill (i.e. all of them), there is nothing daunting about catching up with a week or so of lessons here and there. I have no doubt that there are often serious problems arising from absenteeism, but trying to solve those problems by clamping down on all absences really does introduce the proverbial sledgehammer and nut into the debate.

Michael Gove, the former Education Secretary, proposed docking child benefit from parents who allow unauthorized absences; Michael Wilshaw, the chief inspector of Ofsted, would like to fine parents whose children don’t do their homework and has urged schools to ‘adopt a hard line and not allow the youngsters out’. In the punitive and narrow vision of Gove and Wilshaw, education amounts only to teachers in classrooms preparing pupils for exams, and the duty of parents ceases to be truly educational and becomes instead something akin to the task of prison officers transporting their charges to the institution and locking them down. I don’t doubt that classroom education has many benefits, nor do I reject the value of exams, but to equate education exclusively with school lessons and testing is not only life-denyingly dreary but also hugely mistaken.


*I declare an interest here: as a former home-educating parent myself, I understand the reservations that many parents feel about school. I intend to write more on this in a future article.

Hope, Experience, and Efficient File Management

It’s that time once again when hope joins battle with experience: the start of a new academic year. This year, I tell myself (as I have told myself every year since time out of mind), I shall be organized like never before. There will be no more coffee-fuelled all-nighters to prepare lessons; no more hours lost tracking down an old lecture to find it filed in my Music folder; no more staring blankly at my diary unable to decipher the name of the student I am supposed to be meeting; no more panic on an unmoving train because I have left a margin of error of less than a minute to arrive at a meeting. I will be a lean, keen teaching machine, gliding through preparation and lessons with elegance and efficiency, with plenty of time to write and research productively. I may even make some money at last.

If hope is outpointing experience by Christmas then I will be doing well.

First I need to find my desk. That involves hacking a path through piles of books and magazines to the area of my room where I last remember seeing said desk, and then excavating through the papers, unopened bills, junk mail, long-forgotten invoices, receipts, post-its, pens, pencils, paper clips, elastic bands and sundry miscellaneous items to what I believe will be the desk itself. Treasures, in the form of untouched Rizla packets, are likely to be found during this archaeological dig. All of this reminds me of how once, when I had an actual office in an actual academic department, I would advise students on the merits of efficient organization, advice delivered across a scenic landscape of mountainous chaos on my desk.

Then I need to sort out my computer desktop. Unlike my physical desk, upon which no organization at all can be detected, my computer desktop suffers from a worse state of affairs: a half-hearted semblance of organization. Clearly at some point (probably the start of the last academic year) I devoted a solid half hour of attention to this task before being distracted by a book that needed to be read, never to return. There are folders, inside many of which are further folders and files that logically belong there. Keeping them company are many other folders and files whose rationale for being there would baffle even the wisdom of God. Different versions of my CV crop up incongruously in various locations, as if it were a self-replicating, mutating virus.

Finally, emails… but that will surely, as always, be a task too far. (Ah, experience strikes an early blow against hope…)

I’ve come to realize that when it comes to organization I should either have none at all or a complete, fully worked out and implemented system. The virtue of the former (my preferred approach for many years) is that I became adept at storing relevant and important information in my head. It did not matter that a letter containing the time and date of an appointment was hopelessly buried beneath any one of several piles of papers, nor that I did not write down appointments in a diary, for I had already committed this information to memory. This non-system almost never failed. Problems arose when, having been foolishly seduced by the time-management and efficiency gurus, I introduced some formal systems of organization. Because I was never organized enough in the first place to introduce organization in the proper way (the fatal flaw afflicting these systems among the irredeemably disorganized), I ended up with a hideous hybrid, a jerry-rigged mishmash of systems and memory. So, some appointments ended up in my diary, and some were committed to memory—others ended up both in my diary and my memory, while others (the majority) ended up in neither.

The trick is to get beyond the overwhelming feeling that bureaucratic systems and organization suck the life out of existence, and to overcome an intuitive resistance to the idea that hope can only be realized by efficient organization. Are dreams really built on time management and effective filing systems?

Dreams of Ivory, part 2: ‘Nought but grief an’ pain’

The temptation to ink a personal motto on my body has never troubled me, but if ever a drunken state of folly lands me in a tattoo parlour then the following would serve well as a permanently etched reminder of lived reality:

The best-laid schemes o’ mice an’ men / Gang aft agley, / An’ lea’e us nought but grief an’ pain, / For promis’d joy!

At the risk of insulting the literary knowledge of my readers, the lines are from Robert Burns’ 1785 poem ‘To a Mouse, on Turning Her Up in Her Nest with the Plough’. A field mouse has laboriously and carefully constructed a nest in order to stay warm and secure through the winter. Unfortunately, along comes Mr Burns with his plough, accidentally destroying the creature’s home. Burns does what any of us would do: he decides to write a poem, reflecting philosophically on how the poor mouse’s calamity reveals the universal truth that (to Anglicize the lines) ‘the best laid plans of mice and men often go awry and leave us with nothing other than grief and pain instead of the expected joy’.

Like the mouse and her nest, I regarded my Cambridge application as a fine plan with prospects of security and joy. My hopes were boosted by the fact that Cambridge were seeking two researchers: since I’ve hardly ever come first in any application process, but have been second a few times, it was reassuring to know that my perennial runner-up status would be enough for Cambridge. Then, the day after the application deadline had passed, Cambridge decided to extend it. Aside from reflecting that I needn’t have gone through all the stress of meeting the original deadline after all, I inferred positive signs from this development: of course, I reasoned, Cambridge desired a little more competition for my own obviously outstanding application. (What was it again, according to the proverb, that comes before a fall?)

In the meantime I learnt that I had been shortlisted for a couple of small teaching roles, interviews for which took place the week before Cambridge’s own proposed interview day. At the very least they would provide some practice before Cambridge, but my sights were set higher on a run of success that would mark a resounding academic comeback. The two interviews went well: I came out of each with a good feeling about my chances. I was runner-up in both.

This setback was a pinprick to my optimism in relation to Cambridge. On the day when Cambridge had promised to contact shortlisted applicants a few ripples of pessimism were lapping at my hopes. The ripples gradually expanded into waves as my inbox remained devoid of the hoped for invitation to interview. Each new email alert on my phone was a ping of promise, only to be followed by the disappointing discovery that it was merely a new notification on Facebook or a suggestion of someone to follow on Twitter. By early afternoon I reasoned that, doubtless because of the extended deadline, the selection committee had to work into the afternoon to finalize the shortlist. This line of reasoning became increasingly unconvincing as each hour passed. By late evening I was clinging to an irrational notion that the selection process had got bogged down for some unforeseen reason. Over the next few days, however much I knew the game was up and that I had been unsuccessful, vestiges of hope refused to disappear entirely. It is perhaps not surprising that having poured so much into believing something I was unable entirely to let go of that belief. Even on the scheduled day for interviews I momentarily imagined that Cambridge had had to postpone the process and that perhaps my application was still in the running. It probably took another week before I completely resigned myself to the certainty that I had been unsuccessful.

And so my application concluded not in appointment, not even in a tragi-comic tale of a disastrously bizarre interview performance, but in the short journey from receipt to the rejection pile and the bin. A close friend reckoned that this was ‘Cambridge’s loss’. Yet I doubt that. I may have been deluded all along about the merits of my application, but not even I think that Cambridge has made a grave error of judgement that will cast a long shadow over its future. I am not (yet) mad: the academic job market is incredibly tough, and I’m realistic that my CV does not make me especially competitive. I also know that you can’t lose something you never had in the first place. So I lost nothing… apart of course from two f***ing weeks of my life, and a whole load of hopes and dreams…

However, I am not interested in organizing a pity party but rather in having a really good rant (let’s call it constructive criticism) about the application process. First, consider what I wrote in the first part of this post, about the hours, days and weeks, the effort and energy invested in the application, the accompanying hopes and stresses, and the time given freely and generously by my referees. Now consider this: the University of Cambridge did not bother to acknowledge receipt of the application, and their policy is not to inform unsuccessful candidates of the outcome of their application.

Every job application is in part a direction of focus and interest towards the potential employer. For sure my primary concern lay with my own career prospects, but inevitably this involved considerable attention to how Cambridge would benefit from employing me. I was not expecting a bespoke token of gratitude for these futile efforts: a handwritten and courier-delivered letter, stained with tears of appreciation and regret, is not numbered among my fantasies. But some sort of communication might have been decent. I’m well-versed in job applications and almost equally well-versed in the experience of rejection, and it is usual that applications are acknowledged and rejected applicants are informed. Of course, all of this is done by automated email and prefab rejection templates, hardly sublime examples of the human touch. But they are better than nothing: somebody somewhere thought it might be good to keep all applicants informed; perhaps they thought that the efforts and interest of the applicants deserved a small return on their side.

I can only guess at why Cambridge adopted this approach. It can hardly be due to stretched resources: the University of Cambridge is among the wealthiest in the world, and to employ someone to prepare email templates, set up automated replies, batch applicants and press ‘send’ is not likely to bring their entire administration to a grinding halt. Maybe it reflects a culture of arrogance specific to Cambridge, a reluctance to engage in contact with the many common failures who dare to wish to work for them. Possibly Cambridge regards the arduous process of applying to work for them as such a privilege in itself that any communication from them would lead to applicants feeling spoiled. Or perhaps it is part of a deeper malaise of dehumanization in academia, in which a cold wind of bureaucratic indifference blasts away simple human niceties. For while I accept that my complaint is essentially trivial, the point is that what is most human is frequently what is most trivial: the polite acknowledgements, the unnecessary yet kind words, the myriad tiny ways in which we show recognition and mutual appreciation of one another as human beings.

Still, I must be honest. Would I have felt better had Cambridge kept me informed of the progress of my application? Not really. It would have spared me anxiety and uncertainty, but it would have made no difference to my disappointment. Indeed, I should be grateful to the Cambridge application process. For I can now console myself with things I do not actually believe, such as that ‘I would not want to work for an institution that treats people like that’. It means that rather than focus on the shortcomings of my application I can wrap myself in indignation at the way Cambridge goes about things.

But I suspect that a cloak of indignation was not going to keep Robert Burns’ mouse sheltered and warm over the months ahead. Burns concludes his poem, however, by arguing that the mouse has an advantage over humans: whereas the mouse is concerned only with the present moment, doubtless laying down another ‘scheme’ without worrying whether it will go ‘agley’, Burns cannot avoid dwelling gloomily on past and future. In difficult circumstances we may sometimes ask ourselves whether we are ‘man or mouse’. Burns’ poem suggests that ‘mouse’ may well be the better answer.

So I have woken from my dream of ivory with thoughts only of new plans. Oh, okay, mostly of new plans. But I spend no more than an hour or two each day panicking about the uncertainty of my future, no more than a few minutes feeling resentful about the University of Cambridge application process, and merely a second or two dwelling on the words ‘cold arrogant f***ers’.

When a butterfly flaps its wings in Seoul: UCL and the Tim Hunt affair

In chaos theory the butterfly effect goes something like this: A butterfly flaps its wings in Seoul and venerable scientific institutions in London fall over themselves to end the career of a Nobel laureate scientist.

Sir Tim Hunt made the following remarks at a world conference of science journalists in Seoul: ‘Let me tell you about my trouble with girls. Three things happen when they are in the lab. You fall in love with them, they fall in love with you, and when you criticise them, they cry.’ Hunt claims that what he said was ‘totally jocular [and] ironic’, and that it was uttered while feeling ‘nervous’ about speaking before the conference. The remarks certainly come across as the sort of nervy attempt to get an audience onside and engaged through an insider’s joke, and like many such attempts they are neither obviously funny nor well-considered. They are indeed sexist (which is not the same thing as saying that Hunt is sexist). It seems that Hunt didn’t think much had gone awry, but some of his audience were sufficiently unimpressed to start tweeting his remark. Escalation was rapid: strong condemnation appeared across social media, newspaper columnists quickly began intoning about problems of sexism in science, Hunt was labelled, among other things, ‘a clueless, sexist jerk’ and ‘a misogynist dude scientist’, and before Hunt’s plane had even landed back in England he had been forced to resign his honorary position at University College London. Subsequently he also resigned from the European Research Council and from his role at the Royal Society.

As to Hunt’s remarks themselves, they deserved a clarification and an apology—both of which Hunt provided. But it is a peculiarly modern phenomenon that 37 words can have the consequence of ending a distinguished career. I’m sure that among the many thousands of words I have spoken in lectures there might have been a few which, upon reflection, I might regret having said or would wish to have rephrased; there might also have been a few which, taken out of the specific context of the lecture itself and the tone in which I made them, would appear in isolation to be worthy of condemnation. One of my best jokes in my lectures (and, given it rarely raised a laugh from more than three students, evidence of my limited prospects as a comedian) was: ‘Let’s move from the delights of Balinese cocks to talk about French pussies.’ (The context was a discussion of Clifford Geertz’s essay on ‘Deep Play: Notes on the Balinese Cockfight’ and Robert Darnton’s ‘The Great Cat Massacre’.) Another of my favourite jokes involved me showing a slide of a bikini-clad Elle ‘The Body’ Macpherson (that one always got more laughs, but as a visual joke it would be difficult to explain it here). There was always a slight risk that such jokes might backfire or take on a life of their own in this age of Twitter and Facebook, and that I could have been caught up in a maelstrom of impassioned discussion about ‘appropriate’ language or the problem with white male historians. (My career would probably have benefited from that sort of thing, so I rather regret that my students were sufficiently sensible to understand or ignore or not laugh at my jokes; I could have done with the occasional hothead in my audience, full of unintelligent righteous indignation and with a massive following on Twitter.)

None of which is to defend Hunt’s ‘joke’. I merely make the point that we live in a time in which a career can easily hinge on a single sentence—and a sentence which may be misunderstood or misrepresented, and which, even if neither of those, may amount to no more than the sort of ill-considered, poorly expressed comment that we all make from time to time.

The really interesting feature of Hunt’s case seems to me less the lynching mentality amid the social media wildfire and more the response of UCL, the ERC and the Royal Society. Imagine you’re an eminent academic with a distinguished career, probably not well-versed in social media and likely with little or no media training; you make a rather crass remark, but one you hadn’t given much thought to at the time; once it has been brought to your attention you issue an apology and a retraction, sincerely admitting your error and folly; nevertheless, you discover that your initial remark is now being discussed with varying degrees of outrage all over the world. What would you expect your college—your employer, and the institution which supposedly supports and nurtures the work from which it benefits—to do? It is understandable that they would wish to distance themselves from the remark, but you might reasonably expect that they would also wish to discuss the matter with you, to seek clarification, and not to rush to judgement. You are likely to hope that they would offer support and protection against the media lynch mob, and that they would use their expertise to help find a way through the storm. Sure, you may have to issue yet further apologies, and you are going to have to put up with a battering for a while, but assuming your initial remark was not totally beyond the pale and your subsequent apology is sincere, then you might reasonably expect that your college would stand by you and find ways of repairing your (and their) reputation.

Instead of speaking to Hunt about his comments in Seoul, UCL decided to tell his wife (Mary Collins, also a scientist at the college) that her husband should resign immediately or he would be sacked. Hunt duly resigned as soon as he returned to England. UCL softened this ultimatum by saying it would be ‘a low-key affair’, but then proceeded to trumpet on their website and Twitter how they had got rid of him. Hunt’s positions at the ERC and the Royal Society became untenable as a result, so he resigned from them too. His long scientific career was over. As he has said: ‘I am finished. I had hoped to do a lot more to help promote science in this country and in Europe, but I cannot see how that can happen. I have become toxic. I have been hung to dry by academic institutes who have not even bothered to ask me for my side of affairs.’ Whatever the wrongs of his remarks in Seoul—foolish and crass, but surely not remotely near to being at the most offensive end of the spectrum, and over which he has been genuinely contrite—Hunt appears to be right: he has been hung out to dry by UCL.

Yet perhaps we should not be surprised at this outcome. Collins, who has described her husband’s remarks as ‘unbelievably stupid’ but utterly rejected the idea that he is sexist, has commented: ‘They [UCL] have let Tim and I down badly. They cared only for their reputation and not about the wellbeing of their staff.’ But such is the modern academy. The days when a university was collegiate—when academics were nurtured and supported through both good times and bad—are rapidly fading. The modern academic institution increasingly resembles a brand and a business: given any inkling that the brand may be damaged, or any indication that an academic’s output may fall short of financial targets, then, no matter how dedicated or hard-working or distinguished the academic, the institution is likely to abandon rather than support the individual. Sometimes an entire department, if it has failed to ‘game’ the ridiculous REF exercise, faces the brutal axe; and in their different ways the experiences of Tim Hunt, Stefan Grimm and Marina Warner all point to a dispiritingly corporate and inhuman culture that pervades the academy today.

One way of looking at this is to see it as an example of a wonderful bureaucratic rationalism: as long as you meet all the performance targets and are careful to stay consistently ‘on message’, then you’ll be fine; fail to do either of those, then you’re out. Some would say this is the only way to maintain excellence. But in fact sober reason has nothing to do with this. If, on the basis of a single remark such as that of Tim Hunt, a long career is summarily terminated, then the world we live in is not rational; rather, it is a world of chaos.

Dreams of Ivory, part 1: ‘Promis’d joy’

Recently, amid an aberration of high hopes, I applied for a research position at the University of Cambridge. The project to which the researcher would contribute looked exciting and interesting. It combined several areas of inquiry that related closely to my own academic background and knowledge, and, in a happy rarity, I could actually tick all the boxes on the person specification without having to resort to mental casuistry. Clearly oblivious to the concept of hubris, I concluded that the position was ‘made for me’ and began envisaging all the fine research I would be doing. Then I remembered that I needed to apply for the job first, so I set about the task of making my application heroic and formidable.

[Image: the University of Cambridge. Obviously the place for me…]

The application process presented more than the usual range of hoops to jump through. Cambridge wanted a completed application form and a CV; a sample piece of writing; a 2,000-word statement of research aims; and two references to be submitted by the deadline. Expecting applicants to assume responsibility for chasing up and supplying references seems to be a common characteristic of Oxbridge positions. Presumably it is a tactic to deter applications from chancers and non-entities and general riff-raff. On several occasions in the past I have judged that I probably fell into at least one, and often all three, of those categories and thus had been appropriately discouraged from attempting an application; after all, there is a risk of annoying potential academic supporters by pestering them to write references for applications whose chances of success are long odds against. But this time I assessed my chances as moderate to good (and ‘near dead cert’ in my more deluded moments), and so was reassured that I would not be annoyingly encumbering two busy academics with a pointless addition to their workload. I proceeded to discuss the application with potential referees and thankfully two excellent and eminent scholars agreed to supply references.

Meanwhile I was carefully researching the project and its personnel. I downloaded for a small cost a book by the project’s director, and then read it closely while taking copious notes. I located a few relevant journal articles and read those too. I risked giving myself a hernia by transporting home a dozen weighty library books relevant to the project, all of which I browsed, and some of which I actually read, mining them for ideas. My own bookshelves supplied several more books; and I scoured the internet, looking at images and early books, journal articles and, er, Wikipedia. Eventually, with several pages of notes and plans, I was ready to write the 2,000-word research statement (indeed, I felt capable of writing a 20,000-word statement had Cambridge wanted that). I settled down one early evening to turn the crystalline statement that had formed in my mind into real words on a page; six hours later I had one abandoned statement of 250 words, and a new beginning of about 50 words. And so I had to bed down for the long Easter weekend, toiling for hour after hour over the four days, neglecting to deliver chocolate to my children, until, in a fitting memorial to the risen Christ, I had produced a delicately crafted proposal, exquisitely calibrated to convey my knowledge, experience, enthusiasm and potential. Unfortunately, when I pasted the statement into the relevant text box on the website I discovered that what Cambridge really meant by a 2,000-word limit was in fact a 10,000-character limit. My word count was fine; but crucially my character count was not. So another couple of hours was spent chipping away at the statement while half-wondering whether I was about to have a stroke.

Next I updated and beautified my CV, applying delicate final touches with the loving attention of a High Renaissance sculptor. I completed the standard application form; I located after a convoluted search among memory sticks and email attachments an article I had once written which would serve as my sample of writing; I converted all the files into PDF (which, for some reason, was the only file format Cambridge was prepared to accept) and uploaded them; I irrationally and unnecessarily panicked over whether my referees would upload their references by the deadline; and, at last, I clicked the ‘Submit application’ button. Pondering whether to offer a prayer for divine assistance, I concluded that my lack of belief in God might make such a prayer ineffective; even then I wondered whether to make some sort of divine deal just in case He exists, before finally and forcefully pulling myself together. Evidently the fact that I had probably spent in total two full working weeks on the sundry tasks associated with this application had played havoc with my capacity for rational thought. But not so much that I wasn’t able to take a moment to congratulate myself on having got it all done, which then eased into several hours of daydreaming about all the clever and impressive things I would say in the interview. I performed brilliantly, bringing repeated appreciative nods from the panel; I speculated whether I would choose Prosecco or go all out for Champagne to celebrate being offered the position; I then reminded myself that Cambridge probably hadn’t even got round to shortlisting me yet in the three hours since I submitted my application. So I steeled myself for the weeks of waiting, optimism, uncertainty, dreams and fears to come.

The Right to Remain Silent? More Quiet in the Classroom

One of my earlier posts, ‘Quiet in the Classroom’, has prompted an interesting discussion on Facebook. The original article concerned my experience of university seminar teaching, specifically in relation to two contrasting seminar groups of history undergraduates: one, a highly vocal group who interacted with one another extremely well; the other, a group in which, no matter what I tried, profound silence was the unshifting, default position. What particularly interested me was that it was the latter group who performed at by far the highest level when it came to formal written assessment. I concluded that this was not necessarily surprising: history is a discipline requiring extensive and solitary reading, reflection and writing, and so those personality types (the introverts) who are most at home when quietly reflecting, and least comfortable when expected to interact publicly within a group, may be especially well-suited to the subject. Yet quiet students are increasingly seen within academia as ‘problem’ students, since they are not meeting academic expectations of contributing regularly to classroom discussions and engaging and interacting vocally with their peers. This seems to be a reflection of the more general tendency to see extroversion as an ideal and introversion as a problem. (In the article I refer to Susan Cain’s fine exploration of this issue in her book, Quiet: The Power of Introverts in a World That Can’t Stop Talking.)

It is clear from some of the comments I have received that a number of students share my feelings about the growing expectation that learning has to be a socially interactive process. As an undergraduate student once myself, I can recall how daunting the seminar was: for me as a shy, introverted type, the prospect of having to speak in front of a group of people (including, of course, someone with vastly more knowledge about the topic of discussion than I had) induced anxious and uncomfortable feelings. The risk of making a fool of myself in front of my peers was forever at the forefront of my mind. I am sure this is a common experience, particularly among students, many of whom are young, lacking in confidence, and anxious about social relationships and how they come across to others. Rather than make them feel that their reticence is a problem, we should instead reassure them that it is perfectly normal.

Of course, part of the point of education is to push and challenge students—to support them in gaining more confidence in areas they may find difficult and uncomfortable, and to prepare them for a future in which they may well at some point have to contribute to meetings or deliver a report. But to approach this task bullishly by demanding that students interact (as some teachers do) is insensitive and narrow-minded. It fails to appreciate that personalities vary, and, worse, it can generate unnecessary anxiety among students to the extent that, counterproductively, studying becomes for them a miserable experience.

A better approach would be to create the right conditions that enable students to contribute if they wish, while respecting the right to remain silent—we should not demand that anyone speak if (for whatever reason) they don’t want to, nor should we make a student feel embarrassed, ashamed or a ‘problem’ if they are quiet. Fundamental to the classroom environment is an atmosphere of respect and tolerance—which means that everyone (students and teachers) respects and tolerates the various personalities and views within the group. Rather than become frustrated that some students seem to say too little while others say too much, one might reflect on how the seminar is valuable training for how we can interact constructively with a wide range of people and personalities. For some that means learning to contribute more; for others it means learning to contribute less; for all it means respecting the contributions of others, however sparing or effusive they may be.

Does it really matter anyway if a student says little in class but is otherwise performing at a good academic level and seems generally happy? I would say not, and hence that there is no justification for worrying about it. Instead we might focus on that student’s evident strengths in other areas of the learning process; we might acknowledge that for some people it takes years to develop the confidence to speak regularly in social groups (something I know from personal experience); and we might realize that there are many paths through life for which confident social interaction is not an essential skill but the ability to reflect and think is. Listening and thinking are at least as important as talking and acting.

Cliometrics: Or, What Historians Can Tell Us about Metrics

Measurement and quantification have become the guiding lights of our age. Numbers are becoming the principal means by which we make sense of our lives and our world. Wonderfully, or so it may seem, we can put a numerical value on almost every aspect of our experience: our health, our happiness, our success, all can be rated and compared with the health, happiness and success of others. How liberating it is to work out whether we are happy without relying on such messy and imprecise things as the nuances of feelings and subjective experience! I may feel happy, but am I happy? Best check it according to one of the many ‘scientific’ ways of quantifying it. Even better, I can check my life satisfaction against the national average on David Cameron’s happiness index (happily funded to the tune of £2m per year in these times of austerity). It may even boost my happiness rating to discover that I’m happier than most other people…

It is increasingly hard to resist this brave new world—the numbers insidiously work their way into every area of our lives. How is my writing going? I’d better check the stats on views and visitors and referrers to my blog. Was my teaching last term successful? Let’s look at the average ratings the students gave the course. Am I popular? The number of friends on social media will answer that. Am I in good shape? Best work out my Body Mass Index. Is this meal healthy? I can cross reference its calories and sodium content with recommended daily averages. Which of these books should I buy as a Christmas present? I’ll let the average reviewer ratings help me decide. Is it worth reading this piece of research? Let’s check the ‘impact factor’ of the journal it has been published in… oh, that’s quite a low number, so best not bother to read it then. How absurd is all this measuring of feeling and quantifying of quality? Well, it rates highly on my recently-devised and extremely scientific Absurdity Factor.

It’s not that the formulae, numbers and statistics in themselves are bad: they are simply pieces of information about which we need to exercise critical judgment, evaluating whether or not they are worth anything. It is the growing tendency to dispense with evaluation that is the problem: doubting and forgoing our ability to make subjective judgments, we instead treat the numbers and ratings as if they were reliable, scientific truths. In the process—and here it gets really serious—careers and lives are destroyed. Few employees are free from the use of performance indicators, the metrics that are used to measure whether someone is doing their job well. As I’ve discussed elsewhere, higher education has become obsessed with them: the quality of scholarly research is judged not by reading it but by metrics; the performance, futures and lives of academics hinge on a set of numbers that are hopeless at assessing such things as the quality of teaching and research but which are beloved of university managers, heads of department and HR departments as the definitive guide to whom to hire and whom to fire. The ‘successful’ academic of the future is likely to be the one who has swallowed the notion that quality is no more than a number and that there are ways to ‘game’ the metrics in order to achieve the required number—such as having your research published in a journal which is part of a citation cartel.

Both the value and the limitations of quantification are familiar to most historians. In particular, quantitative methods have become an important part of the armoury of the social historian. It would be inconceivable for a history department not to teach social history, but this has only been the case since the 1960s. Before then social history was a niche, poorly-valued area. In part this was because of prevailing attitudes among historians (‘history is about great leaders and high politics; who wants to know about the dreary lives of common people?’), but it was also because there were real difficulties researching the subject—there was no absence of useful data about many periods but there was a lack of adequate tools to make meaningful interpretations about past societies. For the historian interested in early modern English society, for example, plenty of records and documents exist (parish registers, wills, inventories, accounts, court records, manorial records, diaries, letters, etc.), but for those sources to provide more than isolated snapshots of social life would require time, labour and resources far in excess of that available to any single historian. But then along came computers and databases, and with them the birth of cliometrics.

Cliometrics (a coinage joining the ancient muse of history, Clio, with metrics, the system or standard of measuring things) involved applying to history statistical, mathematical and quantitative models that had been developed by economists. Large datasets (a population census is an example) could be fed into a computer and then interrogated, something no individual historian could have done before the advent of digital technology. The impact on historiography was huge: whole new areas of the past could be opened up to investigation, and general hypotheses could be framed and tested within minutes rather than reached only after years of painstaking and gradual accumulation of evidence. Historians became excited about the possibilities: assemble a body of data, feed it into a computer, ask the right question, and an answer will be provided in the time it takes to make a cup of tea. Even better, it was thought, the answers would be scientific. The distinguished French historian, Emmanuel Le Roy Ladurie, claimed that ‘history that is not quantifiable cannot claim to be scientific’ and envisaged a future in which all historians ‘will have to be able to programme a computer in order to survive’ (as quoted in Richard J. Evans, In Defence of History (London: Granta, 1997), p. 39.)
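To give a concrete flavour of the kind of interrogation involved, here is a minimal sketch, in Python and purely by way of illustration, of the sort of question a cliometrician might put to a machine-readable parish register. The file name and column layout are invented for the example; the actual methods of the Cambridge Group, discussed below, were vastly more sophisticated.

```python
import csv
from collections import Counter

# Hypothetical input: one row per register entry, e.g.
#   year,event
#   1581,baptism
#   1582,burial
# (Invented file name and layout, for illustration only.)

def events_by_decade(path):
    """Count baptisms, burials and marriages per decade."""
    counts = {}  # decade -> Counter of event types
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            decade = (int(row["year"]) // 10) * 10
            counts.setdefault(decade, Counter())[row["event"]] += 1
    return counts

def natural_increase(counts):
    """Crude natural increase per decade: baptisms minus burials."""
    return {d: c["baptism"] - c["burial"] for d, c in sorted(counts.items())}

if __name__ == "__main__":
    for decade, surplus in natural_increase(events_by_decade("parish_register.csv")).items():
        print(f"{decade}s: baptism surplus {surplus:+d}")
```

A query of this kind runs in the time it takes to fill the kettle; before computers, even so trivial an aggregation across a century of registers meant weeks of manual tallying.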

One of the earliest and most impressive applications of cliometrics stemmed from the Cambridge Group for the History of Population and Social Structure, founded in 1964 by Peter Laslett and Tony Wrigley. Using English parish records (one of the legacies of Thomas Cromwell’s administrative reforms was the commencement of regular parish record-keeping from 1538) as a dataset, Wrigley and R.S. Schofield produced the Group’s outstanding achievement, The Population History of England 1541–1871: A Reconstruction (Cambridge: Cambridge University Press, 1981). In addition to presenting huge amounts of data for pre-industrial England about birth, marriage and death rates, population size and growth, mortality crises, and much else besides, their work demolished various myths and assumptions about the past. For example, they conclusively proved that the nuclear (rather than extended) family was the overwhelming norm, and that most couples had no more than two surviving children (it was only the wealthy who tended to have large broods), rendering pointless the surprisingly common assumption that the social and family conditions of the developing world are comparable with those of the pre-industrial world. For historians of early modern social and family life Wrigley and Schofield’s research is one of the fundamental starting points for inquiry.

[Image: a table from Wrigley and Schofield’s Population History of England]

However, a good historian (as scientifically defined according to my recently-devised Good Historian Factor) would not consider the cliometrics of Wrigley and Schofield as the end point—unlike the policy makers and managers who see metrics as the end point. The historian would understand that it is one thing to quantify family structure or life expectancy, quite another to assess the quality of family life or the effects on emotions and thought of high mortality rates. In order to do the latter it is necessary to look beyond the numbers and do some old-fashioned source evaluation: the historian would need to engage in critical analysis of diaries, letters and other texts, to assess what images and artefacts tell us, and to think broadly with concepts, methods and theories. What results is not a number but an interpretation, and (much to the dismay of the policy makers and managers) not one that is scientific or definitive but one that is open to questions, challenges, discussion and debate.

Some of the dangers of placing too much faith in cliometrics can be seen in Time on the Cross: The Economics of American Negro Slavery (New York, 1974), an attempt to apply quantitative analysis to the history of American slavery by two economic historians, Robert Fogel and Stanley Engerman. The work was in two volumes, the first presenting historical analysis, the second the quantitative data. Based on the data the authors reached several controversial conclusions: they argued that southern slavery was more efficient than free agriculture in the north, that the material conditions of slaves were comparable with those of free labourers, that the economy in the south grew more rapidly than that in the north, and that slaves were rewarded for their labour more highly than had previously been thought. Although some critics questioned the quality of the data used by Fogel and Engerman, most acknowledged that the quantitative methodology was broadly sound. What was unsound was the failure to present the information in a qualitative way. The supposedly ‘objective’ analysis of American slavery, with its hard data pointing to growth within the southern economy and to work conditions comparable to those of free labourers, ends up presenting slavery in a benign light—however much the authors themselves were clearly and genuinely opposed to any justification of slavery. A much better historical approach would have been to place more emphasis on the social and political context of slavery, and to assess its psychological and cultural aspects. For example, the authors presented the statistical finding that slaves were whipped 0.7 times per year on average. On its own such a finding might suggest that the slave economy was anything but unremittingly brutal, and maybe was not so bad after all. But what that figure (and Fogel and Engerman) fails to tell us is what the whip, and the threat of the whip, meant psychologically to the slave. More significant to the slave’s experience of life than the 0.7 whippings per year was, in all likelihood, the fear of the whip and the lack of freedom to escape that fear. Thoughts, feelings, mental states are impossible to quantify—but they are surely essential to an historical understanding of slavery.

The policy makers and managers are clearly not historians (or else they have a dismally low Good Historian Factor). If they were, then they would see metrics as interesting and often useful information (pretty much like all information, therefore), but limited in what they tell us; they would appreciate that metrics can be distorted by insufficient or manipulated data; they would understand that metrics are only one part (and probably a small part) of how to understand something, and that for metrics to be of any use there must be qualitative interpretation; they would recognize that to judge the quality of research (or anything else) solely by quantitative approaches rates a high number on my recently-devised, objective and scientific Stupidity Factor.

Bullying, Metrics, and the Death of Professor Stefan Grimm

On 25 September 2014, Stefan Grimm, professor of toxicology in the Faculty of Medicine at Imperial College London, was found dead in his home. He was 51. An inquest into his death has been opened and, while no official cause has yet been given, it would appear that he committed suicide. One reason to suspect suicide is that an unusual thing happened on 21 October, nearly four weeks after Grimm’s death. An email with the subject heading ‘How Professors are treated at Imperial College’ was sent from Grimm’s account to about forty internal addresses at the college. It would appear that Grimm had pre-set his account to send this email after his death; nothing has so far suggested that it is anything other than genuine. The email presents a dispiriting and disturbing insight into the state of modern British academia.

Included with Grimm’s message were two emails sent to Grimm by Martin Wilkins, professor of clinical pharmacology and head of the division of experimental medicine at Imperial. All these emails have subsequently been leaked and have now become public knowledge; the Times Higher Education has published them in full alongside an article on Grimm’s death. There has also been extensive commentary in other publications as well as on blogs (notably by David Colquhoun, emeritus professor of pharmacology at University College London).

The essence of the exchange and the circumstances outlined in the emails is as follows. Grimm, an active and successful researcher with over seventy publications to his name, numerous grant applications and significant research funding to his credit, was informally and humiliatingly told by Wilkins that he would be sacked. Wilkins’ emails to Grimm confirm that steps were being taken that would in all likelihood lead to Grimm’s dismissal. With barely disguised insensitivity, Wilkins explained to Grimm ‘that you are struggling to fulfil the metrics of a Professorial post at Imperial College’, and that, unless Grimm’s performance improved, formal disciplinary procedures would be initiated. It is hard not to share Grimm’s bemusement that none of his various publications or research activity seemed to count in the eyes of the college. The final straw seems to have come when Grimm was informed by Wilkins that he would no longer be able to supervise a PhD student who had been accepted by the college and wished to work under Grimm. As Grimm wrote in his email: ‘He [the prospective PhD student] waited so long to work in our group and I will never be able to tell him that this should not now happen. What these guys [Wilkins and Gavin Screaton, then head of medicine at Imperial] don’t know is that they destroy lives. Well, they certainly destroyed mine.’

Anyone who has worked in academia will understand Grimm’s sentiments. This is not a career one falls into for want of better alternatives; it takes years of study, often combined with straitened financial circumstances and self-sacrifice, to acquire the experience, skills and knowledge necessary to work in academia. Why do this? Because of a passion for knowledge and a dedication to furthering it through research and teaching; because academics care intensely about what they do and about its importance. There are times when research goes spectacularly well, but the nature of research is that there are also fallow periods, times when dead-ends are reached and new approaches need to be taken, times when patient, slow groundwork is being laid that will take time to yield results. Part of the point of the university is to provide the institutional setting in which teaching and research can be nurtured—in which the commitment, hard work, and ups and downs of the life dedicated to academia will be understood, appreciated, respected and supported. Increasingly, however, universities regard their academic staff as little more than expendable items on a profit/loss balance sheet. Once that mentality has set in among university management, it does not take long for the type of shabby, undermining and humiliating treatment that appears to have been meted out to Grimm to become the rule rather than the exception.

Much of the comment on Grimm’s death and the circumstances surrounding it has focused on two things: the culture of academic bullying; and the absurdity of metrics. There is no doubt that Wilkins emerges from the exchange as a bully (or perhaps as the bullying henchman of Screaton, possibly ‘only following orders’); his approach to management and interpersonal relations comes across as arrogant, callous and deliberately humiliating. Some of the blogs and online commentary suggest that Wilkins is far from unique, and that a culture of bullying is rife not only at Imperial College but across academia. As Colquhoun notes on his blog, there has been a strikingly high number of university staff taking their employers to employment tribunals, and vastly more who have signed gagging orders preventing them from speaking out about their employers—evidence at the very least of widespread problems in employer-employee relations across academia.

The days of collegiality when management might be expected to support their academic staff are fast disappearing. As Grimm notes in his final email, Imperial (although for Imperial almost any university in the UK could be substituted here) ‘is not a university anymore but a business with [a] very few up in the hierarchy… profiteering [while] the rest of us are milked for money’. The culture of university management increasingly sees both academics and students as little more than sources of potential profit. The language used in universities gives it away: academics are expected to think about ‘branding’ and ‘marketization’; business plans and strategies are the new models for how to run an academic department; departments have business managers these days. Universities were originally centres of learning, teaching and research with managerial and bureaucratic structures designed to support that core function; but increasingly learning, teaching and research have assumed the new role of supporting the managerial and bureaucratic corporations that universities have become.

The problem with running universities as corporate businesses is that much of the activity of academics does not fit into a business model. Learning and teaching, for example, are hard to quantify since they do not generate any obvious profits, and thus tend not to be highly valued by management. Student recruitment and retention are seen as important, but not as goods in themselves, rather because high levels of recruitment and retention lead to increased income. Nor does much research sit easily with a business culture. In the older collegial culture it was understood that research needed to be nurtured; researchers often needed time and patience, and they needed support even if their field, however intrinsically important, was not high profile or likely to attract large amounts of funding. Quality, above all, was the key aim. In the current climate productivity and ‘impact’ are the only things that matter. Those academics able to churn out a steady stream of articles are favoured over those whose output is good but may have fallow periods when they need patiently to develop their research without the unremitting and constant pressure of having to publish at regular intervals. Moreover, much research, by its very nature, is an investigation the outcome of which is unclear or uncertain. But modern university managers have little time for this; they want to know even before the research has begun that it will have a significant impact—not on scholarship but on wider society outside. Much valuable research struggles to find a wide audience, yet is important for its long-term contributions to knowledge and understanding; modern university management has minimal interest in such work since it does not fit with their focus on the relentless pursuit of profit. The system favours those researchers who choose obviously high-profile topics, but of such a nature that neither breadth nor depth will get in the way of rapid production. The aim, it would seem, is to turn universities into research factories, academics into research machines. Academics who resist the bleak prospect of becoming nothing other than an efficient, productive research machine are marked for redundancy.

University managers will object by saying that they care greatly about quality of research, that in fact all sorts of measures have been designed to assess quality. These are the metrics, the means (it is supposed) by which the performance of an individual or an organization can be measured. Metrics tend to be highly complex—and absurd. (For those interested in why they are absurd, see Colquhoun’s discussion of them here and here.) It would seem obvious to most people that in order to assess the quality of research it might be a good idea actually to read that research. In the increasingly Kafkaesque world of the modern university, however, judgments are made about research not by reading it but according to baroque and opaque performance indicators. Formulae, spreadsheets and number-crunching have replaced old-fashioned concepts of reading and thinking about something in order to consider whether it is valuable or not. How many citations a piece of research has received, where in a journal a piece of research appears, what numerical rating has been assigned to the ‘impact factor’ of a journal, what numerical value has been assigned to the position a researcher’s name appears in the list of authors of the research—out of all these comes an overall numerical value which rates the quality of the research. It is the brave new world in which managers believe they have discovered the secret of quantifying quality without having to think about or understand what it is they are attempting to quantify. It would be like trying to assess the quality of music not by listening to the music itself but by working out a formula which factors in chart success, size of record label and writing credits to generate a (spuriously) scientific number representing quality.

A former colleague (an academic in the humanities) told me about his own dealings with the new university culture; while they hardly bear comparison with the experience of Stefan Grimm, they are revealing in their own way. When asked whether he had any research to submit to the recent Research Excellence Framework (REF), he suggested some articles written over the previous few years. He considered them good contributions to scholarship, though of course it was for others to judge; one way they might have assessed their quality would have been to read them. Instead, his research was dismissed out of hand, without being read, as unsuitable for the REF: one article because it was co-authored (so much for encouraging collaboration in the humanities!), another because it was an essay in an edited book (that the book was edited by some of the leading scholars in the field meant nothing), a third because it was not in a prestigious enough journal, and a fourth because it was a review article, again not in a journal with a sufficient international reputation (that the review was intended to make a useful contribution to a broad research area made no difference). Clearly he was a poorly performing academic by the criteria of the university, notwithstanding the long hours he committed to the job and his extensive and, as feedback from both students and colleagues made evident, successful teaching and administrative roles. His approach to research and academic work did not fit the REF model or the current values of university management; it was made clear to him that, unless he started complying with the system, he had no realistic future in academia. Despite his dedication and contribution to his university, he has unsurprisingly become disillusioned enough to wonder whether academia is an environment he wants to remain in.

The modern values of university management are such that a university will abandon plans for a new building to house a Human Rights Centre of worldwide reputation, replacing it instead with a business school; it will attempt to close down the history of art department; it will suggest putting the Latin American collection up for sale; it will not renew the visiting post of a Nobel laureate; and it will lose a renowned writer and chair of the Man Booker International Prize because it is not prepared to reconcile her roles (the prestige and reputation of which clearly mean nothing to the managers) with the rigid and constantly monitored targets devised by management. All this at the University of Essex (as recounted by Marina Warner in the London Review of Books, volume 36 number 17, 11 September 2014, pp. 42-3).

It is hardly surprising that such a culture fosters bullying on the part of managers, and stress, anxiety and insecurity among academic staff. Some will argue that this is a recipe for ‘success’: Imperial College is ranked, after all, among the top few universities in the world (using, of course, ranking systems based on yet more absurd metrics). Others may wonder whether the price to be paid for this ‘success’ is worth it: the important research that does not get done because it does not fit the current business model; the excellent teachers who are dispensed with because their work does not fit with the performance metrics; the students who are squeezed for every penny, and the unsavoury scramble for international students who bring in the highest fees; the rewards of long and dedicated service in academia coming in the form of intimidation, humiliation and mass sackings; the human suffering of depression, stress and anxiety among academics that comes in the wake of the managerial culture; and, possibly, the death of Stefan Grimm.

A Morality Tale: The Warburg Institute and the University of London

[Image: Aby Warburg (1866-1929)]

The private researcher dreams of having limitless resources with which to create the ideal personal research library. For Aby Warburg (1866-1929) the dream was a reality. A member of the famous family of wealthy German bankers, Warburg was drawn not to the world of finance but to the art of the Renaissance and the manifold ways in which the classical tradition of ancient Greece and Rome had been expressed culturally and intellectually throughout history, influencing and shaping the thought and culture of Europe and beyond. Backed by family money, he built his own impressive collection of books. Warburg’s library gradually attracted like-minded scholars, becoming the hub of a growing circle of researchers. One of these scholars, the art historian Fritz Saxl (1890-1948), was instrumental in turning an essentially private library into a fully-fledged research institute. Initially attached to the University of Hamburg, the institute relocated to London in 1933 after the Nazis came to power.

[Image: The Warburg Institute emblem]

In 1944 the Warburg Institute was incorporated into the University of London with a guarantee that the university would maintain its library and preserve its independent status as a research institute in perpetuity. As a result of this apparently far-sighted decision, the University of London has been associated with one of the world’s great centres of research on cultural and intellectual history. Many important scholars have been associated with the institute—Ernst Cassirer, Erwin Panofsky, Henri Frankfort, Ernst Gombrich, Frances Yates, D.P. Walker and Anthony Grafton to name but a few—and for serious students of the classical tradition the resources of the institute are an essential aid to research. The institute has a thriving postgraduate and postdoctoral research culture, and is particularly noted for its support of young scholars (of whom I was once one). The Warburg library is characterized by a unique and brilliant classification system designed to enable interdisciplinary research across art, history, literature, philosophy, theology, and much else besides; it now holds more than 350,000 volumes, over 98% of which are on open-access shelves, including a large proportion of rare and otherwise hard-to-find books. In addition, the institute is home to a photographic collection of more than 400,000 images.

One might suppose that realistic aspirations to be counted among the world’s leading universities would encourage the University of London to nurture and support an institute with the excellence and impressive international reputation of the Warburg. Yet for years the university has effectively been undermining the institute. In particular, it has failed to honour the original trust deed between the university and the Warburg, and it has charged exorbitant rates for library space that have left the institute running at a large deficit. With the institute facing threats to its very existence, it is hardly surprising that relations between the university and the institute have long been strained, nor that rumours have occasionally circulated about a possible return of the Warburg to Germany. Matters finally came to a head in the High Court this year when the University of London challenged the terms of the trust deed—and lost. The judgment of 6 November rules, in effect, that the university has been in breach of its obligations to the institute for many years. The university has put a brave face on this, claiming that it is ‘pleased’ with the judgment; the fact that it has sought and obtained permission to appeal against the judgment, ready it would seem to spend further huge sums of money in court rather than on constructive discussion, suggests an unusual use of the word ‘pleased’. (On the legal battle and the High Court judgment see the Guardian article of 10 August 2014, the Times Higher Education report of 6 November, the press release on the Warburg Institute website, and an essay by Charles Hope in the London Review of Books, volume 36 number 23, 4 December 2014, pp. 32-4.)

The saga is a dispiriting reflection of modern academia. Universities, colleges and many departments are increasingly run in ways that resemble the world of corporate business; senior management, as well as many heads of department, focus on budgets, assets, profit and loss. Business models and strategies rather than scholarly and intellectual culture are the new order within the university and many of its departments. The research and scholarly value of the Warburg Institute, obvious to those of us who work on cultural and intellectual history but almost impossible to quantify on the accountant’s spreadsheet, counts for little alongside the narrow but ‘measurable’ productivity and outputs so beloved of current policymakers and university management. The University of London, it would appear, regards the Warburg Institute as a potential asset, but not in ways that demonstrate an understanding and fostering of the research culture that has been, and still is, at the heart of the institute’s international reputation.

There was a time, and not that long ago, when the university was an institution that protected and valued research and scholarship; it understood that academic work could not be reduced simply to figures on a balance sheet. The change in university culture over the last two or three decades is illustrated by another story involving the University of London. In 1956 a set of the earliest folios of Shakespeare was bequeathed to the university library, on the condition that they would be housed there permanently. The value of such a bequest would have been understood not only by literary scholars, but also by university management, for its contribution to the research culture and reputation of the university. How does modern university management value this bequest? At £3-5 million, as an asset to be auctioned off. Following a public outcry from scholars, the University of London abandoned its plans to sell the folios (see the Guardian, 5 September 2013), so for now at least the folios, like the Warburg Institute, are safe—but precariously so, given the mindset of modern university management.

Who would confidently place faith in the university to nurture research and scholarship in the humanities today? Aby Warburg was fortunate enough not to have to rely on institutions to support his research. It would hardly be surprising if, alongside the respect and admiration in which he is widely held by scholars, there is also a fair degree of envy.

Like a Strange Marriage

I have not always been a good friend to Gale, which is a pity since, among other things, he has written a book on the importance of friendship. Too often I have found him to be burdensome, and there is no doubt that he can be a bore. The urge to abandon and forget him has filled me many times. Yet it is like a strange marriage: for better or for worse I am dutifully bound to him for life.

We first met in a library many years ago. I’d already heard a little about him; not much, but enough to suspect that I might find him interesting. Our initial encounter confirmed my suspicion: his knowledge of philosophy, theology, history and literature was impressive and enviable, and he brought to it a clear, if unorthodox, intelligence. Above all, it was Gale’s ideas that drew me to him, not that I agreed with many of them. Some of his thinking seemed wild and bizarre, the result of a rather obsessive commitment to the philosophy of Plato and a predilection for understanding history according to the Bible. But there was never any hint of madness; rather, I considered him simply wrongheaded and misguided, but attractively so for his assured and calm conviction that he was in pursuit of the truth. Over time, it is true, my enthusiasm paled as I sensed that he embodied a wasted brilliance; intellectually I grew to realize that for all his vast erudition, he was trapped in a dead end. His ceaseless, learned harping on his favourite themes could become tiresome and frustrating, and almost pathetic: I have come to think of him as akin to Edward Casaubon in George Eliot’s Middlemarch, desiccated, pedantic and increasingly hard to like. It’s a marvel, therefore, that I have continued to find time for him—but then I have always been oddly attracted to heroic futility, and Gale’s whole life seems to be the epitome of that.

As I got to know him better I learnt that he was originally from Devon, that he had had an aborted academic career at Oxford, and that after Oxford he spent a year living in France before moving to London to write and to tutor privately. It seemed he wasn’t particularly good at the latter; there were stories of certain tutorial disasters, about which Gale has consistently remained reticent. Indeed, he is not forthcoming about much of his life. I have never been able to discern in him any interest in romance or relationships, and as for his sexuality, I have come to the conclusion that he has consciously chosen celibacy and may very well be asexual. He dislikes alcohol and considers smoking loathsome, which only adds to the wonder that I consider myself under an obligation of loyalty to him. It is a loyalty arising from the fact that I liked and admired him once, and, every so often, find myself liking and admiring him still.

Above all, I once vowed to stand by him, no matter how wayward his ideas, how annoying his intellectual faults, and how sparse his friendships. I met him at a critical point in my life, and without doubt I owe to him an important meaning and direction that I found at a difficult time, however much I have moved on since then. So I am still happy to give him my attention, even to raise a glass to him. And I am still resolved to make a pilgrimage to his headstone, lying somewhere deep within Bunhill Fields cemetery, on which there is the simple engraving: ‘Theophilus Gale M.A., Born 1628, Died 1678.’

Possibly my feelings are similar to those of many PhD students: we cast around to find a suitably interesting subject, frequently lighting upon some neglected figure from the past who then becomes the focus of our energies and devotions for several years. Such was my discovery of the nonconformist theologian and philosopher Theophilus Gale, the author of several lengthy tomes hard to surpass for their scholarship or for their ultimate intellectual failure. I could never make a convincing case that he is someone really worth reading, except by a few dedicated specialists in certain early modern currents of thought. But nor could I ever say that he deserves to be neglected and forgotten.

I often reflect on my relationship with him: the reasons why I chose to work on him in the first place; the sense that he might have been pleased that at last, after several centuries of little more than cursory attention, someone was prepared to read the several thousand pages of his writings; and the almost mystical notion that, in finding him and giving him something of life again, the same may happen to me (or to any one of us) at some point far in the future when I am long gone and almost completely forgotten. Perhaps this explains the emotional bond I feel towards Gale: it crystallizes that need for the living to love and to cherish the significance of the past and the memory of the dead.