In defence of school absenteeism

Prior to 2013, schools had discretion to allow term-time absences for pupils. Regulations introduced in 2013 put a stop to this, allowing absences only in exceptional circumstances (such as to attend a funeral), and stipulating fines for families who did not comply. This system of penalties is now likely to be thrown into confusion by the decision of a court to overturn a £120 fine imposed on a father for taking his daughter on a week’s holiday during school term. The appeal against the penalty hinged on the question of what constitutes ‘regular’ attendance. The law simply states that a child must attend school ‘regularly’ without offering a precise definition of the term. In this case, the father pointed out that his daughter’s attendance had been 94% the previous year and was 100% at the point of going on holiday. Clearly—and the court felt so too—such numbers might be thought to fall within the definition of ‘regularly’; equally clearly, ‘regularly’ is a sufficiently vague term to make most fines for non-attendance unreasonable.

But the issues go deeper than legal argument about the definition of a word. In particular, it is worth considering who is responsible for a child’s education. Many people suppose that the state assumes responsibility. Legally, however, this is not the case. Education is compulsory, but the primary responsibility for ensuring that a child receives an education falls to parents. Section 7 of the 1996 Education Act states that it is the ‘duty of parents to secure education of children of compulsory school age’, that this duty involves arranging education suitable to the ‘age, ability and aptitude’ and ‘to any special educational needs’ of the child, and that this may be achieved ‘either by regular attendance at school or otherwise’.

The legal situation can be summarized in this way: parents have the responsibility for ensuring the education of their child, and they can decide to fulfil this, but do not have to do so, by sending their child to school. The phrase ‘or otherwise’ has generally been understood to cover home education—and, indeed, it is the basis for the excellent home education resources and campaigning group, Education Otherwise (whose slogan ‘Education is compulsory, school is optional’ summarizes the legal situation more succinctly than I have done).

The law here seems eminently reasonable: parents may well choose to delegate their child’s education to a school, but to compel them to do so would be an unwarrantable intrusion by the state on the freedom of parents to decide on the best interests of their child. Most parents are better placed than the impersonal state to understand the needs of their child, and there may be excellent reasons why they deem formal schooling unsuitable to those needs. Not all parents believe that corralling children into a classroom to be drilled for hours in preparation for an endless round of tests and exams comes close to representing an ‘education’.*

Both legally and morally, therefore, there are question marks over the power of the state to fine parents for pupil absences. As long as the parents are ensuring that education is being provided, then it would seem that there is compliance with the law. Even if we accept that delegating education to the state involves entering some form of agreement to ensure school attendance, it is not clear that occasional absences are in breach of this agreement. As the recent appeal case confirms, occasional absences do not breach the requirement for ‘regular’ attendance.

Of course, it may be objected that such absences do in fact breach the duty of parents to provide an education for their child. Many suppose that unless a child is in school, that child is not receiving an education—and they will particularly complain about a child being on holiday instead of in school. But such objections stem from an excessively narrow idea of what education is or should be about. Education is not solely about passing exams nor should it be exclusively concerned with preparation for the workplace. These may constitute important parts of it, but above all education is about preparing children for life. A child will learn far more, and far more important things, from a week on holiday (about, for example, a different culture, a different place, different activities, family relations, and so on) than from a week in the classroom. The notion that a few days of lessons are a more important life experience than a holiday is evidence of the skewed values of society today. (Indeed, a child will almost certainly learn more about life from occasional truancy than from the lessons that he or she has skipped.)

Consider the case of the couple who have been taken to court because they took their two young children to India to visit their extremely sick grandfather. There are compelling personal reasons why such an absence might reasonably have been granted, none of which seems to have withstood the bludgeoning approach of the state. But beyond that, it could be argued that the educational value of the trip far surpassed anything that might have been learnt in a week or two of school. Not only would it have entailed the experience of a different culture, but it would have involved (unfortunately, of course) an engagement with such important parts of life as illness, ageing and mortality. Only the most blinkered educationalist would maintain that a few literacy and numeracy classes have more educational value than that.

[Image: Calvin and Hobbes cartoon]

The Department for Education claims that even one week of missed classes each year will damage a child’s chances at GCSE, with a lasting effect on life chances. But this is simple scaremongering; reassuringly for those children who are occasionally ill (i.e. all of them), there is nothing daunting about catching up with a week or so of lessons here and there. I have no doubt that there are often serious problems arising from absenteeism, but trying to solve those problems by clamping down on all absences really does introduce the proverbial sledgehammer and nut into the debate.

Michael Gove, the former Education Secretary, proposed docking child benefit from parents who allow unauthorized absences; Michael Wilshaw, the chief inspector of Ofsted, would like to fine parents whose children don’t do their homework and has urged schools to ‘adopt a hard line and not allow the youngsters out’. In the punitive and narrow vision of Gove and Wilshaw, education amounts only to teachers in classrooms preparing pupils for exams, and the duty of parents ceases to be truly educational, becoming instead something akin to the task of prison officers transporting their charges to the institution and locking them down. I don’t doubt that classroom education has many benefits, nor do I reject the value of exams, but to equate education exclusively with school lessons and testing is not only life-denyingly dreary but also hugely mistaken.


*I declare an interest here: as a former home-educating parent myself, I understand the reservations that many parents feel about school. I intend to write more on this in a future article.


Hope, Experience, and Efficient File Management

It’s that time once again when hope joins battle with experience: the start of a new academic year. This year, I tell myself (as I have told myself every year since time out of mind), I shall be organized like never before. There will be no more coffee-fuelled all-nighters to prepare lessons; no more hours lost tracking down an old lecture to find it filed in my Music folder; no more staring blankly at my diary unable to decipher the name of the student I am supposed to be meeting; no more panic on an unmoving train because I have left a margin of error of less than a minute to arrive at a meeting. I will be a lean, keen teaching machine, gliding through preparation and lessons with elegance and efficiency, with plenty of time to write and research productively. I may even make some money at last.

If hope is outpointing experience by Christmas then I will be doing well.

First I need to find my desk. That involves hacking a path through piles of books and magazines to the area of my room where I last remember seeing said desk, and then excavating through the papers, unopened bills, junk mail, long-forgotten invoices, receipts, post-its, pens, pencils, paper clips, elastic bands and sundry miscellaneous items to what I believe will be the desk itself. Treasures, in the form of untouched Rizla packets, are likely to be found during this archaeological dig. All of this reminds me of how once, when I had an actual office in an actual academic department, I would advise students on the merits of efficient organization, advice delivered across a scenic landscape of mountainous chaos on my desk.

Then I need to sort out my computer desktop. Unlike my physical desk, upon which no organization at all can be detected, my computer desktop suffers from a worse state of affairs: a half-hearted semblance of organization. Clearly at some point (probably the start of the last academic year) I devoted a solid half hour of attention to this task before being distracted by a book that needed to be read, never to return. There are folders, inside many of which are further folders and files that logically belong there. Keeping them company are many other folders and files whose rationale for being there would defy even the wisdom of God. Different versions of my CV crop up incongruously in various locations as if it were a self-replicating, mutating virus.
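Were I ever serious about taming that particular virus, a few lines of code would at least show me how far it has spread. Here is a minimal Python sketch, on the assumptions (about my own mess, not a general recipe) that everything lives somewhere under my home folder and that every stray copy has ‘cv’ somewhere in its file name:

```python
# A minimal sketch for locating scattered copies of a CV.
# Assumptions: the search root and the "cv" filename pattern are guesses about my own mess.
from datetime import datetime
from pathlib import Path

root = Path.home()  # start from the home folder and work downwards

# Collect every ordinary file whose name contains "cv" (case-insensitive).
matches = [p for p in root.rglob("*") if p.is_file() and "cv" in p.stem.lower()]

# List them with their last-modified dates, oldest first, to watch the virus mutate.
for path in sorted(matches, key=lambda p: p.stat().st_mtime):
    modified = datetime.fromtimestamp(path.stat().st_mtime)
    print(f"{modified:%Y-%m-%d}  {path}")
```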

Finally, emails… but that will surely, as always, be a task too far. (Ah, experience strikes an early blow against hope…)

I’ve come to realize that when it comes to organization I should either have none at all or a complete, fully worked out and implemented system. The virtue of the former (my preferred approach for many years) is that I became adept at storing relevant and important information in my head. It did not matter that a letter containing the time and date of an appointment was hopelessly buried beneath any one of several piles of papers, nor that I did not write down appointments in a diary, for I had already committed this information to memory. This non-system almost never failed. Problems arose when, having been foolishly seduced by the time-management and efficiency gurus, I introduced some formal systems of organization. Because I was never organized enough in the first place to introduce organization in the proper way (the fatal flaw afflicting these systems among the irredeemably disorganized), I ended up with a hideous hybrid, a jerry-rigged mishmash of systems and memory. So, some appointments ended up in my diary, and some were committed to memory—others ended up both in my diary and my memory, while others (the majority) ended up in neither.

The trick is to get beyond the overwhelming feeling that bureaucratic systems and organization suck the life out of existence, and to overcome an intuitive resistance to the idea that hope can only be realized by efficient organization. Are dreams really built on time management and effective filing systems?

Dreams of Ivory, part 2: ‘Nought but grief an’ pain’

The temptation to ink a personal motto on my body has never troubled me, but if ever a drunken state of folly lands me in a tattoo parlour then the following would serve well as a permanently etched reminder of lived reality:

The best-laid schemes o’ mice an’ men / Gang aft agley, / An’ lea’e us nought but grief an’ pain, / For promis’d joy!

At the risk of insulting the literary knowledge of my readers, the lines are from Robert Burns’ 1785 poem ‘To a Mouse, on Turning Her Up in Her Nest with the Plough’. A field mouse has laboriously and carefully constructed a nest in order to stay warm and secure through the winter. Unfortunately, along comes Mr Burns with his plough, accidentally destroying the creature’s home. Burns does what any of us would do: he decides to write a poem, reflecting philosophically on how the poor mouse’s calamity reveals the universal truth that (to Anglicize the lines) ‘the best laid plans of mice and men often go awry and leave us with nothing other than grief and pain instead of the expected joy’.

Like the mouse and her nest, I regarded my Cambridge application as a fine plan with prospects of security and joy. My hopes were boosted by the fact that Cambridge were seeking two researchers: since I’ve hardly ever come first in any application process, but have been second a few times, it was reassuring to know that my perennial runner-up status would be enough for Cambridge. Then, the day after the original deadline had passed, Cambridge decided to extend it. Aside from reflecting that I needn’t have gone through all the stress of meeting the original deadline after all, I inferred positive signs from this development: of course, I reasoned, Cambridge desired a little more competition for my own obviously outstanding application. (What was it again, according to the proverb, that comes before a fall?)

In the meantime I learnt that I had been shortlisted for a couple of small teaching roles, interviews for which took place the week before Cambridge’s own proposed interview day. At the very least they would provide some practice before Cambridge, but my sights were set higher on a run of success that would mark a resounding academic comeback. The two interviews went well: I came out of each with a good feeling about my chances. I was runner-up in both.

This setback was a pinprick to my optimism in relation to Cambridge. On the day when Cambridge had promised to contact shortlisted applicants, a few ripples of pessimism were lapping at my hopes. The ripples gradually expanded into waves as my inbox remained devoid of the hoped-for invitation to interview. Each new email alert on my phone was a ping of promise, only to be followed by the disappointing discovery that it was merely a new notification on Facebook or a suggestion of someone to follow on Twitter. By early afternoon I reasoned that, doubtless because of the extended deadline, the selection committee had to work into the afternoon to finalize the shortlist. This line of reasoning became increasingly unconvincing as each hour passed. By late evening I was clinging to an irrational notion that the selection process had got bogged down for some unforeseen reason. Over the next few days, however much I knew the game was up and that I had been unsuccessful, vestiges of hope refused to disappear entirely. It is perhaps not surprising that, having poured so much into believing something, I was unable entirely to let go of that belief. Even on the scheduled day for interviews I momentarily imagined that Cambridge had had to postpone the process and that perhaps my application was still in the running. It probably took another week before I completely resigned myself to the certainty that I had been unsuccessful.

And so my application concluded not in appointment, not even in a tragi-comic tale of a disastrously bizarre interview performance, but in the short journey from receipt to the rejection pile and the bin. A close friend reckoned that this was ‘Cambridge’s loss’. Yet I doubt that. I may have been deluded all along about the merits of my application, but not even I think that Cambridge has made a grave error of judgement that will cast a long shadow over its future. I am not (yet) mad: the academic job market is incredibly tough, and I’m realistic that my CV does not make me especially competitive. I also know that you can’t lose something you never had in the first place. So I lost nothing… apart of course from two f***ing weeks of my life, and a whole load of hopes and dreams…

However, I am not interested in organizing a pity party but rather in having a really good rant (sorry, in offering some constructive criticism) about the application process. First, consider what I wrote in the first part of this post, about the hours, days and weeks, the effort and energy invested in the application, the accompanying hopes and stresses, and the time given freely and generously by my referees. Now consider this: the University of Cambridge did not bother to acknowledge receipt of the application, and their policy is not to inform unsuccessful candidates of the outcome of their application.

Every job application is in part a direction of focus and interest towards the potential employer. For sure my primary concern lay with my own career prospects, but inevitably this involved considerable attention to how Cambridge would benefit from employing me. I was not expecting a bespoke token of gratitude for these futile efforts: a handwritten and courier-delivered letter, stained with tears of appreciation and regret, is not numbered among my fantasies. But some sort of communication might have been decent. I’m well-versed in job applications and almost equally well-versed in the experience of rejection, and it is usual that applications are acknowledged and rejected applicants are informed. Of course, all of this is done by automated email and prefab rejection templates, hardly sublime examples of the human touch. But they are better than nothing: somebody somewhere thought it might be good to keep all applicants informed; perhaps they thought that the efforts and interest of the applicants deserved a small return on their side.

I can only guess at why Cambridge adopted this approach. It can hardly be due to stretched resources: the University of Cambridge is among the wealthiest in the world, and to employ someone to prepare email templates, set up automated replies, batch applicants and press ‘send’ is not likely to bring their entire administration to a grinding halt. Maybe it reflects a culture of arrogance specific to Cambridge, a reluctance to engage in contact with the many common failures who dare to wish to work for them. Possibly Cambridge regards the arduous process of applying to work for them as such a privilege in itself that any communication from them would lead to applicants feeling spoiled. Or perhaps it is part of a deeper malaise of dehumanization in academia, in which a cold wind of bureaucratic indifference blasts away simple human niceties. For while I accept that my complaint is essentially trivial, the point is that what is most human is frequently what is most trivial: the polite acknowledgements, the unnecessary yet kind words, the myriad tiny ways in which we show recognition and mutual appreciation of one another as human beings.

Still, I must be honest. Would I have felt better had Cambridge kept me informed of the progress of my application? Not really. It would have spared me anxiety and uncertainty, but it would have made no difference to my disappointment. Indeed, I should be grateful to the Cambridge application process. For I can now console myself with things I do not actually believe, such as that ‘I would not want to work for an institution that treats people like that’. It means that rather than focus on the shortcomings of my application I can wrap myself in indignation at the way Cambridge goes about things.

But I suspect that a cloak of indignation was not going to keep Robert Burns’ mouse sheltered and warm over the months ahead. Burns concludes his poem, however, by arguing that the mouse has an advantage over humans: whereas the mouse is concerned only with the present moment, doubtless laying down another ‘scheme’ without worrying whether it will go ‘agley’, Burns cannot avoid dwelling gloomily on past and future. In difficult circumstances we may sometimes ask ourselves whether we are ‘man or mouse’. Burns’ poem suggests that ‘mouse’ may well be the better answer.

So I have woken from my dream of ivory with thoughts only of new plans. Oh, okay, mostly of new plans. But I spend no more than an hour or two each day panicking about the uncertainty of my future, no more than a few minutes feeling resentful about the University of Cambridge application process, and merely a second or two dwelling on the words ‘cold arrogant f***ers’.

When a butterfly flaps its wings in Seoul: UCL and the Tim Hunt affair

In chaos theory the butterfly effect goes something like this: A butterfly flaps its wings in Seoul and venerable scientific institutions in London fall over themselves to end the career of a Nobel laureate scientist.

Sir Tim Hunt made the following remarks at a world conference of science journalists in Seoul: ‘Let me tell you about my trouble with girls. Three things happen when they are in the lab. You fall in love with them, they fall in love with you, and when you criticise them, they cry.’ Hunt claims that what he said was ‘totally jocular [and] ironic’, and that it was uttered while feeling ‘nervous’ about speaking before the conference. The remarks certainly come across as the sort of nervy attempt to get an audience onside and engaged through an insider’s joke, and like many such attempts they are neither obviously funny nor well-considered. They are indeed sexist (which is not the same thing as saying that Hunt is sexist). It seems that Hunt didn’t think much had gone awry, but some of his audience were sufficiently unimpressed to start tweeting his remark. Escalation was rapid: strong condemnation appeared across social media, newspaper columnists quickly began intoning about problems of sexism in science, Hunt was labelled, among other things, ‘a clueless, sexist jerk’ and ‘a misogynist dude scientist’, and before Hunt’s plane had even landed back in England he had been forced to resign his honorary position at University College London. Subsequently he also resigned from the European Research Council and from his role at the Royal Society.

As to Hunt’s remarks themselves, they deserved a clarification and an apology—both of which Hunt provided. But it is a peculiarly modern phenomenon that 37 words can have the consequence of ending a distinguished career. I’m sure that among the many thousands of words I have spoken in lectures there might have been a few which, upon reflection, I might regret having said or would wish to have rephrased; there might also have been a few which, taken out of the specific context of the lecture itself and the tone in which I made them, would appear in isolation to be worthy of condemnation. One of my best jokes in my lectures (and, given it rarely raised a laugh from more than three students, evidence of my limited prospects as a comedian) was: ‘Let’s move from the delights of Balinese cocks to talk about French pussies.’ (The context was a discussion of Clifford Geertz’s essay on ‘Deep Play: Notes on the Balinese Cockfight’ and Robert Darnton’s ‘The Great Cat Massacre’.) Another of my favourite jokes involved me showing a slide of a bikini-clad Elle ‘The Body’ Macpherson (that one always got more laughs, but as a visual joke it would be difficult to explain it here). There was always a slight risk that such jokes might backfire or take on a life of their own in this age of Twitter and Facebook, and that I could have been caught up in a maelstrom of impassioned discussion about ‘appropriate’ language or the problem with white male historians. (My career would probably have benefited from that sort of thing, so I rather regret that my students were sufficiently sensible to understand or ignore or not laugh at my jokes; I could have done with the occasional hothead in my audience, full of unintelligent righteous indignation and with a massive following on Twitter.)

None of which is to defend Hunt’s ‘joke’. I merely make the point that we live in a time in which a career can hang on a single sentence—and a sentence which may be misunderstood or misrepresented, and which, even if neither of those, may amount to no more than the sort of ill-considered, poorly expressed comment that we all make from time to time.

The really interesting feature of Hunt’s case seems to me less the lynching mentality amid the social media wildfire and more the response of UCL, the ERC and the Royal Society. Imagine you’re an eminent academic with a distinguished career, probably not well-versed in social media and likely with little or no media training; you make a rather crass remark, but one you hadn’t given much thought to at the time; once it has been brought to your attention you issue an apology and a retraction, sincerely admitting your error and folly; nevertheless, you discover that your initial remark is now being discussed with varying degrees of outrage all over the world. What would you expect your college—your employer and the institution which supposedly supports and nurtures the work from which it benefits—to do? It is understandable that they would wish to distance themselves from the remark, but you might reasonably expect that they would also wish to discuss the matter with you, to seek clarification, and not to rush to judgement. You are likely to hope that they would offer support and protection against the media lynch mob, and that they would use their expertise to help find a way through the storm. Sure, you may have to issue yet further apologies, and you are going to have to put up with a battering for a while, but assuming your initial remark was not totally beyond the pale and your subsequent apology is sincere, then you might reasonably expect that your college would stand by you and find ways of repairing your (and their) reputation.

Instead of speaking to Hunt about his comments in Seoul, UCL decided to tell his wife (Mary Collins, also a scientist at the college) that her husband should resign immediately or that he would be sacked. Hunt duly resigned as soon as he returned to England. UCL softened this ultimatum by saying it would be ‘a low-key affair’, but then proceeded to trumpet on their website and Twitter how they had got rid of him. Hunt’s positions at the ERC and the Royal Society became untenable as a result, so he resigned from them too. His long scientific career was over. As he has said: ‘I am finished. I had hoped to do a lot more to help promote science in this country and in Europe, but I cannot see how that can happen. I have become toxic. I have been hung to dry by academic institutes who have not even bothered to ask me for my side of affairs.’ Whatever the wrongs of his remarks in Seoul—foolish and crass, but surely not remotely near to being at the most offensive end of the spectrum, and over which he has been genuinely contrite—Hunt appears to be right: he has been hung out to dry by UCL.

Yet perhaps we should not be surprised at this outcome. Collins, who has described her husband’s remarks as ‘unbelievably stupid’ but utterly rejected the idea that he is sexist, has commented: ‘They [UCL] have let Tim and I down badly. They cared only for their reputation and not about the wellbeing of their staff.’ But such is the modern academy. The days when a university was collegiate—when academics were nurtured and supported through both good times and bad—are rapidly fading. The modern academic institution increasingly resembles a brand and a business: at the slightest inkling that the brand may be damaged, or any indication that an academic’s output may fall short of financial targets, the institution is likely to abandon rather than support the individual, no matter how dedicated or hard-working or distinguished that individual may be. Sometimes an entire department, if it has failed to ‘game’ the ridiculous REF exercise, faces the brutal axe; and in their different ways the experiences of Tim Hunt, Stefan Grimm and Marina Warner all point to a dispiritingly corporate and inhuman culture that pervades the academy today.

One way of looking at this is to see it as an example of a wonderful bureaucratic rationalism: as long as you meet all the performance targets and are careful to stay consistently ‘on message’, then you’ll be fine; fail to do either of those, then you’re out. Some would say this is the only way to maintain excellence. But in fact sober reason has nothing to do with this. If, on the basis of a single remark such as that of Tim Hunt, a long career is summarily terminated, then the world we live in is not rational; rather, it is a world of chaos.

Dreams of Ivory, part 1: ‘Promis’d joy’

Recently, amid an aberration of high hopes, I applied for a research position at the University of Cambridge. The project to which the researcher would contribute looked exciting and interesting. It combined several areas of inquiry that related closely to my own academic background and knowledge, and, in a happy rarity, I could actually tick all the boxes on the person specification without having to resort to mental casuistry. Clearly oblivious to the concept of hubris, I concluded that the position was ‘made for me’ and began envisaging all the fine research I would be doing. Then I remembered that I needed to apply for the job first, so I set about the task of making my application heroic and formidable.

[Image: the University of Cambridge. Obviously the place for me…]

The application process presented more than the usual range of hoops to jump through. Cambridge wanted a completed application form and a CV; a sample piece of writing; a 2,000-word statement of research aims; and two references to be submitted by the deadline. Expecting applicants to assume responsibility for chasing up and supplying references seems to be a common characteristic of Oxbridge positions. Presumably it is a tactic to deter applications from chancers and non-entities and general riff-raff. On several occasions in the past I have judged that I probably fell into at least one, and often all three, of those categories and thus been appropriately discouraged from attempting an application; after all, there is a risk of annoying potential academic supporters by pestering them to write references for applications against which the odds of success are long. But this time I assessed my chances as moderate to good (and ‘near dead cert’ in my more deluded moments), and so was reassured that I would not be annoyingly encumbering two busy academics with a pointless addition to their workload. I proceeded to discuss the application with potential referees and, thankfully, two excellent and eminent scholars agreed to supply references.

Meanwhile I was carefully researching the project and its personnel. I downloaded for a small cost a book by the project’s director, and then read it closely while taking copious notes. I located a few relevant journal articles and read those too. I risked giving myself a hernia by transporting home a dozen weighty library books relevant to the project, all of which I browsed, and some of which I actually read, mining them for ideas. My own bookshelves supplied several more books; and I scoured the internet, looking at images and early books, journal articles and, er, Wikipedia. Eventually, with several pages of notes and plans, I was ready to write the 2,000-word research statement (indeed, I felt capable of writing a 20,000-word statement had Cambridge wanted that). I settled down one early evening to turn the crystalline statement that had formed in my mind into real words on a page; six hours later I had one abandoned statement of 250 words, and a new beginning of about 50 words. And so I had to bed down for the long Easter weekend, toiling for hour after hour over the four days, neglecting to deliver chocolate to my children, until, in a fitting memorial to the risen Christ, I had produced a delicately crafted proposal, exquisitely calibrated to convey my knowledge, experience, enthusiasm and potential. Unfortunately, when I pasted the statement into the relevant text box on the website I discovered that what Cambridge really meant by a 2,000-word limit was in fact a 10,000-character limit. My word count was fine; but crucially my character count was not. So another couple of hours was spent chipping away at the statement while half-wondering whether I was about to have a stroke.
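For anyone at risk of the same trap, it is worth checking a draft against both limits before the final polish, since word counts and character counts diverge alarmingly. A minimal Python sketch, assuming a plain-text file and the two limits I was working to (the file name is my own invention):

```python
# Check a draft statement against both a word limit and a character limit.
# The file name and the limits below are assumptions for illustration.
WORD_LIMIT = 2000
CHAR_LIMIT = 10000  # what the 2,000-word limit turned out to mean in practice

def check_limits(text: str) -> None:
    words = len(text.split())
    chars = len(text)
    print(f"Words:      {words} / {WORD_LIMIT} ({'ok' if words <= WORD_LIMIT else 'over'})")
    print(f"Characters: {chars} / {CHAR_LIMIT} ({'ok' if chars <= CHAR_LIMIT else 'over'})")

if __name__ == "__main__":
    with open("research_statement.txt", encoding="utf-8") as f:
        check_limits(f.read())
```

Note that len(text) counts spaces and punctuation, which is why the character total balloons so much faster than the word total.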

Next I updated and beautified my CV, applying delicate final touches with the loving attention of a High Renaissance sculptor. I completed the standard application form; I located after a convoluted search among memory sticks and email attachments an article I had once written which would serve as my sample of writing; I converted all the files into PDF (which, for some reason, was the only file format Cambridge was prepared to accept) and uploaded them; I irrationally and unnecessarily panicked over whether my referees would upload their references by the deadline; and, at last, I clicked the ‘Submit application’ button. Pondering whether to offer a prayer for divine assistance, I concluded that my lack of belief in God might make such a prayer ineffective; even then I wondered whether to make some sort of divine deal just in case He exists, before finally and forcefully pulling myself together. Evidently the fact that I had probably spent in total two full working weeks on the sundry tasks associated with this application had played havoc with my capacity for rational thought. But not so much that I wasn’t able to take a moment to congratulate myself on having got it all done, which then eased into several hours of daydreaming about all the clever and impressive things I would say in the interview. I performed brilliantly, bringing repeated appreciative nods from the panel; I speculated whether I would choose Prosecco or go all out for Champagne to celebrate being offered the position; I then reminded myself that Cambridge probably hadn’t even got round to shortlisting me yet in the three hours since I submitted my application. So I steeled myself for the weeks of waiting, optimism, uncertainty, dreams and fears to come.

The Right to Remain Silent? More Quiet in the Classroom

One of my earlier posts, ‘Quiet in the Classroom’, has prompted an interesting discussion on Facebook. The original article concerned my experience of university seminar teaching, specifically in relation to two contrasting seminar groups of history undergraduates: one, a highly vocal group who interacted with one another extremely well; the other, a group in which, no matter what I tried, profound silence was the unshifting, default position. What particularly interested me was that it was the latter group who performed at by far the highest level when it came to formal written assessment. I concluded that this was not necessarily surprising: history is a discipline requiring extensive and solitary reading, reflection and writing, and so those personality types (the introverts) who are most at home when quietly reflecting, and least comfortable when expected to interact publicly within a group, may be especially well-suited to the subject. Yet quiet students are increasingly seen within academia as ‘problem’ students, since they are not meeting academic expectations of contributing regularly to classroom discussions and engaging and interacting vocally with their peers. This seems to be a reflection of the more general tendency to see extroversion as an ideal and introversion as a problem. (In the article I refer to Susan Cain’s fine exploration of this issue in her book, Quiet: The Power of Introverts in a World That Can’t Stop Talking.)

It is clear from some of the comments I have received that a number of students share my feelings about the growing expectation that learning has to be a socially interactive process. As an undergraduate student once myself, I can recall how daunting the seminar was: for me as a shy, introverted type, the prospect of having to speak in front of a group of people (including, of course, someone with vastly more knowledge about the topic of discussion than I had) induced anxious and uncomfortable feelings. The risk of making a fool of myself in front of my peers was forever at the forefront of my mind. I am sure this is a common experience, particularly among students, many of whom are young, lacking in confidence, and anxious about social relationships and how they come across to others. Rather than make them feel that their reticence is a problem, we should instead reassure them that it is perfectly normal.

Of course, part of the point of education is to push and challenge students—to support them in gaining more confidence in areas they may find difficult and uncomfortable, and to prepare them for a future in which they may well at some point have to contribute to meetings or deliver a report. But to approach this task bullishly by demanding that students interact (as some teachers do) is insensitive and narrow-minded. It fails to appreciate that personalities vary, and, worse, it can generate unnecessary anxiety among students to the extent that, counterproductively, studying becomes for them a miserable experience.

A better approach would be to create the right conditions that enable students to contribute if they wish, while respecting the right to remain silent—we should not demand that anyone speak if (for whatever reason) they don’t want to, nor should we make a student feel embarrassed, ashamed or a ‘problem’ if they are quiet. Fundamental to the classroom environment is an atmosphere of respect and tolerance—which means that everyone (students and teachers) respects and tolerates the various personalities and views within the group. Rather than become frustrated that some students seem to say too little while others say too much, one might reflect on how the seminar is valuable training in interacting constructively with a wide range of people and personalities. For some that means learning to contribute more; for others it means learning to contribute less; for all it means respecting the contributions of others, however sparing or effusive they may be.

Does it really matter anyway if a student says little in class but is otherwise performing at a good academic level and seems generally happy? I would say not, and hence that there is no justification for worrying about it. Instead we might focus on that student’s evident strengths in other areas of the learning process; we might acknowledge that for some people it takes years to develop the confidence to speak regularly in social groups (something I know from personal experience); and we might realize that there are many paths through life for which confident social interaction is not an essential skill but the ability to reflect and think is. Listening and thinking are at least as important as talking and acting.

Cliometrics: Or, What Historians Can Tell Us about Metrics

Measurement and quantification have become the guiding lights of our age. Numbers are becoming the principal means by which we make sense of our lives and our world. Wonderfully, or so it may seem, we can put a numerical value on almost every aspect of our experience: our health, our happiness, our success, all can be rated and compared with the health, happiness and success of others. How liberating it is to work out whether we are happy without relying on such messy and imprecise things as the nuances of feelings and subjective experience! I may feel happy, but am I happy? Best check it according to one of the many ‘scientific’ ways of quantifying it. Even better, I can check my life satisfaction against the national average on David Cameron’s happiness index (happily funded to the tune of £2m per year in these times of austerity). It may even boost my happiness rating to discover that I’m happier than most other people…

It is increasingly hard to resist this brave new world—the numbers insidiously work their way into every area of our lives. How is my writing going? I’d better check the stats on views and visitors and referrers to my blog. Was my teaching last term successful? Let’s look at the average ratings the students gave the course. Am I popular? The number of friends on social media will answer that. Am I in good shape? Best work out my Body Mass Index. Is this meal healthy? I can cross reference its calories and sodium content with recommended daily averages. Which of these books should I buy as a Christmas present? I’ll let the average reviewer ratings help me decide. Is it worth reading this piece of research? Let’s check the ‘impact factor’ of the journal it has been published in… oh, that’s quite a low number, so best not bother to read it then. How absurd is all this measuring of feeling and quantifying of quality? Well, it rates highly on my recently-devised and extremely scientific Absurdity Factor.
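Just to show how little effort this kind of quantification takes (which is part of its seductiveness), here is a minimal Python sketch of the BMI sum: weight in kilograms divided by the square of height in metres, sorted into the standard adult bands. The sample weight and height are invented:

```python
# A deliberately trivial example of quantifying "am I in good shape?".
# BMI = weight (kg) / height (m) squared; the sample figures are invented.

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / (height_m ** 2)

def bmi_band(value: float) -> str:
    # Standard adult bands: under 18.5 underweight, 18.5-25 healthy,
    # 25-30 overweight, 30 and above obese.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy"
    if value < 30:
        return "overweight"
    return "obese"

value = bmi(70.0, 1.75)
print(f"BMI: {value:.1f} ({bmi_band(value)})")  # prints: BMI: 22.9 (healthy)
```

The number is easy to produce; whether it tells me anything worth knowing is precisely what my Absurdity Factor is there to measure.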

It’s not that the formulae, numbers and statistics in themselves are bad: they are simply pieces of information about which we need to exercise critical judgment, to make evaluations as to their worth or not. It is the growing tendency to dispense with evaluation that is the problem: doubting and foregoing our ability to make subjective judgments, we instead treat the numbers and ratings as if they are reliable, scientific truths. In the process—and here it gets really serious—careers and lives are destroyed. Few employees are free from the use of performance indicators, the metrics that are used to measure whether someone is doing their job well. As I’ve discussed elsewhere, higher education has become obsessed with them: the quality of scholarly research is judged not on reading it but on metrics; the performance, futures and lives of academics hinge on a set of numbers that are hopeless at assessing such things as the quality of teaching and research but which are beloved of university managers, heads of department and HR departments as the definitive guide identifying whom to hire and whom to fire. The ‘successful’ academic of the future is likely to be the one who has swallowed the notion that quality is no more than a number and that there are ways to ‘game’ the metrics in order to achieve the required number—such as having your research published in a journal which is part of a citations cartel.

Both the value and the limitations of quantification are familiar to most historians. In particular, quantitative methods have become an important part of the armoury of the social historian. It would be inconceivable for a history department not to teach social history, but this has only been the case since the 1960s. Before then social history was a niche, poorly-valued area. In part this was because of prevailing attitudes among historians (‘history is about great leaders and high politics; who wants to know about the dreary lives of common people?’), but it was also because there were real difficulties researching the subject—there was no absence of useful data about many periods but there was a lack of adequate tools to make meaningful interpretations about past societies. For the historian interested in early modern English society, for example, plenty of records and documents exist (parish registers, wills, inventories, accounts, court records, manorial records, diaries, letters, etc.), but for those sources to provide more than isolated snapshots of social life would require time, labour and resources far in excess of that available to any single historian. But then along came computers and databases, and with them the birth of cliometrics.

Cliometrics (a coinage joining the ancient muse of history, Clio, with metrics, the system or standard of measuring things) involved applying to history statistical, mathematical and quantitative models that had been developed by economists. Large datasets (a population census is an example) could be fed into a computer and then interrogated, something no individual historian could have done before the advent of digital technology. The impact on historiography was huge: whole new areas of the past could be opened up to investigation, and general hypotheses could be framed and tested within minutes rather than reached only after years of painstaking and gradual accumulation of evidence. Historians became excited about the possibilities: assemble a body of data, feed it into a computer, ask the right question, and an answer will be provided in the time it takes to make a cup of tea. Even better, it was thought, the answers would be scientific. The distinguished French historian, Emmanuel Le Roy Ladurie, claimed that ‘history that is not quantifiable cannot claim to be scientific’ and envisaged a future in which all historians ‘will have to be able to programme a computer in order to survive’ (as quoted in Richard J. Evans, In Defence of History (London: Granta, 1997), p. 39.)
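To make the ‘answer in the time it takes to make a cup of tea’ point concrete, here is a minimal Python sketch of the kind of interrogation involved, using a handful of invented parish-register-style records rather than any real dataset (the records, field names and decade grouping are all my own illustrative assumptions):

```python
# A toy illustration of the cliometric approach: tabulate events from
# parish-register-style records and compare baptisms with burials by decade.
# The records below are invented purely for illustration.
from collections import Counter

records = [
    # (year, event), where event is "baptism", "burial" or "marriage"
    (1561, "baptism"), (1563, "burial"), (1565, "baptism"),
    (1571, "baptism"), (1572, "baptism"), (1574, "burial"),
    (1576, "marriage"), (1581, "burial"), (1583, "baptism"),
]

counts = Counter()
for year, event in records:
    decade = (year // 10) * 10
    counts[(decade, event)] += 1

for decade in sorted({d for d, _ in counts}):
    baptisms = counts[(decade, "baptism")]
    burials = counts[(decade, "burial")]
    surplus = baptisms - burials  # a crude proxy for natural increase
    print(f"{decade}s: baptisms={baptisms}, burials={burials}, surplus={surplus}")
```

Scaled up from nine invented rows to millions of real entries, that is essentially the question-asking the computer made feasible; what the computer does not supply is the interpretation.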

One of the earliest and most impressive applications of cliometrics stemmed from the Cambridge Group for the History of Population and Social Structure, founded in 1964 by Peter Laslett and Tony Wrigley. The outstanding achievement of the Cambridge Group, using English parish records (one of the legacies of Thomas Cromwell’s administrative reforms was the commencement of regular parish record-keeping from 1538) as its dataset, was Wrigley and R.S. Schofield’s The Population History of England 1541-1871: A Reconstruction (Cambridge: Cambridge University Press, 1981). In addition to presenting huge amounts of data for pre-industrial England about birth, marriage and death rates, population size and growth, mortality crises, and much else besides, their work demolished various myths and assumptions about the past. For example, they conclusively proved that the nuclear (rather than extended) family was the overwhelming norm, and that most couples had no more than two surviving children (it was only the wealthy who tended to have large broods), rendering pointless the surprisingly common assumption that the social and family conditions of the developing world are comparable with those of the pre-industrial world. For historians of early modern social and family life, Wrigley and Schofield’s research is one of the fundamental starting points for inquiry.

[Table from Wrigley and Schofield’s Population History of England]

However, a good historian (as scientifically defined according to my recently-devised Good Historian Factor) would not consider the cliometrics of Wrigley and Schofield as the end point—unlike the policy makers and managers who see metrics as the end point. The historian would understand that it is one thing to quantify family structure or life expectancy, quite another to assess the quality of family life or the effects on emotions and thought of high mortality rates. In order to do the latter it is necessary to look beyond the numbers and do some old-fashioned source evaluation: the historian would need to engage in critical analysis of diaries, letters and other texts, to assess what images and artefacts tell us, and to think broadly with concepts, methods and theories. What results is not a number but an interpretation, and (much to the dismay of the policy makers and managers) not one that is scientific or definitive but one that is open to questions, challenges, discussion and debate.

Some of the dangers of placing too much faith in cliometrics can be seen in Time on the Cross: The Economics of American Negro Slavery (New York, 1974), an attempt to apply a quantitative analysis to the history of American slavery by two economic historians, Robert Fogel and Stanley Engerman. The work was in two volumes, the first presenting historical analysis, the second the quantitative data. Based on the data the authors reached several controversial conclusions: they argued that southern slavery was more efficient than free agriculture in the north, that the material conditions of slaves were comparable with those of free labourers, that the economy in the south grew more rapidly than that in the north, and that slaves were rewarded for their labour more highly than had previously been thought. Although some critics questioned the quality of the data used by Fogel and Engerman, most acknowledged that the quantitative methodology was broadly sound. What was unsound was the failure to interpret the information qualitatively. The supposedly ‘objective’ analysis of American slavery, with its hard data pointing to growth within the southern economy and to work conditions comparable to those of free labourers, ends up presenting slavery in a benign light—however much the authors themselves were clearly and genuinely opposed to any justification of slavery. A much better historical approach would have been to place more emphasis on the social and political context of slavery, and to assess its psychological and cultural aspects. For example, the authors presented the statistical finding that slaves were whipped 0.7 times per year on average. On its own such a finding might suggest that the slave economy was anything but unremittingly brutal, and maybe was not so bad after all. But what that figure (and Fogel and Engerman) fails to tell us is what the whip, and the threat of the whip, meant psychologically to the slave. The fear of the whip, and the lack of freedom to escape that fear, almost certainly had a greater impact on the life experience of the slave than the 0.7 whippings per year themselves. Thoughts, feelings, mental states are impossible to quantify—but they are surely essential to an historical understanding of slavery.

The policy makers and managers are clearly not historians (or else they have a dismally low Good Historian Factor). If they were, then they would see metrics as interesting and often useful information (pretty much like all information, therefore), but also limited in what it tells us; they would appreciate that metrics can be distorted by insufficient or manipulated data; they would see how essential it is that metrics is only one part (and probably a small part) of how to understand something, and for metrics to be of any use there needs to be qualitative interpretation; they would recognize that to judge the quality of research (or anything else) solely by using quantitative approaches rates a high number on my recently-devised, objective and scientific Stupidity Factor.