One of the many delusions of Brexit supporters is that the UK, freed from the shackles of the EU, will assume its rightful place as a heavyweight global power. This stems from their befuddled notion of reality: a shaky and selective grasp of history (which would appear to owe more to 1066 and All That than to any scholarly account) leads them to suppose that Britain’s status as a ‘top dog’ has been temporarily held in check by membership of the EU. In the delirious but intellectually feeble minds of men like Liam Fox and Nigel Farage, that Britain once had an empire and supposedly ruled the waves is evidence enough of an innate British ‘greatness’ that will once again be internationally recognized if only the country is liberated from the soft, emasculating tyranny of Brussels. Most of Boris Johnson’s vacuous and puffed-up nonsense is sung from the same hymn sheet: just believe in Britain’s natural greatness and a bright future is guaranteed, etc.
Those of us with a surer understanding of past and present know that the Brexiteer view on the EU is fundamentally wrong. Far from destroying the European nation state, the EU has in fact preserved and strengthened it. With the arguable exception of Germany, not one of the EU member states would be able to compete globally on its own—at least, not in a way that would come anywhere near attaining its current level of prosperity. Indeed, this is one of the reasons why the EU is one of the great historical creations: not only has it ensured peace throughout most of a continent that for millennia had been a site of almost constant belligerence (to the point of near self-destruction by 1945), but it has simultaneously enabled a disparate collection of small and medium-sized countries to punch above their weight on the global scene. On its own, not one of the EU countries could compete on a relatively level playing field with the might of China, Russia or the US; collectively, they can.
Perhaps the UK’s difficulties in agreeing a deal on the Irish border will awaken some Brexiteers to this reality. Put simply, British arrogance (an alarmingly prominent characteristic among the Brexiteers) assumes that Ireland, as a relatively small and poor country, can be safely ignored or pushed around as Britain sees fit—a longstanding trope in Anglo-Irish relations that Brexit supporters see no reason to abandon. But look what has happened: Ireland has drawn a line, one that is entirely reasonable, and Britain has been forced to accept it (the alternative, which is to reject it, would simply accelerate its own national suicide—it says much about the dangerously stupid thinking of the hard Brexiteers that rejecting it is, for them, a viable option). Nobody would deny that, when measured side by side, the UK is an economically bigger and stronger country than Ireland, and one that carries more international weight. So how is it that, on the matter of a Brexit deal, Ireland seems clearly stronger than the UK? Why is it (at the time of writing) that Ireland is adamant that it will not back down? The answer is obvious: Ireland is strengthened by belonging to the EU27.
For the Brexit fantasists, this ought to be a salutary lesson. If the UK pretty much has to concede to the wishes of its smaller neighbour in these negotiations, how will it fare when it starts seeking trade agreements in a post-Brexit international landscape? One can safely ignore the nonsense of Empire 2.0; the outlook for the UK is grim. On its own, the UK, a middle-ranking nation heading downwards, will be ill-placed to negotiate on its own terms. A country such as Ireland can carry itself in the world thanks to its membership of the EU—its EU membership makes it, for example, an attractive proposition for international investment. A post-Brexit UK, on the other hand, needing deals with other countries far more than they need them with the UK, will be forced into desperate acceptance of almost any terms. Far from ruling the waves, a post-Brexit UK will look more like a ragged castaway drifting on a rickety raft.
There is, of course, a way to avoid this bleak future (and I remain optimistic that, when the UK collectively comes to its senses, this will be the outcome): Brexit should be abandoned on the grounds that it is the most stupid, tragic, shameful and self-destructive event in modern British history; or, failing that, the UK should park its neuroses about Europe indefinitely in a Norway option, thereby at least retaining membership of the single market and avoiding the suicidal plunge off the cliff edge.
During the recent Westminster attack, a photograph was taken of a young Muslim woman, wearing a hijab, walking past one of the victims. She clasps one hand to the side of her face; she is looking at her phone, which she holds in her other hand. Behind her, a victim is being attended by two women; a group of four people are standing around, two of them looking at the victim, two of them talking to one another; and another woman, with grey hair but largely obscured from view, also appears to be walking by.
As The Guardian has reported, there have been several outraged responses to this photograph. Tim Young, who describes himself as a “political comedian” (despite his numerous tweets exhibiting neither comedic ability nor political intelligence), claimed that the image “could end up being one of the most iconic of our time”. The faulty, unspoken logic behind his tweet is this: an apparently Muslim terrorist act has been perpetrated; a young Muslim woman is unconcerned about this; therefore all Muslims are, at the very least, unconcerned by Islamist terror, and quite possibly approve of it.
Another Twitter user (who goes by the handle of “@SouthLoneStar”, moronically declares “Fuck Islam” in his profile and seems manically obsessed with tweeting endless, mindless and offensive Islamophobia) contrasted the photograph with that of the Conservative MP Tobias Ellwood attempting to save the life of the police officer stabbed by the assailant, suggesting that the two images show “the main difference between Christians and Muslims”.
Jamie Lorriman, the photographer who captured the image on Westminster Bridge, has said that the image has been “misappropriated” by those seeking to make Islamophobic capital from it. He points to another photograph in the sequence in which the Muslim woman is clearly distressed, and has commented: “Looking back at the pictures now, she looks visibly distraught in both pictures in my opinion. She’s in the middle of an unfolding horrific scene… I think her expression says to me that she’s horrified by what she’s seen and she just needs to get out of the situation.” As Lorriman adds, it’s “impossible to know” what the young woman was thinking.
I used to teach a class on visual evidence to first-year history undergraduates. One of the main points I tried to get across in the lecture was the importance of being highly critical of images as a form of evidence. In particular, we can be easily seduced by the power of the camera, and the notion that “the camera never lies”. But that notion is a fallacy. A photograph neither lies nor tells the truth; it simply records a tiny fragment of time and space. It then becomes subject to multiple interpretations that invariably have little relationship to the reality of the scene it depicts.
One of the images I showed the students is a controversial photograph taken by Thomas Hoepker during the 9/11 attacks. It depicts a group of young people in a Brooklyn park, casually dressed, looking relaxed and chatting among themselves while in the distance behind them smoke pours from the World Trade Center. For some commentators, the image exhibited the detached, possibly callous nature of modern youth: while thousands are dying across the East River, these New Yorkers are carrying on as normal, seemingly careless about the atrocity.
But ask this: how are people supposed to appear during such an incident? Should these New Yorkers have been exhibiting a constant state of distressed wailing on the off chance that a photographer may have been in the vicinity?
And consider: how many unstaged wedding photographs are there which show the bride or groom looking, in a seemingly unguarded moment, miserable? How many staged photographs have we all been in when, no matter how hard we tried to maintain a fixed smile and open eyes, we were unfortunately caught looking unhappy and half asleep? Later we may protest that the photograph misrepresents us: we were genuinely happy, we may sincerely and honestly say, but we are stuck with an image that is repeatedly and unfairly cited as evidence to the contrary.
And ask this: if we are quick to condemn the New Yorkers for their apparent lack of concern over 9/11, what do we say about the photographer choosing to spend his time in a Brooklyn park and focus his attention on park-goers? And what, indeed, do we say about ourselves, fixating on this image rather than on, say, images of the victims in Manhattan?
Recently I was involved in a brief Facebook discussion about a viral image of schoolchildren looking at their phones rather than at Rembrandt’s The Night Watch on the wall behind them. A “metaphor for our age”; “the ‘distracted society’. No wonder we’re in the shape we’re in now”; “what a sad picture of today’s society!”: these were some of the comments about the photograph on Twitter.
But the photograph tells us little, and is potentially highly misleading. It says nothing about what the kids were doing the rest of the time in the gallery; and it reveals nothing about what they were looking at on their phones. The reality is (as later confirmed by the teachers accompanying the school party) that the children were, as part of an assignment, researching Rembrandt’s painting using an app on their phones. For some, this is still a dismal comment on our society; but would they complain if, instead of learning about the painting on their phones, the children were all reading a catalogue? The Night Watch is not an easy painting to interpret; we all need some guidance to help us, and it is not obvious why such guidance is any better in physical rather than digital form.
So the photograph in fact shows children interacting with art; indeed, they appear rather engrossed in what they are learning about Rembrandt’s painting. Yet, when all context is ignored and an abrupt judgement unthinkingly made, this photograph ends up being used to illustrate the idea that young people are so hopelessly obsessed with their smartphones and social media that they are no longer capable of interacting with art (and, perhaps, reality in general).
For those inclined to a negative view of the culture and character of young people, or of the effect of digital media on modern society, it is as easy as it is erroneous to read into the photographs of the 9/11 park group or the school party in the Rijksmuseum confirmation of existing beliefs. Similarly, those already prejudiced against Islam will seize on the photograph of the young woman on Westminster Bridge and distort it to fit their own agenda.
To use one photograph out of the many thousands taken that day as a piece of Islamophobic evidence is a dangerous and wilful distortion of reality: it ignores the fact that another photograph shows the woman in a clear state of distress; it ignores the fact that at least three of the other people in the photograph are also displaying little obvious shock (there are folded arms, hands in pockets, conversations occurring without obvious attention to the victim); it ignores the fact that the police were clearing the bridge (the young woman was doing the right thing not to loiter around at the scene); it ignores the fact that many of those in the area were frantically contacting loved ones to let them know they were safe.
We are prone to see what we want to see, framing images to fit a narrative that suits our purpose. The Islamophobes haven’t looked at this photograph with any critical thought: they have simply read into it their existing prejudices, and they have used it to frame their anti-Islamic narrative. Desperate to exploit the Westminster attack for their own agenda, they have framed it as evidence of the supposed evils of Islam and the dangers of multiculturalism and immigration.
As the facts of the attack slowly emerge, these misleading interpretations look ever more irrational, hateful and nonsensical. But facts and reality count for little in the feverish minds of the Islamophobic far right. Hence they try to build grand “truths” out of an image that, showing no more than a millisecond of time and a minuscule sliver of space, reveals next to nothing about the people it depicts, and even less about society and culture as a whole.
More than four months have passed since I promised this post—which is now written in the light of the election of Donald Trump as US president (something that I superstitiously predicted in the frustrated hope that I would, for once, be wrong about election outcomes). Like Brexit and the presidential election, my writing has been a drawn-out, chaotic process. This post is, in effect, a new article rather than an obvious sequel to the first. Above all, it responds far more to Trump’s election than it does to Brexit—it would, after all, be the height of parochialism to consider the latter anywhere near as significant as the former.
* * * * *
Right now, in the midst of Brexit and so soon after the election of Donald Trump as US president, historical perspective is not likely to yield much that is useful for helping to understand events that have only just begun to unfold. Clearly Trump and Brexit, as well as Putin, Erdoğan, Le Pen, Assad, Isis and much else besides across the world, point to the emergence of a global crisis and a treacherous future. But history will help us little to comprehend any of this in detail, how it may unfold and what may be the route out of the dark and grim cave we find ourselves in. On the other hand, history can provide an important broader perspective—and one that may even provide grounds for optimism.
Like anyone else, historians like to feel useful, so there are inevitably attempts to analyze recent events in light of the wisdom they have acquired from their expertise. One such attempt that garnered some attention (it was originally blogged at medium.com and was republished by The Huffington Post) is an article by Tobias Stone entitled (with somewhat hubristic confidence) ‘History tells us what will happen next with Brexit and Trump’.
Stone makes two broad points, which turn out to be disconnected, and arguably incompatible. The first is that comparatively small events, such as Brexit, can lead to larger events in a globally connected world. To illustrate the point, he sketches out a scenario in which Brexit is the triggering event in a chain that leads to global nuclear war. This is, of course, speculation rather than a serious claim that this ‘will happen’, and Stone himself concedes that one cannot know for sure what the outcome of Brexit will be, either for Britain or internationally. However, the general claim is sound to the point of being historical common sense: events lead to other events, invariably in ways that are unforeseen at the time. No historian would dispute this.
Stone’s second main point is that history operates in a cyclical way. The cycle he presents is one in which a period of stability is inevitably followed by a period of destruction, from which society emerges in better shape and achieves stability again, only to descend once more into destruction; and so on. He suggests that most people are unaware of this because their understanding of the past is limited to about 50-100 years; but historians, who have a longer perspective on the past, will soon detect this cyclical pattern. Unfortunately, the only real example of this cycle he presents is one that is itself limited to the previous 100 years, encompassing the two world wars and various other events over the twentieth and twenty-first centuries that culminate in the emergence of Putin, Trump and Brexit. Stone does present a disparate list of other historical events—‘the collapse of the Roman Empire, Black Death, Spanish Inquisition, Thirty Years War, War of the Roses, English Civil War’—but without explaining how these wildly different events (including one—the Black Death—that has nothing to do with human agency, and another—the Spanish Inquisition—that was not so much an event as an institution that spanned centuries) illustrate a recurring cycle to the past.
The idea that the past reveals historical cycles is a popular one. It was a common topos among classical writers, and the notion of a wheel of fortune revolving and dictating human affairs has a long pedigree. Nineteenth-century social theorists and historians, fond of understanding society in biological terms, likened human affairs to the life cycle, with inevitable stages of youth, maturity, decline, death, and new life.
But, to put it bluntly, cyclical theory is utter rubbish, based on a groundless, quasi-mystical notion that some kind of metaphysical (or, alternatively, biological) law applies to history. There is no evidence that history works in cycles and that we can use the past and a ‘cyclical model’ to predict what will happen next. Of course, if one tries hard enough (and many historians have) it is possible to impose all sorts of patterns on the past—most notoriously by those historians influenced by Marxist theories on historical development. We have a tendency (and this is arguably a psychological truth) to impose or detect patterns because we prefer seeing comprehensible order rather than incomprehensible chaos. However, these patterns, whether they are Christian providentialist history or Marxist determinism or cyclical history, almost never stand up to real scrutiny. They are fictions, telling us far more about ourselves than they do about the past. Christian providentialist history, for example, reveals more about the mentality of its author than it does about the past; cyclical theory tells us a lot about the human predisposition to view time and the universe in an anthropocentric way, and about the desire to render history as a science operating according to identifiable laws. One need not be a postmodernist to be sceptical of such grand, and undeniably imaginative, historicizing theories.
One thing that history does teach us is that it is unwise to draw direct comparisons between two historical periods, particularly when they are at significant temporal remove. Any suggestion that our own time might be compared with earlier historical periods is fraught with problems. In almost every area there are fundamental, incomparable differences between our age and any previous age, whether those differences be demographic, cultural, technological, scientific, intellectual, or social. For all that there are things which approximate to constants (or at least admit only tiny change over history)—geography, the environment, biology—difference rather than similarity characterizes the overwhelming part of human life and society when viewed across historical periods. The urban, post-industrial society that we live in today, the way we work, the way we communicate, the way we socialize—none of that can be compared with any previous period except in ways that are highly general or superficial. It is, for example, undoubtedly interesting and valuable to compare the modern digital revolution in media and communications with the print revolution of the early modern period, but to approach them from a cyclical perspective—as examples, perhaps, of cycles of technological change—ends up in a ridiculous and misconceived effort to incorporate the vast differences between the two ‘revolutions’ under a single explanatory ‘law’.
The capacity for folly would seem to qualify as a human constant transcending time. However, this folly invariably manifests itself in different ways depending on the historical context. Just because in both 1618 and 1939 Europe descended into profoundly destructive warfare emanating from Germany does not mean the two events are comparable instances of a deeper cyclical law. The Thirty Years’ War and World War II were vastly different conflicts, stemming from vastly different causes, and occurring in vastly different social, political, cultural and intellectual worlds. Similarly, just because the crisis of democracy and liberalism in the 1920s and 1930s led to totalitarianism, war, genocide and devastation does not mean the same will happen again in the present crisis. The economy, society and culture of interwar Europe resemble our own in few respects. Trump, Nigel Farage, Geert Wilders and Marine Le Pen may well be ‘fascistic’, and it may be interesting to compare them with Hitler and Mussolini, just as it may be interesting to compare Putin with Lenin or Stalin, but there are significant limits to how far one can take such comparisons. There are far more ways in which Trump differs from Hitler than there are ways in which he resembles him. Likewise, one does not have to be fond of the Republican Party to point out that they are not remotely like the German Nazi Party. To suppose that Hitler and the Nazis, and Trump and the Republicans, are fulfilling the same destined cyclical role is a nonsense. In short, cyclical theories of history are junk—entertaining junk perhaps, and revealing of the mental world from which they originate, but junk all the same.
* * * * *
History can nevertheless shed light on contemporary events. A more fertile approach to understanding the past was that of the French historian Fernand Braudel (1902–85). One of the central figures in what has come to be known as the Annales school of historiography—the influence of which on historical research cannot be overstated—Braudel was arguably the greatest historian of the twentieth century, and his book, The Mediterranean and the Mediterranean World in the Age of Philip II (1949), arguably its greatest historical work. The importance of The Mediterranean stems in part from its brilliant exposition of sixteenth-century Mediterranean society, economy, culture and politics, but above all from its broader structure. Braudel consciously rejected the traditional approach to history, which focused on politics and events. Instead, he understood the past in terms of three different levels of historical time: geographical time, social time, and individual time. The first deals with the extremely slow, almost imperceptible, changes in geography, the environment and climate that shape human history; the second concerns demographic, social, economic and cultural structures and their gradual changes; and the third, individual time, is the domain of ‘events’, those
surface disturbances, crests of foam that the tides of history carry on their strong backs. A history of brief, rapid, nervous fluctuations, by definition ultra-sensitive; the least tremor sets all its antennae quivering. But as such it is the most exciting of all, the richest in human interest, and also the most dangerous. We must learn to distrust this history with its still burning passions, as it was felt, described, and lived by contemporaries whose lives were as short and as short-sighted as ours. It has the dimensions of their anger, dreams, or illusions… Resounding events are often only momentary outbursts, surface manifestations of these larger movements [of geographical and social time] and explicable only in terms of them. (The Mediterranean and the Mediterranean World in the Age of Philip II, trans. by Siân Reynolds (Berkeley: University of California Press, 1995; first published in French, 1949; second revised edn, 1966), p. 21.)
Traditional history fixated on events and individuals: kings and queens, statesmen, diplomats and generals, high politics, wars and revolutions. It is interesting, exciting and entertaining history, but on its own provides little understanding of the past. To understand the sixteenth-century Mediterranean world, as Braudel endeavoured to do, required attention to the longue durée, the long-term, the arena of geographical and social time. It necessitated understanding the Mediterranean as a sea with its islands and coastline, and the surrounding lands as varying regions of hills, mountains, plains and deserts. For it is this geography and climate which has shaped the social and economic culture of the Mediterranean peoples, fashioning the agriculture, the local and wider economies, the trade routes and financial systems. Only by grasping these features of the Mediterranean world—its geography, its climate, its economy, its society—is it possible to understand the individuals, politics and events that emerge from them.
Braudel’s metaphor of ‘surface disturbances, crests of foam’ suggests that most events are little more than froth. Perhaps one way of thinking about this is to offer a Braudelian adaptation of Shakespeare: events, whether wars, revolutions or political upheavals, are full of sound and fury, signifying nothing other than the larger movements of geographical and social time.
A Braudelian perspective, therefore, might regard the election of Trump and the vote for Brexit as surface manifestations of larger movements. An analysis of Trump and Brexit more plausible than the attempt to discern in them the recurrence of a cyclical stage is to consider them as reactions to rapid change (some of which might be described as progress). It is possible, for example, that they are the final, dying twitches of misogyny, white supremacy and blinkered nationalism in a world that increasingly has little place for such things; certainly, demographic, social, cultural and economic evidence suggests that possibility is more likely than that Trump and Brexit are ushering in an enduring change in human history. There is a chance that these dying twitches will lead to global devastation and environmental catastrophe. But this is not inevitable, and assuming we manage to avoid such disasters, we may well find an era will follow—in five years or fifty years, who knows?—that once again embraces progressive, liberal and enlightened values suited to the demographically and culturally diverse world we live in.
The point is, to adopt this Braudelian view, that there is a flowing ocean of broad social, cultural and intellectual shifts on which Trump and Brexit are transitory crests of foam. One might consider gender history as an example. The election of Trump is undoubtedly a setback in the struggle for gender equality and women’s rights. But the long history of this struggle shows nothing cyclical about it; rather, it resembles a long and painfully slow story of progress against a background of gradual social, economic and cultural change. Trump is probably no more than a temporary setback, a desperate misogynist backlash, a brief and fleeting political manifestation of the rage and frustrations of men who are dimly aware they are almost certainly on the losing side of history. Even the misogyny of Trump and his supporters is not going to reverse female suffrage, say, or access to higher education. Despite Trump, all the historical signs are that one day in the future women will achieve equality. As Braudel put it, in ‘historical analysis… the long run always wins in the end’ (The Mediterranean, p. 1244). Individual historical actors, such as Trump, and events, such as the 2016 presidential election, are, for all their immediate and foreseeable pain, insignificant in the context of the broader tides of social and cultural change.
Despite the hopes of non-specialists that history may contain the secrets of what will happen in the future, historians have never been good at predicting the future with any precision. Any attempt to read from recent events a future sequence of events and their outcome is no more than speculative guesswork requiring no knowledge of history. Anyone could imagine, say, a scenario in which a terrorist incident on US soil in the name of Islamic fundamentalism early in the Trump presidency leads to virulent Islamophobia; or a win for Marine Le Pen that results in Frexit and further international instability. Equally, however, there may be neither a terrorist attack nor a victory for Le Pen. Whether there are or not, and what possible outcomes may arise, cannot be gleaned by looking at past historical events; they can only be based on an astute and informed assessment of current possibilities and probabilities.
But what history can illuminate are the broader and longer changes that generate events. Climate change, demographic change, social and cultural change, technological change: the long histories of these provide a better context for understanding recent events than a narrow analysis of personalities, political calculations and strategies. A recognition of this may help us avoid the despair of supposing recent events map out a road that ends only in catastrophe. And it certainly makes more sense than to view these events as manifestations of a mysterious historical law according to which humans will periodically enter phases of self-destruction. Not only is such a cyclical view nonsensical fiction, it is also likely to foster an attitude of resigned quietism.
Finally, it is worth stating that the present concerns are fourfold: to understand recent events; to avoid potential global disaster; to keep alive progressive values; and to work towards the acceptance and success of those values. History can help in these tasks, particularly those of understanding events and preserving values. This is because the past does not present a metaphysical law of inevitable cyclical return, but is rather a shared body of experience, knowledge and analysis from which to draw inspiration and understanding. It is for this reason that history, as the discipline concerned with the past, is invaluable in the present.
And since I began my first post with reference to Bob Dylan, now a deserved Nobel laureate, I’ll end this one with a couple of lines worth keeping in mind: ‘For the loser now will be later to win, / For the times they are a-changin’.’
‘Something is happening here, but you don’t know what it is, do you Mister Jones?’ sang Bob Dylan in ‘Ballad of a Thin Man’. It’s a refrain appropriate to the political situation in the wake of the EU referendum (which increasingly seems to have occurred in a past life rather than a mere fortnight ago). For about the only thing about which we can be fairly certain is that nobody—not Mister Jones or Mr Gove or Mr Johnson or Mr Cameron or Mr Farage, not Mrs May or Mrs Leadsom, not the leader writers or the commentators, not the investors or the speculators, and, for sure, not me—knows what is happening or what will happen. The atmosphere is febrile, tumultuous and astonishingly, gloriously clueless. Perhaps, ultimately, nothing much will happen, yet Britain feels different, as if anything could happen—the real possibility of a bona fide loon such as Andrea Leadsom becoming Prime Minister is evidence of that. When not dispirited by the alarming and hideous rise in racist incidents over the past two weeks—hardly surprisingly the far right, among whose number one should include the Faragiste wing of the Brexiteers, are feeling very chipper right now—I will, a little guiltily, confess to finding the ‘Brexit crisis’ rather exciting and invigorating. How can one not when so clearly something is indeed happening here?
And yet—is it? On the one hand: the Prime Minister has resigned; Farage has resigned; most of the Shadow Cabinet has resigned; Boris Johnson’s absurdly vainglorious ambitions lie in tatters; the Tories remain divided, and yet find themselves, for the first time in British history, with the remarkable privilege, and strange constitutional quirk, of directly electing the next Prime Minister; Labour are daily disintegrating before our eyes; the pound is plummeting; the FTSE is ailing; even the Greens are in the midst of a leadership election. On the other hand, Brexit has not happened, is not likely to happen anytime soon, and may in fact never happen (and the chances of it happening in a way that would satisfy the Faragistes, who are pinning their hopes on the hopeless Leadsom, and the Goveites seem, to me at any rate, extremely remote—the reality is that even Brexit will have to involve, at the very least, some access to the single market and some concessions to freedom of movement).
All of this has consigned Britain to limbo (a place in hell, according to the Catholic church, it is worth remembering). For example, Britain is still a member of the EU. But nobody seems sure what this means. For some in the EU, and in Britain too, the referendum result makes Brexit a fait accompli; consequently Britain should no longer participate fully in EU decision-making. The UK is due to assume the EU Presidency in July 2017, yet will it or indeed should it? Doubtless all over Europe heads are being scratched, for until Britain invokes Article 50 so that exit negotiations can begin, the UK remains formally as much a member of the EU as it ever was—and even in the event of Article 50 being invoked, the two-year negotiating process could easily become overwhelmed by events that force dramatic rethinks of Brexit. With the Tory leadership contest still to be resolved, as well as elections in France and Germany that could well transform the situation by offering new possibilities and paths, the most likely thing to happen over the next few months is nothing much.
Even if not much happens for a while—apart from what now seems to be the business-as-usual fever, panic and wild, clueless running around in Westminster and the City—we will nevertheless be stuck in the ‘Brexit crisis’. Whether or not something is genuinely a crisis (and I think this is), calling it a ‘crisis’ always benefits some. Newspapers, journalists, commentators, even bloggers can do well out of a crisis as a feeding frenzy for information, opinion and comment takes hold. One can be sure that Brexit, both as potentiality and as actuality, will be the making of some people. Crises invariably are.
The words of the free-market guru Milton Friedman are relevant here:
Only a crisis—actual or perceived—produces real change. When that crisis occurs, the actions that happen depend on the ideas lying around. (Capitalism and Freedom (Chicago: University of Chicago Press, 1982; originally published, 1962), p. ix.)
Whatever one thinks of Friedman’s economic ideas, it is hard to dispute his assessment of crises. Revolutions and radical political change are born from them. The Bolsheviks and the Nazis emerged out of actual crisis; Margaret Thatcher came to power against the perceived crisis of 1970s union strife and the ‘Winter of Discontent’; and, as Naomi Klein has shown in The Shock Doctrine (London: Allen Lane, 2007), for the neoliberal disciples of Friedman economic crisis presented the perfect opportunity for radical free-market ideology to be imposed on states. The Brexit crisis has opened up a rare moment for those who desire radical change to progress in their goals.
Alarmingly, however, the main ‘ideas lying around’—the ideas likely to shape what happens over the coming weeks, months and years—are nationalism (both in its nasty form as embodied by the Faragistes and Goveites, UKIP and other far right groups, and in its cuddly variety as embodied by Nicola Sturgeon and the SNP), neoliberalism and racism. The political centre and left, both in some disarray, currently offer little in the way of a coherent vision. When Theresa May represents the best hope for the moderate centre, then there are grounds to worry about the tectonic shifts in British politics.
Yet Brexit, unwelcome as it may be, surely presents opportunities. Consider what it has already achieved: the end of Cameron, the effective termination of Osborne’s political ambitions, the wonderful demise of Johnson, the accidental (and, let’s be honest, quite funny given it all stemmed from some supposedly clever—too clever as it turned out—Machiavellian manoeuvres) hara-kiri of Gove’s ambitions, and in general the mayhem and panic across the political landscape. What more could follow?
I did not, and do not, want Brexit to happen, but in so far as we now are stuck in a Brexit crisis, and in so far as there can be no return to a pre-Brexit state of affairs, then we may as well make the best of it. After all, the Leave campaign have trumpeted the ideas of taking back control, of reclaiming democracy. So why not pick up their baton and run with it? We live in a country with an unelected head of state, an unelected upper chamber, an unrepresentative voting system, an excessive concentration of power in the executive, and a politics dominated by unelected media bosses, big business and the City. The potential for co-opting the Brexiteer slogans and arguments for progressive ends is great.
Perhaps this more than anything explains why Corbyn and Momentum are so determined to survive: they want to ensure that the Left has a dog in the political fights and struggles to come. It would be interesting if they succeed. In a couple of months both the government and the opposition in Westminster may be commanded by minority and comparatively extreme factions: the Tories by Leadsom and her Faragiste followers, Labour by the Momentum-backed Corbynites. If so, we could be in for a period of car-crash politics. But would this be so unwelcome if it continued the process of ripping through the familiar Westminster politics and bringing about some overdue political change? We could find ourselves in some heady days as different varieties of progressives and reactionaries battle it out.
But I am getting much too far ahead. For one thing, the Establishment, divided though it is about Brexit, invariably finds a way of asserting itself in the face of challenges. For another, it is worth being careful about what one wishes for: Germany had its heady days of progressives and reactionaries in the 1930s, and the possibility of something similar arising in Britain in the near future is remote but not non-existent.
And for another thing, I return to the words of Dylan: something is happening, but we don’t know what it is. Perhaps that is because we are so dazzled and seduced by the undeniably exciting high politics—the machinations, the party in-fighting, the psychologies of the central actors in the drama—that we are missing the more important things that are happening beneath the surface. Cameron, Farage, Johnson, Gove, May, Leadsom and Corbyn; Momentum and UKIP; the plummeting pound one day, its slight recovery the next; Merkel, Hollande and Sarkozy—these may be no more than the ripples and the froth on the surface of the ocean. To understand what is really happening one may need to make the more difficult journey into the dark depths, for there are to be found the currents that generate the tide of events. And to grasp that one needs the mind of an historian (or a Bob Dylan perhaps…) rather than a journalist (or a Mister Jones). We might do well to consider the ideas of the great French historian—arguably the greatest of all twentieth-century historians—Fernand Braudel, to which I shall turn in the second part of this blog post.
Self-publicity and celebrity almost certainly require the readiness to make a fool of oneself. Among ‘media historians’ David Starkey, more than most, has made an art form of this. Starkey’s credentials as an historian are impressive, even if his attachment to the traditional field of high politics seems increasingly narrow and dated in an historical landscape that takes a much broader view of how we might study the past. But even more impressive is his fondness for pronouncing on topics about which he knows little, but on which he will reliably offer deliberately controversial soundbites. I recall his consistent entertainment value on the Moral Maze with his abrasive rent-a-quote shock-jock style designed (one can only assume) purely to upset those liberals and lefties who failed to see how lacking in substance his contributions were.
Starkey’s infamous, embarrassingly inept and staggeringly ignorant appearance on Newsnight in the wake of the 2011 riots stands out. Given how well-liked Enoch Powell’s ‘Rivers of Blood’ speech is within far-right circles, declaring that Powell was ‘absolutely right’ might not seem the wisest way for a supposedly intelligent historian to begin an interview. Not unless, that is, one’s aim is to be controversial rather than intelligent. His commitment to controversy rather than reasoned and thoughtful consideration was evident throughout the interview as he provocatively declared that ‘the whites have become black’ and made a crass argument that amounts to the claim that white culture is good and black culture is bad. He missed the point entirely: nothing he said offered any sensible insight into the riots. But he made some headlines for himself, which may well have been his primary purpose. That he was prepared to risk coming across as a racist buffoon in the process suggests that all publicity may indeed be good publicity.
Deliberate perversity is a necessary trait of the good controversialist. Consider the following situation: you are asked whether you have watched a recent television programme; you reply honestly that you have not, and nor, you add for good measure, have you read the book on which the programme is based. Most people would refrain from making any comment or judgment on something about which they are ignorant; but ignorance is no bar to the controversialist. And so Starkey, after admitting that he had neither watched the television adaptation of Wolf Hall nor read the original novel by Hilary Mantel, nevertheless somewhat pompously proceeded to accuse the novel of ‘a deliberate perversion of fact’.
Quite what he means by this is unclear, since the novel is consistently faithful to the known historical facts of its subject—of course, Starkey is not likely to know this since he hasn’t read the book. The example he gives of a ‘perversion of fact’ is the ‘great deal of emoting’ by Thomas Cromwell over the death of his wife and daughters. As Starkey notes, there is no evidence to suggest that Cromwell was emotionally affected by this loss; but nor is there evidence to suggest he was emotionally unaffected. As with most of the past, the historical record is here silent. Many historians would claim that the appropriate response to this silence is silence of their own. It is not surprising, therefore, that some historians have little time for the historical novel, in which the areas of silence become fertile ground for imagination.
But it is one thing to shun the imagination of the novel and quite another to suggest that imagination amounts to a ‘deliberate perversion of fact’. Had Starkey actually read the novel or watched the programme on which he confidently passed judgment, he might have discovered that Cromwell’s emotional response is one of great restraint by modern standards. It hardly seems to form a solid basis on which a sympathetic portrayal of Cromwell might be built. But one suspects that Starkey’s real objection is the possibility of a sympathetic portrait of this traditionally reviled figure (and, conversely, what he has heard about the apparently unsympathetic depiction of Thomas More). Again, had he read the novel (and its sequel, Bring up the Bodies) he would find that Mantel draws a nuanced, complex portrait of her central character. In so far as she makes Cromwell a sympathetic character, it follows from her overriding interest in the human story of Cromwell’s life and career. Unsurprisingly, Starkey misses this important point.
There is of course a wonderful irony here: Starkey criticizes Mantel’s interpretation of Cromwell on the grounds that it is not built on the evidence, yet his own criticism of Wolf Hall stems from a spectacular ignorance of the evidence. The contrast between Mantel’s years of diligent research before writing about Cromwell and Starkey’s inability to spend a few hours reading her novel before passing comment on it is striking.
I have argued elsewhere that Wolf Hall can be considered a legitimate and valuable contribution to our understanding of Tudor history. How we might reconstruct human stories where the historical record is lacking, in ways that have historical value, and whether such attempts should even be made, are interesting questions; in my view, Mantel’s sequence of novels about Cromwell are intelligent and thoughtful answers to those questions that deserve the attention of historians. I do not expect that argument to be shared by all historians, and I am certain that Starkey, given his forceful (albeit, in my view, wrong) distinction between ‘empirical fact’ and ‘fiction’, would not agree with me. What I’m not sure about is which of these is the more deliberately perverse: that an eminent Tudor historian chooses not to engage with a book and television programme that, whether for good or ill, will almost certainly contribute to the way the Tudor period is understood; or that the same historian is prepared to pass public and misguided comment on a book that he has not read.
A couple of days ago every text message on my phone vanished. This happened overnight: I slept secure in the knowledge that my 2,000 messages were safely stored; the next morning, on checking my phone after hearing a new text alert, I found only the single, solitary, new message I had just received. Where had they all gone? Quite possibly they exist somewhere, although that somewhere is probably buried deep within the US National Security Agency or Britain’s GCHQ. Thanks to Edward Snowden we know that an NSA program dubbed Dishfire was indiscriminately collecting nearly 200 million text messages every day in 2011. However, contacting the NSA and GCHQ on the off chance they possess an archive of my messages strikes me as foolish. So to all intents and purposes my messaging history over the past couple of years has disappeared for good.
An event such as this ought to hit me hard. I am the sort of person who saves every piece of communication from friends and colleagues that comes my way. I preserve every letter and card I am sent; I save and archive all emails I exchange (on one of my email accounts there is a folder containing all 5,000 or so emails between myself and an unhinged woman in New York, a ‘romantic’ story I am unfolding elsewhere); you can be sure if you contact me the communication will be archived, even if it is only a pithy email expressing the fervent wish that I ‘rot in hell’ (not, some may be surprised to know, a frequent wish, but it has happened). I archive primarily for archiving’s sake. I have no doubt that of all the many thousands and thousands of emails I preserve I will not look at more than a handful again in my remaining years. But in some way they all constitute a record of my past, and as an historian I instinctively like to preserve such records. And as evidence of my brilliant and witty contribution to the art of texting (if not sexting), the disappearance of my messages now means that people will simply have to take my word for it…
However, the sudden evaporation of my messaging history has happened before with my phone (see here and here for evidence that this issue is not unique to me). And when I upgraded my phone a few years ago, there was no way of retrieving the message archive on my previous phone. So I have long been sceptical of the archiving stability of text messages. Doubtless there are ways to archive were I to devote enough of my precious time to investigating the matter; and I’m told that messages on the iPhone are stored in the Cloud (but since I try to avoid Apple as best I can, this is of no help to me). But in place of doing anything practical about my problem I’ve decided to think about it instead. Two issues seem interesting to me, both in relation to issues raised by digital communication: whether and what we should archive; and the accessibility and security of archives.
Preserving a record of the past is important; even a society which consciously constructs a false version of the past needs its records (a point brought out in George Orwell’s Nineteen Eighty-Four—Winston Smith’s job is to falsify records). Most societies have evolved careful rules and procedures about preserving official documents, court records, important correspondence, minutes, reports, and so on; similarly, companies and businesses are expected to retain records, and most wish to keep an archive. There are good reasons for doing this: the phone-hacking scandal revealed how important from a legal and transparency perspective it was that News International retained its email correspondence (and, amusingly, revealed David Cameron’s struggle to understand what ‘LOL’ means in his text messages to Rebekah Brooks); whereas the failure of the Russian World Cup bid to preserve its emails has hampered the investigation into allegations of FIFA corruption.
But, as any archivist would confirm, it becomes impractical to archive all information—there is simply too much of it. The National Archives at Kew store approximately 11 million documents on 100 miles of shelving, adding at least a mile of shelving each year; to save space at Kew, deep salt mines in Cheshire are now used to hold some records. Even then, records need to be destroyed: the National Archives has drawn up a policy of which records can be discarded—those that are ‘deemed to have no long-term value’. Deciding where to draw the line between those records worthy of long-term preservation and those that can be disposed of as valueless ephemera is, however, fraught with difficulties (not least over who has the power to make these decisions). As many historians know, it is frequently those records which held little value in their contemporary society which go on to become valuable historical documents. A good example of this comes from one of my own areas of research interest: the visual print culture of early modern England. There is a good survival rate of those printed images from the sixteenth and seventeenth centuries at the higher end of the market, since these were the ones valued by collectors. But the survival of cheap prints is much patchier, even though they were almost certainly produced in far greater quantities than the ‘quality’ prints—ignored by connoisseurs and collectors, and regarded as disposable ephemera, many have been lost to us for good. Yet several centuries on, as historians increasingly focus on the everyday life of society and people in the past, it is precisely this low-end material that becomes most valuable as evidence.
Consider this in relation to digital communications. On Twitter it is estimated that 500 million tweets are sent each day, which works out at over 180 billion tweets per year; on Facebook, 12 billion messages are sent per day, and every minute 50,000 links are shared and 243,000 photos uploaded. A fair proportion of all this stuff is the stirring information that someone has decided to have a boiled egg for breakfast, or the inspirational pictures of kittens doing amusing and ‘cute’ things—all of which, one might suppose, epitomizes the very notion of ‘disposable rubbish’. But does it really lack value? Setting aside my regret that I decided to write articles rather than post endless pictures of kittens and cats (the latter being a more certain means of gaining an audience), I find the limitless ephemera flying around the internet to be fascinating, and I’m sure future historians will do too—as an insight into the culture and mentality of the world we live in, as evidence of our concerns, our ways of interacting, our means of dealing with the world and all its pressures, all of this is valuable material. While I fear for the sanity of the future PhD student who decides that ‘The Cultural Meaning of the Early 21st-Century Obsession with Kittens’ makes for a viable research topic (I start losing my mind at the sight of two or three kitten pictures, so I shudder at the idea of trawling daily through thousands of the damn things), there is a potentially interesting subject in this, and it is important that we preserve the archives to enable it to be pursued one day.
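The yearly total follows from the daily figure by simple multiplication; a quick sanity check of the arithmetic, using only the numbers quoted above:

```python
# Daily volumes as quoted in the text; the yearly figure is just day-rate x 365.
tweets_per_day = 500_000_000
facebook_messages_per_day = 12_000_000_000

tweets_per_year = tweets_per_day * 365
print(f"Tweets per year: {tweets_per_year:,}")  # 182,500,000,000 — i.e. 'over 180 billion'

messages_per_year = facebook_messages_per_day * 365
print(f"Facebook messages per year: {messages_per_year:,}")
```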
But will the archives exist for the future historian? My failure to preserve any of the text messages I have sent or received over the past ten years or so suggests that the information we exchange digitally is far from secure or stable. In relation to information management and archiving, this seems to me to be one of the pressing issues of our age: how do we manage and preserve the unimaginably vast amount of information we produce so that future generations, should they wish, can study it in order to understand our present and their past? Rapidly evolving and changing file types, media formats and means of storage do not inspire great confidence that our information will be accessible to future generations. If I put all my documents on a USB stick or CD, what are the chances that in, say, a century’s time there will be the easy means of retrieving these documents—let alone whether Word documents from the early twenty-first century will be compatible with whatever file types are standard in the twenty-second century? In my lifetime music has gone from being stored on vinyl and cassette, to CDs and to MP3s; VHS cassettes are now obsolete; and I haven’t seen a floppy disk drive in nearly twenty years.
I don’t know the solutions to these problems—but I do know that the questions are important. And they are important not simply because of the need to think of ourselves in relation to future generations; they are also important because the control of information is a vital issue of our time. Archivists, librarians, cataloguers—these are the gatekeepers to information. How information is stored, what information is to be preserved and what discarded, how information is organized, who has access to information—these are the concerns of the archivist, the librarian and the cataloguer. Whenever a society decides who the gatekeepers to information should be, it is making a decision about who has power. Clearly I have little power over my text messages as a body of information; a ‘decision’ was made about them by my phone (and possibly other decisions have been made by the NSA and GCHQ too). And that bothers me a lot more than my recent loss of occasional drunken declarations of desire, autocorrect mishaps and insightful comments about the boiled egg I am eating for breakfast.
Measurement and quantification have become the guiding lights of our age. Numbers are becoming the principal means by which we make sense of our lives and our world. Wonderfully, or so it may seem, we can put a numerical value on almost every aspect of our experience: our health, our happiness, our success, all can be rated and compared with the health, happiness and success of others. How liberating it is to work out whether we are happy without relying on such messy and imprecise things as the nuances of feelings and subjective experience! I may feel happy, but am I happy? Best check it according to one of the many ‘scientific’ ways of quantifying it. Even better, I can check my life satisfaction against the national average on David Cameron’s happiness index (happily funded to the tune of £2m per year in these times of austerity). It may even boost my happiness rating to discover that I’m happier than most other people…
It is increasingly hard to resist this brave new world—the numbers insidiously work their way into every area of our lives. How is my writing going? I’d better check the stats on views and visitors and referrers to my blog. Was my teaching last term successful? Let’s look at the average ratings the students gave the course. Am I popular? The number of friends on social media will answer that. Am I in good shape? Best work out my Body Mass Index. Is this meal healthy? I can cross reference its calories and sodium content with recommended daily averages. Which of these books should I buy as a Christmas present? I’ll let the average reviewer ratings help me decide. Is it worth reading this piece of research? Let’s check the ‘impact factor’ of the journal it has been published in… oh, that’s quite a low number, so best not bother to read it then. How absurd is all this measuring of feeling and quantifying of quality? Well, it rates highly on my recently-devised and extremely scientific Absurdity Factor.
It’s not that the formulae, numbers and statistics in themselves are bad: they are simply pieces of information about which we need to exercise critical judgment, to make evaluations as to their worth or not. It is the growing tendency to dispense with evaluation that is the problem: doubting and foregoing our ability to make subjective judgments, we instead treat the numbers and ratings as if they are reliable, scientific truths. In the process—and here it gets really serious—careers and lives are destroyed. Few employees are free from the use of performance indicators, the metrics that are used to measure whether someone is doing their job well. As I’ve discussed elsewhere, higher education has become obsessed with them: the quality of scholarly research is judged not on reading it but on metrics; the performance, futures and lives of academics hinge on a set of numbers that are hopeless at assessing such things as the quality of teaching and research but which are beloved of university managers, heads of department and HR departments as the definitive guide identifying whom to hire and whom to fire. The ‘successful’ academic of the future is likely to be the one who has swallowed the notion that quality is no more than a number and that there are ways to ‘game’ the metrics in order to achieve the required number—such as having your research published in a journal which is part of a citations cartel.
Both the value and the limitations of quantification are familiar to most historians. In particular, quantitative methods have become an important part of the armoury of the social historian. It would be inconceivable for a history department not to teach social history, but this has only been the case since the 1960s. Before then social history was a niche, poorly-valued area. In part this was because of prevailing attitudes among historians (‘history is about great leaders and high politics; who wants to know about the dreary lives of common people?’), but it was also because there were real difficulties researching the subject—there was no absence of useful data about many periods but there was a lack of adequate tools to make meaningful interpretations about past societies. For the historian interested in early modern English society, for example, plenty of records and documents exist (parish registers, wills, inventories, accounts, court records, manorial records, diaries, letters, etc.), but for those sources to provide more than isolated snapshots of social life would require time, labour and resources far in excess of that available to any single historian. But then along came computers and databases, and with them the birth of cliometrics.
Cliometrics (a coinage joining the ancient muse of history, Clio, with metrics, the system or standard of measuring things) involved applying to history statistical, mathematical and quantitative models that had been developed by economists. Large datasets (a population census is an example) could be fed into a computer and then interrogated, something no individual historian could have done before the advent of digital technology. The impact on historiography was huge: whole new areas of the past could be opened up to investigation, and general hypotheses could be framed and tested within minutes rather than reached only after years of painstaking and gradual accumulation of evidence. Historians became excited about the possibilities: assemble a body of data, feed it into a computer, ask the right question, and an answer will be provided in the time it takes to make a cup of tea. Even better, it was thought, the answers would be scientific. The distinguished French historian, Emmanuel Le Roy Ladurie, claimed that ‘history that is not quantifiable cannot claim to be scientific’ and envisaged a future in which all historians ‘will have to be able to programme a computer in order to survive’ (as quoted in Richard J. Evans, In Defence of History (London: Granta, 1997), p. 39.)
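The kind of interrogation involved can be sketched in a few lines. The following is a toy illustration only—the ‘parish register’ data is invented—but it shows the shape of the exercise: structured records go in, an aggregate question is asked, and an answer comes back in seconds rather than years.

```python
from collections import Counter

# A toy 'parish register': (year, event) pairs. Invented data, purely illustrative
# of the kind of dataset the cliometricians fed into their computers.
records = [
    (1561, "baptism"), (1561, "burial"), (1563, "baptism"),
    (1572, "baptism"), (1575, "burial"), (1577, "burial"),
    (1578, "baptism"), (1579, "burial"),
]

# Aggregate question: how many baptisms and burials per decade?
# By hand, across thousands of parishes, this took lifetimes; by machine, seconds.
by_decade = Counter((year // 10 * 10, event) for year, event in records)

for (decade, event), n in sorted(by_decade.items()):
    print(f"{decade}s: {n} {event}(s)")
```

Scale the same operation up to millions of genuine entries and you have, in essence, the method behind the demographic reconstructions discussed below.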
One of the earliest and most impressive applications of cliometrics stemmed from the Cambridge Group for the History of Population and Social Structure, founded in 1964 by Peter Laslett and Tony Wrigley. Using English parish records (one of the legacies of Thomas Cromwell’s administrative reforms was the commencement of regular parish record-keeping from 1538) as a dataset, the outstanding achievement of the Cambridge Group was Wrigley and R.S. Schofield’s The Population History of England 1541-1871: A Reconstruction (Cambridge: Cambridge University Press, 1981). In addition to presenting huge amounts of data for pre-industrial England about birth, marriage and death rates, population size and growth, mortality crises, and much else besides, their work demolished various myths and assumptions about the past. For example, they conclusively proved that the nuclear (rather than extended) family was the overwhelming norm, and that most couples had no more than two surviving children (it was only the wealthy who tended to have large broods), rendering pointless the surprisingly common assumption that the social and family conditions of the developing world are comparable with those of the pre-industrial world. For historians of early modern social and family life Wrigley and Schofield’s research is one of the fundamental starting points for inquiry.
However, a good historian (as scientifically defined according to my recently-devised Good Historian Factor) would not consider the cliometrics of Wrigley and Schofield as the end point—unlike the policy makers and managers who see metrics as the end point. The historian would understand that it is one thing to quantify family structure or life expectancy, quite another to assess the quality of family life or the effects on emotions and thought of high mortality rates. In order to do the latter it is necessary to look beyond the numbers and do some old-fashioned source evaluation: the historian would need to engage in critical analysis of diaries, letters and other texts, to assess what images and artefacts tell us, and to think broadly with concepts, methods and theories. What results is not a number but an interpretation, and (much to the dismay of the policy makers and managers) not one that is scientific or definitive but one that is open to questions, challenges, discussion and debate.
Some of the dangers of placing too much faith in cliometrics can be seen in Time on the Cross: The Economics of American Negro Slavery (New York, 1974), an attempt to apply a quantitative analysis to the history of American slavery by two economic historians, Robert Fogel and Stanley Engerman. The work was in two volumes, the first presenting historical analysis, the second the quantitative data. Based on the data the authors reached several controversial conclusions: they argued that southern slavery was more efficient than the free agriculture in the north, that the material conditions of slaves were comparable with those of free labourers, that the economy in the south grew more rapidly than that in the north, and that slaves were rewarded for their labour more highly than had previously been thought. Although some critics questioned the quality of the data used by Fogel and Engerman, most acknowledged that the quantitative methodology was broadly sound. What was unsound was the failure to present the information in a qualitative way. The supposedly ‘objective’ analysis of American slavery, with its hard data pointing to growth within the southern economy and to work conditions comparable to those of free labourers, ends up presenting slavery in a benign light—however much the authors themselves were clearly and genuinely opposed to any justification of slavery. A much better historical approach would have been to place more emphasis on the social and political context of slavery, and to assess its psychological and cultural aspects. For example, the authors presented the statistical finding that slaves were whipped 0.7 times per year on average. On its own such a finding might suggest that the slave economy was anything but unremittingly brutal, and maybe was not so bad after all. But what that figure (and Fogel and Engerman) fails to tell us is what the whip, and the threat of the whip, meant psychologically to the slave. 
A more significant impact on the life experience of the slave than the 0.7 whippings per year was more likely the fear of the whip and the lack of freedom to escape this fear. Thoughts, feelings, mental states are impossible to quantify—but they are surely essential to an historical understanding of slavery.
The policy makers and managers are clearly not historians (or else they have a dismally low Good Historian Factor). If they were, then they would see metrics as interesting and often useful information (pretty much like all information, therefore), but also limited in what it tells us; they would appreciate that metrics can be distorted by insufficient or manipulated data; they would see how essential it is that metrics are only one part (and probably a small part) of how to understand something, and that for metrics to be of any use there needs to be qualitative interpretation; they would recognize that to judge the quality of research (or anything else) solely by using quantitative approaches rates a high number on my recently-devised, objective and scientific Stupidity Factor.