The charmlessness of Utopia: Channel Four’s Naked Attraction

 

Illustration to a 1730 edition of Thomas More’s Utopia

Once, on a first date, I had an interesting exchange along the following lines: my date mentioned in passing a spreadsheet that she was using to track and rate her dates. I assumed she was joking, and said as much. But no, she insisted, she was being serious: she had an entire spreadsheet on which she calculated according to various categories how attractive a man was. I laughed. ‘Why are you laughing?’ Ms. Spreadsheet asked. ‘Because that’s not how attraction works’, I replied, ‘it’s not something that can be reduced to numbers. It’s not a science that can measure things; it’s a mystery, like the way the best poetry or the best art is a mystery.’ But she responded by telling me (a) that I was wrong, attraction is a science, no more, no less, and (b) that I was scoring quite well according to her Excel calculations. Ms. Spreadsheet was an enjoyable date.

Naked Attraction, Channel 4’s new dating show, purports to take this scientific approach to dating and reduce it right down to its basic, logical extreme; spreadsheets aren’t involved, but if they were they would contain categories such as ‘legs’, ‘penis’, ‘vagina’, ‘bottom’, ‘breasts’ and ‘face’. By dissecting humans down to their constituent and naked physical parts, it suggests that we can get closer to finding the real basis of attraction. The programme is rooted in two ideas: first, that attraction is a (literally) nakedly physical matter; second, that there is a core person or self beneath all the clothing, jewellery, small talk and movement we are socially obliged to display, that these are just so many ways of concealing who we essentially are. Both these ideas are fundamentally wrong.

The concept of the show is remarkably simple: six naked participants are gradually revealed before one clothed participant; the latter whittles the six down to two based on their physical attractiveness before joining the remaining two in full nudity and selecting which of the two to go on a date with. It sounds dull; in reality it is duller than it sounds. When the most exciting moments consist of brief bits of rubbish pop science (‘some scientists think that men with symmetrical faces have healthier sperm’—that sort of profundity) then you know that the concept is terrible. Many viewers will doubtless watch Naked Attraction in the hope that it is mildly sexy or erotic. All bar teenage boys will likely be disappointed; they will find more sexiness and eroticism if they turn over to BBC2 halfway through to watch Newsnight (and no, I do not have a fetish about Newsnight).

The participants in Naked Attraction gamely try to justify the programme by using words such as ‘empowering’ or ‘new-found confidence’, and offer Twitter-size arguments that the show delves to the deeper core of dating. On the evidence of the first episode a rather different conclusion can be drawn. Naked Attraction is tedious, dispiriting superficiality. This is dating for the empty-headed consumer generation who think they are being edgy, bright and liberal but are in fact being boring, stupid and constrained. It is dating for those who do not understand sexuality and the erotic, who think that the dismal mechanics of pornography amount to erotica. At one point a participant is described as having a body like a figure from Botticelli—I suspect the person offering this comment was confused about her artists, since the body was nothing like a Botticellian figure but a lot like a Rubens nude—but the gulf between the eroticism of Botticelli (and Rubens) and Naked Attraction is immense. Tellingly, the Rubenesque/Botticellian participant was the first of the six to be rejected.

We can, however, find in the Renaissance an interesting historical and philosophical ancestor to Naked Attraction. In Thomas More’s Utopia (1516) there is the following account of a social custom among prospective brides and grooms on the fictional island:

When they’re thinking of getting married, they do something that seemed to us quite absurd, though they take it very seriously. The prospective bride, no matter whether she’s a spinster or a widow, is exhibited stark naked to the prospective bridegroom by a respectable married woman, and a suitable male chaperon shows the bridegroom naked to the bride. When we implied by our laughter that we thought it a silly system, they promptly turned the joke against us.

            ‘What we find so odd,’ they said, ‘is the silly way these things are arranged in other parts of the world. When you’re buying a horse, and there’s nothing at stake but a small sum of money, you take every possible precaution. The animal’s practically naked already, but you firmly refuse to buy until you’ve whipped off the saddle and all the rest of the harness, to make sure there aren’t any sores underneath. But when you’re choosing a wife, an article that for better or worse has got to last you a lifetime, you’re unbelievably careless. You don’t even bother to take it out of its wrappings. You judge the whole woman from a few square inches of face, which is all you can see of her, and then proceed to marry her—at the risk of finding her most disagreeable, when you see what she’s really like. (Thomas More, Utopia, trans. by Paul Turner (London: Penguin, 2003), p. 84)

Utopia is a description of an ideal society. If it really existed it would be the dullest and most oppressive place on earth. The Utopians have reduced everything to worthiness and reason; they set no store by gold and jewels; they see no point in fashion; they do not gamble; their leisure consists of improving pursuits such as mental games and study. Everyone is equal and no-one is idle: ‘There’s never any excuse for idleness. There are also no wine-taverns, no ale-houses, no brothels, no secret meeting-places. Everyone has his eye on you, so you’re practically forced to get on with your job, and make some proper use of your spare time.’ (p. 65) North Korea is probably the country that has come closest to realizing More’s utopian fantasy.

Naked Attraction is not, of course, the first step on the way to a nightmarish North Korean future. Judging from its first outing, my guess is that the only place Naked Attraction is heading towards is the long list of programmes that are so dire they never get a second series. I am confidently hopeful that this is the case. If I am wrong, if Naked Attraction actually resonates with viewers as a telling and zeitgeisty show that reflects their own thoughts about dating, attraction and romance, then there may be grounds for some despair. For, if the show really does tap into our sexual and human values, what would it be telling us?

It would be telling us that imagination, subtlety, mystery, complexity and the erotic are on the way out. It reduces attraction to dreary talk about the shape of someone’s penis, the amount of body hair they have, or whether their nipples are good for flicking. (I guarantee that it is more exciting reading the previous sentence than watching these things being said on the programme.) It would be telling us that those keen on metrics—things that are measurable and quantifiable—are triumphing over those who prefer immeasurable qualities. But I’m confident that not even Ms. Spreadsheet, my former date, would have taken metrics to these extremes.

Above all, Naked Attraction would be telling us that many people have a fundamentally misconceived notion of the self. In normal dating, when clothes are worn and movements are observed and conversation is exchanged, we are telling interesting stories about ourselves. Naked Attraction is obsessed with the idea that all these things—clothes, movement, social interaction—are just so many ways of concealing who we really are, that beneath all the layers can be found the pure, true self. In fact, it is the other way round. The self is something we put on. We reveal ourselves through what we wear, the way we move, the things we talk about. There is no self beyond that; there is just an invariably quite dull lump of uninteresting physical matter.

Naked Attraction unwittingly confirms this truth: what makes the show so unsexy and unerotic is its fixation on nothing more than the naked body. It demonstrates that the truly erotic is to be found in the way we transform our dull bodies through movement and clothes, adornment and conversation. Sexiness is the way we all tell stories about ourselves through what we wear, do and say. Attraction is to charm each other through these stories. It is the absence of that which makes the unadorned nakedness of Naked Attraction so unattractive and charmless.

Cliometrics: Or, What Historians Can Tell Us about Metrics

Measurement and quantification have become the guiding lights of our age. Numbers are becoming the principal means by which we make sense of our lives and our world. Wonderfully, or so it may seem, we can put a numerical value on almost every aspect of our experience: our health, our happiness, our success, all can be rated and compared with the health, happiness and success of others. How liberating it is to work out whether we are happy without relying on such messy and imprecise things as the nuances of feelings and subjective experience! I may feel happy, but am I happy? Best check it according to one of the many ‘scientific’ ways of quantifying it. Even better, I can check my life satisfaction against the national average on David Cameron’s happiness index (happily funded to the tune of £2m per year in these times of austerity). It may even boost my happiness rating to discover that I’m happier than most other people…

It is increasingly hard to resist this brave new world—the numbers insidiously work their way into every area of our lives. How is my writing going? I’d better check the stats on views and visitors and referrers to my blog. Was my teaching last term successful? Let’s look at the average ratings the students gave the course. Am I popular? The number of friends on social media will answer that. Am I in good shape? Best work out my Body Mass Index. Is this meal healthy? I can cross-reference its calories and sodium content with recommended daily averages. Which of these books should I buy as a Christmas present? I’ll let the average reviewer ratings help me decide. Is it worth reading this piece of research? Let’s check the ‘impact factor’ of the journal it has been published in… oh, that’s quite a low number, so best not bother to read it then. How absurd is all this measuring of feeling and quantifying of quality? Well, it rates highly on my recently-devised and extremely scientific Absurdity Factor.

It’s not that the formulae, numbers and statistics in themselves are bad: they are simply pieces of information about which we need to exercise critical judgment, to make evaluations as to their worth or not. It is the growing tendency to dispense with evaluation that is the problem: doubting and foregoing our ability to make subjective judgments, we instead treat the numbers and ratings as if they are reliable, scientific truths. In the process—and here it gets really serious—careers and lives are destroyed. Few employees are free from the use of performance indicators, the metrics that are used to measure whether someone is doing their job well. As I’ve discussed elsewhere, higher education has become obsessed with them: the quality of scholarly research is judged not on reading it but on metrics; the performance, futures and lives of academics hinge on a set of numbers that are hopeless at assessing such things as the quality of teaching and research but which are beloved of university managers, heads of department and HR departments as the definitive guide identifying whom to hire and whom to fire. The ‘successful’ academic of the future is likely to be the one who has swallowed the notion that quality is no more than a number and that there are ways to ‘game’ the metrics in order to achieve the required number—such as having your research published in a journal which is part of a citations cartel.

Both the value and the limitations of quantification are familiar to most historians. In particular, quantitative methods have become an important part of the armoury of the social historian. It would be inconceivable for a history department not to teach social history, but this has only been the case since the 1960s. Before then social history was a niche, poorly-valued area. In part this was because of prevailing attitudes among historians (‘history is about great leaders and high politics; who wants to know about the dreary lives of common people?’), but it was also because there were real difficulties researching the subject—there was no absence of useful data about many periods but there was a lack of adequate tools to make meaningful interpretations about past societies. For the historian interested in early modern English society, for example, plenty of records and documents exist (parish registers, wills, inventories, accounts, court records, manorial records, diaries, letters, etc.), but for those sources to provide more than isolated snapshots of social life would require time, labour and resources far in excess of that available to any single historian. But then along came computers and databases, and with them the birth of cliometrics.

Cliometrics (a coinage joining the ancient muse of history, Clio, with metrics, the system or standard of measuring things) involved applying to history statistical, mathematical and quantitative models that had been developed by economists. Large datasets (a population census is an example) could be fed into a computer and then interrogated, something no individual historian could have done before the advent of digital technology. The impact on historiography was huge: whole new areas of the past could be opened up to investigation, and general hypotheses could be framed and tested within minutes rather than reached only after years of painstaking and gradual accumulation of evidence. Historians became excited about the possibilities: assemble a body of data, feed it into a computer, ask the right question, and an answer will be provided in the time it takes to make a cup of tea. Even better, it was thought, the answers would be scientific. The distinguished French historian, Emmanuel Le Roy Ladurie, claimed that ‘history that is not quantifiable cannot claim to be scientific’ and envisaged a future in which all historians ‘will have to be able to programme a computer in order to survive’ (as quoted in Richard J. Evans, In Defence of History (London: Granta, 1997), p. 39).
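The kind of interrogation described above can be sketched in a few lines of modern code. The records below are entirely invented for illustration (no real parish register is being quoted); the point is only the shape of the operation: feed in a dataset, ask a question, get an aggregate answer in seconds.

```python
# Toy illustration of cliometric-style interrogation: given parish-register
# entries (kind of event, year) -- all data invented -- count baptisms and
# burials per decade, the sort of aggregate question cliometricians asked.
from collections import Counter

records = [
    ("baptism", 1561), ("burial", 1563), ("baptism", 1565),
    ("baptism", 1572), ("burial", 1572), ("baptism", 1578),
]

# Group each event into its decade and tally.
counts = Counter((kind, (year // 10) * 10) for kind, year in records)

for kind, decade in sorted(counts, key=lambda k: (k[1], k[0])):
    print(f"{decade}s: {counts[(kind, decade)]} {kind}(s)")
```

A real project such as the Cambridge Group’s would of course involve millions of entries, record linkage and demographic modelling; the sketch only shows why the computer made questions answerable that a lone historian with a card index could not have attempted.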

One of the earliest and most impressive applications of cliometrics stemmed from the Cambridge Group for the History of Population and Social Structure, founded in 1964 by Peter Laslett and Tony Wrigley. Using English parish records (one of the legacies of Thomas Cromwell’s administrative reforms was the commencement of regular parish record-keeping from 1538) as a dataset, the outstanding achievement of the Cambridge Group was Wrigley and R.S. Schofield’s The Population History of England, 1541–1871: A Reconstruction (Cambridge: Cambridge University Press, 1981). In addition to presenting huge amounts of data for pre-industrial England about birth, marriage and death rates, population size and growth, mortality crises, and much else besides, their work demolished various myths and assumptions about the past. For example, they conclusively proved that the nuclear (rather than extended) family was the overwhelming norm, and that most couples had no more than two surviving children (it was only the wealthy who tended to have large broods), rendering pointless the surprisingly common assumption that the social and family conditions of the developing world are comparable with those of the pre-industrial world. For historians of early modern social and family life Wrigley and Schofield’s research is one of the fundamental starting points for inquiry.

A table from Wrigley and Schofield’s Population History of England

However, a good historian (as scientifically defined according to my recently-devised Good Historian Factor) would not consider the cliometrics of Wrigley and Schofield as the end point—unlike the policy makers and managers who see metrics as the end point. The historian would understand that it is one thing to quantify family structure or life expectancy, quite another to assess the quality of family life or the effects on emotions and thought of high mortality rates. In order to do the latter it is necessary to look beyond the numbers and do some old-fashioned source evaluation: the historian would need to engage in critical analysis of diaries, letters and other texts, to assess what images and artefacts tell us, and to think broadly with concepts, methods and theories. What results is not a number but an interpretation, and (much to the dismay of the policy makers and managers) not one that is scientific or definitive but one that is open to questions, challenges, discussion and debate.

Some of the dangers of placing too much faith in cliometrics can be seen in Time on the Cross: The Economics of American Negro Slavery (New York, 1974), an attempt to apply a quantitative analysis to the history of American slavery by two economic historians, Robert Fogel and Stanley Engerman. The work was in two volumes, the first presenting historical analysis, the second the quantitative data. Based on the data the authors reached several controversial conclusions: they argued that southern slavery was more efficient than the free agriculture in the north, that the material conditions of slaves were comparable with those of free labourers, that the economy in the south grew more rapidly than that in the north, and that slaves were rewarded for their labour more highly than had previously been thought. Although some critics questioned the quality of the data used by Fogel and Engerman, most acknowledged that the quantitative methodology was broadly sound. What was unsound was the failure to present the information in a qualitative way. The supposedly ‘objective’ analysis of American slavery, with its hard data pointing to growth within the southern economy and to work conditions comparable to those of free labourers, ends up presenting slavery in a benign light—however much the authors themselves were clearly and genuinely opposed to any justification of slavery. A much better historical approach would have been to place more emphasis on the social and political context of slavery, and to assess its psychological and cultural aspects. For example, the authors presented the statistical finding that slaves were whipped 0.7 times per year on average. On its own such a finding might suggest that the slave economy was anything but unremittingly brutal, and maybe was not so bad after all. But what that figure (and Fogel and Engerman) fails to tell us is what the whip, and the threat of the whip, meant psychologically to the slave. A more significant impact on the life experience of the slave than the 0.7 whippings per year was more likely the fear of the whip and the lack of freedom to escape this fear. Thoughts, feelings, mental states are impossible to quantify—but they are surely essential to an historical understanding of slavery.

The policy makers and managers are clearly not historians (or else they have a dismally low Good Historian Factor). If they were, then they would see metrics as interesting and often useful information (pretty much like all information, therefore), but also limited in what it tells us; they would appreciate that metrics can be distorted by insufficient or manipulated data; they would see how essential it is that metrics is only one part (and probably a small part) of how to understand something, and for metrics to be of any use there needs to be qualitative interpretation; they would recognize that to judge the quality of research (or anything else) solely by using quantitative approaches rates a high number on my recently-devised, objective and scientific Stupidity Factor.

Bullying, Metrics, and the Death of Professor Stefan Grimm

On 25 September 2014, Stefan Grimm, professor of toxicology in the Faculty of Medicine at Imperial College London, was found dead in his home. He was 51. An inquest into his death has been opened and, while no official cause has yet been given, it would appear that he committed suicide. One reason to suspect suicide is that an unusual thing happened on 21 October, nearly four weeks after Grimm’s death. An email with the subject heading ‘How Professors are treated at Imperial College’ was sent from Grimm’s account to about forty internal addresses at the college. It would appear that Grimm had pre-set his account to send this email after his death; nothing has so far suggested that it is anything other than genuine. The email presents a dispiriting and disturbing insight into the state of modern British academia.

Included with Grimm’s message were two emails sent to Grimm by Martin Wilkins, professor of clinical pharmacology and head of the division of experimental medicine at Imperial. All these emails have subsequently been leaked and have now become public knowledge; the Times Higher Education has published them in full alongside an article on Grimm’s death. There has also been extensive commentary in other publications as well as on blogs (notably by David Colquhoun, emeritus professor of pharmacology at University College London).

The essence of the exchange and the circumstances outlined in the emails is as follows. Grimm, an active and successful researcher with over seventy publications to his name, numerous grant applications and significant research funding to his credit, was informally and humiliatingly told by Wilkins that he would be sacked. Wilkins’ emails to Grimm confirm that steps were being taken that would in all likelihood lead to Grimm’s dismissal. With barely disguised insensitivity, Wilkins explained to Grimm ‘that you are struggling to fulfil the metrics of a Professorial post at Imperial College’, and that, unless Grimm’s performance improved, formal disciplinary procedures would be initiated. It is hard not to share Grimm’s bemusement that none of his various publications or research activity seemed to count in the eyes of the college. The final straw seems to have come when Grimm was informed by Wilkins that he would no longer be able to supervise a PhD student who had been accepted by the college and wished to work under Grimm. As Grimm wrote in his email: ‘He [the prospective PhD student] waited so long to work in our group and I will never be able to tell him that this should not now happen. What these guys [Wilkins and Gavin Screaton, then head of medicine at Imperial] don’t know is that they destroy lives. Well, they certainly destroyed mine.’

Anyone who has worked in academia will understand Grimm’s sentiments. This is not a career one falls into for want of better alternatives; it takes years of study, often combined with straitened financial circumstances and self-sacrifice, to acquire the experience, skills and knowledge necessary to work in academia. Why do this? Because of a passion for knowledge and a dedication to furthering it through research and teaching, because academics care intensely about what they do and about its importance. There are times when research goes spectacularly well, but the nature of research is that there are also fallow periods, times when dead-ends are reached and new approaches need to be taken, times when patient, slow groundwork is being established that takes time to yield results. Part of the point of the university is to provide the institutional setting in which teaching and research can be nurtured—in which the commitment, hard work, and ups and downs of the life dedicated to academia will be understood, appreciated, respected and supported. Increasingly, however, universities regard their academic staff as little more than expendable items on a profit/loss balance sheet. Once that mentality has set in among university management, it does not take long for the type of shabby, undermining and humiliating treatment that appears to have been meted out to Grimm to become the rule rather than the exception.

Much of the comment on Grimm’s death and the circumstances surrounding it has focused on two things: the culture of academic bullying; and the absurdity of metrics. There is no doubt that Wilkins emerges from the exchange as a bully (or perhaps as the bullying henchman of Screaton, possibly ‘only following orders’); his approach to management and interpersonal relations comes across as arrogant, callous and deliberately humiliating. Some of the blogs and online commentary suggest that Wilkins is far from unique, and that a culture of bullying is rife not only at Imperial College but across academia. As Colquhoun notes on his blog, there has been a strikingly high number of university staff taking their employers to employment tribunals, and vastly more who have signed gagging orders preventing them from speaking out about their employers—evidence at the very least of widespread problems in employer-employee relations across academia.

The days of collegiality when management might be expected to support their academic staff are fast disappearing. As Grimm notes in his final email, Imperial (although for Imperial almost any university in the UK could be substituted here) ‘is not a university anymore but a business with [a] very few up in the hierarchy… profiteering [while] the rest of us are milked for money’. The culture of university management increasingly sees both academics and students as little more than sources of potential profit. The language used in universities gives it away: academics are expected to think about ‘branding’ and ‘marketization’; business plans and strategies are the new models for how to run an academic department; departments have business managers these days. Universities were originally centres of learning, teaching and research with managerial and bureaucratic structures designed to support that core function; but increasingly learning, teaching and research have assumed the new role of supporting the managerial and bureaucratic corporations that universities have become.

The problem with running universities as corporate businesses is that much of the activity of academics does not fit into a business model. Learning and teaching, for example, are hard to quantify since they do not generate any obvious profits, and thus tend not to be highly valued by management. Student recruitment and retention are seen as important, but not as goods in themselves, rather because high levels of recruitment and retention lead to increased income. Nor does much research sit easily with a business culture. In the older collegial culture it was understood that research needed to be nurtured; researchers often needed time and patience, and they needed support even if their field, however intrinsically important, was not high profile or likely to attract large amounts of funding. Quality, above all, was the key aim. In the current climate productivity and ‘impact’ are the only things that matter. Those academics able to churn out a steady stream of articles are favoured over those whose output is good but may have fallow periods when they need patiently to develop their research without the unremitting and constant pressure of having to publish at regular intervals. Moreover, much research, by its very nature, is an investigation the outcome of which is unclear or uncertain. But modern university managers have little time for this; they want to know even before the research has begun that it will have a significant impact—not on scholarship but on wider society outside. Much valuable research struggles to find a wide audience, yet is important for its long-term contributions to knowledge and understanding; modern university management has minimal interest in such work since it does not fit with their focus on the relentless pursuit of profit. The system favours those researchers who choose obviously high-profile topics, but of such a nature that neither breadth nor depth will get in the way of rapid production. 
The aim, it would seem, is to turn universities into research factories, academics into research machines. Academics who resist the bleak prospect of becoming nothing other than an efficient, productive research machine are marked for redundancy.

University managers will object by saying that they care greatly about quality of research, that in fact all sorts of measures have been designed to assess quality. These are the metrics, the means (it is supposed) by which the performance of an individual or an organization can be measured. Metrics tend to be highly complex—and absurd. (For those interested in why they are absurd, see Colquhoun’s discussion of them here and here.) It would seem obvious to most people that in order to assess the quality of research it might be a good idea actually to read that research. In the increasingly Kafkaesque world of the modern university, however, judgments are made about research not by reading it but according to baroque and opaque performance indicators. Formulae, spreadsheets and number-crunching have replaced old-fashioned concepts of reading and thinking about something in order to consider whether it is valuable or not. How many citations a piece of research has received, where in a journal a piece of research appears, what numerical rating has been assigned to the ‘impact factor’ of a journal, what numerical value has been assigned to the position a researcher’s name appears in the list of authors of the research—out of all these comes an overall numerical value which rates the quality of the research. It is the brave new world in which managers believe they have discovered the secret of quantifying quality without having to think about or understand what it is they are attempting to quantify. It would be like trying to assess the quality of music not by listening to the music itself but by working out a formula which factors in chart success, size of record label and writing credits to generate a (spuriously) scientific number representing quality.
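The kind of composite score described above can be parodied in a few lines. Every weight and input below is invented, which is rather the point: the precision of the arithmetic lends a spurious air of science to numbers that were arbitrary from the start.

```python
# A deliberately absurd composite "research quality" score of the kind the
# text describes: citations, journal impact factor and author-list position
# folded into one number. All weights and inputs are made up for illustration.
def research_quality_score(citations, impact_factor, author_position):
    # Earlier names in the author list score higher; the weights are arbitrary.
    position_weight = 1.0 / author_position
    return 0.5 * citations + 3.0 * impact_factor + 10.0 * position_weight

# Two hypothetical papers: a steadily-cited piece in a modest journal versus
# a barely-cited piece in a "high-impact" journal with first authorship.
slow_classic = research_quality_score(citations=40, impact_factor=1.2, author_position=3)
hot_paper = research_quality_score(citations=4, impact_factor=12.0, author_position=1)

print(slow_classic, hot_paper)
```

On these invented numbers the barely-cited ‘high-impact’ paper comfortably outscores the well-cited one, and at no point has anyone had to read either.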

While hardly bearing comparison with the experience of Stefan Grimm, a former colleague (an academic in the humanities) told me about his own dealings with the new university culture. When asked whether he had any research to submit to the recent Research Excellence Framework (REF) he suggested some articles written over the previous few years. He considered them to be good contributions to scholarship, but of course it was for others to judge; one way they might assess their quality would be to read them. However, his research was immediately dismissed out of hand, without being read, as being unsuitable for the REF: one article because it was co-authored (so much for encouraging collaboration in the humanities!), another because it was an essay in an edited book (that the book was edited by some of the leading scholars in their field meant nothing), a third because it was not in a prestigious enough journal, and a fourth because it was a review article, and again not in a journal with a sufficient international reputation (that the review was intended to make a useful contribution to a broad research area made no difference). Clearly he was a poorly-performing academic by the criteria of the university, notwithstanding the long hours he committed to the job and his extensive and, as was evident from feedback from both students and colleagues, successful teaching and administrative roles. His approach to research and academic work did not fit the REF-model and the current values of university management; thus it was made clear to him that, unless he started complying with the system, he had no realistic future in academia. Despite his dedication and contribution to his university, he has unsurprisingly become disillusioned enough to wonder whether academia is an environment he wants to be in any longer.

The modern values of university management are such that a university will abandon plans for a new building to house a Human Rights Centre of worldwide reputation, replacing it instead with a business school; it will attempt to close down the history of art department; it will suggest putting the Latin American collection up for sale; it will not renew the visiting post of a Nobel laureate; and it will lose a renowned writer and chair of the Man Booker International Prize because it is not prepared to accommodate her roles (the prestige and reputation of which clearly mean nothing to the managers) with the rigid and constantly-monitored targets devised by management. All this at the University of Essex (as recounted by Marina Warner in the London Review of Books, volume 36 number 17, 11 September 2014, pp. 42-3).

It is hardly surprising that such a culture fosters bullying on the part of managers, and stress, anxiety and insecurity among academic staff. Some will argue that this is a recipe for ‘success’: Imperial College is ranked, after all, among the top few universities in the world (using, of course, ranking systems based on yet more absurd metrics). Others may wonder whether the price to be paid for this ‘success’ is worth it: the important research that does not get done because it does not fit the current business model; the excellent teachers who are dispensed with because their work does not fit with the performance metrics; the students who are squeezed for every penny, and the unsavoury scramble for international students who bring in the highest fees; the rewards of long and dedicated service in academia coming in the form of intimidation, humiliation and mass sackings; the human suffering of depression, stress and anxiety among academics that comes in the wake of the managerial culture; and, possibly, the death of Stefan Grimm.