Metaphor and simile can be used to create mood. In Karl Ove Knausgaard’s A Death in the Family, the narrator describes stepping outside for a cigarette break, in the midst of clearing out the house of his recently deceased father. There he sees, ‘plastic bottles lying on their sides on the brick floor dotted with raindrops. The bottlenecks reminded me of muzzles, as if they were small cannons with their barrels pointing in all directions.’ Knausgaard’s choice of language adds to the general deathly, angry aura of the passage by flicking unexpectedly at the reader’s models of guns.
Descriptive masters such as Charles Dickens manage to hit our associative models again and again, creating wonderful crescendos of meaning, with the use of extended metaphors. Here he is, at the peak of his powers, introducing us to Ebenezer Scrooge in A Christmas Carol.
The cold within him froze his old features, nipped his pointed nose, shrivelled his cheek, stiffened his gait; made his eyes red, his thin lips blue; and spoke out shrewdly in his grating voice. A frosty rime was on his head, and on his eyebrows, and his wiry chin. He carried his own low temperature always about with him; he iced his office in the dog-days; and didn’t thaw it one degree at Christmas. External heat and cold had little influence on Scrooge. No warmth could warm, nor wintry weather chill him. No wind that blew was bitterer than he, no falling snow was more intent upon its purpose, no pelting rain less open to entreaty.
The author and journalist George Orwell knew the recipe for a potent metaphor. In the totalitarian milieu of his novel Nineteen Eighty-Four, he describes the small room in which the protagonist Winston and his partner Julia could be themselves without the state spying on them as ‘a world, a pocket of the past where extinct animals could walk.’
It won’t come as much of a surprise to discover that the interminably correct Orwell was even right when he wrote about writing. ‘A newly invented metaphor assists thought by evoking a visual image,’ he suggested, in 1946, before warning against the use of that ‘huge dump of worn-out metaphors which have lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves.’
Researchers recently tested this idea that clichéd metaphors become ‘worn-out’ by overuse. They scanned people reading sentences that included action-based metaphors (‘they grasped the idea’), some of which were well-worn and others fresh. ‘The more familiar the expression, the less it activated the motor system,’ writes the neuroscientist Professor Benjamin Bergen. ‘In other words, over their careers, metaphorical expressions come to be less and less vivid, less vibrant, at least as measured by how much they drive metaphorical simulations.’
1.8
In a classic 1932 experiment, the psychologist Frederic Bartlett read a traditional Native American story to participants and asked them to retell it, from memory, at various intervals. The War of the Ghosts was a brief, 330-word tale about a boy who was reluctantly compelled to join a war party. During the battle, a warrior warned the boy that he had been shot. But, looking down, the boy couldn’t see any wounds on his body. The boy concluded that all the warriors were actually just ghosts. The next morning the boy’s face contorted, something black came out of his mouth, and he dropped down dead.
The War of the Ghosts had various characteristics that were unusual, at least for the study’s English participants. When they recalled the tale over time, Bartlett found their brains did something interesting. They simplified and formalised the story, making it more familiar by smoothing away many of its ‘surprising, jerky and inconsequential’ qualities. They removed bits, added other bits and reordered still more. ‘Whenever anything appeared incomprehensible, it was either omitted or explained,’ in much the same way that an editor might fix a confusing story.
Turning the confusing and random into a comprehensible story is an essential function of the storytelling brain. We’re surrounded by a tumult of often chaotic information. In order to help us feel in control, brains radically simplify the world with narrative. Estimates vary, but it’s believed the brain processes around 11 million bits of information at any given moment, yet makes us consciously aware of no more than forty. The brain sorts through an abundance of information and decides what salient information to include in its stream of consciousness.
There’s a chance you’ve been made aware of these processes when, in a crowded room, you’ve suddenly heard someone in a distant corner speaking your name. This experience suggests the brain’s been monitoring myriad conversations and has decided to alert you to the one that might prove salient to your wellbeing. It’s constructing your story for you: sifting through the confusion of information that surrounds you, and showing you only what counts. This use of narrative to simplify the complex is also true of memory. Human memory is ‘episodic’ (we tend to experience our messy pasts as highly simplified sequences of causes and effects) and ‘autobiographical’ (those connected episodes are imbued with personal and moral meaning).
There’s no single part of the brain that’s responsible for such story making. While most areas have specialisms, brain activity is far more dispersed than scientists once thought. That said, we wouldn’t be the storytellers we are if it wasn’t for its most recently evolved region, the neocortex. It’s a thin layer, about the depth of a shirt collar, folded in such a way that fully three feet of it is packed into a layer beneath your forehead. One of its critical jobs is keeping track of our social worlds. It helps interpret physical gestures, facial expressions and supports theory of mind.
But the neocortex is more than just a people-processor. It’s also responsible for complex thought, including planning, reasoning and making lateral connections. When the psychologist Professor Timothy Wilson writes that one of the main differences between us and other animals is that we have a brain that’s expert at constructing ‘elaborate theories and explanations about what is happening in the world and why,’ he’s talking principally about the neocortex.
These theories and explanations often take the form of stories. One of the earliest we know of tells of a bear being chased by three hunters. The bear is hit. It bleeds over the leaves on the forest floor, leaving behind it all the colours of autumn, then manages to escape by climbing up a mountain and leaping into the sky, where it becomes the constellation Ursa Major. Versions of the ‘Cosmic Hunt’ myth have been found in Ancient Greece, northern Europe, Siberia, and in the Americas, where this particular one was told by the Iroquois Indians. Because of this pattern of spread, it’s believed it was being told when there was a land bridge between what’s now Alaska and Russia. That dates it between 13,000 and 28,000 BC.
The Cosmic Hunt myth reads like a classic piece of human bullshit. Perhaps it originated in a dream or shamanistic vision. But, just as likely, it started when someone, at some point, asked someone else, ‘Hey, why do those stars look like a bear?’ And that person gave a sage-like sigh, leaned on a branch and said, ‘Well, it’s funny you should ask …’ And here we are, 20,000 years later, still telling it.
When posed with even the deepest questions about reality, human brains tend towards story. What is a modern religion if not an elaborate neocortical ‘theory and explanation about what’s happening in the world and why’? Religion doesn’t merely seek to explain the origins of life, it’s our answer to the most profound questions of all: What is good? What is evil? What do I do about all my love, guilt, hate, lust, envy, fear, mourning and rage? Does anybody love me? What happens when I die? The answers don’t naturally emerge as data or an equation. Rather, they typically have a beginning, a middle and an end and feature characters with wills, some of them heroic, some villainous, all co-starring in a dramatic, changeful plot built from unexpected events that have meaning.
To understand the basis of how the brain turns the superabundance of information that surrounds it into a simplified story is to understand a critical rule of storytelling. Brain stories have a basic structure of cause and effect. Whether it’s memory, religion, or the War of the Ghosts, the brain rebuilds the confusion of reality into simplified theories of how one thing causes another. Cause and effect is a fundamental of how we understand the world. The brain can’t help but make cause and effect connections. It’s automatic. We can test it now. BANANAS. VOMIT. Here’s the psychologist Professor Daniel Kahneman describing what just happened in your brain: ‘There was no particular reason to do so, but your mind automatically assumed a temporal sequence and a causal connection between the words bananas and vomit, forming a sketchy scenario in which bananas caused the sickness.’
As Kahneman’s test shows, the brain makes cause and effect connections even where there are none. The power of this cause and effect story-making was explored in the early twentieth century by the Soviet filmmakers Vsevolod Pudovkin and Lev Kuleshov, who juxtaposed film of a famous actor’s expressionless face with stock footage of a bowl of soup, a dead woman in a coffin and a girl playing with a toy bear. They then showed each juxtaposition to an audience. ‘The result was terrific,’ recalled Pudovkin. ‘The public raved about the acting of the artist. They pointed out the heavy pensiveness of his mood over the forgotten soup, were touched and moved by the deep sorrow with which he looked on the dead woman, and admired the light, happy smile with which he surveyed the girl at play. But we knew that in all three cases the face was exactly the same.’
Subsequent experiments confirmed the filmmakers’ findings. When shown cartoons of simple moving shapes, viewers helplessly inferred animism and built cause-and-effect narratives about what was happening: this ball is bullying that one; this triangle is attacking this line, and so on. When presented with discs moving randomly on a screen, viewers imputed chase sequences where there were none.
Cause and effect is the natural language of the brain. It’s how it understands and explains the world. Compelling stories are structured as chains of causes and effects. A secret of bestselling page-turners and blockbusting scripts is their relentless adherence to forward motion, one thing leading directly to another. In 2005, the Pulitzer prizewinning playwright David Mamet was captaining a TV drama called The Unit. After becoming frustrated with his writers producing scenes with no cause and effect – that were, for instance, simply there to deliver expository information – he sent out an angry ALL CAPS memo, which leaked online (I’ve de-capped what follows to save your ears): ‘Any scene which does not both advance the plot and standalone (that is, dramatically, by itself, on its own merits) is either superfluous or incorrectly written,’ he wrote. ‘Start, every time, with this inviolable rule: the scene must be dramatic. It must start because the hero has a problem, and it must culminate with the hero finding him or herself either thwarted or educated that another way exists.’
The issue isn’t simply that scenes without cause and effect tend to be boring. Plots that play too loose with cause and effect risk becoming confusing, because they’re not speaking in the brain’s language. This is what the screenwriter of The Devil Wears Prada, Aline Brosh McKenna, suggested when she said, ‘You want all your scenes to have a “because” between them, and not an “and then”.’ Brains struggle with ‘and then’. When one thing happens over here, and then we’re with a woman in a car park who’s just witnessed a stabbing, and then there’s a rat in Mothercare in 1977, and then there’s an old man singing sea shanties in a haunted pear orchard, the writer is asking a lot of people.
But sometimes this is on purpose. An essential difference between commercial and literary storytelling lies in their use of cause and effect. Change in mass-market story is quick and clear and easily understandable, while in high literature it’s often slow and ambiguous and demands plenty of work from the reader, who has to ponder and decode the connections for themselves. Novels such as Marcel Proust’s Swann’s Way are famously meandering and include, for example, a description of hawthorn blossom that lasts for well over a thousand words. (‘You are fond of hawthorns,’ one character remarks to the narrator, halfway through.) The art-house films of David Lynch are frequently referred to as ‘dreamlike’ because, like dreams, there’s often a dearth of logic to their cause and effect.
Those who enjoy such stories are more likely to be expert readers, those lucky enough to have been born with the right kinds of minds, and raised in learning environments that nurtured the skill of picking up the relatively sparse clues to meaning left by such storytellers. I also suspect they tend to be higher than average in the personality trait ‘openness to experience’, which strongly predicts an interest in poetry and the arts (and also ‘contact with psychiatric services’). Expert readers understand that the patterns of change they’ll encounter in art-house films and literary or experimental fiction will be enigmatic and subtle, the causes and effects so ambiguous that they become a wonderful puzzle that stays with them months and even years after reading, ultimately becoming the source of meditation, re-analysis and debate with other readers and viewers – why did characters behave as they did? What was the filmmaker really saying?
But all storytellers, no matter who their intended audience, should beware of over-tightening their narratives. While it’s dangerous to leave readers feeling confused and abandoned, it’s just as risky to over-explain. Causes and effects should be shown rather than told; suggested rather than explained. Readers should be free to anticipate what’s coming next and able to insert their own feelings and interpretations into why that just happened and what it all means. These gaps in explanation are the places in story in which readers insert themselves: their preconceptions; their values; their memories; their connections; their emotions – all become an active part of the story. No writer can ever transplant their neural world perfectly into a reader’s mind. Rather, their two worlds mesh. Only by the reader insinuating themselves into a work can it create a resonance that has the power to shake them as only art can.
1.9
So our mystery is solved. We’ve discovered where a story begins: with a moment of unexpected change, or with the opening of an information gap, or likely both. As it happens to a protagonist, it happens to the reader or viewer. Our powers of attention switch on. We typically follow the consequences of the dramatic change as they ripple out from the start of the story in a pattern of causes and effects whose logic will be just ambiguous enough to keep us curious and engaged. But while this is technically true, it’s actually only the shallowest of answers. There’s obviously more to storytelling than this rather mechanical process.
A similar observation is made by a story-maker near the start of Herman J. Mankiewicz and Orson Welles’s 1941 cinema classic Citizen Kane. The film opens with change and an information gap: the death of the mogul Charles Foster Kane, who drops a glass globe that contains a little snow-covered house and utters a single, mysterious word: rosebud. We’re then presented with a newsreel that documents the raw facts of his seventy years of life: Kane was a well-known yet controversial figure who was extraordinarily wealthy and once owned and edited the New York Daily Inquirer. His mother ran a boarding house and the family fortune came after a defaulting tenant left her a gold mine, the Colorado Lode, which had been assumed worthless. Kane was twice married, twice divorced, lost a son and made an unsuccessful attempt at entering politics, before dying a lonely death in his vast, unfinished and decaying palace that, we’re told, was, ‘since the pyramids, the costliest monument a man has built to himself’.
With the newsreel over, we meet its creators – a team of cigarette-smoking newsmen who, it turns out, have just finished their film and are showing it to their boss Rawlston for his editorial comments. And Rawlston is not satisfied. ‘It isn’t enough to tell us what a man did,’ he tells his team. ‘You’ve got to tell us who he was … How is he different from Ford? Or Hearst, for that matter? Or John Doe?’
That newsreel editor was right (as editors are with maddening regularity). We’re a hyper-social species with domesticated brains that have been engineered specifically to control an environment of humans. We’re insatiably inquisitive, beginning with our tens of thousands of childhood questions about how one thing causes another. Being a domesticated species, we’re most interested of all in the cause and effect of other people. We’re endlessly curious about them. What are they thinking? What are they plotting? Who do they love? Who do they hate? What are their secrets? What matters to them? Why does it matter? Are they an ally? Are they a threat? Why did they do that irrational, unpredictable, dangerous, incredible thing? What drove them to build ‘the world’s largest pleasure ground’ on top of a manmade ‘private mountain’ that contained the most populous zoo ‘since Noah’ and a ‘collection of everything so big it can never be catalogued’? Who is the person really? How did they become who they are?
Good stories are explorations of the human condition; thrilling voyages into foreign minds. They’re not so much about events that take place on the surface of the drama as they are about the characters that have to battle them. Those characters, when we meet them on page one, are never perfect. What arouses our curiosity about them, and provides them with a dramatic battle to fight, is not their achievements or their winning smile. It’s their flaws.
CHAPTER TWO:
THE FLAWED SELF
2.0
There’s something you should know about Mr B. He’s being watched by the FBI. They film him constantly and in secret, then cut the footage together and broadcast it to millions as ‘The Mr B Show’. This makes life rather awkward for Mr B. He showers in swimming trunks and dresses beneath bedsheets. He hates talking to others, as he knows they’re actors hired by the FBI to create drama. How can he trust them? He can’t trust anyone. No matter how many people explain why he’s wrong, he just can’t see it. He finds a way to dismiss each argument they present to him. He knows it’s true. He feels it’s true. He sees evidence for it everywhere.
There’s something else you should know about Mr B. He’s psychotic. One healthy part of his brain, writes the neuroscientist Professor Michael Gazzaniga, ‘is trying to make sense out of some abnormalities going on in another’. The malfunctioning part is causing ‘a conscious experience with very different contents than would normally be there, yet those contents are what constitute Mr B’s reality and provide experiences that his cognition must make sense of.’
Because it’s warped by faulty signals sent out by the unhealthy section of his brain, the story Mr B is telling about the world, and his place within it, is badly mistaken. It’s so mistaken he’s no longer able to adequately control his environment, so doctors and care staff have to do it on his behalf, in a psychiatric institution.
As unwell as he is, we’re all a bit like Mr B. The controlled hallucination inside the silent, black vault of our skulls that we experience as reality is warped by faulty information. But because this distorted reality is the only reality we know, we just can’t see where it’s gone wrong. When people plead with us that we’re mistaken or cruel and acting irrationally, we feel driven to find a way to dismiss each argument they present to us. We know we’re right. We feel we’re right. We see evidence for it everywhere.
These distortions in our cognition make us flawed. Everyone is flawed in their own interesting and individual ways. Our flaws make us who we are, helping to define our character. But our flaws also impair our ability to control the world. They harm us.
At the start of a story, we’ll often meet a protagonist who is flawed in some closely defined way. The mistakes they’re making about the world will help us empathise with them. We’ll warm to their vulnerability. We’ll become emotionally engaged in their struggle. When the dramatic events of the plot coax them to change we’ll root for them.
The problem is, in fiction and in life, changing who we are is hard. The insights we’ve learned from neuroscience and psychology begin to show us exactly why it’s hard. Our flaws – especially the mistakes we make about the human world and how to live successfully within it – are not simply ideas about this and that which we can identify easily and choose to shrug off. They’re built right into our hallucinated models. Our flaws form part of our perception, our experience of reality. This makes them largely invisible to us.
Correcting our flaws means, first of all, managing the task of actually seeing them. When challenged, we often respond by refusing to accept our flaws exist at all. People accuse us of being ‘in denial’. Of course we are: we literally can’t see them. When we can see them, they all too often appear not as flaws at all, but as virtues. The mythologist Joseph Campbell identified a common plot moment in which protagonists ‘refuse the call’ of the story. This is often why.
Identifying and accepting our flaws, and then changing who we are, means breaking down the very structure of our reality before rebuilding it in a new and improved form. This is not easy. It’s painful and disturbing. We’ll often fight with all we have to resist this kind of profound change. This is why we call those who manage it ‘heroes’.
There are various routes by which characters and selves become unique and uniquely flawed, and a basic understanding of them can be of great value to storytellers. One major route involves those moments of change. The brain constructs its hallucinated model of the world by observing millions of instances of cause and effect, then constructing its own theories and assumptions about how one thing caused the other. These micro-narratives of cause and effect – more commonly known as ‘beliefs’ – are the building blocks of our neural realm. The beliefs it’s built from help make up the world that we inhabit and our understanding of who we are. They feel personal to us because they are us.
But many of them will be wrong. Of course, the controlled hallucination we live inside is not as distorted as Mr B’s. Nobody, however, is right about everything. Nevertheless, the storytelling brain wants to sell us the illusion that we are. Think about the people closest to you. There won’t be a soul among them with whom you’ve never disagreed. You know she’s slightly wrong about that, and he’s got that wrong, and don’t get her started on that. The further you travel from those you admire, the more wrong people become until the only conclusion you’re left with is that entire tranches of the human population are stupid, evil or insane. Which leaves you, the single living human who’s right about everything – the perfect point of light, clarity and genius who burns with godlike luminescence at the centre of the universe.
Hang on, that can’t be right. You must be wrong about something. So you go on a hunt. You count off your most precious beliefs – the ones that really matter to you – one by one. You’re not wrong about that and you’re not wrong about that and you’re certainly not wrong about that or that or that or that. The insidious thing about your biases, errors and prejudices is that they appear as real to you as Mr B’s delusions appear to him. It feels as if everyone else is ‘biased’ and it’s only you that sees reality as it actually is. Psychologists call this ‘naive realism’. Because reality seems clear and obvious and self-evident to you, those who claim to see it differently must be idiots or lying or morally derelict. The characters we meet at the start of story are, like most of us, living just like this – in a state of profound naivety about how partial and warped their hallucination of reality has become. They’re wrong. They don’t know they’re wrong. But they’re about to find out …
If we’re all a bit like Mr B then Mr B is, in turn, like the protagonist in Andrew Niccol’s screenplay, The Truman Show. It tells of thirty-year-old Truman Burbank, who’s come to believe his whole life is staged and controlled. But, unlike Mr B, he’s right. The Truman Show is not only real, it’s being broadcast, twenty-four hours a day, to millions. At one point, the show’s executive producer is asked why he thinks it’s taken Truman so long to become suspicious of the true nature of his world. ‘We accept the reality of the world with which we’re presented,’ he answers. ‘It’s as simple as that.’