30 December 2007
In school, I hated history. I had to memorise names of people, places and dates, which I never could, and ended up getting low marks in exams. I found history books boring, with the same old stuff written in every book year after year, and every book carrying the same old badly-printed black & white pictures. In those days, in India, good history books with colour photos were few and expensive. So, I dropped history as soon as I could and decided to take up science. Much later in life, and I don’t know how it happened, I began to take an interest in history, and even anthropology.
The fact that historians are opening up to make their work more ‘fictional’ is an interesting development. I’m not totally against it. In fact, remembering my history classes in school, I believe it’s a happy circumstance to be in. I feel this ‘opening up’ reaches out to more people and attracts them to a subject that is considered boring. And, I’m not one to complain about this. I remember my excitement when I read Jacob Bronowski’s ‘The Ascent of Man’ some 20 years ago. It was a wonderful presentation of the history of science; and, perhaps, that’s when my interest in history began.
Science has always adopted various routes to popularise difficult-to-understand subjects and new discoveries. Books (both non-fiction and sci-fi) and documentary films on, say, the theory of evolution, particle physics and nanotechnology are just a few good examples of this approach. Mind you, this doesn’t mean the good, hard work of historians and scientists should be discredited for being too factual and uninteresting – and, therefore, unpopular. Nor does it mean fiction should overrule historical and scientific facts for the sake of popularity.
29 December 2007
History is not 100% fiction
There seems to be a debate in historical circles surrounding the idea that history is fiction. Historians influenced by the deconstructionist movement, in particular, believe that a historical narrative is not much different from fiction – that, at the end of the day, what historians do is put together whatever evidence there is and try to make sense of it in narrative form, which is exactly what good storytellers do.
That’s why, these days, we see some change in the way that history is presented to us. Whereas in the past, history books were seen as having undisputed authority over the past (and were written in an authoritative tone), nowadays, we see historians openly working through their prejudices and ideologies and presenting history in a narrative form which is (given a little licence here and there) as entertaining as a story.
Of course, some writers of fiction – and filmmakers too – try to take this idea farther. They believe fiction is a better vehicle than history because they can get into the time and the period with more depth than historians can, since they – i.e. writers and filmmakers – can use their imagination, empathy, skills and techniques of the trade to create a better picture of the events, the people and the lifestyles of that period.
Do I agree with this? Yes, I do. But then, I don’t assume that this fiction is 100% history. Nor do I assume that history is 100% fiction.
I understand that writers and filmmakers – as well as historians, biologists, physicists or fiction authors – bring something of themselves and their beliefs into their work. I understand that filmmakers and writers of historical fiction use their creative licence to present to us a more colourful and entertaining version of history than what we may have found in our history books earlier. And that’s where it all stands.
28 December 2007
Isn’t history another story?
The criticism showered by historians on historical films (see my previous posts) is an interesting subject. Mostly, the criticism is directed at inaccuracies in the representation of facts, as well as at the interpretations and enhancements that film directors liberally use as technique to tell their stories. Hence, I find Oliver Stone’s comparison of himself (as a historical filmmaker) to a dramatist in the line of William Shakespeare a fascinating analogy.
William Shakespeare took ample liberties in writing his historical plays (Richard III, Henry V, etc.) and his tragedies (Macbeth, Hamlet, etc.) – changing facts, dramatising history and historical characters (even inventing a few) to make his plays more attractive to his audience. Nobody finds anything wrong with this, of course. After all, this is literature; this is fiction. And, we all like a good story, well told.
But, isn’t history fiction too? Isn’t history a narrative constructed by men and women to serve a particular academic/cultural/social purpose and, therefore, holding much in common with literature of the fictive kind?
Yes, it is true that what historians construct is based on material evidence from the past (archival sources, archaeological finds from digs, etc.). But they also have to construct a narrative out of these diverse sources. And, while they try to be objective like scientists, their narratives are inevitably coloured by their ideas, ideologies, opinions, and their desire to tell a good story.
In that case, are historians really all that different from authors and filmmakers who do historical research in order to construct a good era-based fiction? And present history to us as another story?
17 December 2007
Contesting history
Commercial filmmakers of history – David Lean, Oliver Stone, Ridley Scott and Mel Gibson, among others – have always attracted controversy and criticism. They have been accused of distorting historical records, reconfiguring history to mean something else altogether. As a film viewer, I have always been bothered by this criticism.
As a film viewer, I know commercial filmmakers of history are not historians. They have never claimed to be. I look to historical films for knowledge and inspiration, as much as for entertainment. Anybody who offers an interpretation of the past, breaking down myths, is welcome in my book… so long as the depiction is not a total fantasy.
I had read somewhere that Oliver Stone views himself as a historical dramatist in the tradition of William Shakespeare and the Greeks. If I’ve understood it right, Stone says that, like Shakespeare’s history plays ‘Richard III’ and ‘Henry V’, his historical films ‘JFK’ and ‘Platoon’ are a mix of fact and fiction.
In his films, Oliver Stone attempts to reveal larger truths by challenging the mainstream, i.e. generally-accepted, views of history. His films offer a meaning of the past, contesting traditional/official narratives of historians, critiquing dominant ideologies, in an effort to unravel some of the darker, questionable aspects of our past.
After all, history is not only about kings, queens, presidents, soldiers/warriors, wars, conspiracies and collapse of empires. History is also about ideology, corruption, abuse of power, breakdown of order, freedom, loss, uncertainty, fear, anxiety, confusion, despair, exhaustion, racism, brutality and regard/disregard for humanity.
We, as film viewers (and that includes historians), are often uncomfortable with some of the narratives and conclusions that filmmakers like Stone, Scott and Gibson advocate. They make us think not only about our past, but also about who we are. I remember being deeply moved by ‘No Man’s Land’, a film by Bosnian director Danis Tanovic, which laid bare the meaninglessness of war. The film left me feeling somewhat helpless.
Films like ‘No Man’s Land’, Scott’s ‘Kingdom of Heaven’ and Stone’s ‘Platoon’ not only represent history, they also influence our perception of life and question our moral standing.
14 December 2007
Are film directors dishonest?
“The only reason to make a historical film is because it illuminates the present.”
– Ken Loach, British filmmaker
David Lean’s ‘Lawrence of Arabia’; Ken Loach’s ‘Hidden Agenda’; Steven Spielberg’s ‘Amistad’; Shekhar Kapur’s ‘Elizabeth’; Ridley Scott’s ‘Kingdom of Heaven’; Mel Gibson’s ‘Apocalypto’… these are a few among the many films that have been hounded and criticised by historians for their inaccurate depiction of historical facts.
Why is there so much bad blood over inaccuracies in films that depict historic periods? Do historians feel that film directors are inept, or simply dishonest, in making historical films? How accurate should a historical film be?
I don’t have the answers to these questions, but I do have a question of my own: Should film directors compromise a good story in order to depict historical facts accurately?
My teenage niece (who was looking on as I was writing this blog post) tells me that a good story, any day, is better than rigidly sticking to the facts. She tells me that film directors usually add a twist of their own to make a good film with a good story more appealing to the viewers.
How big a twist should directors add to their films? I guess, to the extent the film does not compromise its entertainment value. After all, we watch films for entertainment, not for lessons in history. There’s no point in making a historically accurate film if the film becomes unwatchable.
More so because film is art. And art is more liberal in its interpretation and presentation.
However, once again, the issue of ‘fact or fiction’ raises its ugly head: Should film directors be allowed to change real (historical) events, or real characters for that matter, in the name of art? And, moreover, should such decisions be justified by them – or by film viewers like us?
12 December 2007
Taking liberties
I was inspired by a comment on my blog [thanks, Madhuri], which stated: “Fiction set against the backdrop of a history often gives a much better perspective of that history than a historical discourse/report.”
I agree. That’s the wonderful thing about fiction, particularly historical fiction. Whether in a book or a film, the filling in of details by the author/filmmaker adds colour and makes everything more interesting. Normally, responsible authors and filmmakers stick close to what is known and fill in the details between documented facts. On many occasions, however, authors and filmmakers present an alternative which disagrees with – or is in conflict with – documented facts.
These interpretations of facts, these liberties that authors and filmmakers often take, can confuse readers and audiences. Although freedom of speech (in most countries of the world) allows this, conservative schools of thought say that inaccurate representations of facts in the guise of fiction can be misunderstood by people. That fiction can often become fact. That historical inaccuracies, particularly when well presented, can change what we believe to be the truth.
A case in point is Oliver Stone’s 1991 Hollywood film ‘JFK’, about the controversy behind US President John F Kennedy’s assassination. In ‘JFK’, Stone mixed actual B&W newsreel footage with dramatised, documentary-style recreations of his personal version of the mystery behind the Kennedy assassination, cutting the two together so seamlessly that, on seeing the film, many viewers actually believed Stone’s version to be the real thing.
Thus, Oliver Stone’s ‘JFK’ created a huge controversy and came under attack from the media as well as historians. As people lined up to see ‘JFK’ (which did rather well at the box office and later won two Academy Awards), the US media and several historians went overboard criticising the historical inaccuracies in Stone’s film. They all said that, in ‘JFK’, Oliver Stone had tricked his audience.
Criticism aimed at Stone derided him for taking liberties in interpreting historical facts; using pseudo-documentary footage to represent fiction as fact; using the fictional character ‘X’ (a colonel in the US Air Force played by actor Donald Sutherland) to explain/justify Stone’s myth of a conspiracy; and, more importantly, implying a conspiracy in the Kennedy assassination involving President Lyndon B Johnson and other senior US government officials.
Of course, Stone defended himself. In an interview with Mark C Carnes in ‘Cineaste’ (Fall 1996) Stone says: “…we as dramatists are undertaking a deconstruction of history, questioning some of the given realities. What you call ‘sneaky’ is, to me, an ambivalent and shifting style that makes people aware they are watching a movie and that reality itself is in question. JFK was the beginning of a new era for me in terms of filmmaking because it’s not just about a conspiracy to kill John Kennedy. It’s also about the way we look at our recent history.”
And, I think, therein lies the truth (and the beauty) that authors and filmmakers pursue in their craft.
[Citation: ‘Past Imperfect: History According to the Movies (interview with film director Oliver Stone)’, by Mark C Carnes, Cineaste v22 n4 (Fall 1996)]
11 December 2007
Artful writing
Journalists report facts. They don’t cook up stories from their imagination. Cooking up stories is a novelist’s prerogative. Journalists state facts in impersonal tones, leaving out all emotion from their writing. They don’t personalise events as they happen, or happened. They never use ‘I’ in their narratives. That creative licence belongs to the novelist.
The journalist who turns to write novels requires a reversal of role – a change in discipline, a switching of personalities. The new path can be treacherous for those who are reluctant to leave their former profession behind. For, a journalist’s world is that of reality, while a novelist’s is that of fiction.
However, going by the number of journalists who have become novelists – some even managing to maintain both professions simultaneously – perhaps the transformation isn’t difficult for the talented. Ernest Hemingway is a wonderful example of this model. Although Hemingway was in Spain to report on the Civil War (which he did, of course, while actively supporting the Republican side against the fascists), he is better remembered for his novel ‘For Whom The Bell Tolls’, a fictional narrative drawn from his experiences there.
Yet Hemingway kept his journalism and his fiction separate – unlike Javier Cercas, author of ‘Soldiers of Salamis’ (please read my previous post), who seems to artfully mix the two. In his book, Cercas, a journalist-novelist like Hemingway, narrates a particular event from the Spanish Civil War and its investigation 60-odd years later by a journalist-novelist. In fact, in the book, the journalist-novelist character is also called ‘Javier Cercas’.
This writer’s trick/technique can be a little confusing (I remember novelist Paul Auster using ‘Paul Auster’ as a character in his novel ‘City of Glass’ some 20 years ago – a character who may or may not have been Auster himself), but what amazes me about Cercas is his skill in weaving in and out of fact and fiction with the greatest ease, using the ‘I’ as a sort of self-reflexive technique to narrate the story. So, in ‘Soldiers of Salamis’, we (the readers) often find it difficult to distinguish reality from fiction.
Is this artful writing or just a writer’s trick to make the fiction seem real? Perhaps it’s a bit of both. In an interview with Richard Lea in Guardian Unlimited some six months ago, Cercas explains his writing style (and I quote from that article, ‘Really intense tales’):
His self-reflexive technique, he explains, came out of a series of experimental columns for the Spanish newspaper, El Pais, which continues to this day. “I began to write some weird stuff in El Pais, using the ‘I’,” he says, “and then I became aware that this ‘I’ was fictional, even in a newspaper. They were experimental, crazy columns, and I began to write in a different way, that some people describe as ‘self-fiction’.”
So the books aren’t true tales? “Of course not,” he smiles. “These narrators in the books are not myself, even though in the case of Soldiers of Salamis the name is my name.” According to Cercas, the first person voice is the natural starting point for fiction. To reach the third person was a “conquest” for him, but having achieved that stage of detachment he realised that the first person offers the writer many possibilities.
“The idea of putting oneself in the novel, more or less explicitly, is not new,” he observes. “There’s a whole bunch of writers that use this kind of ‘I’.” He mentions Cervantes, Borges, Roth.
Javier Cercas’ interview with Richard Lea is an amazing read by itself – and I recommend reading it in full. Here’s the link.
[Citation: ‘Really intense tales’, Richard Lea, Guardian Unlimited, 15 June 2007]
09 December 2007
A mix of the real and the imagined
I’m ashamed to say that, apart from an abridged version of Cervantes’ ‘Don Quixote’, the poems and plays of Federico Garcia Lorca (during college), and (much later) some works of Camilo Jose Cela and Manuel Rivas, I’ve taken little interest in Spanish literature. Here, by ‘Spanish literature’ I mean literature from Spain, with its European flavour – which excludes (Spanish-language) literature from Latin America.
Of course, I’ve read Ernest Hemingway, George Orwell and Graham Greene recounting their experiences in Spain, particularly about the Spanish Civil War and General Franco’s Spain. But, besides these tales, and some of Greene’s post-Franco observations, my knowledge of the Spanish Civil War and post-Civil War Spain is next to zero. Hence, it was with some trepidation that I picked up Javier Cercas’ 2001 novel ‘Soldiers of Salamis’ and read it.
I say ‘trepidation’ because, thanks to Hemingway, Orwell and Greene, I’ve had an Anglo-American observer’s point of view of Spain and the Spanish Civil War (1936-39). I did not know what to expect from a Spanish writer giving a Spanish insider’s ‘true tale’ view of what, historically, I knew nothing (and still know very little) of. The book ‘Soldiers of Salamis’ mentions many real-life people – in fact, it is based on a historical figure from that period – and was a little confusing for a neophyte like me.
Fortunately, the book (I have a Bloomsbury 2004 paperback edition) has two important sections at the back which help. First, Anne McLean’s Translator’s Afterword, which gives a short political history of Spain during the Civil War. And second, the Notes, a glossary of important people and terms from that period. These two sections are worth reading and consulting from time to time, and they help tremendously in understanding the context of the tale.
The reason I categorise ‘Soldiers of Salamis’ as a ‘book’ and not a novel is that Cercas’ tale is not entirely fictional. It has such a clever mix of the real and the imagined that, at times, it reads like a true account of what may have happened during the Spanish Civil War (the past) and what Cercas himself may have gone through during his research for the book (the present context). Yet some of it is fabricated by the author and is, therefore, fiction. Cercas, apparently, labels it a ‘true tale’.
‘Soldiers of Salamis’ is an account of Rafael Sanchez Mazas, a real-life figure in Spain’s history. Sanchez Mazas was a fascist who played a leading role in the Falangist movement and, later, became a minister in General Franco’s government. The story here is about Sanchez Mazas’ capture by the Republican Army during the Civil War, his failed execution by a firing squad, his miraculous escape when a Republican militiaman finds him and lets him live, his hiding in the forest under the protection of ‘forest friends’, his rescue when Franco’s army drives away the Republicans, his memory of this entire episode during his ministry in Franco’s government, and his compassion thereafter.
This tale forms the centre – the middle, really – of the book and is flanked by two important sections. The first section of the book is the story of a journalist/author, also named Cercas, who researches and writes this tale. The last section is about this journalist’s pursuit of and meeting with an aged, but colourful, character called Miralles, who may have been the Republican militiaman who spared Rafael Sanchez Mazas’ life and let him escape (after the botched firing squad) during the Spanish Civil War… thereby changing the course of Spain’s history forever.
But ‘Soldiers of Salamis’ is much more than this. The book raises several questions on war, history, the recording of history, heroes and heroism, courage, loyalty, integrity and human compassion. But most of all, it is a discussion on truth, the quest for the truth, and how we remember our past. It is a discussion on how personal narratives in history, if left unchallenged or unverified, can be (mis)taken as the truth.
In a review of this book in The Independent, Boyd Tonkin mentions that Javier Cercas had described ‘Soldiers of Salamis’ not as a novel but as a ‘true tale’, ‘cut from the cloth of reality, concocted out of true events and characters’. Perhaps, ‘Soldiers of Salamis’ is exactly that. For those who enjoy historical fiction, this is a perfect story.
[Citation: ‘Boyd Tonkin salutes a commanding novel of war’s myths and memories’, The Independent on Sunday, 14 June 2003]
08 December 2007
Interweaving fact with fiction
History, particularly personal history, is a great source of information for writers of fiction. Many writers, like Milan Kundera about whom I’ve been blogging recently, base their stories upon their own experiences, using a great deal of material, sometimes verbatim, from their own lives.
They interweave fact with fiction, creating make-believe worlds which appear to us truer than real life. In the hands of a skilful writer, fact, along with a dose of the writer’s imagination, becomes fiction. Yet, to us, this fiction appears as fact. The reality of the story is what we believe in.
On this subject, I recently read an old review of one of Asian-American author Amy Tan’s books, ‘The Opposite of Fate’. The review, by Clea Simon in the San Francisco Chronicle of 7 December 2003, describes how an author’s make-believe world can appear to her readers.
“Amy Tan has built a career exploring mother-daughter relationships, particularly the tensions between a Chinese immigrant and her American-born children. Because Tan is Asian American and has continued to revisit these themes in her fiction, her fans have made the easy assumption that she writes from experience. These fans, she notes in her new collection of essays, ‘The Opposite of Fate’, often take for granted that she has lifted fiction directly out of fact, going so far as to congratulate her on her chess mastery (she doesn’t play chess) or asking after the children that her apparently fictional stand-ins enjoy (and she doesn’t have children).”
Of course, the writer of fiction views the process differently (and I quote again from the article): “But while the author does acknowledge that art grows from life, it is how it changes on the journey into fiction that makes the telling worthwhile.”
[Citation: ‘Amy Tan explores the interweaving of fate, fact and fiction’, Clea Simon, San Francisco Chronicle, 7 December 2003]
07 December 2007
Fate of the individual in modern society
A theme that often connects Czech writer Milan Kundera’s novels, and his short fiction, is the fate of the individual in modern society. It appears in almost all of his major works, such as ‘The Joke’, ‘The Book of Laughter and Forgetting’ and ‘The Unbearable Lightness of Being’. Although these novels deal with the fate of the individual in a modern Communist society, I feel the theme is equally applicable to any modern society.
However, Kundera’s use of this theme is not new. The fate of the individual in modern society is a common theme among fiction writers. Gustave Flaubert, Leo Tolstoy, Joseph Conrad, Virginia Woolf, D H Lawrence, James Joyce, and not to forget Franz Kafka, among many others, all traversed this territory long before Kundera. But what I like about Kundera is that he presents his case, and his characters, in a light-hearted manner, interweaving fact with fiction.
Although Kundera’s writing lectures us on modern Czech history, this history is presented as the context in which his characters shuffle around and live their lives. This is the place that his characters occupy – a place created by Kundera’s imagination and the factual recorded history of Czechoslovakia. Added to this is Kundera’s play with philosophy, which he presents to us, almost as a discourse, through his ‘narrator’.
Kundera’s stories, and the characters within them, unfold in a sort of step-by-step manner – in an interweaving of fact and fiction and philosophy, within which his characters struggle to discover themselves and find joy (if only for a short while). All in all, a pretty interesting construction for a novel, don’t you think?
05 December 2007
Meaninglessness
Relationships are not single-dimensional entities. Every relationship is made up of two or more persons, naturally engendering two or more points of view. There is no single truth, or happening; only versions of it.
This idea is central to most of Czech writer Milan Kundera’s fiction. His post-Czechoslovakia novel ‘The Unbearable Lightness of Being’, completed in Paris (where he had moved with his wife sometime after the Soviet invasion) and published in 1984, is a classic example of his thinking and his writing style. In most literary circles, at least in Western eyes, ‘The Unbearable Lightness of Being’ is acknowledged to be Kundera’s most famous work.
The narrative in ‘The Unbearable Lightness of Being’ is simple, telling the stories of two couples: Tomas and Tereza, Sabina and Franz.
The longer of the two stories is about Tomas, a divorced, womanising Czech neurosurgeon, and Tereza, an ordinary country girl working in a restaurant. Tomas and Tereza marry, and Tereza becomes a talented photojournalist. When the Soviets invade and take over Czechoslovakia, the couple defects to Switzerland. However, on discovering Tomas’ infidelities, Tereza returns to Czechoslovakia. Tomas follows her, giving up a promising career.
In Czechoslovakia, Tomas is forced to leave his job at a hospital when he refuses to retract an article he had written which contained anti-Soviet sentiments. So he takes up a menial job as a window cleaner, while continuing with his womanising. Tereza becomes a housewife, staying home with their dog. Later, to deter Tomas from his infidelities, as well as to escape the State secret police, the couple moves to the country to live a simple, happy life, until their death in a car accident.
The shorter story concerns the relationship between Sabina, a Czech émigré artist in Switzerland, and Franz, a married Swiss lecturer. Not only do the two have different backgrounds, they have different minds as well – and the relationship is fraught with misunderstandings. They eventually part ways: Franz to live with a female student and then die a senseless death after a protest march in Thailand; Sabina to live to a ripe old age in the United States.
Although the narrative in ‘The Unbearable Lightness of Being’ is simple, Kundera continues to explore his favourite theme of the different points of view that relationships encompass. He examines relationships from different angles, presenting the different perspectives of the persons involved. So different can these perspectives be, Kundera believes, that he includes in the novel a short dictionary of misunderstood words.
And, in quintessential Kundera style, he uses a narrator in the story to present another point of view, explaining matters to us rather philosophically. For, Kundera believes, since people experience relationships differently, they interpret relationships in their own respective ways… leaving very little margin for understanding each other. The concept of ‘lightness’ in this context is, perhaps, nothing but meaninglessness.
04 December 2007
The volatility of relationships
“Marriages collapsed for all sorts of reasons, but presumably you never really knew why unless you were involved in one.”
The above quote from William Trevor’s novella ‘Reading Turgenev’ (in ‘Two Lives’) may explain why I, at 48 years of age, am still at a loss as to why my parents decided to separate after 29 years of married life. In those 29 years, something had gone wrong in their relationship, perhaps even brewed over many years, before my parents came to the conclusion that they were no longer compatible as a married couple, and decided to part ways.
This reality – this incompatibility in marriages and man-woman relationships – has always haunted me. Although I have felt alone in my inability to understand it, I have often found similar sentiments in contemporary literature. Much of today’s literary fiction, I have found, deals with this very theme of incompatibility in marriages and in relationships between men and women. In fact, in the works of two of my favourite authors, it is a common occurrence.
Many novels and short stories of Irish author William Trevor are apt examples of this theme. Perhaps more so are the novels of Czech writer Milan Kundera. Whereas Trevor adopts a gentle style in his narrative (such as in ‘Reading Turgenev’), presenting bittersweet love stories or mismatched couples, Kundera’s novels (such as ‘The Unbearable Lightness of Being’) are hard-hitting existentialist tales built on egos and ignorance.
While Trevor narrates stories of ordinary people in unhappy marriages, in passive acceptance of the situations they are in, Kundera clinically lays bare every relationship, comparing and blending the philosophical aspects with the carnal. When Trevor offers us relationships marred by infidelities or deceit, we are wooed as much by the romance as we are by the grief experienced by the protagonists. With Kundera’s novels, where infidelities and misunderstandings abound, we are sucked in (and sometimes repelled) by the rigours of the man-woman relationships.
While Trevor’s novels and short stories appeal to our softer senses, touching upon tender emotions in a quiet manner, Kundera’s appeal is a churning of cold rationality with the raw sexuality that his characters experience.
Perhaps, marriages, and man-woman relationships, are a combination of all these traits and experiences – and are, therefore, more volatile than what they seem from the outside. Only those involved have knowledge of their relationship’s strength or frailty.
22 October 2007
The value of relationships
I live in an apartment building of sixteen flats. My neighbours don’t seem to be very friendly. I’ve wished them on several occasions, but they’ve never wished back. I’ve opened the gate for their cars a couple of times when the security guy was not around and I was passing by, but they’ve never thanked me. Once when there was an electric power problem in the building, I came home at night to find that no one had bothered to switch on my ‘mains’ connection while they all enjoyed full electricity in their flats.
Gauging my neighbours’ response and rudeness as, perhaps, a big-city Mumbai syndrome, I’ve kept more or less to myself. So, imagine my surprise yesterday, when a neighbour of mine approached me out of the blue and asked me for a favour. Apparently, a wedding was soon to take place in her family and the neighbour wanted to shift some for her furniture to my flat, temporarily, for a week. She said, “Since you’re single, living alone, you’re bound to have extra space in your flat.”
In seconds, a montage of her husband ignoring my ‘good mornings’, her son not thanking me for opening the gate for his car on a rainy night, and her shutting the door on my face when I requested for help during the electric power problem danced through my mind. I replied, politely, that I had no space to accommodate her furniture in my flat. She got up and left, commenting that this was a dastardly (as I looked like a nice person) and a very un-neighbourly behaviour. She said, “Neighbours are expected to be nice to each other.”
In life, as in business, we often expect a return from others (our family, friends, colleagues and customers – not to mention neighbours) without first building a relationship with them. We handle our lives, our work and our businesses one transaction at a time… without a sense of rapport, a bond, a continuity. We expect others to accommodate us simply because we need something, right then and there. What we want, what we have to say, what we have to sell – that is all that matters.
The trouble with a transaction-based life, or business for that matter, is that we treat each transaction in isolation. And, in the process, we consider others’ positions, feelings and need for affiliation irrelevant. The value of a transaction is not determined by itself, in isolation, but by what comes before and after it. It is determined by the relationship that governs and guides it.
18 October 2007
On loyalty
I’ve often come across managers in the corporate world who talk about loyalty. They insist on it. They expect it from their team members and their consumers unconditionally. And, what are they prepared to give in return? If anything at all, it’s bad behaviour. These managers are usually mean and oppressive – and insecure to the core of their hearts.
People in corporate organisations, like their consumers in the wider world, and the rest of us, balk at mean and oppressive behaviour. What we look for everywhere is a relationship – a connection, a kinship. A giving and accepting. An opening of hearts. An exchange of information and appreciation… empathy and even kindness. Such relationships are based on trust, not on bad behaviour.
Yet, many corporate managers fail to understand this. They demand loyalty without, first, building a relationship with their team members or their consumers. Is it any wonder that loyalty eludes them?
16 October 2007
That elusive thing called loyalty
I have more faith in dogs than I have in human beings. Dogs offer their love and loyalty unwaveringly, and unconditionally… well, perhaps, for some food and a place to sleep. Humans are more demanding. They simply want more and more. With such precepts driving human behaviour, how can brand marketers ever expect loyalty from their consumers?
The task of garnering consumer loyalty makes most brand marketers shudder. Just when they think they’ve succeeded in getting consumers to stay loyal to their brand, consumers wander off, lusting after a competing brand. Infidelity is quite common between consumers and brands, forcing many marketers to believe loyalty is either a myth or, at least, an elusive target.
What drives the consumers away? No one knows for sure. It could be a change in the product formulation, the packaging, or the advertising. It could be a price increase, or an attractive promotion by another brand. It could be the overall shopping experience, or the insensitivity of the sales staff. It could be the availability of a technologically superior product, or the novelty of a new brand entering the marketplace.
When my clients complain to me about a drop in consumer loyalty, I bowl them a googly. I ask them: should consumers be loyal to the brand or should the brand be loyal to its consumers? You see, because we are here to sell our brand, we become a little self-centred and myopic in our selling. The brand means everything to us.
We expect consumers to stay loyal to our brand, without finding ways to make the brand more meaningful to consumers over the long run. Our best efforts end up as discounts and points, expecting consumers to stay loyal in return for cash pay-offs. The costs of such pay-offs are heavy, cutting into business profits. In such business models, as profits dwindle, so do the cash pay-offs to consumers, eroding consumer loyalty.
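To see how quickly such pay-offs can eat into profits, here is a small, purely illustrative calculation in Python; the price, margin and cash-back figures are assumptions of mine, not numbers from any real brand.

# Illustrative only: assumed figures for one unit of a brand priced at Rs 100
price = 100.0            # selling price per unit
margin = 20.0            # profit per unit before any loyalty pay-off
cashback_rate = 0.05     # 5% cash-back offered to keep consumers 'loyal'

payoff = cashback_rate * price           # Rs 5 handed back per unit
margin_after_payoff = margin - payoff    # Rs 15 left per unit
share_of_margin_spent = payoff / margin  # 0.25, i.e. a quarter of the profit

print(f"Pay-off per unit: Rs {payoff:.0f}")
print(f"Margin left per unit: Rs {margin_after_payoff:.0f}")
print(f"Share of margin consumed by the pay-off: {share_of_margin_spent:.0%}")

Unless the pay-off generates enough extra purchases to recover that lost quarter of the margin, profits shrink – and, as argued above, the pay-offs are then the first thing to be cut, taking the ‘loyalty’ with them.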
The thing to remember is, loyalty is not a one-time execution of a marketing plan. It is life-long – and deserves such commitment from the brand marketers. Keeping consumers loyal over the years is the real big challenge. Being sensitive to consumer needs and wants over the years, and maintaining brand salience in accordance with changing consumer lifestyles against competitive forces, hold the key to consumer loyalty.
09 October 2007
Differential marketing
“All consumers are NOT created equal. Some are vastly more profitable than others, and the marketers who succeed in an increasingly brand-hostile and technology-driven environment will be those who know how to capitalize on the difference.”
Although this quote comes from Garth Hallberg’s book All Consumers Are Not Created Equal, from way back in 1995, I’m sure its philosophy is applicable today. Perhaps even more so, as there are many more brands in the marketplace today than there were in 1995, a great many of which are technology-driven. To many marketers, today’s markets are truly hostile places for their brands.
As most experienced marketers know, old mass-market strategies of ‘one strategy fits all consumer segments’ are ineffective today. Consumers are now showing their true colours – each colour demanding full attention from the brand marketer. A move towards customised strategies to win over individual consumers seems to be the only answer.
Needless to say, the job for marketers has become difficult and less reassuring… setting them off in a quest to find (a) why consumers buy what they buy; (b) why consumers are more flexible in their preferences for brands – and less brand loyal; and (c) how a brand could profit from such an overwhelming marketplace.
One way to do this is to adopt neuromarketing techniques by putting (representative samples of) consumers under MRI scanners while exposing them to various brand messages, designs and colour options. If the pre-frontal cortex of the brain (which is apparently responsible for eliciting/controlling excitement and happiness) lights up, there is a likelihood that consumers will respond favourably towards brands that carry similar messages, designs and colour options… generating future sales.
But, besides being expensive and inconclusive (no one is sure if its results are accurate), neuromarketing is cumbersome to implement. For starters, it needs an MRI lab – with high-end medical equipment and trained technicians. In India, this type of investment is a far cry from what market research agencies are prepared for. No wonder neuromarketing has yet to find a place among Indian marketers.
That’s why I’ve always felt that Garth Hallberg’s philosophy of treating consumers differentially through ‘differential marketing strategy for brand loyalty and profits’ is an excellent idea for Indian marketers.
Differential marketing, a carryover from DM (direct marketing) and a precursor to today’s CRM (customer relationship management), is both a meaningful and a profitable marketing practice. It is information-based, with a good deal of data analysis and mapping at its processing centre. It relies on data generated from consumer sales and is, therefore, based entirely on facts from consumer response to brands (as opposed to experiments in neuromarketing). You can’t get any more accurate than that.
Differential marketing is not difficult to implement and can be adopted easily by small businesses. It requires small investments in information technology, a few data-entry operators, a programmer at the most… apart from a scientific-minded marketer. If consumer data-capture mechanisms are in place, the job is really half-done.
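For readers who want a concrete picture, here is a minimal sketch of the kind of first-pass analysis differential marketing begins with: ranking consumers by what they spend and seeing how concentrated the value is. The purchase records and consumer IDs below are invented for illustration.

# A toy ranking of consumers by total spend, to show how unevenly value is spread.
from collections import defaultdict

purchases = [                       # (consumer_id, amount spent on the brand)
    ("C001", 1200), ("C002", 150), ("C001", 800), ("C003", 90),
    ("C004", 2500), ("C002", 200), ("C005", 60),  ("C004", 1800),
]

spend_per_consumer = defaultdict(float)
for consumer_id, amount in purchases:
    spend_per_consumer[consumer_id] += amount

ranked = sorted(spend_per_consumer.items(), key=lambda kv: kv[1], reverse=True)
total_spend = sum(spend_per_consumer.values())

running_total = 0.0
for consumer_id, spend in ranked:
    running_total += spend
    print(f"{consumer_id}: spent {spend:>6.0f}, "
          f"cumulative share of total spend {running_total / total_spend:.0%}")

Even this toy run shows the familiar pattern: a couple of consumers account for most of the spend, and those are the ones a differential marketing plan would single out for special treatment.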
Alas, Indian marketers have been slow in responding to differential marketing as most of them don’t have adequate consumer/brand data in their hands. Even if they did, they don’t seem to be applying enough analytical techniques to mine information from their data banks – a fact that surprises me as we have some of the best marketing talent in the world. Instead, many Indian marketers have jumped directly to CRM, burning their fingers in huge investments with little returns.
Indian advertising agencies have shunned the differential marketing philosophy entirely. As usual, they have preferred to rest on their creative laurels rather than take accountability for building consumer relationships with brands. Even Garth Hallberg’s own agency in India, Ogilvy & Mather, has been reluctant to use this philosophy.
26 September 2007
Tightwads and spendthrifts
Marketers are a restless lot. They’re never happy with the amount of buying that consumers do. To them, consumers never buy what marketers want them to. And, even if consumers do buy the marketer’s brand, consumers never seem to buy enough of it. On the contrary, consumers switch their brand preferences the moment a new brand enters their (mental) shopping space.
What’s worse, competitors, like sharks, are always lurking around the corner, ready to take a bite out of the market and the marketer’s business. Often, new brands enter the market with distinctly better benefits, plus heavy marketing spends, creating havoc with the dynamics of the marketplace – and, once again, with the marketer’s business.
To most marketers, the problem lies with consumers. Consumers never seem to respond well enough, quickly enough or repeatedly enough, to the marketer’s messages – created by great advertising minds – or the pricing and the placement that great strategists recommend. Even if consumers do respond to these messages, they never seem to keep their promise of buying more and more of the marketer’s brand, in tandem with the marketer’s brand investments.
Often, marketers feel consumers are a disloyal lot.
But, really, how should consumers behave? Is there a formula or a pattern that consumers should follow to please marketers for all the effort that marketers put in to create brands – and build demand for them? Why should consumers even provide information about their brand preferences and buying behaviour to marketers? And, even if they do, why should consumers stick to their preferences and buying behaviour when the moment to decide on a purchase actually arrives?
In reality, consumers behave in any way they wish. And good marketers know this well. Predicting consumer behaviour is really what marketing is all about, and marketers simply will not rest until they find some magic formula to make this happen. That’s why I was not surprised to read an article from The Wharton School last week about a research study – and a hypothesis under test – on the pain consumers feel when paying for a purchase.
It’s an interesting angle to the consumer behaviour discussion… and I’m sure neuromarketing will have something to say about it sometime soon. For the moment, the research – conducted by Scott I Rick from Wharton, and Cynthia E Cryder and George Loewenstein, both from Carnegie Mellon – proposes to test a measure of individual differences in the pain of paying on a ‘Spendthrift-Tightwad’ scale.
The Wharton article, titled Are You a Tightwad or a Spendthrift? And What Does This Mean for Retailers?, says, “while ‘tightwad’ and ‘spendthrift’ are not exactly labels that most people would welcome in a discussion of their spending habits, they are valuable as predictors of consumer behavior.” The researchers seem to believe that “an anticipatory pain of paying drives tightwads to spend less than they would ideally like to spend. Spendthrifts, by contrast, experience too little pain of paying and typically spend more than they would ideally like to spend.”
“According to the researchers,” explains the Wharton article, “tightwads are defined as people ‘who feel intense pain at the prospect of spending money, and therefore tend to spend less than they would ideally like to spend’. Spendthrifts ‘feel insufficient amounts of pain at the prospect of spending and therefore tend to spend more than they would ideally like to spend’.”
Apparently, the “Spending differences between tightwads and spendthrifts are greatest in situations that amplify the pain of paying and smallest in situations that diminish the pain of paying.”
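To make the idea concrete, here is a small, hypothetical sketch of how respondents could be bucketed on such a scale. The score range, the cut-off points and the respondent scores are all my own assumptions for illustration; the researchers’ actual questionnaire and scoring are described in their paper.

# Hypothetical bucketing on an assumed Spendthrift-Tightwad score
# (low score = more pain of paying = tightwad; high score = spendthrift).
def classify(score, tightwad_cut=11, spendthrift_cut=18):
    """Label an assumed ST-TW score; the cut-offs here are illustrative only."""
    if score <= tightwad_cut:
        return "tightwad"
    if score >= spendthrift_cut:
        return "spendthrift"
    return "unconflicted"

respondent_scores = {"R1": 7, "R2": 14, "R3": 22, "R4": 10}   # made-up scores
for respondent, score in respondent_scores.items():
    print(f"{respondent}: score {score} -> {classify(score)}")

A retailer could then, for instance, try bundled or all-inclusive pricing on the ‘tightwad’ bucket, since such framing is said to diminish the pain of paying.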
I wonder what the marketers have to say about this. And, would the consumers agree?
The Wharton article Are You a Tightwad or a Spendthrift? And What Does This Mean for Retailers? can be found here.
22 September 2007
Harvard’s Law
Every day, billions of consumers across the globe buy and consume brands promoted by marketers through well-thought-out advertising and marketing campaigns. Advertisers and marketers claim that their campaigns work, and that their products and brands sell through this effort.
But, when we check on consumer behaviour through research, we find it difficult to correlate consumers’ preference for, and purchase of, a brand with the brand’s campaigns and the initial research leads. Why is this so? Why do consumers say one thing during research and do something else altogether in real life?
It’s tough to answer these questions logically, except, perhaps, to cite Harvard’s Law, as applied to human beings: “Under the most rigorously controlled conditions of pressure, temperature, volume, humidity and other variables, a human being will do as it damn well pleases.” So, where does that put marketing?
Marketing tries to determine precisely why consumers buy specific brands. To be able to do that, marketing needs to determine what and how consumers think. And then, influence that thinking to steer consumers into making a purchase in favour of the brand being marketed.
Two questions arise in my mind: One, how accurately can marketing read consumer minds and influence/control consumer thoughts? Two, can it be done at all?
From traditional consumer research using, say, focus group techniques or in-depth one-on-one interviews, marketing can – and does – determine generic consumer attitudes and feelings towards a brand. Then, it extrapolates these findings to project approximate buying behaviours. Based on this, advertising and marketing campaigns are developed.
At the moment, it’s an approximate science. Correlating consumer research to consumer insights to advertising and marketing campaigns in order to achieve desired (pre-determined) consumer behaviour is difficult. Modern tools like neuromarketing can perhaps help, but the efficacy of the campaigns must be questioned.
I believe, for marketers to claim that they know precisely what and how consumers think – and use this information to make consumers respond in a precise manner – is asking too much. All said and done, marketers have Harvard’s Law to contend with.
17 September 2007
Under the scanner
It’s common knowledge to advertisers and marketers that purchase decisions are made in the consumer’s mind. Whoever understands and controls the human mind is likely to own a fair share of the market – if not dominate the market entirely – with products and brands satisfying a plethora of human needs and wants. Not to mention increasing profitability in one’s business.
However, the human mind is complex. Understanding it is easier said than done. Traditional methods of consumer research using focus groups no longer seem to work as (and this, too, is common knowledge to advertisers and marketers) consumers are not always truthful in communicating their needs, wants and responses to advertising and marketing messages.
To complicate matters, consumers themselves are not always aware of their needs and wants – at least, to the extent of (not) being able to express themselves clearly. That’s because many of our desires are hidden in our subconscious mind.
With millions of brands competing with each other to capture consumer mind-space every day, exploring the consumer’s subconscious mind in order to seek out explicit brand preference and purchase triggers has now become critical to the advertising and marketing industries. The answer, many advertisers, marketers and social scientists feel, lies in understanding human emotional responses to advertising/marketing messages.
To this end, traditional techniques like focus groups don’t quite cut it; though in-depth one-on-one interviews, response latency measures, metaphor elicitation techniques, eye-movement tracking, or simply watching consumer behaviour through (hidden) video recorders have generated substantial information on consumers for advertisers and marketers. Unfortunately, behaviour studies of this nature are still unable to accurately identify consumer choice triggers.
That’s where neuromarketing comes in – with its promise of showing (i.e. capturing visual data in colourful graphics) how brands and their various components, including advertising messages, can have emotional impact in a consumer’s mind. Just how much impact the brand has is probably still difficult to measure, but neuromarketers are pretty confident of tracking emotional impact by showing ‘increased activity’ in the brain when the brain is under stimulation.
Apart from requiring a compliant consumer who doesn’t mind being put under the scanner – lying prone, without the freedom to move about and evaluate brands and options as a consumer does in real life – the technique relies heavily on expensive technology that scans the brain (or checks body temperature and pulse rate) at the instant the consumer is exposed to a specific advertising/marketing message.
Of course, it is difficult to assess if an ‘increased activity’ in the brain results in a consumer’s preference for a specific brand, and a brand purchase thereafter, but neuromarketers are certain that this, too, will be determined soon.
15 September 2007
Perhaps brand choices are emotional
Brands can make a difference in consumer preference and purchase. They can connect with consumers… in ways the marketing fraternity, today, is still not sure of. What marketing does seem to be sure of is that the decision to buy happens in the consumer’s mind – which is why it has embarked on a relentless pursuit of the truth behind what makes consumers buy.
My career in sales, advertising and marketing has taken me through several points of view on a similar pursuit. From ‘man is a utility maximiser’ (and is, therefore, always expected to make logical choices) to regimental ‘awareness-interest-desire-action’ sales pitches to using ‘emotion in advertising’ in order to appeal to the consumer’s subconscious mind.
Some 20 years ago, I remember sitting through a Unilever presentation on emotional advertising. Audience reaction to TV commercials from across the globe was collected and matched against sales for each brand. Tragically, barring 2 cases out of the 30-odd commercials, audience reaction to emotional advertising didn’t match brand sales figures.
Something was wrong. There was uproar in the audience. How could this work? We were a group of Indians sitting in Mumbai, India, responding to advertising created in different countries for brands (some of which) we hadn’t even heard of. Surely, local culture, as well as prior exposure to these brands, had to influence our responses!
Yet, it was true, some of the commercials shown (and, thus, the brands advertised) appealed to our inner senses… perhaps within our subconscious mind. Some totally; some a great deal more than others. In fact, some of those commercials left us so cold that we, secretly, made promises never to buy those brands. And, for those commercials which enlightened us by sparking off positive feelings in the recesses of our minds, we wished we could own those brands.
Perhaps, brand choices are emotional.
10 September 2007
Deciphering the human mind
One of the biggest challenges in my line of business (which is strategic marketing) is to understand what makes people buy the things they buy and consume. My clients – i.e. advertisers and marketers of various sizes – are on my back every day to find that magic formula for making people buy their brands.
So far, this magic formula has been elusive, ensuring that I have a tough time with my clients. Not because I’m bad at what I do, but alas, such is life in this business. No one can predict with any certainty that consumers will buy the products and brands that my clients – or any other marketers across the globe – produce and sell.
That’s because the decision to buy (anything) happens inside our minds – and neither creative directors in ad agencies nor marketing strategists have been able to decipher the human mind yet.
Research is conducted every day to assess consumer response to advertising messages and other stimuli (for instance colour, design, texture of materials, music), consumer habits and buying patterns, and consumer behaviour in general, but a look inside a consumer’s mind to determine a brand choice is giving the best of us the runaround.
So, advertisers and marketers abroad (to my knowledge, this hasn’t happened in India yet) have turned to medicine, science and technology for an answer. They have, specifically speaking, turned to ‘neuromarketing’ – a science which is supposed to look inside a consumer’s mind and check for responses to specific advertising messages and other marketing stimuli.
Neuromarketers (as that’s what they are called) rely on functional magnetic resonance imaging, or fMRI, which scans and maps the human brain when the (concerned) human is exposed to specific, or collective, stimuli created by advertisers and marketers. It is expected that a study of the brain’s internal behaviour will lead us to understand how the human being behaves as a consumer, preferring certain advertising messages, products and brands over others.
Apparently, organisations like Unilever, DaimlerChrysler and MTV Networks, among many others, have used neuromarketing in their research already, though it is reported that most organisations prefer to keep their neuromarketing initiatives a secret.
06 September 2007
A means to an end
Culture is not only a function of what happens inside our minds, but is also a function of our interactions with people and what we say to each other.
Since these interactions and messages are multidirectional – i.e. they happen up and down the social hierarchy, as well as across the social spectrum, and even obliquely cut through social orders and structures – and their assimilation in our minds is invisible to us, culture is never a smooth flow of various parts of our lives (past and present) coming together to make a flawless whole.
We get the feeling that something is happening gradually, almost invisibly, spreading through our populations, reaching a point at which it becomes noticeably visible to us. Immediately, sociologists, trend-spotters, advertisers and marketers (myself included), and a few others (Malcolm Gladwell included) pounce on it as if great discoveries have been made in human understanding.
Articles and books are written, the media steps in to popularise the concepts, and great advertising and marketing campaigns are created for brands. Human behaviour, which is at the centre of our culture, apparently, changes as people begin to realise that they can’t live without a certain idea, product or brand. New lifestyles emerge almost from nowhere, and are much discussed and written about. Now, our ‘new and improved’ life goes on...
This, in turn, gives economic consumption a shot in the arm – leading to the creation and marketing of more, and better, products and brands.
Sometimes, I feel, the whole idea of culture in our modern world is nothing more than an interpretation of the human mind to find ways of increasing consumption of economic goods and services. It is no longer perceived, nor studied, as something that plays a crucial role in human evolution – helping us to understand who we are, what makes us different from others, and why we do the things we do. Culture now serves as a means to achieve a business end.
I wonder if this is what Claude Lévi-Strauss, the famous social anthropologist, had in mind when he said, “Culture uses and transforms life to realise a synthesis of a higher order.”
03 September 2007
A cumulative process
In evolution, one thing seems to be clear: genes are passed on from one generation to another. From parents to offspring. This means, genetic evolution is, necessarily, an inter-generational process.
However, if you’ve read my previous post on the discovery, and the subsequent application, of ‘linear perspective’ in art and geometry, you’d have noticed that learning and culture are not only passed on from one generation to the next, but they are also passed on from one person to another within the same generation.
A case in point is peer-to-peer social interaction where preferences and behaviours are adopted and modelled by people, typically, in the same age group. Examples would include use of language and mannerisms, choices in fashion and music, food habits, and lifestyles such as hanging out in coffee shops or blogging.
What I mean to say is, culture rubs off on people. Not only on those who are in direct social contact, but also on those to whom culture is transmitted through media like books, TV and the Internet. And, in doing so, cultural transmission breaks both the age and the distance barriers.
This means, essentially, the transmission of culture is both inter- as well as intra-generational. From this perspective, cultural evolution is distinct from genetic evolution.
On the one hand, if there is no learning from the preceding generation (say, from parents to offspring), there can be no improvements from the past. On the other hand, when people learn from each other within the same generation (say, peer-to-peer), they cut short the learning process and become quick to adapt to new environments.
I guess, the best results come from a combination of both inter- and intra-generational processes. After all, evolution is a cumulative process.
29 August 2007
A matter of perspective
Learning and culture in human evolution, and the transmission of that learning/culture, can be mystifying. At one moment we have nothing; in the next, it’s a whole new life. There have been specific instances in human history at (or, more correctly, right after) which our learning and culture has literally grown in leaps and bounds.
For early man, learning to make fire – by striking stones or rubbing sticks together – was one such instance. Everything changed after that. More recently, another such discovery has changed our world forever. This discovery, though scientific in nature, has been in the field of art, as well as geometry, and has influenced our thinking and culture immensely.
This discovery is the concept of ‘perspective’ – that, to the human eye, things/images in the distance look smaller than things/images which are closer (i.e. in the foreground). Although this is common knowledge today, and a child can draw it, there was no concept of perspective – at least, as far as its representation goes – in our culture as late as 1400 AD.
The concept of perspective (or, more accurately, linear perspective) was ‘introduced’ in art during the Renaissance, some 600 years ago. The person credited with this introduction, or invention as some define it, was painter, sculptor, architect and artisan-engineer Filippo Di Ser Brunellesco (1377-1446) – better known as Brunelleschi.
Until the early 1400s, every element/object in a drawing or painting was represented in equal proportion to another (i.e. equal in size) irrespective of whether the element/object was in the foreground or at a distance in the picture or the model. In other words, the art was created without depth… without a linear perspective.
It was Brunelleschi, noted for building the Duomo of Santa Maria del Fiore in Florence and the Sagrestia Vecchia of San Lorenzo, among other famous works of architecture, who first painted a panel depicting the Baptistery of Florence (stated to be in 1415) using linear perspective – thus introducing an illusion of depth that changed art, and geometry, forever.
Soon, other Italian painters of the Renaissance began using linear perspective in their art. And, thereafter, the rest of the world followed. The idea caught on like wildfire, its application visible in everything people painted or designed. At one moment we had nothing; in the next, it was a whole new life. Today, a life without perspective is unthinkable.
27 August 2007
Complex evolving systems
Mimicry plays a key role in our development. It has been through mimicry – or imitation – that we humans have learnt and evolved over 50,000 years. It was probably by imitating, i.e. following and repeating another person’s movements and actions, that we first developed a language of gestures, which then evolved into an early verbal language (a protolanguage). And, finally, into a fully functional language of the kind we use today.
Mind you, all these changes in human beings had taken tens of thousands of years… something we may not give credence to today when we see a baby imitating its mother and learning from her in a matter of days and weeks. There are many theories on how this happened, one of which, based on the discovery of mirror neurons (see my previous post), suggests that mirror neurons in our brains make complex imitation possible in humans, resulting in our ability to learn and use language.
As the Beth Azar article, ‘How mimicry begat culture’, from which I quoted in my previous post, puts it: “Then, once we had the ability to imitate, and learn through imitation, transmission of culture could continue by leaps and bounds.” Still, it was something that happened gradually, over tens of thousands of years, through a cumulative mixing of the cognitive processes in the brain with the processes of human social interaction and communication.
What also contributed to human evolution in a big way was the transmission of these continuing processes from one individual to another, from one group to another, and from one generation to another, in the form of culture. What seems commonplace today, and is taken for granted, is really a network of complex evolving systems in our brains, developed and transmitted over tens of thousands of years. And, it is happening even as you read this post.
24 August 2007
Mimicry
The Monarch butterfly is poisonous. Its body contains cardenolides, which are toxic. This biological (i.e. genetic) make-up protects it from being eaten by predators like frogs, lizards, birds and other insects, which avoid the highly visible orange and black patterns on the Monarch’s wings.
The Viceroy butterfly is not poisonous (though it’s known to give its predators a little tummy upset), but it looks similar: its wings carry orange and black patterns much like the Monarch’s. This resemblance confuses predators, thereby saving the Viceroy from being eaten. Some biologists say the Viceroy mimics the Monarch, using the resemblance as a form of deception.
According to Wikipedia, “In ecology, mimicry describes a situation where one organism, the mimic, has evolved to share common outward characteristics with another organism, the model, through the selective action of a signal-receiver.” In human culture, it’s no different. People mimic people (e.g. babies and mothers); cultures mimic cultures (e.g. India and the West).
For us, as in the animal kingdom, mimicry (or the act of mimicking) is sometimes for self-preservation. Sometimes it is for progress – a way of moving a nation, or the human race, forward. And why not? Mimicry, say social scientists, is a common method of learning: of languages, customs, skills and art. Not to mention our ability to empathise with each other and share our knowledge.
It’s something we have been developing for the last 50,000 years or so. For it was around this time that human culture experienced a sudden explosion of technological sophistication, art, clothes and dwellings. According to Vilayanur Ramachandran of the University of California, San Diego, this ‘big bang’ of human development happened thanks to our mirror neurons.
As reported in an October 2005 article, ‘How mimicry begat culture’ by Beth Azar, in ‘Monitor’ from the American Psychological Association, “it was Ramachandran’s provocative big bang essay published in 2000 on the Edge Web site (www.edge.org) – an online salon for scientists and other intellectuals – that really got people’s attention.
“I predict that mirror neurons will do for psychology what DNA did for biology,” wrote Ramachandran. “They will provide a unifying framework and help explain a host of mental abilities that have hitherto remained mysterious and inaccessible to experiments.”
In particular, he says, mirror neurons primed the human brain for the great leap forward by allowing us the ability to imitate, read others’ intentions and thereby learn from each other.
You can read Beth Azar’s ‘How mimicry begat culture’ here.
[Citation: Wikipedia – Monarch Butterfly, Mimicry.]
22 August 2007
The Butterfly Effect
A theory called the ‘Butterfly Effect’ – an idea from chaos theory, popularised by the meteorologist Edward Lorenz – suggests that even a tiny change somewhere in a system can bring about gigantic changes elsewhere. The theory uses the image of the flapping of a butterfly’s wings (an almost insignificant action) building up momentum through a chain of actions and events to cause storms or other catastrophes far away.
I’m not sure this is a law of the Universe, but some people swear by it, saying that science can actually demonstrate it. Beyond the mathematics of chaotic systems, there seems to be plenty of suggestive evidence from history. Take a look at almost any human revolution or uprising: small sparks setting off enormous consequences. The evidence is all around us.
Like history, the social sciences also seem to prove this theory over and over – from gossip to fashion trends to viral marketing to religion. All seem to be saying the same thing: a tiny change can result in a momentous event. As the tagline to the film ‘The Butterfly Effect’, a fantasy thriller by directors Eric Bress and J Mackye Gruber, starring Ashton Kutcher and Amy Smart, says: ‘Change one thing. Change everything.’
The point is that, when it comes to human evolution – indeed, in biological and cultural systems generally – “small changes in one component can have cascading implications.” So agreed Stephen Jay Gould of Harvard University in a 2000 interview with the Leader to Leader Institute, titled ‘The Spice of Life’, while discussing similarities and differences between the natural and business worlds. Yet, he cautioned, predicting human evolution or the success of business models is not easy.
“I can tell you to the minute when the next eclipse is going to occur, because it’s a simple system with limited interactions. I can’t tell you where human evolution is going. Also, the mathematical analysis of complex systems – systems composed of multiple, independent parts – shows that a small perturbation can produce profound effects, because of the way it cascades through the nonlinear interactions of the system. If you then add a little bit of randomness you get profound and unpredictable effects.”
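Gould’s point about small perturbations cascading through nonlinear interactions is easy to see even in the simplest chaotic system. Here is a minimal Python sketch – my own illustration, not something from the interview or the film – using the logistic map, where two trajectories that start almost identically soon bear no resemblance to each other:

# Sensitive dependence on initial conditions, illustrated with the logistic map
# x -> r * x * (1 - x) in its chaotic regime (r = 4.0).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)   # original starting point
b = logistic_trajectory(0.200001)   # a 'butterfly-sized' perturbation

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: a = {a[step]:.6f}  b = {b[step]:.6f}  |a - b| = {abs(a[step] - b[step]):.6f}")

Run it and a difference of one part in a million grows, within a few dozen steps, into two sequences that look completely unrelated – which is all the ‘butterfly effect’ really claims.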
According to Professor Gould, applying Darwinian laws of natural selection directly to cultural systems may be wrong in principle. That’s because, “The mechanics of change in human cultural institutions are quite different from those in nature… Natural selection is not a very efficient system because it works by elimination. You get to goodness by eliminating the bad. Why don’t you just go to good? The problem is, you don’t know what good is. You have to let a system operate and find itself. That kind of modeling is counterintuitive to the way in which humans generally try to run their institutions.”
‘The Spice of Life’ interview with Stephen Jay Gould can be found here.
20 August 2007
Parallel evolutions
As it is with most things in life, there are turning points in the evolution of the human species. For instance, the point at which humans became different from chimpanzees (in spite of sharing 98.5% of our genes with them), and the point at which modern humans became different from the Neanderthals.
Of course, this evolution has taken 50,000 years or so. Maybe longer. As long as six million years, if we are to believe what evolutionary biologists and anthropologists say. During this period, the human brain has been ‘rewired’ countless times. Who knows at which moments the turning points occurred?
The moral of the story is that this rewiring of the brain has made us do things completely differently from the way other animals do them. And it has, eventually, made us what we are today: a life-form capable of finding cures for diseases, travelling in outer space, telecommunicating, and figuring out solutions to complex problems.
Our survival and our evolution have both depended on our ability to invent, to create things, using our brain. According to Prof Steven Pinker of Harvard University, our success has been a result of three things which have co-evolved (see my previous post): our intelligence about our world (i.e. cause and effect as apparent in nature), our social intelligence (i.e. coordinating our behaviour with others to bring in collective benefits), and our language (i.e. communicating, sharing and exchanging information).
In simple terms, this means human evolution has been due to a ‘cognitive’ evolution – an evolution in the powers of the human brain.
This has resulted in the creation of various methods, tools and technologies… all of which have helped us survive and evolve further. These ‘products’ of the human brain have also made a tremendous contribution to human evolution – perhaps as important a contribution (some scientists believe a greater one) as that made by our genes.
For 50,000 years or more, humans have passed on their accumulated learning – their discoveries, inventions, tools, methods, languages and customs – generation after generation, not through genes but through language, actions, behaviour and other forms of communication. Thereby, carrying on a ‘cultural’ evolution parallel to the cognitive one… making us what we are today.
18 August 2007
Evolution of the mind
“If you took a bunch of human babies from anywhere around the world – from Australia, New Guinea, Africa, Europe – and scrambled the babies at birth and brought them up in any society, they’d all be able to learn the same languages, learn how to count, learn how to use computers, learn how to make and use tools. It suggests that the distinctively human parts of our intelligence were in place before our ancestors split off into the different continents.”
– Steven Pinker in an interview on PBS ‘Evolution’.
Along with geneticist Steve Jones of University College London (about whom I’ve blogged earlier this month), Steven Pinker of Harvard University’s Dept of Psychology feels human biological evolution is over. That it was over some 50,000 years ago (give or take 10,000 years) before humans migrated from Africa to Europe and Australia. That, today, Africans, Europeans, Americans, Australians and Asians are all the same species of humans, with indistinguishable cognitive abilities.
These cognitive abilities, and the applications of them that resulted in greater intelligence in humans, didn’t appear in an instant, brought about by divine intervention, but evolved over tens, maybe hundreds, of thousands of years. The learning from previous applications of an ability – or a combination of abilities – led to greater intelligence which, in turn, was put to new uses. Thus our minds (i.e. our brains) kept expanding with more and more intelligence and knowledge.
The trick with human evolution, vis-à-vis the evolution of animal or plant life forms, says Steven Pinker, lies in the application of the brain. The human brain is capable of understanding the world around us and finding solutions to the problems we encounter at much faster speeds, allowing us to adapt and evolve in our ecosystem. In doing so, the human brain figures out not only how to use more of the ecosystem to our advantage, but also how to invent methods, tools and technologies (which are external to the human body) to get things done.
In theory, several things are likely to have happened to develop our cognitive abilities so quickly and bring us where we are today in our evolutionary timeline. In the same PBS ‘Evolution’ broadcast (transcript) I’ve quoted from in the beginning of this post, Steven Pinker suggests what these ‘several things’ may be: “So, each one of these abilities – intelligence about the world, social intelligence, and language – I think reinforces the other two, and it’s very likely that the three of them coevolved like a ratchet, each one setting the stage for the other two to be incremented a bit.”
Armed with such cognitive abilities, I’m sure we’ll be able to find a way to a better life where threats of diseases like AIDS and cancer are a thing of the past.
16 August 2007
A promiscuous minority
Only a small chunk of the global population has AIDS or transmits the virus. Yet the threat of its spread is a global concern. That’s because the AIDS virus spreads mainly through sexual contact. And sex is something we all engage in.
According to data from international AIDS research (surveyed and released by agencies from the developed world), the concentration of the AIDS virus is not in Africa alone but in the Third World in general. Besides Africa (where the concentration seems to be highest in the south of the continent), countries like India, China, Thailand and Myanmar are at precarious stages, on the verge of AIDS epidemics of their own.
Of course, it’s not everyone in the Third World who is responsible for spreading the AIDS virus. Just a promiscuous minority of the population.
Apparently, studies of the sexual behaviour of people in the Third World show that a small portion of the population – prostitutes, and the truck drivers and migrant workers who patronise them – is responsible for the spread of the AIDS virus. This lifestyle of rampant promiscuity seems to be commonplace in poor countries, and has been driving the spread of AIDS.
Culturally, in the Third World, although a married woman’s fidelity is well-guarded, her husband often strays, having sex with multiple partners (such as prostitutes). This increases the chances of AIDS infection and transmission, including the possibility of the men infecting their wives. Since Islamic cultures allow multiple wives for a man, this introduces a multiplier effect in the spread of the AIDS infection.
Moreover, many people in the Third World either do not have access to condoms, or are not educated enough to use condoms during sex, or are not allowed by their religion or customs to use preventive methods during sex. This further increases the chances of infection and transmission of the AIDS virus.
Add to this list the number of women raped by soldiers (infected with the AIDS virus) during wars and civil wars, typically in Africa, and you have another source of AIDS transmission.
Anyway, by now you must have got the basic drift of this post. That, when it comes to spreading the AIDS virus, and perhaps influencing human evolution in the Third World this millennium, it’s a minority of promiscuous men who are responsible.
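To make the ‘small minority, large effect’ claim concrete, here is a toy simulation in Python – with numbers invented purely for illustration, no claim to epidemiological accuracy, and not drawn from any of the studies mentioned above – in which a small, highly active group ends up causing a share of infections far out of proportion to its size:

import random
from itertools import accumulate

random.seed(42)

# Toy numbers, invented purely for illustration -- not real epidemiological data.
N_CORE, N_REST = 50, 950                   # a small, highly active 'core' and everyone else
CONTACTS = [20] * N_CORE + [1] * N_REST    # contacts made per time step
P_TRANSMIT = 0.1                           # chance that one contact passes on the infection
STEPS = 25

people = list(range(N_CORE + N_REST))
cum_weights = list(accumulate(CONTACTS))   # partners are met in proportion to their own activity
infected = set(random.sample(people, 5))   # a handful of initial infections
caused_by_core, total_new = 0, 0

for _ in range(STEPS):
    newly_infected = set()
    for person in infected:
        partners = random.choices(people, cum_weights=cum_weights, k=CONTACTS[person])
        for partner in partners:
            already = partner in infected or partner in newly_infected
            if not already and random.random() < P_TRANSMIT:
                newly_infected.add(partner)
                total_new += 1
                if person < N_CORE:        # infection traced back to a core member
                    caused_by_core += 1
    infected |= newly_infected

print(f"the 'core' is {N_CORE / len(people):.0%} of the population,")
print(f"yet it caused {caused_by_core / max(total_new, 1):.0%} of {total_new} new infections")

With these made-up parameters, the ‘core’ of 5% of the population typically accounts for the majority of new infections – which is the qualitative point the core-transmitter studies make.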
14 August 2007
Conspiracy and denial
Why Africa? Why, of all the places on this planet, was Africa alone afflicted with the AIDS virus? Why was Africa its only source, now that we know that the AIDS virus has spread across the globe?
No one knows for sure. But conspiracy theories have done the rounds in an attempt to answer these questions.
One conspiracy theory suggests the CIA’s involvement. That, during the Cold War (a period between World War II and the early nineties), the CIA had targeted Black Africans and infected them with the AIDS virus, surreptitiously, through vaccination campaigns. What had the Cold War – i.e. political tension between the United States and the (former) Soviet Union – to do with Black Africans in Africa? I’m not sure, but some say the CIA was evil enough to have done the damage.
Another theory suggests the involvement of the South African secret service. That, during Apartheid (coincidentally, the same period as the Cold War), the South African secret service targeted Blacks and infected them with the AIDS virus. This conspiracy theory seems to make some sense as, during Apartheid, there was a systematic ethnic separation of the Blacks from the Whites by the South African government. The theory may hold water because the South African secret service was known to have committed many an evil deed during Apartheid.
Whether these conspiracies are true, no one knows for sure. And, as you can guess, both the CIA and the South African secret service have denied charges against them.
Perhaps there are other theories we don’t know about. But there does seem to be a more accepted theory regarding the spread of the AIDS virus.
This theory involves the promiscuous nature of the African population. That, the AIDS virus was – and still is – spread through what are called ‘core transmitters’ in the sex trade: prostitutes, and the truck drivers and migrant workers who patronised them. What’s more, according to Malcolm Gladwell’s book ‘The Tipping Point: How Little Things Can Make A Big Difference’, this ‘core transmitters’ theory in spreading the AIDS virus seems to be true not just in dark Africa, but also in the more civilised Western world.
Of course, most African leaders have denied the ‘AIDS-virus-spread-through-core-transmitters’ theory, calling it another Western conspiracy against Black Africans. But medical data does suggest that the AIDS virus is spread through unsafe sex. Whatever the case, it can’t be denied that the threat AIDS poses to our evolution is a real one.
12 August 2007
Africa and the human evolution
When we talk of human evolution, we talk of our ancestors migrating from Africa to Europe some 100,000 years ago. Everything we are today followed from that instant – if ‘instant’ is the term we can use to label an epic moment on our evolutionary timeline.
This ought to be a cause for celebration, if it weren’t for another moment on our evolutionary timeline when a similar migration took place from Africa to Europe and other parts of the world. This time, the migrants carried the AIDS virus with them.
Nobody is really sure when that migration began. But we all know that, because of it, we have a crisis on our hands today. Even if you live far away from Africa, given the rate at which the AIDS virus is transmitted across migrating populations, it won’t be long before it catches up with you.
For some strange reason, the source of the virus seems to be Africa. Scientists and doctors have identified a closely related virus in African monkeys and apes. But what they can’t explain is exactly how the AIDS virus jumped from these animals to humans, or when and why it jumped. Nor can they explain why the AIDS virus was unheard of before the 1980s, or how and why it reached epidemic proportions thereafter.
Is this a cause for alarm? It sure is. According to a UNAIDS report from December 2006, 25 million Africans are HIV-positive, 2.1 million die from AIDS every year, and 2.8 million are newly infected each year.
10 August 2007
Do we even care?
It’s interesting to note how the evolution of the human species, with an eye on the future, is a much-talked-about topic in the developed world. Even in India, I can walk into a bookstore and find several books on evolutionary biology by Western scientists, which is almost insignificant compared to the endless information I can get on the Internet… once again, all of it from, and on, Western scientists.
However, I can’t find any information on the future of the human population in the Third World, apart from volumes of population data, which seems to be growing by the day.
This makes me wonder whether the evolution of the human species is of any concern in the Third World. Considering that (a) the Third World holds 2 out of every 3 inhabitants of planet Earth, and (b) much of this population suffers from poverty, disease and illiteracy, one would guess that the evolution of the human species would be a top-of-mind topic for Third World governments and scientists.
And yet, when I search for information on this topic, I draw a complete blank. Not even an iota of thought from the Third World on what our future is going to be. When I asked a few friends, they (along with me) couldn’t name a single evolutionary biologist in India. When we talked of genetics, we could only talk of genetically-modified food and crops. And, in a connected field, we thought of Professor Jagadish Chandra Bose and his experiments with plants (he showed that plants responded to stimuli) some 80 years ago.
But, zero thoughts on the future of the human species! Living in India, amongst 1.2 billion people, I found this ignorance mind-boggling. Whatever we remembered were things we had read by Western scientists in books and in the media, or seen on TV on Discovery Channel or National Geographic or BBC World. Nothing Indian came to our minds.
Perhaps we Indians will not die of starvation (though many farmers and peasants still do). Perhaps we’ll battle it out with all the diseases and eradicate them as the West has done (though AIDS is a looming threat in India at the moment). Perhaps, soon, all Indians will be literate, educated, economically well-placed and enjoying a comfortable lifestyle. Perhaps that’s achievable in the next 20 or 200 years.
What about our evolution from the perspective of the next 2,000 or 5,000 years? Do we even care?
08 August 2007
Why Steve Jones has his fingers crossed
“Of course what’s happened is there’s been an enormous amount of evolution in the general sense since the earliest modern humans, but it hasn’t been biological evolution; it’s been social and cultural evolution. We’re the creature that’s evolved not in our genes but in our mind, and I think that’s what makes us genuinely unique. We do of course share 98.8% of our DNA with chimps, everybody knows that, but we’re not 98.8% chimp, we’re 100% human.”
– Steve Jones, Professor of Genetics, University College London, in a ‘Truth Will Out’ (an Open University & BBC project) interview.
Like most evolutionary biologists and geneticists, Professor Steve Jones is an avid follower of Charles Darwin. In 1999, Prof Jones wrote a book called ‘Almost Like A Whale’ (released in the US, a little too literally perhaps, as ‘Darwin’s Ghost’), in which he explained various Darwinian concepts of evolution. Naturally, ‘natural selection’ took centre stage in his book… as, I guess, Darwin would have expected.
In ‘Almost Like A Whale’, Prof Jones actually took chunks of text and examples from Darwin’s own seminal work, ‘The Origin of Species’ (perhaps that explains the US publisher’s desire for naturally selecting their title), mixing them with his own writing and modern-day examples of evolution and technology. In doing so, Prof Jones drew a lot of criticism from his detractors, but for a layman like me, who hadn’t read Darwin’s original work, ‘Almost Like A Whale’ made the theory of evolution more interesting.
However, what’s important about Prof Steve Jones is his strong stand against the ‘creationists’ of Christian faith (he had once given a no-nonsense presentation on ‘why creationism is wrong and evolution is right’ at The Royal Society of Edinburgh) and his belief that human evolution, from the perspective of genetic improvement through natural selection in the developed world, may have come to an end… or is, at least, slowing down. Is this good or bad? What does it mean?
Here’s what Prof Jones says in the same ‘Truth Will Out’ interview I quoted from earlier:
“All animals to some extent construct their environment, in that they choose to live where they feel comfortable. However, we can do it much better than anything else because we can alter the environment so we’re comfortable in all kinds of places. By virtue of that we’ve managed, at least in the developed world, to put to one side the challenges which most animals have to face. Many of us now live to the end of our biological lives, as long as we possibly could live, and that’s really very rare. I do think though there’s a severe danger of optimism. We are living absolutely on a knife edge – we can be comfortable for a long time, but it’s a very risky thing to be.
I mean, everybody’s heard about the AIDS epidemic and it’s probably more severe than most people realise, it’s really a large part of the human species is now going through the testing fire of natural selection, a Darwinian crucible. People are becoming infected with this illness, some of them have genes which render them able to deal with it, some can’t, those with the appropriate genes will pass them on, and that’s very much evolution by natural selection. Now, in the West we’re persuading ourselves that that’s not going to happen to us. I’m afraid I’m less optimistic, it may not be the AIDS virus because we understand that, but it may be something else. So when I say that evolution has stopped, I have my fingers crossed. It’s stopped for now.”
05 August 2007
The best it’s going to get
Here’s a thought:
“…human populations are now being constantly mixed, again producing a blending that blocks evolutionary change. This increased mixing can be gauged by calculating the number of miles between a person's birthplace and his or her partner’s, then between their parents’ birthplaces, and finally, between their grandparents’.
In virtually every case, you will find that the number of miles drops dramatically the more that you head back into the past. Now people are going to universities and colleges where they meet and marry people from other continents. A generation ago, men and women rarely mated with anyone from a different town or city. Hence, the blending of our genes which will soon produce a uniformly brown-skinned population.”
The above paragraphs are from a 2002 Observer article, titled ‘Is human evolution finally over?’, by Robin McKie, commenting on the future that awaits us all.
“However,” adds McKie quickly to his comment, “such arguments affect only the Western world – where food, hygiene and medical advances are keeping virtually every member of society alive and able to pass on their genes. In the developing world, no such protection exists.”
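McKie’s suggested calculation is simple enough to try for yourself. Here is a small Python sketch – using birthplaces I have invented for a hypothetical family, not real data – that computes the great-circle distance between the birthplaces of each couple across three generations:

from math import radians, sin, cos, asin, sqrt

def haversine_miles(p, q):
    """Great-circle distance in miles between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3959 * asin(sqrt(a))   # Earth's mean radius is roughly 3,959 miles

# Entirely made-up birthplaces for one hypothetical family
birthplaces = {
    "you":         (51.51, -0.13),   # London
    "partner":     (19.08, 72.88),   # Mumbai
    "father":      (53.48, -2.24),   # Manchester
    "mother":      (52.49, -1.90),   # Birmingham
    "grandfather": (53.76, -2.70),   # Preston
    "grandmother": (53.77, -2.69),   # the next street in Preston
}

couples = [("you", "partner"), ("father", "mother"), ("grandfather", "grandmother")]
for a, b in couples:
    miles = haversine_miles(birthplaces[a], birthplaces[b])
    print(f"{a} and {b} were born {miles:,.0f} miles apart")

Going back a generation at a time, these invented distances shrink from thousands of miles to less than one – the pattern McKie describes, in which the miles ‘drop dramatically the more that you head back into the past’.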
In the article about human evolution (in the Western world), Robin McKie explores some of the points of view expressed by scientists.
One, according to Professor Steve Jones of University College London, this is the best it’s going to get. Human evolution has, apparently, reached stagnation. “Things have simply stopped getting better, or worse, for our species.”
Another, by Professor Chris Stringer, of the Natural History Museum, London, states, “You simply cannot predict evolutionary events… Who knows where we are headed?”
Still one more, from biologist Christopher Wills of the University of California, San Diego, argues, “There is a premium on sharpness of mind and the ability to accumulate money. Such people tend to have more children and have a better chance of survival.”
“In other words,” clarifies McKie, “intellect – the defining characteristic of our species – is still driving our evolution.”
In the end, writes McKie, “Some scientists believe humans are becoming less brainy and more neurotic; others see signs of growing intelligence and decreasing robustness, while some, like Jones, see evidence of us having reached a standstill. All base their arguments on the same tenets of natural selection.”
So, where does that leave us? And, who can tell what evolutionary changes will dominate the developing world?
[Citation: ‘Is human evolution finally over?’, Robin McKie, Observer, 3 February 2002, Guardian Unlimited archive.]
02 August 2007
As good as Gould
We often use cultural stereotypes in everyday conversations to make a point. Take, for instance, a commonplace concept like ‘bigger is better’ – the idea that bigger size is a symbol of dominance – used to describe social, political or economic superiority. We use such concepts even from an evolutionary point of view, suggesting that the bigger we are, the better we are at fighting for food and mates. After all, human society is living proof of that, right?
Not so, said (the late) paleontologist and evolutionary biologist Stephen Jay Gould, Harvard professor, curator for invertebrate paleontology at Harvard University’s Museum of Comparative Zoology, and author of such famous books as ‘The Panda’s Thumb’, ‘The Mismeasure of Man’ and ‘The Flamingo’s Smile’. To Professor Gould, who passed away in 2002, the notion of cultural stereotypes had a completely different meaning.
Since he took a somewhat macro view of life on our planet – a perspective spanning millions of years and countless life forms – Professor Gould believed that ‘bigger is better’ is not only incorrect but, in fact, a prejudicial view that humans flaunt to honour themselves. To prove his point, he would give the example of bacteria, which are far superior to humans in terms of biochemical diversity and the range of environments in which they can survive.
From my reading about him and his books, what I value most about Professor Gould is his fairness in recognising, and evaluating, all life forms on our planet equally. A quality so few of us have.
In a Biography magazine interview with Curt Schleier in March 1998, Stephen Jay Gould had this to say: “After all, there are about a million named species of animals, of which humans are only one. There are only 4,000 species of mammals. There are almost a million species of insects. We’re just not a very prominent group. We’ve had a very great impact on the planet, but you can’t confuse impact with the status of an organism.”
[Citation: Curt Schleier, “Stephen Jay Gould: Was it Survival of the Luckiest?” Biography magazine, March 1998.]
01 August 2007
Nature, nurture
Even as I write this blog post, biologists, neuroscientists, psychologists, sociologists, religious leaders, parents and others across the globe argue to establish their points of view on gender differences and why they occur. Some provide biological explanations of human behaviour; some propose cultural stereotyping; some talk about evolutionary programming as God’s will and testament.
Whatever the real reasons may be, I can’t deny the fact that men and women are different. That, when it comes to men and women, and boys and girls, their abilities, their responses to everyday stimuli, and the applications of their bodies and minds differ from one another according to their genders.
It’s still a mystery to me whether women have a nurturing nature because, as girls, they learn to play with dolls early in life. Or whether men take to fast cars and spaceships because, as boys, they grow up playing with toys they keep throwing around their rooms and playgrounds, or at each other. Not to mention the fact that rough play is as common among boys as tenderness is among girls.
There are, of course, undeniable natural biological differences between the genders. After all, in any given population, on average, men are taller, heavier built and more powerful (in, say, throwing a rock, lifting heavy loads, or running fast) than women. I guess these abilities naturally define some of the job roles that men and women take up in any society.
I also believe we are defined by our environment and our upbringing, and not by nature alone. To that extent, cultural stereotyping does define many gender roles in our society. Which means, on a positive note, that nurturing certain abilities in us can prepare us for many job roles which have traditionally been attributed to a specific gender… and break the myths associated with cultural stereotyping.
An example of this would be space travel, a domain which had been exclusive to men – they were believed to possess ‘the right stuff’ – but has since welcomed women astronauts like Sally Ride (the first American woman in space, in 1983) and, more recently, Sunita Williams. It’s interesting to note that the Russians had broken this stereotype way back in 1963, when they sent Valentina Tereshkova into space aboard Vostok 6. So, perhaps, nurture can make a difference we are normally unaware of.
Regardless of our beliefs, the nature versus nurture debate still continues. I remember reading somewhere that internationally-renowned paleontologist and evolutionary biologist Stephen Jay Gould had always despaired over the question of nature versus nurture. Professor Gould believed that biology and environment are so inextricably linked that such debates were meaningless.
30 July 2007
Gender in the brain
There are fewer women physicists, architects and neurosurgeons in the world than male ones. And women chess grandmasters or video-game fanatics are hard to come by. Why is this so? Well, according to research, one reason is that male and female brains differ genetically.
According to a Discover Magazine article (‘He Thinks, She Thinks’) I referred to in my previous post, the human brain functions differently in men and women. For instance, while men have a greater ability to focus intently on work and tune out distractions (ideal for winning at chess), women are better than men at handling language and at verbal and memory tasks.
Although this may explain why men and women excel in specific professions, genetic coding in the brain is not the only reason for gender bias or stereotypes. According to psychologists and social scientists, environmental – i.e. societal and cultural – coding is as important as our genetic make-up. In fact, our culture actually sensitises us to certain belief systems which reappear at the workplace.
Consider, for instance, belief systems which we have all grown up with: ‘a woman’s place is at home’ or ‘men are better at maths and science than women’. Or, take the matter which was commented upon in one of my earlier posts: women losing out in job negotiations due to their non-aggressive nature. These belief systems actually affect our behaviour and performance even in benign situations.
Here’s a case in point as discussed in an article, ‘Think Again: Men and Women Share Cognitive Skills’ from www.psychologymatters.org:
“In a 1999 study, Steven Spencer and colleagues reported that merely telling women that a math test usually shows gender differences hurt their performance. This phenomenon of “stereotype threat” occurs when people believe they will be evaluated based on societal stereotypes about their particular group. In the study, the researchers gave a math test to men and women after telling half the women that the test had shown gender differences, and telling the rest that it found none. Women who expected gender differences did significantly worse than men. Those who were told there was no gender disparity performed equally to men. What's more, the experiment was conducted with women who were top performers in math.
Because “stereotype threat” affected women even when the researchers said the test showed no gender differences – still flagging the possibility -- Spencer et al. believe that people may be sensitized even when a stereotype is mentioned in a benign context.”
The moral of the story is that wherever these belief systems exist, and are predominant, we tend to automatically go on the defensive. Particularly in situations in which we are being evaluated: job interviews, appraisals, school and college exams, or the sports field. Sometimes, we are so sensitised by these belief systems that, even though they are untrue, we prefer to conform to them rather than challenge them. It is as if our brains are hardwired to perform in a predictable manner.
27 July 2007
Gender differences
“Like many-handed Hindu goddesses, women are better jugglers (in my experience), sweeping through their lives performing several tasks at once, while men seemingly do things sequentially—a division of labor that certainly prevails in our household.”
– Linda Marsa, in her article, ‘He Thinks, She Thinks’, Discover Magazine, July 2007
Genders differ – not just physiologically, but mentally and emotionally as well. This, as you can guess, is not a profound statement by me, but rather a reflection of all the scientific research that is available and of my own experiences in life. While reading up on gender differences in the workplace earlier this month, I came across an article on the Internet, from Discover Magazine, which deals with the issue of gender differences from a scientific perspective.
The article, ‘He Thinks, She Thinks’ by Linda Marsa says, for instance, “men process their strong emotions differently from women and tend to act out and take action…” Ms Marsa, then, goes on to examine why this is so. Apparently, “findings offer tantalizing hints that even gender behavior differences once attributed solely to nurture—women are more emotionally attuned, while men are more physically aggressive—stem in part from variations in our neural circuitry.”
I quote from the article:
“The brain is divided into two hemispheres that play different roles in perception and behavior. The right side is relatively more involved with visual and spatial control, while the left is the seat of language. There is evidence that the male brain uses either one hemisphere or the other and relies on specialized brain regions when performing a task. Women, meanwhile, call on both hemispheres regardless of the task, resulting in greater communication between the two; they also enlist more brain regions to process information. When at rest, male minds appear to be more attuned to the “external world,” while there may be a “differential tilt toward the internal world” in female brains, says Larry Cahill, a neurobiologist at the University of California at Irvine.”
Got all that? Why not read the entire article online on Discover Magazine?