historical stuff by Gareth Millward

2003 – The Iraq War Protests

20/07/2015

15 February 2003 – Various

Despite the numbers, the war went ahead anyway. The images over the following years became almost as iconic as those of the millions marching through London and other cities. Bush in front of the “Mission Accomplished” sign; the toppling of the Saddam statue; the subsequent trial and execution. The question is, then – what was the fucking point?

The protest failed to achieve its main goal, but it is beginning to be historicised into a wider narrative of mass protest and voluntary action. It was in many ways one of the first “internet” demonstrations, with millions of protesters brought together through digital technologies such as e-mail and websites. (This is before Facebook and Twitter. But more on these in upcoming weeks). Movements such as Occupy may have had similar headline “failures”, but they have acted as a focal point for protest against the dominant neo-liberal political framework in the Western world.

Indeed, the breakdown of the party systems in Britain and America especially has made this sort of extra-Parliamentary form of protest increasingly potent and necessary. For while the Labour Party and Conservative Party differ on a number of ideological points, many of the key decisions about how to run foreign and domestic affairs have converged. Neither supports nationalisation of key public services; both believe in a strong military, including a nuclear arsenal; both play the realpolitik game of getting close to dictatorships in various parts of the world in return for good arms contracts and a steady supply of oil. Crucially, both supported the Iraq War, even if there were dissenting members from the parties at the time and subsequently.

This has been going on for a while, however. Voluntary organisations and charities have always been politically driven – you cannot set out to solve a social problem without taking a political position. While many of the larger institutions have, in the past, steered well clear of party politics, there has often been a direct or indirect moral cajoling of those in local and national government to enact policies that will help charities get on with their vital work.

In the 1960s, however, we began to see more assertive groups coming forward. Charities that did not necessarily provide services themselves, but deliberately spent their money on researching the social problems of the day and lobbying the government to fix them. The Child Poverty Action Group, Disability Income Group, Shelter and many others appeared during this time. They were willing and able to use the growing mass media to present their cases in increasingly sophisticated ways. And, to varying degrees, they have had success with governments of both parties right across the late twentieth century and into the twenty-first.

The growing professionalism of those groups in this new political climate, however, meant that they became specialised. Social campaigners may have had many concerns, but the charities themselves were often very narrowly focused. The big questions – traditionally the preserve of the political parties – were beginning to be diffused through a range of institutions and organisations, few of which would ever hold direct power in Westminster or City Hall.

The Iraq protest, then, represented more than just the war. For many, it was the first time in a generation that people had been able to broadly agree on a particular action and – crucially – had the tools to mobilise quickly and effectively across the world. Occupy, and the struggles of the 99% have been similarly branded. They represent growing disquiet on, predominantly, the political left with the party system and the post-war levers and apparatus that are supposed to impose democratic will on the law of the land. That they have been unsuccessful may say more about the increasing distance between the machinery of government and the people than it does about the protesters themselves.


2002 – The Salt Lake City Winter Olympics

13/07/2015

My mother once told me of a place,
With waterfalls and unicorns flying.
Where there was no suffering, no pain.
Where there was laughter instead of dying.
I always thought she’d made it up,
To comfort me in times of pain.
But now I know that place is real,
Now I know its name.

~ The Book of Mormon

Why wouldn’t you want to hold a Winter Olympics in Salt Lake City, Utah? Where the warlords are friendly and the goat meat is plentiful? Well, we know why you would hold an Olympics there – flagrant corruption.

The Salt Lake City Olympics bidding process lifted the lid on the systemic nepotism and bribery within the International Olympic Committee (IOC) and its systems for awarding Games to host cities. It resulted in a number of reforms to clean up the process and the IOC’s reputation. Thankfully, nothing like this will ever happen again…

In a completely and utterly unrelated story:

It can be difficult sometimes to justify to people who don’t like sport just why I spend so much of my time watching it. Even if you can explain the attraction of watching people compete and the narratives that flow from that, how exactly do you explain away the rampant commercialism, corruption, sexism, racism, homophobia and various other unsavoury aspects of so many sports and their governing organisations?

Corruption in sport is – shockingly – not new. The “old boys’ networks” from the old British Public School system helped establish the rules for a number of sports in the late nineteenth century, from various versions of football to tennis and beyond. This was part of a Victorian desire to rationalise and standardise sport across the globe, so that everyone was playing by the same rule books. Sport was part of a masculine1 and Christian ideal, supposedly requiring and encouraging self discipline and athletic prowess. By the end of that century and the beginning of the twentieth, international sporting organisations popped up to express these ideals through nationalistic competition.

That nationalism was a key tool for regimes across the twentieth century, some authoritarian, some democratic. Italy “borrowed” a number of soccer players from Argentina to win the 1934 and 1938 FIFA World Cups (and may have slipped a coin or two in the pockets of the referees for good measure). The Nazis made a big play to host the 1936 Olympics. After 1945, the USSR and USA used a number of sports, mostly Olympic, to play out the Cold War in the sports stadium. Soccer teams have been integral to the post-1991 independent states in Eastern Europe and their national identities.

This explains why, say, Salt Lake City was keen to have a Winter Olympics. The eyes of the world would be on them, giving the place legitimacy. It’s why Qatar wanted a FIFA World Cup. It’s why the English were so angry when they didn’t get it.

There is another side to the corruption coin, however, which makes uncomfortable reading for countries who were part of that initial elite 100 years or so ago. Because the rest of the world has had to fight hard to be taken seriously alongside the traditional nations of Western Europe and North America. It took until 2002 before a FIFA World Cup went to Asia; 2010 before one went to Africa. We’re yet to have an African Olympics, Summer or Winter. And we’re a year away from the first ever South American one.

In the case of FIFA, the English/European oligarchy was swept aside in the 1970s under the leadership of João Havelange from Brazil. He appealed to the traditionally neglected nations, and built a large power base outside Europe. In some ways, this was a way to finally wrest control from the closed shop in Europe. But it was built on giving bonuses to officials with… questionable ethics. No doubt, football investment has improved dramatically in Africa and Asia. But how much more would it have improved if officials weren’t trousering so much of the money?

Now, of course, Sal Tlay Ka Siti was in the United States – not exactly a sporting backwater. But it had been repeatedly overlooked. They believed the reason for this was that they weren’t wining and dining officials as well as their rivals. They may have been right. Though their solution wasn’t. They decided to bribe their way to the Olympics.

Perhaps it was right. They got the games, after all. But it nearly brought down the IOC and resulted in widespread reform.

There’s a question that international sport really needs to tackle, then. It doesn’t want corruption. At the same time, the West isn’t willing to give up its power. The arguments that other nations are not mature enough to be involved, economically or in terms of their business practices, can only go so far. How can they get better if they are never invited to the table?

Similarly, we cannot condone corruption simply because it allows non-traditional nations a shot at the big time. Qatar shouldn’t have a World Cup for the same reason it shouldn’t have a Winter Olympics. The climate isn’t right, the human rights record should see it kicked out of the international community, and it will help none of its citizens; it’s a vanity project for an absolute monarchy trying to buy credibility and prestige with the West.

People in the non-traditional countries deserve more from FIFA, the IOC and their ilk. More than that, though, they deserve more from the people who supposedly represent them.

Title image from Wikicommons.
  1. Although women were similarly encouraged into sport for the good of the race, especially with eugenic theories running wild through the period.

2001 – September 11

06/07/2015

xkcd (Source)

The terrorist attacks of “9/11” were horrific. The sheer scale of the damage, the cultural significance of the targets, and the fact that this exposed the vulnerability of “the most powerful nation on Earth” made most of the Western world uneasy. Whatever the geopolitical position in the Middle East, whatever the West’s role in the historical and continued instability in the region, the hijackings were barbaric acts.

I have already discussed terrorism and racialised attitudes towards it in this series. And while I could probably go on at length on the subject, there is another aspect of the attacks that piques my interest. The endurance of the conspiracy theory.

Of course, 9/11 wasn’t the first event to generate an industry of writers and enthusiasts spreading their own particular hypotheses to explain major events. JFK and the Apollo XI Moon landing come to mind immediately. Then there are various “interesting” positions on vaccination or aircraft vapour trails. And we still find people who believe the Jews run the world, or the Catholics run the world, or the Illuminati run the world, or the Freemasons run the world, or (that’s enough, Ed.)

My own work recently has had to deal with the vaccination issue. And this has been fascinating, partially because it involves so many different positions on what is largely the same base of evidence. It includes everyone from the hardcore “anti-vaxxers” to the hardcore “pro-vaxxers” – and, somewhere in between, individuals and parents who actively or passively do not get their children vaccinated without really expressing an opinion that survives in the historical record.

So this isn’t about 9/11. It’s a rant.

One of the reasons that the vaccination conspiracies have attracted so much opinion recently is that they have very tangible results. We can see the vaccination rate go up or down; we can see the disease notification rates fluctuate. And it is one of those group behaviours which, we believe, might affect us. Whether another person (or group of people) chooses to vaccinate can lead to a greater risk of disease for others. Or so the “pro-vaxxers” would have you believe. (At this point the author dons his tin-foil hat.)

Ben Goldacre, a science writer, has written about “vaccine scares” in his popular books on Bad Science.1 He notes that these theories – about, for example, hepatitis vaccines causing multiple sclerosis, worries over mercury, or the relationship between MMR and autism – have tended to respect national boundaries.2 And while for the most part he is correct, these ideas did spread (albeit more slowly) in the pre-internet age. The scare over pertussis (whooping cough) vaccination, for example, had pretty much burnt out in England before it flared in the United States; although there was a contemporary (yet completely separate) issue with the vaccine in Japan.3 It took a number of years for Australia and the US to catch on to Wakefield and MMR (despite his work having been discredited), and the UK has never really got interested in thimerosal (mercury compounds). In the internet age, however, ideas spread more quickly, as people are more easily able to find individuals with similar views, and in turn to share information which confirms these beliefs.

Let’s be clear, however – pro-vaccine people do similar things. The vast majority of those commenting on vaccination (like the vast majority of the world’s population) are not medically trained in the areas of epidemiology and immunisation. This doesn’t make their opinions invalid, but it does make claims about scientific “truth” very difficult to trust. Vaccines are complicated. The science underpinning them is complicated. I would have to contextualise the opinion of, say, a brain surgeon opining on MMR. Much like – even though I have a PhD in history – my views and pronouncements on pre-industrial Oviedo should probably be taken with a pinch of salt. The difference is that overwhelming medical opinion supports vaccination as safe and effective. The real question is – how “safe” is “safe”; and how “effective” is “effective”?

No vaccine is 100% effective. It significantly reduces your chances of getting a disease, which in turn significantly reduces your chances of passing it on. Especially if everyone around you is also vaccinated. After a few generations, theoretically, a population can rid itself of the disease, as the pathogen finds fewer and fewer hosts to infect. This concept of “herd immunity” is a well-established one, even if it is only in recent (historically speaking) times that we have been able to develop statistical and epidemiological models to predict the impact of vaccines on a population.
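
For readers who like to see the arithmetic, the simplest of those models gives a rule of thumb: if each case infects R0 others in a fully susceptible population, transmission stalls once more than 1 − 1/R0 of the population is immune. The sketch below (my addition, not from the original post) illustrates that formula only; the R0 figures are rough, illustrative ballpark values.

```python
# A minimal sketch of the standard herd immunity threshold formula:
# spread stalls once more than (1 - 1/R0) of a population is immune.
# The R0 values below are rough, illustrative figures.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that needs to be immune to halt spread."""
    return 1.0 - 1.0 / r0

illustrative_r0 = {"measles": 15.0, "whooping cough": 14.0, "polio": 6.0, "seasonal flu": 1.5}

for disease, r0 in illustrative_r0.items():
    print(f"{disease:>15}: R0 ~{r0:>4} -> ~{herd_immunity_threshold(r0):.0%} immune")
```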

And no vaccine is 100% safe. Any procedure – indeed, anything – carries risk, from opening a can to driving a car. Of the billions of vaccine doses administered, a tiny fraction have resulted in injury. Health authorities know many of the contra-indications which might cause this, and attempt to avoid the complications. But mistakes happen. That is no comfort to the families affected, but it has meant that over the course of the twentieth century the death and injury toll of TB, polio, smallpox, diphtheria, tetanus, measles, whooping cough, mumps, rubella and others has been significantly reduced.

This gives the conspiracy theorists their “in”. Because there are thousands of cases of vaccine damage to point to. Each individual one is a devastating tragedy to that family. There are millions who have been vaccinated, yet caught the disease anyway. Each one makes an individual wonder whether the pain was worth it. And, of course, medical and public health science had become more effective at preventing and curing illnesses from the late nineteenth century. Who is to say that vaccines have caused the final drop in cases? Couldn’t it just be coincidence? Aren’t we just exposing children to unnecessary risk?

The answer is, of course, peer-reviewed data and analysis. It’s the mass trials conducted by many countries over the course of the past hundred years. It’s the data we have of disease rates in areas where vaccination rates drop. It’s the control experiments and analyses that seek to eliminate other causal factors. Nobody serious would claim vaccination was the only reason for the elimination of smallpox. Nobody serious would claim that it wasn’t a significant factor.

There are two aces up the sleeve of the conspiracy theorists, however, which keep the debate alive. The first is to lie, or seriously misrepresent data. To make claims like “nobody has ever tested vaccine safety or efficacy”. They have – just look at the Medical Research Council’s trials of pertussis,4 polio and TB5 vaccines as a starting point. While none is without its problems, it is a flat out lie to suggest they never happened.

The second is to deny the relevance of these findings on the basis that “Big Pharma”™ makes money off the back of vaccines, or that the government wants to exert control. This seems to suggest that if people have ulterior motives, what they say cannot be true, regardless of their evidence. By that logic, we would also have to discredit those selling alternative therapies to protect people from disease, since they have a vested interest in making you doubt biomedicine. But that’s a debate for another time.

This argument also falls down under its own logic. For a start, vaccines are not a big money maker for pharmaceutical companies compared to their overall turnover. While they are becoming a bigger part of pharmaceutical companies’ strategies – due to emerging markets in the developing world and increased difficulties bringing new drugs to market – in 2013 the vaccine industry was worth an estimated $24 billion.6 Yet the pharmaceutical industry as a whole is valued at over $980 billion.7 Besides – why would a drug company want to cure you of a disease whose complications it could otherwise sell you drugs to treat?

These arguments build on those made by historians of medicine over the past few decades about the need to question scientific truth claims by authorities. There are countless examples of the power of the medical profession and the increasingly medicalised state interfering in the lives of its citizens. But there is a fundamental flaw in taking this work – meant as a critique of the social, political and economic structures of power – and applying it simply to back up another faulty truth claim about the material reality of the universe. Just because science, scientists and the scientific method are bound up in power structures and the interests of the capitalist state doesn’t make all (or even most) of their conclusions invalid. As humanities scholars we can debate their relevance, but we don’t have the tools to deny their scientific merits. Especially if you are going to appeal to rational science as your basis for anti-vaccinationism.

Credit: Wellcome Library, London. Wellcome Images. (Source)

But if you’re going to lean on my discipline, let’s try this. Let’s assume vaccination is part of a Foucauldian panopticon. It monitors citizens, bringing all children under the surveillance of the state, where their behaviour is controlled through the universal administration of drugs designed to increase the productivity of the whole. Its purpose is at once to exert state power and to discipline the individual into believing that she has an obligation to herself and others to maintain her health for the good of the nation state. Let’s just say we buy that (and I probably do, on an academic level).

Why would the state continue to support a system that (supposedly) injures its citizens, rendering them and their parents’ nuclear family economic unit less productive? The state has a vested interest in supporting a system it helped create in order to save face. But it has a bigger vested interest in finding the most efficient and safest vaccines so that its citizens grow up to be net producers, not drains on the system.

There are legitimate questions to be raised here on the moral and political level. Is one death from a vaccine one death too many? Is it right that the state should compel people through law or societal pressure to undergo a medical procedure? Fine. We can sit and have that debate. But you don’t get to make up scientific data or ignore the mountain of evidence which contextualises or repudiates your claims.

  1. In the interests of transparency, Goldacre is, like me, a research fellow at the London School of Hygiene and Tropical Medicine. Unlike me, he has medical qualifications and is far more successful. I have never met the guy. I’m sure he’s lovely/a corporate shill (delete as applicable according to personal taste). http://evaluation.lshtm.ac.uk/people/members/ (accessed 5 July 2015).
  2. Ben Goldacre, ‘How vaccine scares respect local cultural boundaries’, Bad Science(24 April 2013) http://www.badscience.net/2013/04/how-vaccine-scares-respect-local-cultural-boundaries/ (accessed 5 July 2015).
  3. Jeffrey P. Baker, ‘The pertussis vaccine controversy in Great Britain, 1974–1986’, Vaccine 21(25-26) (2003), 4003-10.
  4. ‘Vaccination against whooping-cough’, BMJ 4990 (25 August 1956), 454-62.
  5. ‘B.C.G. and vole bacillus vaccines in the prevention of tuberculosis in adolescents’, BMJ 4964 (25 February 1956), 413-27.
  6. Miloud Kaddar, ‘Global vaccine market features and trends’ World Health Organization http://who.int/influenza_vaccines_plan/resources/session_10_kaddar.pdf (accessed 5 July 2015).
  7. Statista, ‘Revenue of the worldwide pharmaceutical market from 2001 to 2013 (in billion U.S. dollars)’ http://www.statista.com/statistics/263102/pharmaceutical-market-worldwide-revenue-since-2001/ (accessed 5 July 2015).

2000 – The Millennium Bug

29/06/2015

1 January 1900 – Earth

Technology can become obsolete. That in itself doesn’t seem to be too contentious a statement. Just ask coopers, smiths and thatchers how business has been going lately. But the furore over the “millennium bug” or “Y2K” showed just how dangerous this can be when every major administrative system in the world relies on an outmoded date format.

In early computing, both memory and processing power were at a premium. It is estimated that one bit cost one dollar in the 1960s.1 There are 8 bits to a byte. Storing a four-digit number requires two bytes; a two-digit number needs only one. That’s a saving of $8 per date. Since most calculations would be for dates within the 20th century, adding the “19” to the beginning of the year seemed redundant. It became convention in a lot of software to simply write my birthday as 10/10/85. Which is fine.

However, software used by air traffic control, international banking and national security took these dates to calculate a number of things. Something as simple as the day of the week becomes complicated once you move over the century boundary. 12 June 1914, for example, was a Friday. 12 June 2014 was a Thursday. And in 1814, it fell on a Sunday.
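
If you want to check those weekdays for yourself, a couple of lines of Python (my addition, not part of the original post) will do it, since the standard datetime module handles the Gregorian calendar correctly across century boundaries:

```python
# Verify the day-of-week claims above using Python's standard library.
from datetime import date

for year in (1814, 1914, 2014):
    print(year, date(year, 6, 12).strftime("%A"))
# 1814 Sunday
# 1914 Friday
# 2014 Thursday
```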

There are other things that could go wrong, too. Say I wanted to book an appointment seven days after Friday 25 February 2000. No problem – I’ll see them on Friday 3 March. But what if the computer reads 25/02/00 as 1900? Well, that’s a problem. Because 1900, unlike 2000, wasn’t a leap year, so counting seven days forward lands on 4 March – which in the real year 2000 is a Saturday, not the Friday I was expecting.
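
Here is a toy illustration of that trap (again my sketch, not the post’s): a naive system that expands “00” to 1900 skips the 29 February that the year 2000 actually had, and so drifts onto the wrong date.

```python
# Sketch of the two-digit-year trap: adding a week from 25/02/00 gives a
# different answer depending on whether "00" is read as 2000 or 1900.
from datetime import date, timedelta

real = date(2000, 2, 25)    # Friday 25 February 2000 (a leap year)
naive = date(1900, 2, 25)   # the same "00" date read as 1900 (not a leap year)

print((real + timedelta(days=7)).strftime("%A %d %B %Y"))   # Friday 03 March 2000
print((naive + timedelta(days=7)).strftime("%A %d %B %Y"))  # Sunday 04 March 1900
```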

Retrofitting old software and data to include the correct date formats and calculations was not a trivial exercise. We spent an estimated $300 billion to fix the problem. In the end, it may not have even been that big a deal.2 But it shows how decisions made out of convenience or financial necessity could come to create problems for future generations who get locked into a particular format.

The most famous example of this kind of lock-in – what economists call “path dependence” – comes from Paul David, who described how the QWERTY keyboard came to dominate the Anglophone world. First, it was used for mass-produced typewriters. Then, as typists were trained to work quickly on these machines, any potential benefits from a more ergonomic layout were negated by the massive short-term costs of retraining everybody to use a new system.3

If you’ve ever tried typing an e-mail back to your parents using your French exchange family’s AZERTY monstrosity, you’ll know just how this feels.

Human rights abuse. (Source)

Historiographically, the idea of path dependence is an interesting one. Certainly, you could apply it to bureaucratic and operational decisions made by businesses, government or the collective action of societies.4 The recent furore over Universal Credit, for example, shows that while there may have been the political will to produce a more rational and consistent benefits system, the existing web of payments and tax credits is unfathomably costly to untangle.5

The current social security system is an accident of history. After being overhauled in the 1940s, new schemes have been added as society has changed and holes in the safety net have been identified. No doubt, a single benefit, administered using an overarching computer system and bureaucratic machinery, would be more efficient than what we have now. But if the cost of change – to both the government in terms of infrastructure and claimants in terms of lost income – is higher than the potential gain, it can cause a political crisis. One might argue there is a reason why no other government has been so… “brave”… in its overhaul of the Department for Work and Pensions.

Despite being a massive part of the late 1990s news cycle, Y2K never really caused that many problems. Like much else with the dawning of the new millennium, the really exciting part was that Western society was approaching a nice round number. It’s like watching your odometer tick over to 100,000, or getting 100 lines in Tetris. Objectively, it means little. But there’s something nice about hitting a milestone.

Still. It’s a helpful reminder. However you design any system, eventually it will start to creak and groan under its own contradictions. But fixing it properly may end up being more costly than patching it up with string and sticky back plastic.

  1. ‘Year 2000 problem’, Wikipedia < https://en.wikipedia.org/wiki/Year_2000_problem > (accessed 28 June 2015).
  2. ‘Y2K: Overhyped and oversold?’, BBC News 6 January 2000 < http://news.bbc.co.uk/1/hi/talking_point/586938.stm > (accessed 28 June 2015).
  3. Paul A. David, ‘Clio and the economics of QWERTY’, The American Economic Review 75(2) (1985), 332-7. (Copy available at http://www.econ.ucsb.edu/~tedb/Courses/Ec100C/DavidQwerty.pdf as of 28 June 2015.)
  4. Paul Pierson, ‘Increasing Returns, Path Dependence, and the Study of Politics’, The American Political Science Review 94 (2000), 251-67.
  5. Asa Bennett, ‘Iain Duncan Smith’s Universal Credit Could Cost More Than Planned, Warns Think-Tank’, Huffington Post (9 September 2014) < http://www.huffingtonpost.co.uk/2014/09/09/ids-universal-credit-welfare_n_5789200.html > (accessed 28 June 2015).

1999 – The Columbine High School Massacre

22/06/2015

20 April 1999 – Columbine

Last week, it was a fortunate coincidence that I had planned to write about Google the weekend after getting back from a conference on the history of the Web. This week, it’s utterly depressing in the wake of the Charleston shootings that I should be writing about Columbine.

School shootings are – thankfully – a rare event in Europe. The Dunblane shooting in 1996 was a reminder that these things do happen, though not with the regularity with which they have plagued the United States in recent years. When Dunblane happened, I was 10 years old; with Columbine I was 13. Of course, the chance of being caught up in one of these tragedies was infinitesimally small. But that didn’t stop kids of my age (and, probably, our parents even more) worrying about this sort of thing.

The memorial library at Columbine High School, built after the massacre. (Source)

Columbine is now over 15 years ago, and yet mass shootings continue across the United States. Colorado had another incident as recently as 2012, when a gunman attacked a screening of the final Christopher Nolan Batman movie. Sadly, as Jon Stewart put it:

I honestly have nothing other than just sadness once again that we have to peer into the abyss of the depraved violence that we do to each other and the nexus of a just gaping racial wound that will not heal, yet we pretend doesn’t exist. And I’m confident, though, that by acknowledging it, by staring into that and seeing it for what it is, we still won’t do jack shit.1

I could take this opportunity to look at what it is historically about American culture that allows this to keep happening. That particular topic has been wrung to death over recent days. But we can revisit the greatest hits! Easy access to guns gives people opportunity. The glorification of firearms creates a fantasy culture that those with dangerous personalities get lost in. Not enough support networks exist for those with mental health issues. Racism is still rife. A large section of the community celebrates the names and flags of white supremacists from the nineteenth century. Others hide behind a constitutional amendment designed to allow the country to defend itself. All good stuff, to be repeated ad nauseam next time this happens.

Because, let’s not kid ourselves. There will be many next times.

The historical attachment to gun ownership in America makes sense within a narrow historical interpretation. The country was founded as a modern experiment in liberalism. Individual property was to be protected from the intrusion of the state, and individuals were free to follow their own religious and political lives providing these did not impinge on the rights of others to do the same.

One of the important pillars of this concept was the Constitution – a set of rules governing the government. The state would not be allowed to interfere in the private lives of its citizens except within strict laws of conduct. In order for that to happen, the appropriate levers needed to be created to allow the people to kick out those who would abuse the system.

In the ideal world, of course, this would happen with regular elections to the Senate, House of Representatives and the Presidency. But if those failed, the people needed to be able to defend themselves against injustice: hence, the Second Amendment. The right for people to form militias to protect against tyranny, domestic and foreign. Especially foreign. Those dirty Brits might come back one day…

The idea that firearms will protect the individual from tyranny continues in US politics. See this cartoon, reproduced on NaturalNews.com. (Source)

Within the context of the late eighteenth century, this made perfect sense. This was a new country, just starting to form the sort of economic and political infrastructure required to defend itself and to provide a mature, accountable democracy. As time has gone on, however, history has left this world behind. First, the United States now has a wide network of military and police bases, with a highly developed justice system and Supreme Court. While these institutions do not work anywhere near as well as Americans would like, there are myriad constitutional forms of protection and of law enforcement. If these fail, there are also myriad avenues for challenging this power.

Second, firearms are far more deadly and far more prevalent than anyone in the eighteenth century could have assumed. Individuals can, realistically, pull together the firepower to form a militia that would rival that of many middle-income nations.

To many in the States, however, that paranoia – a vital defence mechanism 200 years ago – remains. And it has blended with a fetishisation of guns and a deep mistrust of the federal government. Many believe a gun makes them safer, even though you are more likely to be killed by a gun if you own one.2 In the Southern states, you can combine all this with a nagging suspicion that the economic and political dominance of Northern states and California means that “liberals” are trying to impose a political way of life upon the states that once tried to secede from the Union.

On this historical reading, then, guns are justified because governments can never be trusted to operate within the law. At any moment, the Federal government could seize your property (including your guns).

To Europeans, this sounds like utter nonsense. And, increasingly, it is being ridiculed by middle America too. But just because the Second Amendment is an anachronism doesn’t make it any less potent to many. In fact, when one of the major forces in American politics is named after an act of sabotage in Boston harbour, its roots in the eighteenth century make it even more relevant.

Columbine will happen again. And it will keep happening until those most wedded to gun culture understand that they are being manipulated far more by the arms industry and vested capital interests than they are by the Federal government. For it is the legal framework and protection offered by a government – constrained by the rule of law – that will ultimately make America a safer place.

That’s going to take a long time. Because such a collectivist attitude, relatively common in Europe, is anathema to the individual-rights approach at the heart of American politics and history. And we should be honest – collective trust in government hasn’t always worked out so well this side of the pond.

Until then, America will continue to tell the rest of the world – mass shootings are the price we pay for freedom.

Addendum

Last night, a friend posted this to Facebook. If you want a more sweary and entertaining version of the above, see below:

  1. Jon Stewart on The Daily Show, broadcast on Comedy Central. Transcript from ‘Read Jon Stewart’s blistering monologue about race, terrorism and gun violence after Charleston church massacre’, Washington Post, 19 June 2015 < http://www.washingtonpost.com/blogs/style-blog/wp/2015/06/19/read-jon-stewarts-blistering-monologue-about-race-terrorism-and-gun-violence-after-charleston-church-massacre/ > (accessed 21 June 2015).
  2. Linda L Dahlberg, Robin M Ikeda and Marcie-jo Kresnow, ‘Guns in the home and risk of violent death in the home: Findings from a national study’, American Journal of Epidemiology 160(10) (2004), 929-36.

1998 – Google, Inc.

15/06/2015

4 September 1998 – Menlo Park

Hoover, Xerox and Coke have come to mean vacuum cleaner, photocopier and cola in colloquial English. Such was the success of those brands, either as inventors or popularisers of day-to-day products, that we use their trademarks more than the more generic terms; even when Vax, Epson and Pepsi are available.

Google is another of those brands. It is the world’s most used search engine, accounting for 88% of the planet’s searches.1 Yet that isn’t primarily what Google does any more. It offers a range of services and collects mind-blowing amounts of data, leading many to criticise its dominant position on the internet and World Wide Web.

The Google search page as it looked around the time Google was incorporated.
(http://google.stanford.edu/ [The Internet Archive, site captured 11 November 1998])

The Web is full of information, but it’s relatively useless if you can’t find anything. There are a number of ways you can find stuff, but it generally boils down to one of three things:

  • someone recommends a site to go to;
  • you follow a link on an existing site; or
  • you search for a particular topic in a search engine.

In the late nineties, search wasn’t particularly sophisticated. The main providers would maintain large databases of websites, and then would provide the user with results based on how often a search term appeared. (To very crudely summarise the technology.) Students at Stanford, however, wondered whether an algorithm could deduce relevance by monitoring how often authoritative websites linked to each other on a specific topic. Using these and other metrics, they developed a search engine that eventually became Google. It launched in 1997, and the company was incorporated in September 1998.2
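
The core of that idea – which grew into PageRank – is that a page matters if pages that matter link to it. The toy sketch below is my own illustration of the principle, not Google’s actual algorithm; the pages, the 0.85 damping factor and the fixed iteration count are conventional, invented choices purely for demonstration.

```python
# A toy link-analysis sketch in the spirit of PageRank: rank flows along links,
# so pages that attract links from well-linked pages score highly.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages  # a page with no links shares its rank with everyone
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

toy_web = {
    "news": ["blog", "wiki"],
    "blog": ["wiki"],
    "wiki": ["news", "blog"],
    "ads":  ["news"],  # nothing links back to "ads", so it only earns the baseline score
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(f"{page:>4}: {score:.3f}")
```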

My family got the internet in 2000. Google has been practically a constant in my experience of the web since then, as it has been for many others. But there was a Web before Google. And there was an internet before the Web. So the question is – how did we ever find anything?

Yahoo had a search option, but also gave prominence to its directory. Much like a telephone directory or book index, it sorted sites by category and was maintained by the site itself.
(www2.yahoo.com [The Internet Archive, captured 17 October 1996].

One form of “discovery” came through directories. The example above was on the Yahoo front page in late 1996. Google also maintained a directory in its early years, before neglecting it in favour of other services. While these were helpful, they were also at the mercy of those maintaining them. Humans simply could not read and categorise every single page on the Web.

Another way of maintaining links for like-minded people, then, was to gather in one place. These sorts of internet communities have existed for many years, even before the invention of the Web. At the recent RESAW conference in Aarhus, Kevin Driscoll spoke about the history of Bulletin Board Systems. Much like the modern “internet forum”, people could exchange messages on the internet equivalent of the community bulletin board in the local parish church or community centre. Access came through dialling up BBS providers using a modem and transferring data over the phone line. This is essentially how modern Internet Service Providers work, but in the days before the Web as we know it. Indeed, a number of BBS providers went on to become ISPs.3

These boards provided information not just on the “real-world” goings-on in the cities in which they were based. They also gave people recommendations for other places on the internet to visit.

Other messaging systems such as Usenet provided similar information alongside their core service of facilitating conversations on particular topics. This was brought out in William McCarthy’s paper on the history of troll communities.4

Some users took the geographical analogy and ran with it, contributing to the rise of GeoCities in the late 1990s. Ian Milligan‘s research showed that the owners of GeoCities pages tended to keep themselves in the city which most reflected their interests. In doing so, they were able to join communities of like-minded people, share information, and then “go” to other cities to learn about other topics. This was a web constructed by amateurs rather than professional content generating sites, but it allowed people to discover and – crucially – be discovered by their fellow Web consumers.5

xkcd nails it again… (Source)

Google has become a valuable tool for web “discovery”. Alongside the rise in social media, we have been able to share our own content and that of others in ways that would have been difficult or impossible in the 1980s and 1990s. Finding people, places and things has never been easier.

Aside from the political concerns and debates over the “right to be forgotten”, it has also made things tricky for documentary researchers. The intricate details of that are probably worth explaining elsewhere (such as in this paper I gave with Richard Deswarte and Peter Webster last year). Suffice it to say, the sheer volume of data available through Google is overwhelming. So too is the gnawing suspicion that it is almost too easy, leading us to do research on the sorts of topics that serve only to confirm our own prejudices about the world and to ignore wider and important outside context.

In any case, Google looks like it’s here to stay. Given the way it has revolutionised the Web, which has in turn revolutionised the twenty-first century, the company had to go into my 30-for-30.

  1. ‘Worldwide market share of leading search engines from January 2010 to April 2015′, Statista < http://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/ > (accessed 14 June 2015).
  2. ‘Google’, Wikipedia < https://en.wikipedia.org/wiki/Google > (accessed 14 June 2015).
  3. Kevin Driscoll, ‘Shaping the social web: Recovering the contributions of bulletin board system operators’ (unpublished conference paper at Web Archives as Scholarly Sources: Issues, Practices and Perspectives, 9 June 2015, Aarhus).
  4. William McCarthy, ‘The Advent of the Online Troll Community: A Digital Ethnography of alt.flame’ (unpublished conference paper at Web Archives as Scholarly Sources: Issues, Practices and Perspectives, 9 June 2015, Aarhus).
  5. Ian Milligan, ‘Welcome to the GeoHood: Using the GeoCities Web Archive to Explore Virtual Communities’ (unpublished conference paper at Web Archives as Scholarly Sources: Issues, Practices and Perspectives, 9 June 2015, Aarhus).

1997 – The 1997 General Election

08/06/2015

1 May 1997 – United Kingdom

TRIGGER WARNING: The images and video associated with this post may be “too nineties” for young children, and may also be unacceptable to other viewers. None of this should be recreated at home, at school, or anywhere. Even in ten years’ time, when people will think it’s cool to be all “retro”. No. Just don’t.

The 1997 General Election was the high point for “New Labour” and Tony Blair. It marked the beginning of 13 years of continuous Labour government, a feat that had never been achieved before. In the wake of the 2010 Election, however, it has come to signify much more. While there were traditional Labour supporters at the time who were worried about the party’s rightward march towards neoliberalism and Thatcherism, today the party appears to be fiercely divided between those who wish to continue Blair’s legacy, and those who believe the Labour Movement needs to re-connect with its democratic socialist roots.

In 1979, the Callaghan Government fell following a vote of no confidence. Margaret Thatcher’s Conservative Party won the ensuing election and, off the back of the economic recovery, ushered in an era of 18 years of Tory rule.

Thatcher offered an alternative to the “consensus” politics of the post-war era. The “Butskellism” of the 1950s was largely agreed on by the leadership of the two main parties. Taxation and public spending – high by today’s standards – would buttress the economy. Planning departments would ensure that prices were controlled, and wages would be set through corporatist negotiations between the powerful unions, Parliament and business leaders. This formed the economic basis for the foundation of the modern welfare state in the 1940s, and underpinned its expansion during the 1960s and 1970s. The narrative of “consensus” has been challenged by a number of historians1 – including this one2 – but there was enough accord in the public mind for it to act as a reasonably useful narrative.

By the 1970s, however, there was a sense that the post-war agreement was beginning to crumble. “Stagflation” (where the economy stagnates and inflation soars) was becoming more prevalent. Cycles of boom and bust were more frequent. Wilson faced a currency crisis in the 1960s in which the pound was devalued.3 Heath was forced to “U-turn” on his economic plans in the face of mounting unemployment and union dissatisfaction in the early 1970s.4 And the death knell – the Winter of Discontent – saw rubbish piled up in the streets and the dead left unburied as local council workers went on strike.5

Margaret Thatcher’s policies moved the country away from this. The unions were sidelined. Wages were to be a matter for market forces alone. Government planning and services were seen as inefficient, and where private companies could do the job, state functions were privatised. The rhetoric was one of rolling back the state to allow the market to flourish. A new consensus emerged, in which nationalisation was seen as wasteful, and low taxation was seen as an end in itself to allow individuals and businesses to create wealth.

Whether or not Thatcher managed this – state expenditure increased under her tenure, and a host of new advisory bodies and standards agencies were created – the scene was set. The Labour party of Benn and Foot was a dinosaur. Any Labour leader committed to high taxation and nationalisation would fail.

And, indeed, they did fail. So unpopular had Thatcher and the Conservative Party as a whole become that she was removed in 1990, replaced with the less-than-inspiring John Major. Labour, under Neil Kinnock, had already undergone a number of changes to make itself more relevant to the post-consensus political landscape. He still offered higher taxation and spending on services, however. And somehow – despite the polls going into the 1992 election – he managed to lose.

Neil Kinnock, Labour Leader in 1992. (Source)

Kinnock was replaced with John Smith. But Smith was only leader for a short time before he died in 1994. Amidst the crisis, a younger generation of politicians vied for power. The two front-runners were Tony Blair and Gordon Brown.

A lot has been written about the rivalry between (arguably) the two greatest Labour Prime Ministers of the twenty-first century.6 What was important was that they pulled the party towards Thatcher’s new consensus. Whilst still favouring increased public expenditure on services that would provide opportunity and protection for the working classes, this was to be done within a Thatcherite framework.

This included privatisation of services to allow the market to provide more efficiency. Maintaining low income tax and corporate tax rates. Increasing spending gradually, rather than succumbing to the “boom and bust” issues faced by Wilson and Callaghan. In short, the party tried to position itself towards the mortgage-owning, white-collar, middle-income families of the south rather than the traditionally unionised, blue-collar north.

Mandelson: “They have nowhere else to go.” (Source)

Peter Mandelson, one of the architects of this “New” Labour, borrowed another tactic from the Conservatives – pragmatism. For the Tories, the most important political outcome was that the party won elections. Because without political power, no change can be effected. This requires politicians to bend towards the “common sense” of the electorate.7 For Labour, this meant appealing to those who had left Labour for Thatcher in the 1970s, and hoping the traditional support would not leave. At the time, as Mandelson put it, they had “nowhere else to go”. In 1997, there was no Green Party pulling at them from the social-democracy angle; no UKIP claiming to defend working-class jobs.

The 1997 Election was the first I was old enough to truly grasp (though it would be another six years before I was eligible to vote). Therefore, I grew up under Tony Blair. This version of the Labour party is the only one I have really known; and until recently I had not really experienced what living under a Conservative one was like. For all the political arguments Blair caused – and trust me, guys, there are going to be some doozies coming up over the next few weeks – he and his New Labour Party had a profound impact upon the political world in which we now live.

  1. Richard Toye, ‘From “Consensus” to “Common Ground”: The rhetoric of the postwar settlement and its collapse’, Journal of Contemporary History, 48(1) (2013)
  2. Gareth Millward, ‘Invalid definitions, invalid responses: Disability and the welfare state, 1965-1995′ (Unpublished PhD thesis, University of London, 2013)
  3. Michael J. Oliver, ‘The management of sterling 1964-1967′, English Historical Review 126(520) (2011), 582-613.
  4. Anthony Seldon, ‘The Heath government in history’ in Stuart Ball and Anthony Seldon (eds) The Heath Government, 1-20.
  5. Colin Hay, ‘The Winter of Discontent thirty years on’, Political Quarterly 80(4) (2009), 545-52.
  6. At the reader’s discretion, replace “greatest” with “only”.
  7. See: Philip Norton and Arthur Aughey, Conservatives and Conservatism (London: Temple Smith, 1981).

1996 – BSE

01/06/2015

20 March 1996 – London

While the “BSE crisis” was not a specific event that can be pinned down to a particular day or year, it was one of the stories on the news that I remember vividly as a kid. Concerns had been raised about the safety of food in the past – such as salmonella in eggs1 – but “Mad Cow” seemed to tick all the boxes for a juicy medical scandal that the tabloids in Britain love so much. The subject for 1996, then, is Bovine Spongiform Encephalopathy and its human counterpart New Variant Creutzfeldt–Jakob Disease. BSE and vCJD, for short.

Moo. (Source)

BSE had been detected in cattle ten years earlier. It was later found to be caused by a particular protein which was not destroyed through cooking, heat treatment or other traditional forms of food processing. Although cows are usually herbivores, British farms had begun a practice whereby the useless parts of other cattle were ground up and added to their feed to reduce costs. Cows which had died with or of the disease therefore passed on the condition to live cattle – and the cycle continued.2 The government continued to downplay the risk until the mid-90s, when the scientific evidence seemed to point to a link between BSE in cows and vCJD in humans.

On 20 March 1996, Secretary of State for Health Stephen Dorrell announced to Parliament that, contrary to reassurances across the 1990s from the government, BSE could be passed to humans through eating beef. There was public outcry, summed up quite clearly by Harriet Harman’s response to the statement from the opposition benches.

Is it not the case that the time has passed for false reassurance? There must be no more photo-calls of Ministers feeding beefburgers to their children. The question whether there is a link between BSE and CJD is an issue, is it not, of immense importance to consumers, and particularly for parents of young children. Does the Secretary of State acknowledge, as I do, that it is also of immense importance for hundreds of thousands of people who work in farming and the meat industry? Does he acknowledge that the situation remains uncertain and that it is now apparent that there has been too much reassurance and too little action?3

John Gummer fed his daughter a burger in front of the press in order to “prove” that British beef was safe. He received condemnation and ridicule for using his child as a PR stunt. Private Eye satirised this on their cover. (Source)

What makes the story remarkable is that vCJD was never widespread. While its effects were devastating, to date only 226 cases of vCJD have been confirmed in humans.4 The idea that Britain’s national dish could be a killer, however, was too good for the press to resist.5 For historians and social scientists, BSE has become an example of how public and media attitudes towards health, risk and uncertainty have played out, often with little bearing on the aetiology of the disease or the “real” danger that the average citizen was ever in.6

BSE was everywhere in the 1990s – or rather, stories about BSE were. It was the first real health scare that I remember, and since then we have been flooded with tabloid headlines about what will or will not give you cancer. Some have been of “genuine” public concern, such as the horse meat scandal.7 (Although even here the issue wasn’t the health risk, rather the concern was over honest labelling of food sources and the British distaste for eating cute animals). But they better reflect public anxieties than genuine approaches to epidemiological risk. Shocking headlines sell newspapers – and they wouldn’t if people weren’t willing to buy them. At the same time, the areas of doubt over BSE allowed fear to grow and provided a flame to be fanned. As public health issues have become more complicated, it has become more difficult to provide definitive answers and risk-free advice to citizens.

These events are still very recent, which can be problematic when trying to assess the historical impact of BSE. Already, however, some are beginning to question whether this was a real “crisis” of policy making and of government ineptitude, or a “crisis” in the media sense – i.e., it was a crisis because that was the label given to it by journalists reporting on events at the time. Ian Forbes, for example, has argued that the process was more a public ‘drama’, which said more about the government’s ability to communicate ‘risk and trust’ than it did scientific or political negligence.8 Sheila Jasanoff has also questioned the British public’s ‘trust’ in the politicians it elects, arguing that the UK puts more stock in the character of the person making the statement than the rationality or scientific strength of her statements.9 In other words, by whipping up the drama and casting doubt on the integrity and competence of those in charge, BSE could be framed as a fundamental failing of the British political system, and a direct assault on the health of the average citizen. Somewhat ironically, as James Meikle wrote in The Guardian, by 1996 the decision taken “in secret” to ban the feeding of spinal cords and brain matter to cows in the late 1980s had meant that cases of BSE were already in retreat.10

Stephen Dorrell (Source)

As a historian, though, I’m also interested in the political shifts that we see in the crisis. They weren’t caused by it, but the political rhetoric employed by Harman in her response to Dorrell is fascinating, and a reflection on how the Labour Party had changed since the election of Margaret Thatcher. For CJD was, apparently, ‘of immense importance to consumers’. This suggests that people took risks with food and with their health as part of some sort of rational “choice”, expressed through market forces. In much the same way as obesity is linked to “choices” in diet and lifestyle. In this specific context, it appears that the government’s duty (through its scientific advisers) is to provide adequate guidance and information to allow people to make these rational choices. It was also a parent’s responsibility to make these choices on behalf of their children. This was another recurring theme in the late twentieth century, seen with vaccination crises and a growing anxiety over “bad parenting”.

The market was also important with relation to the meat industry. “Confidence” was crucial if agricultural workers and retailers were to survive. Somewhat ironically, the government had spent the previous years downplaying the tentative evidence of the BSE-vCJD risk precisely because it wanted to protect the agricultural sector.11 Presumably the argument was that if the science was forthrightly stated and well explained, the market would sort these issues out itself.

Harriet Harman (Source)

This represents quite a shift, of course – a Labour shadow minister making neo-liberal appeals to the market as a way of protecting citizens and workers. New Labour had certainly come into force, and in these arguments we see a very particular way of viewing public health. At no point does she attack the power of the meat industry or the vested interests of the medical establishment, as might be expected from earlier generations.

The BSE crisis, then, represents a number of aspects of late-twentieth and early-twenty-first-century Britain. The neo-liberalisation of health and the Labour Party. Growing mistrust of public health advice, fuelled by the tabloid press and scientific uncertainty amongst the lay population. It may not be the sexiest topic, but it is certainly one that resonates in the world I live in today. Meikle argues (although historians may need to test this) that governments are now far less willing to sit on potentially bad news, communicating risk to the public at an earlier stage than they would have done in the past.12 The diphtheria-tetanus-pertussis scandal would be a good example from the 1970s of when the government was more cautious, more worried about causing “panic” than providing clear information.13 By the time of the H1N1 “swine flu” drama, however, the government was accused of overreacting, given how few people died from it.14

That relationship between the public and authority remains tense. Mass cynicism is, unfortunately, a facet of political discourse in our times. For all these reasons, BSE is number 11 of 30.

  1. Martin J. Smith, ‘From policy community to issue network: Salmonella in eggs and the new politics of food’ in Public Administration 69 (1991), 235-55.
  2. See the official report on the enquiry, also known as the Phillips Report: HC 887-I (2000-01).
  3. Parliamentary Debates (Commons) 274, 20 March 1996, 366.
  4. Wikipedia contains a summary table on its entry on “BSE”. See also: The National Creutzfeldt-Jakob Disease Research & Surveillance Unit, ‘Variant Creutzfeldt-Jakob disease, Current data’ (University of Edinburgh, July 2012) < https://web.archive.org/web/20120721234746/http://www.cjd.ed.ac.uk/vcjdworld.htm > (The Internet Archive, site captured 21 July 2012); Wikipedia, ‘Bovine spongiform encephalopathy’ < http://en.wikipedia.org/wiki/Bovine_spongiform_encephalopathy > (accessed 20 January 2015).
  5. John Newsinger, ‘The roast beef of old England’ in Monthly Review: An Independent Socialist Magazine 48(4) (1996).
  6. See: Kevin E. Jones, ‘BSE, risk and the communication of uncertainty: A review of Lord Phillips’ report from the BSE inquiry (UK)’ in Canadian Journal of Sociology 26(4) (2003), 655-666.
  7. The Guardian, Horsemeat Scandal < http://www.theguardian.com/uk/horsemeat-scandal > (accessed 20 January 2015).
  8. Ian Forbes, ‘Making a crisis out of a drama: The political analysis of BSE policy-making in the UK’ in Political Studies 52 (2004), 342-357.
  9. Sheila Jasanoff, ‘Civilization and madness: The great BSE scare of 1996’ in Public Understanding of Science 6 (1997), 221-232.
  10. James Meikle, ‘Mad cow disease – a very British response to an international crisis’ in The Guardian (25 April 2012, 14:29 BST) < http://www.theguardian.com/uk/2012/apr/25/mad-cow-disease-british-crisis > (accessed 21 January 2015).
  11. Newsinger, ‘The roast beef of old England’.
  12. Meikle, ‘Mad cow disease’.
  13. See: Jeffrey P. Baker, ‘The pertussis vaccine controversy in Great Britain, 1974-1986’ in Vaccine 21(25-26) (2003), 4003-10.
  14. Eben Harrell, ‘Did the WHO exaggerate the H1N1 flu pandemic’s danger?’ in Time (26 January 2010) < http://content.time.com/time/health/article/0,8599,1956608,00.html > (accessed 21 January 2015).

1995 – The Oklahoma City Bombing

25/05/2015

19 April 1995 – Oklahoma City

Terrorism has been omnipresent in Western politics in recent years, usually linked to Islamic organisations and individuals. But terrorism isn’t simply an Islamic thing, or even a Middle Eastern thing. That seems most obvious in the case of the Oklahoma City Bombing.

On 19 April 1995, two white Christian men set off explosives in the centre of Oklahoma City. Both had been in the US Army. The main organiser, Timothy McVeigh, had been part of a militia group, angry at the handling of the 1993 Waco Siege. The explosion killed 168 and injured 680, causing an estimated $650 million of damage.1

Since 11 September 2001, the narrative around terrorism has focused predominantly on Islamic militarism. This has been inspired by (and in turn reinforced through) wars in Iraq and Afghanistan, strained relations with Iran over its nuclear programme, increased security in international air travel, extended powers of state surveillance and more. In some European countries, bans on certain types of Islamic dress – usually the full-face veil – have been justified on the grounds of security and the need for better “integration” on the part of Muslim immigrants.

It would be churlish to suggest that terrorism has not been committed by Muslims in the name of Islam. But it is equally myopic to think that this is solely an Islamic problem, or that Muslims are inherently more prone to this sort of thing than other groups. Indeed, that the general public discourse has shifted to equate the Middle East with terrorism is one of the most important political shifts of the past 20 years.2

The aftermath of the Wall Street Bombing in 1920. (Source)

Clearly, terrorism did not begin in 2001. September 11th wasn’t even the first terrorist attack in the financial district of New York. On 16 September 1920, a horse-drawn wagon packed with explosives entered Wall Street. It detonated, killing around 38 people and injuring hundreds more. It was believed to have been perpetrated by Italian anarchists in retaliation for the arrests of key figures, but the investigation never conclusively revealed who had set off the bomb or why.3

Moreover, the United States has not always abhorred terrorism in all its forms. For many years, Irish Americans provided funds for the Irish Republican Army (IRA), a terrorist organisation that carried out a number of bombings against British targets.4

Not that the British could take the moral high ground, of course. The armed forces in Northern Ireland regularly used Protestant terrorist groups for intelligence, and were even implicated in attacks on individuals.5

The point, of course, is to say that terrorism is not new. Nor has it always been universally condemned. By different populations and governments at different times it has been encouraged, or at least passively tolerated.

The way in which terrorism has been turned into a racial and religious issue will almost certainly come up again, probably in 7 weeks’ time (sorry… that should come with a spoiler alert, shouldn’t it…). But it is worth emphasising that terrorism is an emotive term whose connotations have shifted across time. In 1920, the FBI assumed the bombers must have been Bolsheviks. For they were the enemy. In the 1970s and 1980s in Britain, terrorism was synonymous with the Irish Troubles. And, indeed, what made Oklahoma so shocking was that it was perpetrated by “one of our own” – McVeigh, the white, Christian, Gulf War veteran.

Terrorism is going to be a recurring theme for the rest of this 30-for-30. As such, this is a good time to remind people it’s not all about Mohammed cartoons and Osama Bin Laden.

  1. ‘Oklahoma City bombing’, Wikipedia < http://en.wikipedia.org/wiki/Oklahoma_City_bombing > (accessed 24 May 2015).
  2. For how this process began in the United States, see: Stuart A. Wright, ‘Strategic Framing of Racial-Nationalism in North America and Europe: An Analysis of a Burgeoning Transnational Network’, Terrorism and Political Violence 21(2) (2009), 189-210.
  3. Ella Morton, ‘The Wall Street Bombing: Low-tech terrorism in prohibition-era New York’, Slate (16 September 2014, 8.00am) < http://www.slate.com/blogs/atlas_obscura/2014/09/16/the_1920_wall_st_bombing_a_terrorist_attack_on_new_york.html > (accessed 24 May 2015).
  4. Anne Applebaum, ‘The discreet charm of the terrorist cause’, Washington Post (3 August 2005) < http://www.washingtonpost.com/wp-dyn/content/article/2005/08/02/AR2005080201943.html > (accessed 24 May 2015).
  5. ‘Pat Finucane murder: “Shocking state collusion”, says PM’, BBC News (12 December 2012) < http://www.bbc.co.uk/news/uk-northern-ireland-20662412 > (accessed 24 May 2015).

1994 – The Rwandan Genocide

18/05/2015

6 April 1994 – Kigali

When an aeroplane carrying the President was shot down over Rwanda’s capital, Kigali, the government responded in brutal fashion. The ruling classes, dominated by an ethnic group called the Hutu, began systematically raping and murdering Tutsis.

The massacre, which lasted around three months, was part of a bloody civil war. Plenty has been written about the genocide, and (as with other topics in this series) I doubt I will be able to do the subject justice. Thankfully, as a historian I can twist the question “what can you tell me about Rwanda ’94?” and talk about something completely different.

Two historically interesting issues surround the narrative of Rwanda. The first is “what constitutes a genocide”? Is it simply a question of numbers? Does there have to be an overt racial element? What about religion or nationality? And do the perpetrators have to be racially homogeneous?

The second is “should ‘we’ have done more to stop it”? Is military intervention in a foreign conflict ever acceptable? Is there such a thing as “neutrality”? Indeed, is non-intervention a conscious and dangerous act in itself? At what point does a stronger power have to put itself at risk to protect the most vulnerable?

These are questions worth asking because, in the twenty years since, both have been prevalent in British politics – and have therefore had a big impact upon the world in which I live.

The term “genocide” or “ethnic cleansing” has been attached to a number of atrocities in the twentieth century. Because it is so emotive, there are some very sensitive issues to confront before using it in any serious assessment of history. For the alleged perpetrators, it carries a stigma that very few would be willing to admit – certainly not if that group still holds power to this day. For the alleged victims, there is great political capital to be made – either from seeking political redress after the fact, or in trying to secure military support from an outside power (on which more later). This is not, of course, to suggest that Kurds, Tutsis, Armenians or Jews have “played the victim” – rather, it shows the various factors that lead to people on both sides becoming so defensive of their relative positions.

Some cases of denialism are less convincing than others. There is a great industry of Holocaust deniers, most notably in the work of David Irving. A British “historian”, Irving used a number of German documents from the Nazi Party to claim that Hitler did not order the Final Solution, and questioned whether any systematic attempt to wipe out Jewish people ever existed.1 He was proved an incompetent liar in court.2 But even a cursory glance at the seedier sides of the internet shows that this sort of attitude persists.

Attacking National-Socialist Germany is, of course, easier because of its utter defeat in the Second World War. Even if some people will defend it, there is no need for any major company or nation state to negotiate the truth. Turkey, on the other hand, is a different matter. The Turkish Government completely denies that the murder of Armenians during the First World War constitutes a genocide. Its allies, including the United States, often try to find euphemistic terms for the killing.3

What’s interesting here is that there is no denial that killing took place, just as there is little doubt that Saddam Hussein gassed Kurdish people in northern Iraq, or that thousands of people were murdered in ethnic wars in Yugoslavia. Rather, their significance is “downgraded” from genocide to less emotive terms. Hussein and Milošević were (no longer) useful allies for Western governments – and were eventually put on trial (with dubious success and/or impartiality).

Turkey, however – first as a safety zone against the encroachment of Communism in Eastern Europe, and then as a secular buffer against unstable dictatorships in the Middle East – is not a country to piss off. Despite the opinions of many historians with more knowledge than I – and, indeed, Barack Obama himself4 – the Office of the POTUS will not say that what happened was a genocide.

While not universally considered a genocide, the potato famine in Ireland is seen by some as such. (Source)

More subtle are mass killings that, for a variety of reasons, are not considered genocides. In some cases this is because the ethnic grouping of the victims is unclear. In others, deaths are caused by indirect means, such as the inadvertent or deliberate exacerbation of famine and abject poverty. For instance, the famine that struck Ireland in the nineteenth century was fuelled by British agricultural and taxation policy, but is not generally considered a genocide. The forced collectivisation of farms in Ukraine under Stalin similarly borders on the definition, but is not usually treated as such.

Then there are the practices of European Empires which killed millions, but are usually couched in other terms (if they are confronted at all). The famous stories of smallpox-riddled blankets being “donated” to Native Americans, for example;5 or the systematic destruction of traditional trade networks leading to famines in the Indian subcontinent.6

The United Nations Security Council. (Source)

OK. So even if there are questions about whether a conflict is necessarily a “genocide”, what responsibility do others have to intervene? One of the main criticisms about Rwanda has been that Western Nations could have prevented a catastrophe if they had come to the aid of the Tutsi rebels.7 Since then, military interventions have taken place (or been proposed) in Bosnia, Libya, Iraq, Syria, Sierra Leone and others.

Much like with the question of Europe, the split in opinion in places like Britain has not been across the traditional left/right dividing line. There is a strong anti-war contingent on the left, often framed as anti-imperialism. Opposition to Vietnam and nuclear weapons also extends to an overtly pacifist approach to international relations. On the other hand, there is also a group that professes “solidarity” with common people in foreign countries unable to defend themselves from the forces of dictatorial regimes. Thus, there was left-wing support for intervention in Iraq and Libya in recent years and, more famously, in the Spanish Civil War in the 1930s.

Intervention is not always well-received by the electorate of the intervening nation. (Source)

On the right, there are the “hawks” who see military intervention as a way of ensuring stability in regions of the world that might hurt Western interests, either through direct conflict and terrorism or through disrupting potential markets and supplies of raw materials. More cynically, some see it as a boon for the arms trade. Then there are isolationists who believe that problems in the rest of the world are not our concern. Money spent on wars should be kept at home, with poorer countries left to sort out their own messes. Less Machiavellian is the belief that intervention has unknown consequences, and may lead to power vacuums that, in turn, create bigger and longer-lasting problems down the line. This is a concern shared by the anti-war left, too.

It is certainly the case that intervention tends to happen when there is a benefit to the intervening state.8 But it would be wrong to describe this as solely to do with “oil” (as in the case of Iraq) – there were also genuine humanitarian concerns from the British people about the safety of civilians in a number of conflicts.

Rwanda, then, brings into focus many foreign policy questions which have yet to be resolved. After the Cold War, this sort of global politics seems to have intensified, and it has been a key part of my adolescent and adult life. No doubt it will continue to reverberate 30 years hence.

  1. For more on this pleasant individual, see ‘David Irving’, Wikipedia < http://en.wikipedia.org/wiki/David_Irving > (accessed 7 May 2015).
  2. Richard Evans, Telling Lies about Hitler: The Holocaust, History and the David Irving Trial (Verso, 2002).
  3. Chris Bohjalian, ‘Why does Turkey continue to deny Armenian genocide’, The Boston Globe (9 March 2015) < http://www.bostonglobe.com/opinion/2015/03/09/why-does-turkey-continue-deny-armenian-genocide/rV7dcOXxrDb7wz01AoXgZM/story.html > (accessed 7 May 2015).
  4. AP, ‘Barack Obama will not label 1915 massacre of Armenians a genocide’, The Guardian (22 April 2015, 4:23 BST) < http://www.theguardian.com/us-news/2015/apr/22/barack-obama-will-not-label-1915-massacre-of-armenians-a-genocide > (accessed 7 May 2015).
  5. K B Patterson and T Runge, ‘Smallpox and the Native American’, American Journal of Medical Science 323(4) (2002), 216-22.
  6. David Arnold, ‘The “discovery” of malnutrition and diet in colonial India’, Indian Economic and Social History Review 31(1) (1994), 1-26.
  7. See, for example, Emily Sciarillo, ‘Genocide in Rwanda: The United Nations’ ability to act from a neoliberal perspective’, Towson University Journal of International Affairs 38(2) (2002), 17-29.
  8. There are dangers in applying this interpretation in all cases, however. See: Philip Spence, ‘Imperialism, anti-imperialism and the problem of genocide, past and present’, History 98(332), 606-22.