historical stuff by Gareth Millward


2000 – The Millennium Bug

29/06/2015

1 January 1900 – Earth

Technology can become obsolete. That in itself doesn’t seem to be too contentious a statement. Just ask coopers, smiths and thatchers how business has been going lately. But the furore over the “millennium bug” or “Y2K” showed just how dangerous this can be when every major administrative system in the world relies on an outmoded date format.

In early computing, both memory and processing power were at a premium. It is estimated that one bit cost one dollar in the 1960s.1 There are 8 bits to a byte. Storing a four-digit year takes two bytes; a two-digit year needs only one. That’s a saving of $8 per date. Since most calculations would be for dates within the 20th century, adding the “19” to the beginning of the year seemed redundant. It became convention in a lot of software to simply write my birthday as 10/10/85. Which is fine.

However, software used by air traffic control, international banking and national security took these dates as inputs to all sorts of calculations. Something as simple as the day of the week becomes complicated once you move over a century boundary. 12 June 1914, for example, was a Friday. 12 June 2014 was a Thursday. And in 1814, it fell on a Sunday.

There are other things that could go wrong, too. Say I wanted to book an appointment with someone seven days after Friday 25 February 2000. No problem – I’ll see them on Friday 3 March. But what if the computer sees that I want to book an appointment seven days after 25/02/00, and thinks it’s 1900? Well, that’s a problem. Because it will want me there on 4 March – which, in 1900, fell on a Sunday. 1900, unlike 2000, wasn’t a leap year.
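
If you want to see the failure mode in miniature, here is a small Python sketch – entirely my own illustration, not code from any real 1990s system, and the parse_ddmmyy helper is hypothetical – of a date routine that stores only two digits and guesses the century:

```python
# A minimal sketch of the Y2K failure mode (illustrative only): a parser that
# keeps two-digit years and expands them with an assumed century.
from datetime import date, timedelta

def parse_ddmmyy(text, century=1900):
    """Parse 'DD/MM/YY', expanding the two-digit year with an assumed century."""
    day, month, yy = (int(part) for part in text.split("/"))
    return date(century + yy, month, day)

booking = "25/02/00"        # meant as 25 February 2000
week = timedelta(days=7)

buggy = parse_ddmmyy(booking) + week                  # century defaults to 1900
fixed = parse_ddmmyy(booking, century=2000) + week

print(buggy, buggy.strftime("%A"))   # 1900-03-04 Sunday  (1900 skips 29 February)
print(fixed, fixed.strftime("%A"))   # 2000-03-03 Friday  (2000 is a leap year)
```

Real remediation generally either widened the stored year to four digits or used “windowing” (treating two-digit years below some pivot as 20xx), which is a large part of why the clean-up was so labour-intensive.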

Retrofitting old software and data to include the correct date formats and calculations was not a trivial exercise. We spent an estimated $300 billion to fix the problem. In the end, it may not have even been that big a deal.2 But it shows how decisions made out of convenience or financial necessity could come to create problems for future generations who get locked into a particular format.

The most famous example of this theory has come from Paul David, who described how the QWERTY keyboard came to dominate the Anglophone world. First, it was used for mass-produced typewriters. As typists became trained to work quickly on these machines, any potential benefits from a more ergonomic layout were negated by the massive short-term costs of retraining everybody to use a new system.3

If you’ve ever tried typing an e-mail back to your parents using your French exchange family’s AZERTY monstrosity, you’ll know just how this feels.

Human rights abuse. (Source)

Historiographically, the idea of path dependence is an interesting one. Certainly, you could apply it to bureaucratic and operational decisions made by businesses, government or the collective action of societies.4 The recent furore over Universal Credit, for example, shows that while there may have been the political will to produce a more rational and consistent benefits system, the existing web of payments and tax credits is unfathomably costly to untangle.5

The current social security system is an accident of history. After being overhauled in the 1940s, new schemes have been added as society has changed and holes in the safety net have been identified. No doubt, a single benefit, administered using an overarching computer system and bureaucratic machinery, would be more efficient than what we have now. But if the cost of change – to both the government in terms of infrastructure and claimants in terms of lost income – is higher than the potential gain, it can cause a political crisis. One might argue there is a reason why no other government has been so… “brave”… in their overhaul of the Department for Work and Pensions.

Despite being a massive part of the late 1990s news cycle, Y2K never really caused that many problems. Like much else with the dawning of the new millennium, the really exciting part was that Western society was approaching a nice round number. It’s like watching your odometer tick over to 100,000, or getting 100 lines in Tetris. Objectively, it means little. But there’s something nice about hitting a milestone.

Still. It’s a helpful reminder. However you design any system, eventually it will start to creak and groan under its own contradictions. But fixing it properly may end up being more costly than patching it up with string and sticky back plastic.

  1. ‘Year 2000 problem’, Wikipedia < https://en.wikipedia.org/wiki/Year_2000_problem > (accessed 28 June 2015).
  2. ‘Y2K: Overhyped and oversold?’, BBC News 6 January 2000 < http://news.bbc.co.uk/1/hi/talking_point/586938.stm > (accessed 28 June 2015).
  3. Paul A. David, ‘Clio and the economics of QWERTY’, The American Economic Review 75(2) (1985), 332-7. (Copy available at http://www.econ.ucsb.edu/~tedb/Courses/Ec100C/DavidQwerty.pdf as of 28 June 2015.)
  4. Paul Pierson, ‘Increasing Returns, Path Dependence, and the Study of Politics’, The American Political Science Review 94 (2000), 251-67.
  5. Asa Bennett, ‘Iain Duncan Smith’s Universal Credit Could Cost More Than Planned, Warns Think-Tank’, Huffington Post (9 September 2014) < http://www.huffingtonpost.co.uk/2014/09/09/ids-universal-credit-welfare_n_5789200.html > (accessed 28 June 2015).

1999 – The Columbine High School Massacre

22/06/2015

20 April 1999 – Columbine

Last week, it was fortunate coincidence that I had planned to write about Google the weekend after getting back from a conference on the history of the Web. This week, it’s utterly depressing in the wake of the Charleston shootings that I should be writing about Columbine.

School shootings are – thankfully – a rare event in Europe. The Dunblane shooting in 1996 was a reminder that these things do happen, though not with the regularity with which they have plagued the United States in recent years. When Dunblane happened, I was 10 years old; with Columbine I was 13. Of course, the chance of being caught up in one of these tragedies was infinitesimally small. But that didn’t stop kids of my age (and, probably, our parents even more) worrying about this sort of thing.

The memorial library at Columbine High School, built after the massacre. (Source)

Columbine is now more than 15 years ago, and yet mass shootings continue across the United States. Colorado had another incident as recently as 2012, when a gunman attacked a screening of the final Christopher Nolan Batman movie. Sadly, as Jon Stewart put it:

I honestly have nothing other than just sadness once again that we have to peer into the abyss of the depraved violence that we do to each other and the nexus of a just gaping racial wound that will not heal, yet we pretend doesn’t exist. And I’m confident, though, that by acknowledging it, by staring into that and seeing it for what it is, we still won’t do jack shit.1

I could take this opportunity to look at what it is historically about American culture that allows this to keep happening. That particular topic has been wrung to death over recent days. But we can revisit the greatest hits! Easy access to guns gives people opportunity. The glorification of firearms creates a fantasy culture that those with dangerous personalities get lost in. Not enough support networks exist for those with mental health issues. Racism is still rife. A large section of the community celebrates the names and flags of white supremacists from the nineteenth century. Others hide behind a constitutional amendment designed to allow the country to defend itself. All good stuff, to be repeated ad nauseam next time this happens.

Because, let’s not kid ourselves. There will be many next times.

The historical attachment to gun ownership in America makes sense within a narrow historical interpretation. The country was founded as a modern experiment in liberalism. Individual property was to be protected from the intrusion of the state, and individuals were free to follow their own religious and political lives providing these did not impinge on the rights of others to do the same.

One of the important pillars of this concept was the Constitution – a set of rules governing the government. The state would not be allowed to interfere in the private lives of individuals except within strict rules of conduct. In order for that to happen, the appropriate levers needed to be created to allow the people to kick out those who would abuse the system.

In the ideal world, of course, this would happen with regular elections to the Senate, House of Representatives and the Presidency. But if those failed, the people needed to be able to defend themselves against injustice: hence, the Second Amendment. The right for people to form militias to protect against tyranny, domestic and foreign. Especially foreign. Those dirty Brits might come back one day…

The idea that firearms will protect the individual from tyranny continues in US politics. See this cartoon, reproduced on NaturalNews.com. (Source)

Within the context of the late eighteenth century, this made perfect sense. This was a new country, just starting to form the sort of economic and political infrastructure required to defend itself and to provide a mature, accountable democracy. As time has gone on, however, history has left this world behind. First, the United States now has a wide network of military and police bases, with a highly developed justice system and supreme court. While these institutions do not work anywhere near as well as Americans would like, there are myriad constitutional forms of protection and of law enforcement. If these fail, there are also myriad avenues for challenging this power.

Second, firearms are far more deadly and far more prevalent than anyone in the eighteenth century could have assumed. Individuals can, realistically, pull together the firepower to form a militia that would rival that of many middle-income nations.

To many in the States, however, that paranoia – a vital defence mechanism 200 years ago – remains. And it has blended with a fetishisation of guns and a deep mistrust of the federal government. Many believe a gun makes them safer, even though you are more likely to be killed by a gun if you own one.2 In the Southern states, you can combine all this with a nagging suspicion that the economic and political dominance of Northern states and California means that “liberals” are trying to impose a political way of life upon the states that once tried to secede from the Union.

On this historical reading, then, guns are justified because governments can never be trusted to operate within the law. At any moment, the Federal government could seize your property (including your guns).

To Europeans, this sounds like utter nonsense. And, increasingly, it is being ridiculed by middle America too. But just because the Second Amendment is an anachronism doesn’t make it any less potent to many. In fact, when one of the major forces in American politics is named after an act of sabotage in Boston harbour, its roots in the eighteenth century make it even more relevant.

Columbine will happen again. And it will keep happening until those most wedded to gun culture understand that they are being manipulated far more by the arms industry and vested capital interests than they are by the Federal government. For it is the legal framework and protection offered by a government – constrained by the rule of law – that will ultimately make America a safer place.

That’s going to take a long time. Because such a collectivist attitude, relatively common in Europe, is anathema to the individual-rights approach at the heart of American politics and history. And we should be honest – collective trust in government hasn’t always worked out so well this side of the pond.

Until then, America will continue to tell the rest of the world – mass shootings are the price we pay for freedom.

Addendum

Last night, a friend posted this to Facebook. If you want a more sweary and entertaining version of the above, see below:

  1. Jon Stewart on The Daily Show, broadcast on Comedy Central. Transcript from ‘Read Jon Stewart’s blistering monologue about race, terrorism and gun violence after Charleston church massacre’, Washington Post, 19 June 2015 < http://www.washingtonpost.com/blogs/style-blog/wp/2015/06/19/read-jon-stewarts-blistering-monologue-about-race-terrorism-and-gun-violence-after-charleston-church-massacre/ > (accessed 21 June 2015).
  2. Linda L Dahlberg, Robin M Ikeda and Marcie-jo Kresnow, ‘Guns in the home and risk of violent death in the home: Findings from a national study’, American Journal of Epidemiology 160(10) (2004), 929-36.

1998 – Google, Inc.

15/06/2015

4 September 1998 – Menlo Park

Hoover, Xerox and Coke have come to mean vacuum cleaner, photocopier and cola in colloquial English. Such was the success of those brands, either as inventors or popularisers of day-to-day products, that we use their trademarks more than the more generic terms – even when Vax, Epson and Pepsi are available.

Google is another of those brands. It is the world’s most used search engine, accounting for 88% of the planet’s searches.1 Yet that isn’t primarily what Google does any more. It offers a range of services and collects mind-blowing amounts of data, leading many to criticise its dominant position on the internet and World Wide Web.

The Google search page as it looked around the time Google was incorporated.
(http://google.stanford.edu/ [The Internet Archive, site captured 11 November 1998])

The Web is full of information, but it’s relatively useless if you can’t find anything. There are a number of ways you can find stuff, but it generally boils down to access to one of three things:

  • someone recommends a site to go to;
  • you follow a link on an existing site; or
  • you search for a particular topic in a search engine.

In the late nineties, search wasn’t particularly sophisticated. The main providers would maintain large databases of websites, and then would provide the user with results based on how often a search term appeared. (To summarise the technology very crudely.) Students at Stanford, however, wondered whether an algorithm could deduce relevance by monitoring how often authoritative websites linked to each other on a specific topic. Using these and other metrics, they developed a search engine that eventually became Google. It launched in 1997, and the company was incorporated in September 1998.2
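
To give a flavour of the link-analysis idea, here is a toy sketch of the general principle – emphatically not Google’s actual algorithm or code; the four-page “web”, the rank_pages function and the damping factor are all invented for illustration. Pages repeatedly pass a share of their score to the pages they link to, so being linked from well-linked pages counts for more than merely repeating a keyword:

```python
# Toy link-analysis ranking in the spirit of PageRank (illustrative only; the
# graph and parameters are made up, and real search engines do far more).
def rank_pages(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:                  # a dead end shares its rank with everyone
                share = damping * rank[page] / len(pages)
                for other in pages:
                    new_rank[other] += share
            else:                             # otherwise split rank among link targets
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

web = {                                       # a made-up four-page web
    "news": ["blog", "shop"],
    "blog": ["news"],
    "shop": ["news"],
    "spam": ["spam"],                         # links only to itself
}
print(rank_pages(web))                        # "news" scores highest: two pages point to it
```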

My family got the internet in 2000. Google has been practically a constant in my experience of the web since then, as it has been for many others. But there was a Web before Google. And there was an internet before the Web. So the question is – how did we ever find anything?

Yahoo had a search option, but also gave prominence to its directory. Much like a telephone directory or book index, it sorted sites by category and was maintained by the site itself.
(www2.yahoo.com [The Internet Archive, captured 17 October 1996].)

One form of “discovery” came through directories. The example above was on the Yahoo front page in late 1996. Google also maintained a directory in its early years, before neglecting it in favour of other services. While these were helpful, they were also at the mercy of those maintaining them. Humans simply could not read and categorise every single page on the Web.

Another way of maintaining links for like-minded people, then, was to gather in one place. These sorts of internet communities have existed for many years, even before the invention of the Web. At the recent RESAW conference in Aarhus, Kevin Driscoll spoke about the history of Bulletin Board Systems. Much like the modern “internet forum”, people could exchange messages on the internet equivalent of the community bulletin board in the local parish church or community centre. Access came through dialling up BBS providers using a modem and transferring data over the phone line. This is essentially how modern Internet Service Providers work, but in the days before the Web as we know it. Indeed, a number of BBS providers went on to become ISPs.3

These boards provided information not just on the “real”-world goings-on in the cities in which they were based. They also gave people recommendations for other places on the internet to visit.

Other messaging systems such as Usenet provided similar information alongside their core service of facilitating conversations on particular topics. This was brought out in William McCarthy’s paper on the history of troll communities.4

Some users took the geographical analogy and ran with it, contributing to the rise of GeoCities in the late 1990s. Ian Milligan‘s research showed that the owners of GeoCities pages tended to keep themselves in the city which most reflected their interests. In doing so, they were able to join communities of like-minded people, share information, and then “go” to other cities to learn about other topics. This was a web constructed by amateurs rather than professional content generating sites, but it allowed people to discover and – crucially – be discovered by their fellow Web consumers.5

xkcd nails it again… (Source)

Google has become a valuable tool for web “discovery”. Alongside the rise in social media, we have been able to share our own content and that of others in ways that would have been difficult or impossible in the 1980s and 1990s. Finding people, places and things has never been easier.

Aside from the political concerns and debates over the “right to be forgotten”, it has also made things tricky for documentary researchers. The intricate details of that are probably worth explaining elsewhere (such as in this paper I gave with Richard Deswarte and Peter Webster last year). Suffice to say, the sheer volume of data available through Google is overwhelming. So too is the gnawing suspicion that it is almost too easy, leading us to do research on the sorts of topics that serve only to confirm our own prejudices of the world while ignoring wider and important outside context.

In any case, Google looks like it’s here to stay. Given the way it has revolutionised the Web, which has in turn revolutionised the twenty-first century, the company had to go into my 30-for-30.

  1. ‘Worldwide market share of leading search engines from January 2010 to April 2015′, Statista < http://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/ > (accessed 14 June 2015).
  2. ‘Google’, Wikipedia < https://en.wikipedia.org/wiki/Google > (accessed 14 June 2015).
  3. Kevin Driscoll, ‘Shaping the social web: Recovering the contributions of bulletin board system operators’ (unpublished conference paper at Web Archives as Scholarly Sources: Issues, Practices and Perspectives, 9 June 2015, Aarhus).
  4. William McCarthy, ‘The Advent of the Online Troll Community: A Digital Ethnography of alt.flame’ (unpublished conference paper at Web Archives as Scholarly Sources: Issues, Practices and Perspectives, 9 June 2015, Aarhus).
  5. Ian Milligan, ‘Welcome to the GeoHood: Using the GeoCities Web Archive to Explore Virtual Communities’ (unpublished conference paper at Web Archives as Scholarly Sources: Issues, Practices and Perspectives, 9 June 2015, Aarhus).

1997 – The 1997 General Election

08/06/2015

1 May 1997 – United Kingdom

TRIGGER WARNING: The images and video associated with this post may be “too nineties” for young children, and may also be unacceptable to other viewers. None of this should be recreated at home, at school, or anywhere. Even in ten years’ time, when people will think it’s cool to be all “retro”. No. Just don’t.

The 1997 General Election was the high point for “New Labour” and Tony Blair. It marked the beginning of 13 years of continuous Labour government, a feat that had never been achieved before. In the wake of the 2010 Election, however, it has come to signify much more. While there were traditional Labour supporters at the time who were worried about the party’s rightward march towards neoliberalism and Thatcherism, today the party appears to be fiercely divided between those who wish to continue Blair’s legacy, and those who believe the Labour Movement needs to re-connect with its democratic socialist roots.

In 1979, the Callaghan Government fell following a vote of “No Confidence”. Margaret Thatcher’s Conservative Party won the election that followed and, off the back of the economic recovery, ushered in an era of 18 years of Tory rule.

Thatcher offered an alternative to the “consensus” politics of the post-war era. The “Butskellism” of the 1950s was largely agreed on by the leadership of the two main parties. Taxation and public spending, high by today’s standards, would buttress the economy. Planning departments would ensure that prices were controlled, and wages would be set through corporatist negotiations between the powerful unions, Parliament and business leaders. This formed the economic basis for the foundation of the modern welfare state in the 1940s, and underpinned its expansion during the 1960s and 1970s. The narrative of “consensus” has been challenged by a number of historians1 – including this one2 – but there was enough accord in the public mind for it to act as a reasonably useful narrative.

By the 1970s, however, there was a sense that the post-war agreement was beginning to crumble. “Stagflation” (where the economy stagnates and inflation soars) was becoming more prevalent. Cycles of boom and bust were more frequent. Wilson faced a currency crisis in the 1960s in which the pound was devalued.3 Heath was forced to “U-turn” his economic plans in the face of mounting unemployment and union dissatisfaction in the early 1970s.4 And the death knell – the Winter of Discontent – saw bodies left unburied and rubbish piled up in the streets as local council workers went on strike.5

Margaret Thatcher’s policies moved the country away from this. The unions were sidelined. Wages were to be a matter for market forces alone. Government planning and services were seen as inefficient, and where private companies could do the job, state functions were privatised. The rhetoric was one of rolling back the state to allow the market to flourish. A new consensus emerged, in which nationalisation was seen as wasteful, and low taxation was seen as an end in itself to allow individuals and businesses to create wealth.

Whether or not Thatcher managed this – state expenditure increased under her tenure, and a host of new advisory bodies and standards agencies were created – the scene was set. The Labour party of Benn and Foot was a dinosaur. Any Labour leader committed to high taxation and nationalisation would fail.

And, indeed, they did fail. So unpopular were Thatcher and the Conservative Party as a whole that she was removed in 1990, replaced with the less-than-inspiring John Major. Labour, under Neil Kinnock, had already undergone a number of changes to make itself more relevant to the post-consensus political landscape. He still offered higher taxation and spending on services, however. And somehow – despite the polls going into the 1992 election – he managed to lose.

Neil Kinnock, Labour Leader in 1992. (Source)

Kinnock was replaced with John Smith. But Smith was only leader for a short time before he died in 1994. Amidst the crisis, a younger generation of politicians vied for power. The two front-runners were Tony Blair and Gordon Brown.

A lot has been written about the rivalry between (arguably) the two greatest Labour Prime Ministers of the twenty-first century.6 What was important was that they pulled the party towards Thatcher’s new consensus. Whilst still favouring increased public expenditure on services that would provide opportunity and protection for the working classes, this was to be done within a Thatcherite framework.

This included privatisation of services to allow the market to provide more efficiency. Maintaining a low income tax and corporate tax rate. Increasing spending gradually, rather than succumbing to the “boom and bust” issues faced by Wilson and Callaghan. In short, the party tried to position itself towards the mortgage-owning, white-collar, middle-income families of the south rather than the traditional unionised blue-collar north.

Mandelson: “They have nowhere else to go.” (Source)

Peter Mandelson, one of the architects of this “New” Labour, borrowed another tactic from the Conservatives – pragmatism. For the Tories, the most important political outcome was that the party won elections. Because without political power, no change can be effected. This requires politicians to bend towards the “common sense” of the electorate.7 For Labour, this meant appealing to those who had left Labour for Thatcher in the 1970s, and hoping the traditional support would not leave. At the time, as Mandelson put it, they had “nowhere else to go”. In 1997, there was no Green Party pulling at them from the social-democracy angle; no UKIP claiming to defend working-class jobs.

The 1997 Election was the first I was old enough to truly grasp (though it would be another six years before I was eligible to vote). Therefore, I grew up under Tony Blair. This version of the Labour Party is the only one I have really known; and until recently I had not really experienced what living under a Conservative one was like. For all the political arguments Blair caused – and trust me, guys, there are going to be some doozies coming up over the next few weeks – he and his New Labour Party had a profound impact upon the political world in which we now live.

  1. Richard Toye, ‘From “Consensus” to “Common Ground”: The rhetoric of the postwar settlement and its collapse’, Journal of Contemporary History, 48(1) (2013)
  2. Gareth Millward, ‘Invalid definitions, invalid responses: Disability and the welfare state, 1965-1995′ (Unpublished PhD thesis, University of London, 2013)
  3. Michael J. Oliver, ‘The management of sterling 1964-1967′, English Historical Review 126(520) (2011), 582-613.
  4. Anthony Seldon, ‘The Heath government in history’ in Stuart Ball and Anthony Seldon (eds) The Heath Government, 1-20.
  5. Colin Hay, ‘The Winter of Discontent thirty years on’, Political Quarterly 80(4) (2009), 545-52.
  6. At the reader’s discretion, replace “greatest” with “only”.
  7. See: Philip Norton and Arthur Aughey, Conservatives and Conservatism (London: Temple Smith, 1981).

1996 – BSE

01/06/2015

20 March 1996 – London

While the “BSE crisis” was not a specific event that can be pinned down to a particular day or year, it was one of the stories on the news that I remember vividly as a kid. Concerns had been raised about the safety of food in the past – such as salmonella in eggs1 – but “Mad Cow” seemed to tick all the boxes for a juicy medical scandal that the tabloids in Britain love so much. The subject for 1996, then, is Bovine Spongiform Encephalopathy and its human counterpart New Variant Creutzfeldt–Jakob Disease. BSE and vCJD, for short.

Moo. (Source)

BSE had been detected in cattle ten years earlier. It was later found to be caused by a particular protein which was not destroyed through cooking, heat treatment or other traditional forms of food processing. Although cows are usually herbivores, British farms had begun a practice whereby the useless parts of other cattle were ground up and added to their feed to reduce costs. Cows which had died with or of the disease therefore passed on the condition to live cattle – and the cycle continued.2 The government continued to downplay the risk until the mid-90s, when the scientific evidence seemed to point to a link between BSE in cows and vCJD in humans.

On 20 March 1996, the Secretary of State for Health, Stephen Dorrell, announced to Parliament that, contrary to reassurances across the 1990s from the government, BSE could be passed to humans through eating beef. There was public outcry, summed up quite clearly by Harriet Harman’s response to the statement from the opposition benches.

Is it not the case that the time has passed for false reassurance? There must be no more photo-calls of Ministers feeding beefburgers to their children. The question whether there is a link between BSE and CJD is an issue, is it not, of immense importance to consumers, and particularly for parents of young children. Does the Secretary of State acknowledge, as I do, that it is also of immense importance for hundreds of thousands of people who work in farming and the meat industry? Does he acknowledge that the situation remains uncertain and that it is now apparent that there has been too much reassurance and too little action?3

John Gummer fed his daughter a burger in front of the press in order to “prove” that British beef was safe. He received condemnation and ridicule for using his child as a PR stunt. Private Eye satirised this on their cover. (Source)

What makes the story remarkable is that vCJD was never widespread. While its effects were devastating, to date only 226 cases of vCJD have been confirmed in humans.4 The idea that Britain’s national dish could be a killer, however, was too good for the press to resist.5 For historians and social scientists, BSE has become an example of how public and media attitudes towards health, risk and uncertainty have played out, often with little bearing on the aetiology of the disease or the “real” danger that the average citizen was ever in.6

BSE was everywhere in the 1990s – or rather, stories about BSE were. It was the first real health scare that I remember, and since then we have been flooded with tabloid headlines about what will or will not give you cancer. Some have been of “genuine” public concern, such as the horse meat scandal.7 (Although even here the issue wasn’t the health risk, rather the concern was over honest labelling of food sources and the British distaste for eating cute animals). But they better reflect public anxieties than genuine approaches to epidemiological risk. Shocking headlines sell newspapers – and they wouldn’t if people weren’t willing to buy them. At the same time, the areas of doubt over BSE allowed fear to grow and provided a flame to be fanned. As public health issues have become more complicated, it has become more difficult to provide definitive answers and risk-free advice to citizens.

These events are still very recent, which can be problematic when trying to assess the historical impact of BSE. Already, however, some are beginning to question whether this was a real “crisis” of policy making and of government ineptitude, or a “crisis” in the media sense – i.e., it was a crisis because that was the label given to it by journalists reporting on events at the time. Ian Forbes, for example, has argued that the process was more a public ‘drama’, which said more about the government’s ability to communicate ‘risk and trust’ than it did scientific or political negligence.8 Sheila Jasanoff has also questioned the British public’s ‘trust’ in the politicians it elects, arguing that the UK puts more stock in the character of the person making the statement than the rationality or scientific strength of her statements.9 In other words, by whipping up the drama and casting doubt on the integrity and competence of those in charge, BSE could be framed as a fundamental failing of the British political system, and a direct assault on the health of the average citizen. Somewhat ironically, as James Meikle wrote in The Guardian, by 1996 the decision taken “in secret” to ban the feeding of spinal cords and brain matter to cows in the late 1980s had meant that cases of BSE were already in retreat.10

Stephen Dorrell (Source)

As a historian, though, I’m also interested in the political shifts that we see in the crisis. They weren’t caused by it, but the political rhetoric employed by Harman in her response to Dorrell is fascinating, and a reflection on how the Labour Party had changed since the election of Margaret Thatcher. For CJD was, apparently, ‘of immense importance to consumers’. This suggests that people took risks with food and with their health as part of some sort of rational “choice”, expressed through market forces. In much the same way as obesity is linked to “choices” in diet and lifestyle. In this specific context, it appears that the government’s duty (through its scientific advisers) is to provide adequate guidance and information to allow people to make these rational choices. It was also a parent’s responsibility to make these choices on behalf of their children. This was another recurring theme in the late twentieth century, seen with vaccination crises and a growing anxiety over “bad parenting”.

The market was also important with relation to the meat industry. “Confidence” was crucial if agricultural workers and retailers were to survive. Somewhat ironically, the government had spent the previous years downplaying the tentative evidence of the BSE-vCJD risk precisely because it wanted to protect the agricultural sector.11 Presumably the argument was that if the science was forthrightly stated and well explained, the market would sort these issues out itself.

Harriet Harman (Source)

This represents quite a shift, of course – a Labour shadow minister making neo-liberal appeals to the market as a way of protecting citizens and workers. New Labour had certainly come into force, and in these arguments we see a very particular way of viewing public health. At no point does she attack the power of the meat industry or the vested interests of the medical establishment, as might be expected from earlier generations.

The BSE crisis, then, represents a number of aspects of late-twentieth and early-twenty-first-century Britain. The neo-liberalisation of health and the Labour Party. Growing mistrust of public health advice, fuelled by the tabloid press and scientific uncertainty amongst the lay population. It may not be the sexiest topic, but it is certainly one that resonates in the world I live in today. Meikle argues (although historians may need to test this) that governments are now far less willing to sit on potentially bad news, communicating risk to the public at an earlier stage than they would have done in the past.12 The diphtheria-tetanus-pertussis scandal would be a good example from the 1970s of when the government was more cautious, more worried about causing “panic” than providing clear information.13 By the time of the H1N1 “swine flu” drama, however, the government was accused of overreacting, given how few people died from it.14

That relationship between the public and authority remains tense. Mass cynicism is, unfortunately, a facet of political discourse in our times. For all these reasons, BSE is number 11 of 30.

  1. Martin J. Smith, ‘From policy community to issue network: Salmonella in eggs and the new politics of food’ in Public Administration 69 (1991), 235-55.
  2. See the official report on the enquiry, also known as the Phillips Report: HC 887-I (2000-01).
  3. Parliamentary Debates (Commons) 274, 20 March 1996, 366.
  4. Wikipedia contains a summary table on its entry on “BSE”. See also: The National Creutzfeldt-Jakob Disease Research & Surveillance Unit, ‘Variant Creutzfeldt-Jakob disease, Current data’ (University of Edinburgh, July 2012) < https://web.archive.org/web/20120721234746/http://www.cjd.ed.ac.uk/vcjdworld.htm > (The Internet Archive, site captured 21 July 2012); Wikipedia, ‘Bovine spongiform encephalopathy’ < http://en.wikipedia.org/wiki/Bovine_spongiform_encephalopathy > (accessed 20 January 2015).
  5. John Newsinger, ‘The roast beef of old England’ in An Independent Socialist 48(4) (1996).
  6. See: Kevin E. Jones, ‘BSE, risk and the communication of uncertainty: A review of Lord Phillips’ report from the BSE inquiry (UK)’ in Canadian Journal of Sociology 26(4) (2003), 655-666.
  7. The Guardian, Horsemeat Scandal < http://www.theguardian.com/uk/horsemeat-scandal > (accessed 20 January 2015).
  8. Ian Forbes, ‘Making a crisis out of a drama: The political analysis of BSE policy-making in the UK’ in Political Studies 52 (2004), 342-357.
  9. Sheila Jasanoff, ‘Civilization and madness: The great BSE scare of 1996′ in Public Understanding of Science 6 (1997), 221-232.
  10. James Meikle, ‘Mad cow disease – a very British response to an international crisis’ in The Guardian (25 April 2012, 14:29 BST) < http://www.theguardian.com/uk/2012/apr/25/mad-cow-disease-british-crisis > (accessed 21 January 2015).
  11. Newsinger, ‘The roast beef of old England’.
  12. Meikle, ‘Mad cow disease’.
  13. See: Jeffrey P. Baker, ‘The pertussis vaccine controversy in Great Britain, 1974-1986′ in Vaccine 21 (25-26) (2003) 4003-10.
  14. Eben Harrell, ‘Did the WHO exaggerate the H1N1 flu pandemic’s danger?’ in Time (26 January 2010) < http://content.time.com/time/health/article/0,8599,1956608,00.html > (accessed 21 January 2015).

1995 – The Oklahoma City Bombing

25/05/2015

19 April 1995 – Oklahoma City

Terrorism has been omnipresent in Western politics in recent years, usually linked to Islamic organisations and individuals. But terrorism isn’t simply an Islamic thing, or even a middle-eastern thing. That seems most obvious in the case of the Oklahoma City Bombing.

On 19 April 1995, two white Christian men set off explosives in the centre of Oklahoma City. Both had been in the US Army. The main organiser, Timothy McVeigh, had been part of a militia group, angry at the handling of the 1993 Waco Siege. The explosion killed 168 people and injured more than 680, causing an estimated $650 million of damage.1

Since 11 September 2001, the narrative around terrorism has focused predominantly on Islamic militarism. This has been inspired by (and in turn reinforced through) wars in Iraq and Afghanistan, strained relations with Iran over its nuclear programme, increased security in international air travel, extended powers of state surveillance and more. In some European countries, bans on certain types of Islamic dress – usually the full-face veil – have been justified on the grounds of security and the need for better “integration” on the part of Muslim immigrants.

It would be churlish to suggest that terrorism has not been committed by Muslims in the name of Islam. But it is equally myopic to think that this is solely an Islamic problem, or that Muslims are inherently more prone to this sort of thing than other groups. Indeed, that the general public discourse has shifted to equate the Middle East with terrorism is one of the most important political shifts of the past 20 years.2

The aftermath of the Wall Street Bombing in 1920. (Source)

Clearly, terrorism did not begin in 2001. September 11th wasn’t even the first terrorist attack in the financial district of New York. On 16 September 1920, a horse-drawn wagon packed with explosives entered Wall Street. It detonated, killing around 38 people and injuring hundreds more. It was believed to have been perpetrated by Italian anarchists in retaliation for the arrests of key figures, but the investigation never conclusively revealed who had set off the bomb or why.

Moreover, the United States has not always abhorred terrorism in all its forms. For many years, Irish Americans provided funds for the Irish Republican Army (IRA), a terrorist organisation that committed a number of bombings on British targets.4

Not that the British could take the moral high ground, of course. The armed forces in Northern Ireland regularly used Protestant terrorist groups for intelligence, and were even implicated as complicit in attacks on individuals.5

The point, of course, is to say that terrorism is not new. Nor has it always been universally condemned. By different populations and governments at different times it has been encouraged, or at least passively tolerated.

The way in which terrorism has been turned into a racial and religious issue will almost certainly come up again, probably in 7 weeks’ time (sorry… that should come with a spoiler alert, shouldn’t it…). But it is worth emphasising that terrorism is an emotive term whose connotations have shifted across time. In 1920, the FBI assumed the bombers must have been Bolsheviks. For they were the enemy. In the 1970s and 1980s in Britain, terrorism was synonymous with the Irish Troubles. And, indeed, what made Oklahoma so shocking was that it was perpetrated by “one of our own” – McVeigh, the white, Christian, Gulf War veteran.

Terrorism is going to be a recurring theme for the rest of this 30-for-30. As such, this is a good time to remind people it’s not all about Mohammed cartoons and Osama Bin Laden.

  1. ‘Oklahoma City bombing’, Wikipedia < http://en.wikipedia.org/wiki/Oklahoma_City_bombing > (accessed 24 May 2015).
  2. For how this process began in the United States, see: Stuart A. Wright, ‘Strategic Framing of Racial-Nationalism in North America and Europe: An Analysis of a Burgeoning Transnational Network’, Terrorism and Political Violence 21(2) (2009), 189-210.
  3. Ella Morton, ‘The Wall Street Bombing: Low-tech terrorism in prohibition-era New York’, Slate (16 September 2014, 8.00am) < http://www.slate.com/blogs/atlas_obscura/2014/09/16/the_1920_wall_st_bombing_a_terrorist_attack_on_new_york.html > (accessed 24 May 2015).
  4. Anne Applebaum, ‘The discreet charm of the terrorist cause’, Washington Post (3 August 2005) < http://www.washingtonpost.com/wp-dyn/content/article/2005/08/02/AR2005080201943.html > (accessed 24 May 2015).
  5. ‘Pat Finucane murder: “Shocking state collusion”, says PM’, BBC News (12 December 2012) < http://www.bbc.co.uk/news/uk-northern-ireland-20662412 > (accessed 24 May 2015).

1994 – The Rwandan Genocide

18/05/2015

6 April 1994 – Kigali

When an aeroplane carrying the President was shot down over Rwanda’s capital, Kigali, the government responded in brutal fashion. The ruling classes, dominated by an ethnic group called the Hutu, began systematically raping and murdering Tutsis.

The massacre, which lasted around three months, was part of a bloody civil war. Plenty has been written about the genocide, and (as with other topics in this series) I doubt I will be able to do the subject justice. Thankfully, as a historian I can twist the question “what can you tell me about Rwanda ’94?” and talk about something completely different.

Two historically interesting issues surround the narrative of Rwanda. The first is “what constitutes a genocide”? Is it simply a question of numbers? Does there have to be an overt racial element? What about religion or nationality? And do the perpetrators have to be racially homogeneous?

The second is “should ‘we’ have done more to stop it”? Is military intervention in a foreign conflict ever acceptable? Is there such a thing as “neutrality”? Indeed, is non-intervention a conscious and dangerous act in itself? At what point does a stronger power have to put itself at risk to protect the most vulnerable?

These are questions worth asking, because in the twenty years since, both have been prevalent in British politics – and have therefore had a big impact upon the world in which I live.

The term “genocide” or “ethnic cleansing” has been attached to a number of atrocities in the twentieth century. Because it is so emotive, there are some very sensitive issues to confront before using it in any serious assessment of history. For the alleged perpetrators, it carries a stigma that very few would be willing to admit – certainly not if that group still holds power to this day. For the alleged victims, there is great political capital to be made – either from seeking political redress after the fact, or in trying to secure military support from an outside power (on which more later). This is not, of course, to suggest that Kurds, Tutsis, Armenians or Jews have “played the victim” – rather, it shows the various factors that lead to people on both sides becoming so defensive of their relative positions.

Some cases of denialism are less convincing than others. There is a great industry of holocaust deniers, most notably in the work of David Irving. A British “historian”, Irving used a number of German documents from the Nazi Party to claim that Hitler did not order the Final Solution, and questioned whether any systematic attempt to wipe out Jewish people ever existed.1 He was proved an incompetent liar in court.2 But even a cursory glance at the seedier sides of the internet shows that this sort of attitude persists.

Attacking National-Socialist Germany is, of course, easier because of its utter defeat in the Second World War. Even if people will defend it, there is no need for any major company or nation state to negotiate the truth. Turkey, on the other hand, is a different matter. The Turkish Government completely denies that the murder of Armenians during World War One constitutes a genocide. Its allies, including the United States, often try to find euphemistic terms for the killing.3

What’s interesting here is that there is no denial that killing took place. Just as there is little doubt that Saddam Hussein gassed Kurdish people in Northern Iraq; or that there were thousands of people murdered in ethnic wars in Yugoslavia. Rather their significance is “downgraded” from genocide to less emotive terms. Hussein and Milošević were (no longer) useful allies for Western governments – and were eventually put on trial (with dubious success and/or impartiality).

Turkey, however, first as a safety zone against the encroachment of Communism in Eastern Europe, and then as a secular buffer against unstable dictatorships in the Middle East, is not a country to piss off. Despite the opinions of many historians with more knowledge than I – and, indeed, Barack Obama himself4 – the Office of the POTUS will not say that what happened was a genocide.

While not universally considered a genocide, the potato famine in Ireland is seen by some as such. (Source)

More subtle are mass killings that, for a variety of reasons, are not considered genocides. In some cases this is because the ethnic grouping of the victims is unclear. In others, deaths are caused by indirect means, such as the inadvertent or deliberate exacerbation of famine and abject poverty. For instance, the famine that struck Ireland in the nineteenth century was fuelled by British agricultural and taxation policy, but is not generally considered a genocide. The forced collectivisation of farms in Ukraine under Stalin, similarly, borders on the definition, but is not usually treated as such.

Then there are the practices of European Empires which killed millions, but are usually couched in other terms (if they are confronted at all). The famous stories of smallpox-riddled blankets being “donated” to Native Americans, for example;5 or the systematic destruction of traditional trade networks leading to famines in the Indian subcontinent.6

The United Nations Security Council. (Source)

OK. So even if there are questions about whether a conflict is necessarily a “genocide”, what responsibility do others have to intervene? One of the main criticisms about Rwanda has been that Western nations could have prevented a catastrophe if they had come to the aid of the Tutsis.7 Since then, military interventions have taken place (or been proposed) in Bosnia, Libya, Iraq, Syria, Sierra Leone and elsewhere.

Much like with the question of Europe, the split in opinion in places like Britain has not fallen along the traditional left/right dividing line. There is a strong anti-war contingent on the left, often painted as anti-imperialism. Opposition to Vietnam and nuclear weapons also extends to an overtly pacifist approach to international relations. On the other hand, there is also a group that professes “solidarity” with common people in foreign countries unable to defend themselves from the forces of dictatorial regimes. Thus, there was left-wing support for intervention in Iraq and Libya in recent years; and, more famously, for the Republicans in the Spanish Civil War of the 1930s.

Intervention is not always well-received by the electorate of the intervening nation. (Source)

On the right, there are the “hawks” who see military intervention as a way of ensuring stability in regions of the world that might hurt Western interests – either through direct conflict and terrorism, or through disrupting potential markets and supplies of raw materials. More cynically, some see it as a boon for the arms trade. Then there are isolationists who believe that problems in the rest of the world are not our concern. Money spent on wars should be kept at home, with poorer countries left to sort out their own messes. Less Machiavellian is the belief that intervention has unknown consequences, and may lead to power vacuums that, in turn, create bigger and longer-lasting problems down the line. This is a concern shared by the anti-war left, too.

It is certainly the case that intervention tends to happen when there is a benefit to the intervening state.8 But it would be wrong to describe this as solely to do with “oil” (as in the case of Iraq) – there were also genuine humanitarian concerns from the British people about the safety of civilians in a number of conflicts.

Rwanda, then, brings into focus many foreign policy questions which have yet to be resolved. After the Cold War, this sort of global politics seems to have intensified, and it has been a key part of my adolescent and adult life. No doubt it will continue to reverberate 30 years hence.

  1. For more on this pleasant individual, see ‘David Irving’, Wikipedia < http://en.wikipedia.org/wiki/David_Irving > (accessed 7 May 2015).
  2. Richard Evans, Telling Lies about Hitler: The Holocaust, History and the David Irving Trial (Verso, 2002).
  3. Chris Bohjalian, ‘Why does Turkey continue to deny Armenian genocide’, The Boston Globe (9 March 2015) < http://www.bostonglobe.com/opinion/2015/03/09/why-does-turkey-continue-deny-armenian-genocide/rV7dcOXxrDb7wz01AoXgZM/story.html > (accessed 7 May 2015).
  4. AP, ‘Barack Obama will not label 1915 massacre of Armenians a genocide’, The Guardian (22 April 2015, 4:23 BST) < http://www.theguardian.com/us-news/2015/apr/22/barack-obama-will-not-label-1915-massacre-of-armenians-a-genocide > (accessed 7 May 2015).
  5. K B Patterson and T Runge, ‘Smallpox and the Native American’, American Journal of Medical Science 323(4) (2002), 216-22.
  6. David Arnold, ‘The “discovery” of malnutrition and diet in colonial India’, Indian Economic and Social History Review 31(1) (1994), 1-26.
  7. See, for example, Emily Sciarillo, ‘Genocide in Rwanda: The United Nations’ ability to act from a neoliberal perspective’, Towson University Journal of International Affairs 38(2) (2002), 17-29.
  8. There are dangers in applying this interpretation in all cases, however. See: Philip Spence, ‘Imperialism, anti-imperialism and the problem of genocide, past and present’, History 98(332), 606-22.

1993 – Maastricht and NAFTA

11/05/2015

1 November 1993 – Europe; 8 December 1993 – Washington D.C.

1993 saw the ratification of the Maastricht Treaty in Europe, and the formation of the European Union (EU). Over the pond, the North American Free Trade Agreement (NAFTA) signalled greater economic cooperation between the United States, Canada and Mexico. These agreements, naturally, did not appear overnight – but they signalled a shifting dynamic of economic power, one in which supranational organisations would have an impact upon domestic lawmaking and financial autonomy.

It is an unfortunate quirk of timing that I am writing this before (and publishing it after) the 2015 UK General Election. The rise of the right-wing United Kingdom Independence Party (UKIP) has been built largely upon anti-EU sentiment. After the Cold War, economic integration seemed to offer greater financial security: but at a cost.

Salinas de Gortari, Bush and Mulroney initialing a draft of NAFTA in 1992. (Source)

Between them, the EU and NAFTA account for around 40% of the world’s wealth. Included in that are eight of the fifteen largest national economies.1 The point being that the buying power of these trade blocs is huge, and has a significant effect on the world economy.

In Europe, however, the move from the European Community (EC) to the European Union (EU) was greeted with scepticism. Loyalties were split, although not along the traditional “left-right” axis. In Britain during the 1970s, many in the Labour Party were opposed to the EC because they feared the impact it would have upon trade with the Commonwealth and the ability of a socialist government to control the economy. Conservative support, led by Margaret Thatcher, was needed to offset the Labour rebels. By the 1990s, Labour was the pro-European party, set against large parts of the Conservative Party who feared a loss of national sovereignty and the impact of European “bureaucracy” on businesses. Today, the leadership of the major parties in Britain supports the EU, but a sizeable number of MPs (particularly Conservative) have major reservations and would support another referendum.

At the time of Maastricht, similar splits were evident. For the international left (usually of a middle-class bent), greater political integration was seen as a progressive move towards a post-national society. For some, this was a reaction against the nationalism of earlier decades, which had seen untold destruction across the continent. For those living in large sub-national regions, the EU offered an opportunity for greater autonomy within the national structure under a trans-national regulatory authority. Places such as Brittany, for example, with a local language and culture, saw great potential in the European project. But for those with a more nationalist bent, free trade and freedom of movement were a threat to local job security. Large companies would find it easier to move to places where labour was cheaper, and migrants would be able to undercut local workers’ wages.

On the neoliberal right, the EU offered opportunities for greater cross-national trade and greater profits. It would allow businesses to be more competitive by providing a tariff-free trade zone, a larger labour pool and the ability to be based in multiple sites across the continent. The traditionalist right, however, was worried about the loss of national autonomy. Especially in those countries with a long history and strong national identity, the idea of ceding control to “Brussels” was anathema. France only narrowly passed a referendum on the Maastricht Treaty; the Danes initially rejected it.2

The Lisbon Treaty also split European opinion in 2005. (Source) © Stephane Peray, 2005.

Across the pond, NAFTA did not pursue such close political integration. There is no Pan-American currency similar to the Euro. But the destruction of tariffs and promotion of free trade has had similar consequences for smaller businesses. Take agriculture, for instance. One of the major bones of contention within the EU has been the Common Agricultural Policy. This provides subsidies to European farmers as a way of combating the threat of cheap food from other areas and ensuring supplies. Some countries have claimed that it benefits large, “industrial” farms at the expense of traditional growers – and that nation states have lost their ability to provide subsidies to their local producers. It has also encouraged over-production.3

The opening up of Mexico’s markets to American agricultural businesses appears to have had a similar effect. Local farms – without the size or technology of the massive food companies “north of the border” – have suffered. Cheap corn, and in particular cheap corn syrup, has flooded the Mexican market, hitting the agricultural sector and contributing to rising obesity and tooth decay in the country.4

Obligatory picture of a Mexican farmer. (Source)

Despite all this, I am a fan of the European Union. But to support it and truly get the best from it, we need to be aware of its strengths as well as its faults.

Both the EU and NAFTA have made a significant contribution to the growth of Western economies over the past 20 years. After a period of recession in the early 1990s, there was near-continuous economic growth until the crash of 2008. In that time, Europeans have enjoyed freedom of movement within the Union, and the somewhat mixed blessings of a single currency (apart from in Britain and Scandinavia). Half of Britain’s trade is conducted within the EU, and the elections held for the European Parliament cover over half a billion people. The European courts over recent years have protected citizens from state overreach, and reversed judgements in breach of human rights legislation. And while this may not mean much to some, freedom of movement and the availability of EU grants are an essential pillar of higher education and research across the continent.

The Union is far from perfect, but it offers the possibility of genuine transnational co-operation in the interests of citizens rather than national governments or large corporations. But it also contains the machinery to outlaw economic controls that would curb the worst excesses of free markets and protect smaller businesses and workers. I firmly believe that these sorts of organisations will become increasingly important in the modern world, a vital check against vested interests which are more powerful than many single nation states. At the same time, they may also create exactly the conditions that would allow billion-dollar companies to force democratically elected governments at the local, national and supranational level to kowtow to their demands.

In any case – Britain will not find protection in some fanciful New Commonwealth or as the 51st State of America. Countries such as Australia and India are already moving towards their own free trade alliances – while the US is tied into NAFTA. Supranational organisation will become inescapable. The question is, do you want to use its powers to democratically provide a better deal for ordinary people? Or accept that power is inevitably and solely in the hands of the businesses with the biggest wallets?

“Nige”. (Source)

  1. According to the International Monetary Fund in 2014 – USA (largest), Germany (4th), UK (5th), France (6th), Italy (8th), Canada (11th), Spain (14th) and Mexico (15th). ‘List of countries by GDP (nominal)’, Wikipedia < http://en.wikipedia.org/wiki/List_of_countries_by_GDP_%28nominal%29 > (accessed 4 May 2015).
  2. Michael S. Lewis-Beck and Daniel S. Morey, ‘The French “Petit Oui”: The Maastricht Treaty and the French voting agenda’, Journal of Interdisciplinary History 38(1) (2007), 65-87.
  3. There is a long-running debate about CAP which is impossible to summarise here. However, a decent overview is available on the Wikipedia article on the subject: ‘Common Agricultural Policy’, Wikipedia, Criticism < http://en.wikipedia.org/wiki/Common_Agricultural_Policy#Criticism >.
  4. Anjali Browning, ‘Corn, tomatoes, and a dead dog: Mexican agricultural restructuring after NAFTA and rural responses to declining maize production in Oaxaca, Mexico’, Mexican Studies / Estudios Mexicanos 29(1) (2013), 85-119; Sarah E. Clark, Corinna Hawkes, Sophia M. E. Murphy, Karen A. Hansen-Kuhn and David Wallinger, ‘Exporting obesity: US farm and trade policy and the transformation of the Mexican consumer food environment’, International Journal of Occupational and Environmental Health 18(1) (2012), 53-64.

1992 – The Premier League

04/05/2015

15 August 1992 – London, Ipswich, Liverpool, Southampton, Coventry, Leeds and Sheffield

The sport of Association Football was invented in 1992 when the Premier League first kicked off at 3pm, 15 August 1992 in nine grounds around England. It developed out of primitive proto-football games played in local parks, the British Colonies and, for 104 years, as a professional sideshow in England and Wales.

The video below shows some grainy footage of one of these games being played in front of a small audience of circa 107,000.

While the “Sky Sports Generation” is roundly mocked for its insistence on statistics that start with the phrase “in the Premier League era”, it is worth taking my tongue out of my cheek for a moment to note that it has been almost 23 years since the first Premier League game was played. As a result, we are about as far removed from Brian Deane’s goal against Manchester United as Brian Deane’s goal against Manchester United was from the Apollo 11 moon landing.

Brian Deane scored the first Premier League goal. Against Manchester United. For Sheffield United.

Now, as with posts about Wrestlemania or Sol Campbell, why should anyone who doesn’t care about sport give a damn about the Premier League? Well, as with ‘Mania, this is a story about globalisation. It is about the breaking down of national boundaries and the great benefits and inherent dangers of the commercialisation of working-class culture.

To set my stall out from the beginning – I am a fan of a non-Premier League club. Walsall. A club whose only real claims to fame are beating Arsenal in the days before the mass production of penicillin and being ground zero for Paul Merson’s managerial career. If you don’t know who Paul Merson is, don’t bother Googling.

I am, however, very much part of the “Premier League Generation”. I have a Sky Sports subscription. My wife is an Arsenal fan. Rightly or wrongly, I view the Champions League Final as the biggest game of the season. And I still believe David Beckham is a flawless human being.

Flawless. (Source)

When professional football started in this country back in the 1880s, it was viewed with suspicion by the public-school-educated men who governed the sport. The Corinthian – amateur – spirit was supposed to pervade. Football was played for the love of the game, to make men manly and to reflect the masculine Christian ideal of the defender of the British Empire. Professionalism sullied that ideal. So what if the factory workers of the North needed to be compensated for the time they took off work to play the game? So what if the money from paying customers went straight into the hands of businessmen, not the athletes competing?

If that sounds like a stupid argument, bear in mind the Rugby Football Union only approved of formal professionalisation in 1995. The NCAA in the United States still forbids it.

As time went on, the business side of football became more entrenched. The British Football Association briefly championed the rights of professional clubs, before the Football Association caved and supported the paying of players in 1885. Clubs in the North and Midlands of England organised a league in 1888 so that teams could play regular fixtures and therefore guarantee a certain standard of matches for paying customers. Twelve clubs signed up. Over the years, more clubs were added and split into hierarchical divisions. In 1950 the last expansion occurred, creating a league of 92 professional teams in four divisions. They played over 40 games a year, plus various local, national and eventually international cup competitions.

The first league champions, Preston North End. (Source)

League football, then, has always been a business. It exists because owners wanted more reliable income streams. The laws of the game may have changed significantly since 1888, but this has been the one constant. Until 1961, it was against the rules to pay a player more than £20 a week. (£412 or $615 in today’s money.) Owners have always found ways to overcrowd dilapidated stadiums to get as much in ticket revenue as they could – sometimes with disastrous consequences. And so when a new cash cow came along, it was no surprise that the bigger owners tried to exploit it to their own ends.

Between 1990 and 1992, a series of negotiations was held between the biggest clubs in the country and broadcasters to form a breakaway league. Under Football League rules, revenue from televised football was split between all 92 clubs. Those in Division 1 believed this was unfair; since they provided the best quality football, they argued that they should keep the vast majority of the proceeds.

The 22 teams in that division voted to form their own league – The FA Premier League – and negotiated a separate deal with BSkyB, the biggest pay-TV supplier in the country. The structure of English football was retained, with three sides relegated to the second tier every season, and three new arrivals taking their place. Other than the one season in which the division was trimmed from 22 to 20 sides, this has remained the case. The total number of clubs in the top four divisions in English football is still 92. So, why the fuss?

As satellite television became increasingly popular in the UK, BSkyB were able to bid even more for Premier League football. Off the back of a reasonable performance by England at the 1990 World Cup, the return of European competition following the Heysel ban, and the safety improvements prompted by the Taylor Report, football became a more popular product to attend and to view on screen. Rupert Murdoch’s TV empire was built on its ability to provide exclusive access to Bart Simpson, Rachel Green and Eric Cantona. Football fuelled Sky as much as Sky fuelled football.

The size of the television contracts meant that promotion to the Premier League became increasingly lucrative. Those in the division had a ridiculous competitive edge over those outside. Those in the old Division Two – now the top tier of the Football League – were forced either to face a season of losing regularly in the Premier League (and inevitable relegation), or to overspend in the pursuit of Sky money. Others found themselves unable to cope with the loss of income that followed relegation. The knock-on effects were obvious; even lower-league clubs were forced to pay inflated prices for average players just to try and compete in the divisions they had historically been associated with. Football fans are well aware of the difficulties of Crystal Palace, Southampton, Portsmouth, Leicester City, Cardiff City, Luton Town, Wrexham, Wimbledon, York City, Port Vale, Boston United, Leeds United, Chester City, Hereford United, Halifax Town…

Liverpool fans oppose the American owners of the club. “Built by Shanks [Bill Shankly, the legendary manager who laid the foundations for the club’s success in the 1970s and 1980s], broke by Yanks [Americans]”. (Source)

There have been other problems. As football has increased in popularity with the middle classes and foreign tourists, ticket prices have increased well above inflation, so that many of the working-class men and women who would have attended games 20 years ago can no longer afford to do so. The influx of money has also led to the mass importation of the best talent from around the world, cutting off the traditional opportunities for young British players to play at the highest level of English football. For those who do get the opportunity, wages are so over-inflated it can be difficult to keep their egos grounded. Fans of clubs outside the top six or seven have absolutely no hope going into a new season of winning the league – unless, by some miracle, a multi-billionaire shows interest in their team. But that causes its own unique problems.

Sky’s television deals have distorted the competition between the 92 “league” clubs. (Source)

On the other hand, the quality of football on the field is undoubtedly better than it was before the Premier League. World television deals have improved the facilities and talent pools of all the top clubs across Western Europe (soccer’s traditional powerhouse). Players are stronger, fitter and spend more time honing their technique than ever. Tactically, the teams are better drilled. Sometimes that results in turgid chess battles that are less interesting than watching actual chess. But it also gives us the brilliance of Thierry Henry, Cristiano Ronaldo and Luis Suarez. Stadiums across the country are now safe and comfortable. It is easier than ever to follow your team if you cannot for some reason get to the ground. And for people like me and my wife, who grew up in rural areas without access to a decent professional team, it means we can consume and follow the sport in a way that would have been difficult in the past.

Of course, as with all change narratives, some will praise the march of progress; others will mourn for a culture lost. But this assumes that footballing culture in Britain was always static. Football had already evolved from a local activity to a national and increasingly ‘Europeanized’ culture by 1990.1 Wages had increased since the maximum wage was scrapped in 1961. While not a multi-millionaire in the same way the average Premier League player is now, it would be a stretch to suggest that someone like Kevin Keegan (active 1968-85) was some sort of romantic local working-class boy who remained part of the local community. Maybe he was more “one of the people” than Wayne Rooney. But it is a crass over-simplification to assume that the game in 1991 was somehow pure and Corinthian.

The rabid opposition expressed by some to what is considered the “over-commercialisation” of football is also reflective of a wider concern about the encroaching power of globalisation and the destruction of local identities. The “39th Game” proposal, anathema to many, was offered as an example of how modern club owners place profit above the integrity of the sport.2 This would have broken the traditional structure of the league, in which each club plays every other team twice (once at their own stadium and once away). Thus, every team has the same schedule, and can therefore prove over the course of a year which is the strongest. But the opposition also reflected a grave concern that, by playing English fixtures in the Middle East, clubs traditionally rooted in a local community would become simply businesses that happened to be based in England. Fans from the surrounding area would be far less important than anyone willing and able to buy a ticket. A culture was being lost.
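To make the arithmetic of that structure concrete, here is a minimal sketch – my own illustration in Python, not anything taken from the league’s rulebook or the original proposal – of how a double round-robin fixture list works, and why bolting one extra game onto it breaks the symmetry.

from itertools import permutations

def double_round_robin(clubs):
    # Every ordered pair (home, away) is one fixture: each club meets every
    # other club twice, once at home and once away.
    return list(permutations(clubs, 2))

clubs = ["Club %d" % i for i in range(1, 21)]   # a 20-team division
fixtures = double_round_robin(clubs)

print(len(fixtures))            # 380 matches in total (20 x 19)
print(2 * (len(clubs) - 1))     # 38 games per club, identical for everyone

# A hypothetical "39th game" would add one extra fixture per club against an
# arbitrary opponent, so some pairs of clubs would meet three times while
# others still met only twice - the schedules stop being comparable.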

Even when English clubs play each other, the prize at stake is often international in nature. Whether it be here, in a UEFA Champions League match; or whether it be as part of the process of finishing in the “top 4” so that the club can qualify for the mega-riches of European competition. (Source)

The trends seen in the Premier League are not unique to England. Spain’s league has also become less competitive in the wake of an influx of foreign money and stars, despite possessing arguably the best two teams in world football over the past five years. The increased popularity of the Champions League – an annual competition for the top teams from each of Europe’s leagues – has further widened the financial gap between the top and bottom of each country. “National” football makes less and less sense in a global marketplace, let alone the idea that a club should reflect and be integrated with the city in which it happens to play.

European football as a whole, then, reflects a world in which local and national assets are owned by increasingly powerful and distant entities, and bear little relation to their historical origins. Much as a car factory can simply leave an area for one with cheaper labour, causing local unemployment in its wake, football clubs can price out their traditional fans in the knowledge that there will be enough TV subscribers and tourists willing and able to fork out the price of viewing. Those praising and demonising the Premier League do so within this historical context. One in which quality undoubtedly improves, but the local character and culture are discarded in favour of uniformity and the interests of distant powers.

  1. Mark Turner, ‘From local heroism to global celebrity stardom: a critical reflection of the social cultural and political changes in British football culture from the 1950s to the formation of the premier league’, Soccer and Society, 15(5) (2014), 751-60.
  2. Joel Rookwood and Nathan Chan, ‘The 39th game: fan responses to the Premier League’s proposal to globalize the English game’, Soccer and Society, 12(6) (2011), 897-913.

1991 – The Croatian War of Independence

27/04/2015

1 March 1991 – Pakrac

An old professor of mine during my third year as an undergraduate ran a course on Eastern European history. I enjoyed writing my dissertation on Czechoslovakia and Poland during the Second World War, but his expertise ran much wider than that. We got into a conversation about how the fall of Communism was a good thing because it meant that UEFA got a bunch of great new football teams such as Croatia, Serbia and Ukraine. He grinned, and said that one of his most embarrassing “predictions” in the late 1980s was that Yugoslavia would be one of the only communist states to remain intact. “I said, ‘Oh, Yugoslavia will have to remain together. The bloodshed would be horrific if any of the ethnic groups tried to secede’.” He paused. “I suppose I was half right.”

A disclaimer, which I don’t normally put in these articles: I do not claim to be an expert on Yugoslavia or Balkan politics. My interest as a historian is in the way in which stories about the war have been interpreted and filtered into the popular imagination. And, in particular, how they have been received in the United Kingdom. I don’t care who was “right” and who was “wrong”. And while I sympathise with those affected, it is not my intention here to tell “the real history” of Croatia. I hope the footnotes to this piece offer at least some starting point for those wanting to do their own research.

As tensions increased following the fall of the Berlin Wall, a football riot broke out between Dinamo Zagreb and Red Star Belgrade (respectively Croatia’s and Serbia’s top teams). For some, it marks the informal start of the war of independence. Footage has been uploaded to YouTube.

The various civil wars in Yugoslavia in the 1990s were horrific. They made “ethnic cleansing” a widely recognised term, a euphemism for the mass murder of various ethnic groups. Even today, the independence of Kosovo is disputed, while the tensions between Serbs and other nations have often been expressed in violence at international sporting events.1 As a child just becoming aware of international events and the news, I remember the violence in the former Yugoslavia as the first war I watched unfold on television. And its effects are still being felt.

Like my professor, many have cited ethnic tension as the root cause of the violence. And in many ways it was. But the source of that tension was historical – not in the sense that the histories of these groups led to inevitable conflict, but in the sense that ethnic histories could be manipulated by political leaders to mobilise national groups to war.

Croatia is the land of Marco Polo and the neck tie. (The country is locally known as Hrvatska – which was corrupted in French to “cravat”.)2 In the Middle Ages it was an independent kingdom, but over the Early Modern period it spent time under the rule of Venice, the Hapsburg Empire and the Ottoman Empire. When the Austro-Hungarian state was dismantled following defeat in the First World War (prompted by the assassination of Franz Ferdinand in neighbouring Bosnia), Croatia, along with a number of other Balkan states, formed a new Kingdom of Yugoslavia. Then, after Nazi occupation during the Second World War, Yugoslavia emerged as an independent Communist state.

Unlike many of the other countries in the region, Yugoslavia was not a puppet of Moscow. It was fairly liberal, allowing much more foreign travel and educational opportunities than its neighbours. Indeed, on the eve of the fall of the Berlin Wall, it was expected that the state would become the first Eastern member of the European Economic Community.3 Instead, growing tensions within the Republic following the death of Tito boiled over into bloody violence. The question ever since for political scientists and historians has been – why?

At the time, and in the popular narrative, ethnic tensions between the various groups have been blamed. For the ex-Yugoslavia was by no means ethnically uniform. Slovenians, Croats, Bosniaks, Montenegrins, Albanians, Macedonians and Serbs all lived within the union. The capital, Belgrade, was within Serbia, but there were various constituent republics based around key towns such as Zagreb (Croatia) and Sarajevo (Bosnia). This reflected the country’s origins as an alliance between various ethnic groups within the Balkans. Aside from these national/ethnic differences, there was also a mix of Muslim, Orthodox and Catholic religious traditions.4 Certainly the mass murders in Kosovo and Bosnia were done in the name of “ethnic cleansing”.5 But regardless of how these events were spun, or the way ethnic identity was harnessed to provoke political mass movements, the historical picture is not quite so clear.

The former Yugoslavia and successor states as of 2008. (Source)

Indeed, Dejan Djokic has noted how one of the more interesting arguments during the Slobodan Milošević trial centred on the Croatian Party of Rights and the historical figure Vuk Stefanović Karadžić – a nineteenth-century Serb linguistic reformer. In a debate between the ex-leaders of Serbia (Milošević) and Croatia (Stjepan Mesić), the battle was over the true interpretation of historical events. For the Serbs, the Croatians took their political heritage from proto-fascists; for the Croats, they were part of a line of freedom fighters, first against the Austrians and now against the Serbs.6 Others have published studies which show how political discourse at crucial points in Yugoslavian history – 1984-85, 1989-90, 1996 and 2003 – bears the marks of manipulation by political elites. What mattered most, argue Sekulić, Massey and Hodson, was that ethnic identities could be mapped onto political and economic movements. This allowed political and historical narratives to become distorted.7 These histories became necessarily adversarial, as Raymond and Bajic-Raymond discuss with regard to Franjo Tudjman (Croat) and Milošević.8

In the end, national identities are built on history. Key events and public figures are interpreted and used as symbols of good or evil, and held up as role models for particular interests. They proclaim common interests, a shared heritage which is worth defending and building upon. And it is often said that history cannot exist without the nation.

No history is value free, nor can it ever hope to be. But when used as a call to war, it can be tremendously powerful. There is a danger that we lionise the good – “Churchillian spirit” or “the spirit of Dunkirk” destroying the Nazis – and forget that it is all part of the same process that allowed Hitler’s “Third Reich” to be placed in a continuum of great German states. This is why we should always be sceptical of those who try to ‘rehabilitate’ the British Empire9 (as well as those who use it for their own nefarious ends).10

All history is distorted. This is not to say that all history is bad, but simply to make us keenly aware that what we think of as “the truth” is often far from objective. We shouldn’t stop building and retelling national myths necessarily. But we should always be questioning and presenting alternatives. An important lesson to learn as we move forward with the 30-for-30.

  1. See: Ellen Connolly, ‘Balkan fans riot at Australian Open tennis’, The Guardian, 24 January 2009 < http://www.theguardian.com/world/2009/jan/24/australian-open-riot > (accessed 25 March 2015); ‘Serbia and Albania game abandoned after drone invasion sparks brawl’, CNN, 15 October 2014 7:27pm EDT < http://edition.cnn.com/2014/10/14/sport/football/serbia-albania-game-abandoned/ > (accessed 25 March 2015).
  2. ‘Cravat’, Wikipedia < http://en.wikipedia.org/wiki/Cravat > (accessed 25 March 2015).
  3. V. P. Gagnon Jr., ‘Yugoslavia in 1989 and after’, Nationalities Papers 38(1) (2010), 23-39.
  4. See: Wendy Bracewell, ‘The end of Yugoslavia and new national histories’, European History Quarterly 29(1), 149-56.
  5. See particularly: ‘Srebrenica Massacre’, Wikipedia < http://en.wikipedia.org/wiki/Srebrenica_massacre > (accessed 25 March 2015).
  6. Dejan Djokic, ‘Coming to terms with the past: Former Yugoslavia’, History Today 54(6) (2004).
  7. Duško Sekulić, Garth Massey, Randy Hodson, ‘Ethnic intolerance and ethnic conflict in the dissolution of Yugoslavia’, Ethnic & Racial Studies 29(5) (2006), 797-827.
  8. G.G. Raymond, S. Bajic-Raymond, ‘Memory and history: The discourse of nation-building in the former Yugoslavia’, Patterns of Prejudice 31(1) (1997), 21-30.
  9. Seumas Milne, ‘This attempt to rehabilitate empire is a recipe for conflict’, The Guardian, 10 June 2010 8:01 BST < http://www.theguardian.com/commentisfree/2010/jun/10/british-empire-michael-gove-history-teaching > (accessed 25 March 2015).
  10. Mark Tran, ‘Mugabe denounces Britain as “thieving colonialists”’, The Guardian, 18 April 2008 16:09 BST < http://www.theguardian.com/world/2008/apr/18/zimbabwe.independence > (accessed 25 March 2015).