THE ASCENT OF MONEY: A FINANCIAL HISTORY OF THE WORLD
BY NIALL FERGUSON
4. THE RETURN OF RISK
The most
basic financial impulse of all is to save for the future, because the future is
so unpredictable. The world is a dangerous place. Not many of us get through
life without having a little bad luck. Some of us end up having a lot. Often,
it’s just a matter of being in the wrong place at the wrong time: like the Mississippi
delta in the last week of August 2005, when Hurricane Katrina struck not once
but twice. First there was the howling 140-mile-an-hour wind that blew many of
the area’s wooden houses clean off their concrete foundations. Then, two hours
later, came the thirty-foot storm surge that breached three of the levees that
protect New Orleans from Lake Pontchartrain and the Mississippi, pouring
millions of gallons of water into the city. Wrong place, wrong time. Like the
World Trade Center on 11 September 2001. Or Baghdad on pretty much any day
since the US invasion of 2003. Or San Francisco when - as it one day will - a
really big earthquake occurs along the San Andreas fault.
Stuff
happens, as the former Secretary of Defense Donald Rumsfeld insouciantly observed
after the overthrow of Saddam Hussein unleashed an orgy of looting in the Iraqi
capital. Some people argue that such stuff is more likely to happen now than in the
past, whether because of climate change, the rise of terrorism or the blowback
from American foreign policy blunders. The question is, how do we deal with the
risks and uncertainties of the future? Does the onus fall on the individual to
insure against misfortune? Should we rely on the voluntary charity of our
fellow human beings when things go horribly wrong? Or should we be able to
count on the state - in other words on the compulsory contributions of our
fellow taxpayers - to bail us out when the flood comes?
The
history of risk management is one long struggle between our vain desire to be
financially secure - as secure as, say, a Scottish widow - and the hard reality
that there really is no such thing as ‘the future’, singular. There are only
multiple, unforeseeable futures, which will never lose their capacity to take
us by surprise.
THE BIG UNEASY
In the
Westerns I watched as a boy I was fascinated by ghost towns, short-lived
settlements that had been left behind by the fast pace of change on the
American frontier. It was not until I went to New Orleans in the wake of
Hurricane Katrina that I encountered what could very well become America’s
first ghost city.
I had
happy if hazy memories of the ‘Big Easy’. As a teenager between school and
university, savouring my first taste of freedom, I discovered it was about the
only place in the United States where I could get served beer despite being
underage, which certainly made the geriatric jazz musicians in Preservation
Hall sound good. Twenty-five years on, and nearly two years after the great
storm struck, the city is a forlorn shadow of its former self. Saint Bernard
Parish was one of the districts worst affected by the storm. Only five
homes out of around 26,000 were not flooded. In all, 1,836 Americans lost their
lives as a result of Katrina, of whom the overwhelming majority were from
Louisiana. In Saint Bernard alone, the death toll was forty-seven. You can
still see the symbols on the doors of abandoned houses, indicating whether
or not a corpse was found inside. It invites comparison with medieval England
at the time of the Black Death.
When I
revisited New Orleans in June 2007, Councilman Joey DiFatta and the rest of
Saint Bernard’s municipal government were still working in trailers behind
their old office building, which the flood gutted. DiFatta stayed at his desk
during the storm, eventually retreating to the roof as the waters kept rising.
From there, he and his colleagues could only watch helplessly as their beloved
neighbourhood vanished under filthy brown water. Angered by what they saw as
the incompetence of the Federal Emergency Management Agency (FEMA), they
resolved to restore what had been lost. Since then, they have worked tirelessly
to try to rebuild what was once a tightly knit community (many of whose members, like
DiFatta himself, are descended from settlers who came to Louisiana from the
Canary Islands). But persuading thousands of refugees to come back to Saint
Bernard has proved far from easy; two years later the parish still has only one
third of its pre-Katrina population. A large part of the problem turns out to
be insurance. Today, insuring a house in Saint Bernard and other low-lying
parts of New Orleans is virtually impossible. And without buildings insurance,
it is virtually impossible to get a mortgage.
Nearly
all the survivors of Katrina lost property in the disaster, since nearly three
quarters of the city's total housing stock was damaged. There were no fewer
than 1.75 million property and casualty claims, with estimated insurance losses
in excess of $41 billion, making Katrina the costliest catastrophe in modern
American history.1 But Katrina not only submerged New Orleans.
It also laid bare the defects of a system of insurance that divided
responsibility between private insurance companies, which offered protection
against wind damage, and the federal government, which offered protection
against flooding, under a scheme that had been introduced after Hurricane Betsy
in 1965. In the aftermath of the 2005 disaster, thousands of insurance company
assessors fanned out along the Louisiana and Mississippi coastline. According
to many residents, their job was not to help stricken policy-holders but to
avoid paying out to them by asserting that the damage their properties had
suffered was due to flooding and not to wind.ab The
insurance companies did not reckon with one of their policy-holders, former US
Navy pilot and celebrity lawyer Richard F. Scruggs, the man once known as the
King of Torts.
‘Dickie’ Scruggs first hit the headlines in
the 1980s, when he represented shipyard workers whose lungs had been fatally
damaged by exposure to asbestos, winning a $50 million settlement. But that was
small change compared with what he later made the tobacco companies pay: over
$200 billion to Mississippi and forty-five other states as compensation for
Medicaid costs arising from tobacco-related illnesses. The case (immortalized
in the film The Insider) made Scruggs a rich man. His fee in the
tobacco class action is said to have been $1.4 billion, or $22,500 for every
hour his law firm worked. It was money he used to acquire a waterfront house on
Pascagoula’s Beach Boulevard, a short commute (by private jet, naturally) from
his Oxford, Mississippi, offices. All that remained of that house after Katrina
was a concrete base plus a few ruined walls so badly damaged that they had to
be bulldozed. Although his insurance company (wisely) paid out, Scruggs was
dismayed to hear of the treatment of other policy-holders. Among those he
offered to represent was his brother-in-law Trent Lott, the former Republican
majority leader in the Senate, and his friend Mississippi Congressman Gene
Taylor, both of whom had also lost homes to Katrina and had received short
shrift from their insurers.2 In a series of cases on
behalf of policy-holders, Scruggs alleged that the insurers (principally State
Farm and Allstate) were trying to renege on their legal obligations.3 He
and his ‘Scruggs Katrina Group’ conducted detailed meteorological research to
show that nearly all the damage in places like Pascagoula was caused by the
wind, hours before the floodwaters struck. Scruggs was also approached by two
whistle-blowing insurance adjusters, who claimed the company they worked for
had altered reports in order to attribute damage to flooding rather than wind.
The insurance companies’ record profits in 2005 and 2006 only whetted Scruggs’s
appetite for redress.ac As he told me when we met in the
wasteland where his house used to stand: ‘This [town] was home for fifty years;
where I raised my family; what I was proud of. It makes me somewhat emotional
when I see this.’ By that time, State Farm had already settled 640 cases
brought by Scruggs on behalf of clients whose claims had initially been turned
down, paying out $80 million; and had agreed to review 36,000 other claims.4 It
seemed as if the insurers were retreating. Scruggs’s campaign against them
collapsed in November 2007, however, when he, his son Zachary and three
associates were indicted on charges of trying to bribe a state-court judge in a
case arising from a dispute over Katrina-related legal fees.ad Scruggs
now faces a prison sentence of up to five years.5
It may
sound like just another story of Southern moral laxity - or proof that those
who live by the tort, die by the tort. Yet, regardless of Scruggs’s descent
from good fellow to bad felon, the fact remains that both State Farm and
Allstate have now declared a large part of the Gulf of Mexico coast a 'no
insurance’ zone. Why risk renewing policies here, where natural disasters
happen all too often and where, after the disaster, companies have to contend
with the likes of Dickie Scruggs? The strong implication would seem to be that
providing coverage to the inhabitants of places like Pascagoula and Saint
Bernard is no longer something the private sector is prepared to do. Yet it is
far from clear that American legislators are ready to take on the liabilities
implied by a further extension of public insurance. Total non-insured damages
arising from hurricanes in 2005 are likely to end up costing the federal
government at least $109 billion in post-disaster assistance and $8 billion in
tax relief, nearly three times the estimated insurance losses.6 According
to Naomi Klein, this is symptomatic of a dysfunctional ‘Disaster Capitalism
Complex’, which generates private profits for some, but leaves taxpayers to
foot the true costs of catastrophe.7 In the face of such
ruinous bills, what is the right way to proceed? When insurance fails, is the
only alternative, in effect, to nationalize all natural disasters - creating a
huge open-ended liability for governments?
Of
course, life has always been dangerous. There have always been hurricanes, just
as there have always been wars, plagues and famines. And disasters can be small
private affairs as well as big public ones. Every day, men and women fall ill
or are injured and suddenly can no longer work. We all get old and lose the
strength to earn our daily bread. An unlucky few are born unable to fend for
themselves. And sooner or later we all die, often leaving one or more
dependants behind us. The key point is that few of these calamities are random
events. The incidence of hurricanes has a certain regularity, like the incidence
of disease and death. In every decade since the 1850s the United States has
been struck by between one and ten major hurricanes (defined as a storm with
wind speeds above 110 mph and a storm surge above 8 feet). It is not yet clear
that the present decade will beat the record of the 1940s, which saw ten such
hurricanes.8 Because there are data covering a century
and a half, it is possible to attach probabilities to the incidence and scale
of hurricanes. The US Army Corps of Engineers described Hurricane Katrina as a
1-in-396 storm, meaning that there is a 0.25 per cent chance of such a large
hurricane striking the United States in any given year.9 A
rather different view was taken by the company Risk Management Solutions, which
judged a Katrina-sized hurricane to be a once-in-forty-years event just a few
weeks before the storm struck.10 These different
assessments indicate that, like earthquakes and wars, hurricanes may belong
more in the realm of uncertainty than of risk properly understood.ae Such
probabilities can be calculated with greater precision for most of the other
risks that people face, mainly because they are more frequent, so statistical
patterns are easier to discern. The average American’s lifetime risk of death
from exposure to forces of nature, including all kinds of natural disaster, has
been estimated at 1 in 3,288. The equivalent figure for death due to a fire in
a building is 1 in 1,358. The odds of the average American being shot to death
are 1 in 314. But he or she is even more likely to commit suicide (1 in 119);
more likely still to die in a fatal road accident (1 in 78); and most likely of
all to die of cancer (1 in 5).11
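The arithmetic behind figures like these - the 1-in-396 storm above, or the 1-in-78 road accident - is easy to check. Here is a minimal sketch in Python, assuming for simplicity that storm years are independent of one another (itself a questionable assumption, which is partly the point):

    def prob_at_least_one(annual_prob, years):
        # Chance of at least one occurrence in a run of independent years.
        return 1 - (1 - annual_prob) ** years

    usace = 1 / 396   # US Army Corps of Engineers: a '1-in-396' storm
    rms = 1 / 40      # Risk Management Solutions: once in forty years

    for label, p in [("USACE", usace), ("RMS", rms)]:
        print(f"{label}: {p:.4%} a year; "
              f"{prob_at_least_one(p, 40):.1%} chance over forty years")

On the Corps of Engineers' figure, the chance of another Katrina-scale storm within forty years comes out near 10 per cent; on the Risk Management Solutions figure, near 64 per cent - radically different pictures of the same peril, which is exactly why such storms sit uneasily between risk and uncertainty.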
In
pre-modern agricultural societies, nearly everyone was at substantial risk from
premature death due to malnutrition or disease, to say nothing of war. People
in those days could do much less than later generations in the way of
prophylaxis. They relied much more on seeking to propitiate the gods or God
who, they conjectured, determined the incidence of famines, plagues and
invasions. Only slowly did men appreciate the significance of measurable
regularities in the weather, crop yields and infections. Only very belatedly -
in the eighteenth and nineteenth centuries - did they begin systematically to
record rainfall, harvests and mortality in a way that made probabilistic
calculation possible. Yet, even before they did so, they understood the wisdom
of saving: putting money aside for the proverbial (and in agricultural
societies literal) extreme rainy day. Most primitive societies at least attempt
to hoard food and other provisions to tide them over hard times. And our tribal
species intuitively grasped from the earliest times that it makes sense to pool
resources, since there is genuine safety in numbers. Appropriately, given our
ancestors’ chronic vulnerability, the earliest forms of insurance were probably
burial societies, which set aside resources to guarantee a tribe member a
decent interment. (Such societies remain the only form of financial institution
in some of the poorest parts of East Africa.) Saving in advance of probable
future adversity remains the fundamental principle of insurance, whether it is
against death, the effects of old age, sickness or accident. The trick is
knowing how much to save and what to do with those savings to ensure that,
unlike in New Orleans after Katrina, there is enough money in the kitty to
cover the costs of catastrophe when it strikes. But to do that, you need to be
more than usually canny. And that provides an important clue as to just where
the history of insurance had its origins. Where else but in bonny, canny
Scotland?
TAKING COVER
They say
we Scots are a pessimistic people. Maybe it has to do with the weather - all
those dreary, rainy days. Maybe it’s the endless years of sporting
disappointment. Or maybe it was the Calvinism that Lowlanders like my family
embraced at the time of the Reformation. Predestination is not an especially
cheering article of faith, logical though it may be to assume that an
omniscient God already knows which of us (‘the Elect’) will go to heaven, and
which of us (a rather larger number of hopeless sinners) will go to hell. For
whatever reason, two Church of Scotland ministers deserve the credit for
inventing the first true insurance fund more than two hundred and fifty years
ago, in 1744.
It is
true that insurance companies existed prior to that date. ‘Bottomry’ - the
insurance of merchant ships’ ‘bottoms’ (hulls) - was where insurance originated
as a branch of commerce. Some say that the first insurance contracts date from
early fourteenth-century Italy, when payments for securitas begin
to appear in business documents. But the earliest of these arrangements had the
character of conditional loans to merchants (as in ancient Babylon), which
could be cancelled in case of a mishap, rather than policies in the modern
sense;12 in The Merchant of Venice,
Antonio’s ‘argosies’ are conspicuously uninsured, leaving him exposed to
Shylock’s murderous intent. It was not until the 1350s that true insurance
contracts began to appear, with premiums ranging between 15 and 20 per cent of
the sum insured, falling below 10 per cent by the fifteenth century. A typical
contract in the archives of the merchant Francesco Datini (c. 1335-1410)
stipulates that the insurers agree to assume the risks ‘of God, of the sea, of
men of war, of fire, of jettison, of detainment by princes, by cities, or by
any other person, of reprisals, of arrest, of whatever loss, peril, misfortune,
impediment or sinister that might occur, with the exception of packing and
customs’ until the insured goods are safely unloaded at their destination.13 Gradually
such contracts became standardized - a standard that would endure for centuries
after it became incorporated into the lex mercatoria (mercantile
law). These insurers were, however, not specialists, but merchants who also
engaged in trade on their own account.
Beginning
in the late seventeenth century, something more like a dedicated insurance
market began to form in London. Minds were doubtless focused by the Great Fire of
1666, which destroyed more than 13,000 houses.af Fourteen
years later Nicholas Barbon established the first fire insurance company. At
around the same time, a specialized marine insurance market began to coalesce
in Edward Lloyd’s coffee house in London’s Tower Street (later in Lombard
Street). Between the 1730s and the 1760s, the practice of exchanging
information at Lloyd’s became more routinized until in 1774 a Society of
Lloyd’s was formed at the Royal Exchange, initially bringing together seventynine
life members, each of whom paid a £15 subscription. Compared with the earlier
monopoly trading companies, Lloyd’s was an unsophisticated entity, essentially
an unincorporated association of market participants. The liability of the
underwriters (who literally wrote their names under insurance contracts, and
were hence also known as Lloyd’s Names) was unlimited. And the financial
arrangements were what would now be called pay as you go - that is, the aim was
to collect sufficient premiums in any given year to cover that year’s payments
out and leave a margin of profit. Limited liability came to the insurance
business with the founding of the Sun Insurance Office (1710), a fire insurance
specialist, and, ten years later (at the height of the South Sea Bubble), the
Royal Exchange Assurance Corporation and the London Assurance Corporation,
which focused on life and maritime insurance. However, all three firms still
operated on a pay-as-you-go basis. Figures from London Assurance show premium
income usually, but not always, exceeding payments out, with periods of war
against France causing huge spikes in both. (This was not least because before
1793 it was quite normal for London insurers to sell cover to French merchants.14 In
peacetime the practice resumed, so that on the eve of the First World War most
of Germany’s merchant marine was insured by Lloyd’s.15)
Life
insurance, too, existed in medieval times. The Florentine merchant Bernardo
Cambi’s account books contain references to insurance on the life of the pope
(Nicholas V), of the doge of Venice (Francesco Foscari) and of the king of
Aragon (Alfonso V). It seems, however, that these were little more than wagers,
comparable with the bets Cambi made on horse races.16 In
truth, all these forms of insurance - including even the most sophisticated
shipping insurance - were a form of gambling. There did not yet exist an
adequate theoretical basis for evaluating the risks that were being covered.
Then, in a remarkable rush of intellectual innovation, beginning in around
1660, that theoretical basis was created. In essence, there were six crucial
breakthroughs:
1. Probability.
It was to a monk at Port-Royal that the French mathematician Blaise Pascal
attributed the insight (published in the Port-Royal Ars Cogitandi) that 'fear
of harm ought to be proportional not merely to the gravity of the harm, but
also to the probability of the event.’ Pascal and his friend Pierre de Fermat
had been toying with problems of probability for many years, but for the
evolution of insurance, this was to be a critical point.
2. Life
expectancy. In the same year that Ars Cogitandi appeared
(1662), John Graunt published his ‘Natural and Political Observations . . .
Made upon the Bills of Mortality’, which sought to estimate the likelihood of
dying from a particular cause on the basis of official London mortality
statistics. However, Graunt’s data did not include ages at death, limiting what
could legitimately be inferred from them. It was his fellow member of the Royal
Society, Edmund Halley, who made the critical breakthrough using data supplied
to the Society from the Silesian town of Breslau (today Wrocław in Poland).
Halley’s life table, based on 1,238 recorded births and 1,174 recorded deaths,
gives the odds of not dying in a given year: ‘It being 100 to 1 that a Man of
20 dies not in a year, and but 38 to 1 for a Man of 50 . . .’ This was to be
one of the founding stones of actuarial mathematics.17
3. Certainty.
Jacob Bernoulli proposed in 1705 that ‘Under similar conditions, the occurrence
(or non-occurrence) of an event in the future will follow the same pattern as
was observed in the past.’ His Law of Large Numbers stated that inferences
could be drawn with a degree of certainty about, for example, the total
contents of a jar filled with two kinds of ball on the basis of a sample. This
provides the basis for the concept of statistical significance and for modern statements of probabilities at specified confidence intervals (for example, the statement that 40 per cent of the balls in the jar are white, with 95 per cent confidence, implies that the true value lies somewhere between 35 and 45 per cent - 40 plus or minus 5 percentage points; a simulation of exactly this jar problem follows this list).
4. Normal
distribution. It was Abraham de Moivre who showed that outcomes of any kind
of iterated process could be distributed along a curve according to their
variance around the mean or standard deviation. ‘Tho’ Chance produces
Irregularities,’ wrote de Moivre in 1733, ‘still the Odds will be infinitely
great, that in process of Time, those Irregularities will bear no proportion to
recurrency of that Order which naturally results from Original Design.’ The
bell curve that we encountered in Chapter 3 represents the normal distribution,
in which 68.2 per cent of outcomes are within one standard deviation (plus or
minus) of the mean.
5. Utility.
In 1738 the Swiss mathematician Daniel Bernoulli proposed that ‘The value of an
item must not be based on its price, but rather on the utility that it yields’,
and that the ‘utility resulting from any small increase in wealth will be
inversely proportionate to the quantity of goods previously possessed’ - in
other words $100 is worth more to someone on the median income than to a hedge
fund manager.
6. Inference.
In his ‘Essay Towards Solving a Problem in the Doctrine of Chances’ (published
posthumously in 1764), Thomas Bayes set himself the following problem: ‘Given
the number of times in which an unknown event has happened and failed; Required
the chance that the probability of its happening in a single trial lies somewhere
between any two degrees of probability that can be named.’ His resolution of
the problem - ‘The probability of any event is the ratio between the value at
which an expectation depending on the happening of the event ought to be
computed, and the chance of the thing expected upon it’s [sic]
happening’ - anticipates the modern formulation that expected utility is the
probability of an event times the payoff received in case of that event.18
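The jar problem of breakthrough 3, and the 95 per cent interval that de Moivre's curve underwrites in breakthrough 4, can be seen in miniature in the following sketch; the jar's composition and the sample size are invented for the purpose:

    # Bernoulli's jar: estimate the share of white balls from a sample,
    # then attach a confidence interval via the normal approximation
    # that de Moivre's curve justifies. All parameters are invented.
    import math
    import random

    random.seed(1744)                 # the year of the Widows' Fund
    TRUE_SHARE = 0.40                 # actual share of white balls
    n = 400                           # sample size, drawn with replacement

    sample = [random.random() < TRUE_SHARE for _ in range(n)]
    p_hat = sum(sample) / n           # True counts as 1, False as 0

    # Roughly 95 per cent of such estimates fall within 1.96 standard
    # errors of the truth - the bell curve at work.
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"estimate {p_hat:.1%}, "
          f"95% interval [{p_hat - 1.96*se:.1%}, {p_hat + 1.96*se:.1%}]")

Quadrupling the sample roughly halves the interval: Bernoulli's Law of Large Numbers in action, and the reason, as we shall see, why sheer size matters so much to insurers.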
In short,
it was not merchants but mathematicians who were the true progenitors of modern
insurance. Yet it took clergymen to turn theory into practice.
Greyfriars
Kirkyard, on the hill that is the heart of Edinburgh’s Old Town, is best known
today for Greyfriars Bobby, the loyal Skye terrier who refused to desert his
master’s grave, and also for the grave robbers - the so-called ‘Resurrection
Men’ - who went there in the early nineteenth century to supply the medical
school at Edinburgh University with corpses for dissection. But Greyfriars’s
importance in the history of finance lies in the earlier mathematical work of
its minister, Robert Wallace, and his friend Alexander Webster, who was
minister of Tolbooth. Along with Colin Maclaurin, Professor of Mathematics at
Edinburgh, it was their achievement to create the first modern insurance fund,
based on correct actuarial and financial principles, rather than mercantile
gambling.
Living in
Auld Reekie, as the distinctly smelly Scottish capital was then known, Wallace
and Webster had a keen sense of the fragility of the human condition. They
themselves lived to ripe old ages: 74 and 75 respectively. But Maclaurin died
at the age of just 48, having fallen from his horse and suffered exposure while
trying to evade the Jacobites during the 1745 rising. Invasions of Papist Highlanders
were only one of the hazards inhabitants of Edinburgh faced in the mid
eighteenth century. Average life expectancy at birth is unlikely to have been
better than it was in England, where it was just 37 until the 1800s. It may
even have been as bad as in London, where it was 23 in the late eighteenth
century - perhaps even worse, given the Scottish capital’s notoriously bad
hygiene.19 For Wallace and Webster, one group of people
seemed especially vulnerable to the consequences of premature death. Under the
Law of Ann (1672), the widow and children of a deceased minister of the Church
of Scotland received only half a year’s stipend in the year of the minister’s
death. After that, they faced penury. A supplementary scheme had been set up by
the Bishop of Edinburgh in 1711, but on the traditional pay-as-you-go basis.
Wallace and Webster knew this to be unsatisfactory.
We tend
to think of Scottish clergymen as the epitome of prudence and thrift, weighed
down with an anticipation of impending divine retribution for every tiny
transgression. In reality, Robert Wallace was a hard drinker as well as a
mathematical prodigy, who loved to knock back claret with his bibulous buddies
at the Rankenian Club, which met in what used to be Ranken’s Inn.ag Alexander
Webster’s nickname was Bonum Magnum; it was said to be ‘hardly in the power of
liquor to affect Dr Webster’s understanding or his limbs’. Yet no one was more
sober when it came to calculations of life expectancy. The plan Webster and
Wallace came up with was ingenious, reflecting the fact that they were as much
products of Scotland’s eighteenth-century Enlightenment as of the Calvinist
Reformation that had preceded it. Rather than merely having ministers pay an
annual premium, which could be used to take care of widows and orphans as and
when ministers died, they argued that the premiums should be used to create a
fund that could then be profitably invested. Widows and orphans would be paid
out of the returns on the investment, not just the premiums themselves. All
that was required for the scheme to work was an accurate projection of how many
beneficiaries there would be in the future, and how much money could be
generated to support them. Modern actuaries still marvel at the precision with
which Webster and Wallace did their calculations.20 'It is
experience alone & nice calculation that must determine the proportional
sum the widow is to have after the husband’s death,’ wrote Wallace in an early
draft, ‘but a beginning may be made by allowing triple the sum the husband
payed [sic] in [yearly] during his life . . .’ Wallace then turned to
the evidence that he and Webster had been able to gather from presbyteries all
over Scotland. It seemed that there tended to be '930 ministers in life at all times':

. . . 'tis found by a Medium of 20 years back, that 27 [of 930] ministers die yearly, 18 of them leave Widows, 5 of them Children without a Widow, 2 of them who leave Widows, leave also Children of a former Marriage, under the Age of 16; and when the whole Number of Widows shall be complete, 3 Annuitants will die, or marry, leaving Children under 16.
Wallace
originally estimated the maximum number of widows living at any one time to be
279; but Maclaurin was able to correct this, pointing out that it was wrong to
assume a constant mortality rate for the widows, since they would not all be
the same age. To arrive at the correct, higher figure, he turned to Halley’s
life tables.21
Time was
to be the test of their calculations. According to the final version of the
scheme, each minister was to pay an annual premium of between £2 12s 6d and £6
11s 3d (there were four levels of premium to choose from). The proceeds would
then be used to create a fund that could be profitably invested (initially in
loans to younger ministers) to yield sufficient income to pay annuities to new
widows of between £10 and £25, depending on the level of premium paid, and to
cover the fund’s management costs. In other words, the ‘Fund for a Provision
for the Widows and Children of the Ministers of the Church of Scotland’ was the
first insurance fund to operate on the maximum principle, with capital being
accumulated until interest and contributions would suffice to pay the maximum
amount of annuities and expenses likely to arise. If the projections were
wrong, the fund would either overshoot or, more problematically, undershoot the
amount required. After at least five attempts to estimate the rate of growth of
the fund, Wallace and Webster agreed figures that projected a rise from £18,620
at the inception in 1748 to £58,348 in 1765. They were out by just one pound.
The actual free capital of the fund in 1765 was £58,347. Both Wallace and
Webster lived to see their calculations vindicated.
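The logic of their projection can be reconstructed in outline. What follows is emphatically not Wallace and Webster's own calculation - the average premium, average annuity, interest rate and widow exit rate are illustrative assumptions; only the 930 ministers, the 18 new widows a year and the fund's opening capital come from the figures quoted above:

    # Schematic model of the Widows' Fund on the 'maximum principle'.
    # The premium, annuity, interest and exit-rate figures below are
    # assumptions for illustration, NOT the historical parameters.

    MINISTERS, NEW_WIDOWS = 930, 18   # from Wallace's presbytery returns
    PREMIUM = 3.0                     # assumed average premium (pounds)
    ANNUITY = 20.0                    # assumed average annuity (pounds)
    INTEREST = 0.04                   # assumed return on the fund
    EXIT_RATE = 0.06                  # assumed share of widows dying or
                                      # remarrying each year

    capital, widows = 18_620.0, 0.0   # stated capital at inception, 1748
    for year in range(1748, 1766):
        capital = (capital * (1 + INTEREST)
                   + MINISTERS * PREMIUM - widows * ANNUITY)
        widows += NEW_WIDOWS - EXIT_RATE * widows
    print(f"1765: capital about {capital:,.0f} pounds, "
          f"roughly {widows:.0f} widows on the roll")

With these made-up parameters the fund ends 1765 in the upper fifty thousands, on the scale of the real £58,347 - though that closeness flatters so crude a model. Nudge any one assumption and seventeen years of compounding makes the fund overshoot or undershoot badly, which is why Wallace and Webster revised their estimates at least five times, and why landing within a pound remains astonishing.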
Calculations for the original Scottish Ministers' Widows' Fund
In 1930
the German insurance expert Alfred Manes concisely defined insurance as:
An economic institution resting on the
principle of mutuality, established for the purpose of supplying a fund, the
need for which arises from a chance occurrence whose probability can be
estimated.22
The
Scottish Ministers’ Widows’ Fund was the first such fund, and its foundation
was truly a milestone in financial history. It established a model not just for
Scottish clergymen, but for everyone who aspired to provide against premature
death. Even before the fund was fully operational, the universities of
Edinburgh, Glasgow and St Andrews had applied to join. Within the next twenty
years similar funds sprang up on the same model all over the English-speaking
world, including the Presbyterian Ministers’ Fund of Philadelphia (1761) and
the English Equitable Company (1762), as well as the United Incorporations of
St Mary’s Chapel (1768), which provided for the widows of Scottish artisans. By
1815 the principle of insurance was so widespread that it was adopted even for
those men who lost their lives fighting against Napoleon. A soldier’s odds of
being killed at Waterloo were roughly 1 in 4. But if he was insured, he had the
consolation of knowing, even as he expired on the field of battle, that his
wife and children would not be thrown out onto the streets (giving a whole new
meaning to the phrase ‘take cover’). By the middle of the nineteenth century,
being insured was as much a badge of respectability as going to Church on a
Sunday. Even novelists, not generally renowned for their financial prudence,
could join. Sir Walter Scott23 took out a policy in 1826
to reassure his creditors that they would still get their money back in the
event of his death.ah A fund that had originally been
intended to support the widows of a few hundred clergymen grew steadily to
become the general insurance and pension fund we know today as Scottish Widows.
Although it is now just another financial services provider, having been taken
over by Lloyds Bank in 1999, Scottish Widows is still seen as exemplifying the
benefits of Calvinist thrift, thanks in no small measure to one of the most
successful advertising campaigns in financial history.ai
What no
one anticipated back in the 1740s was that by constantly increasing the number
of people paying premiums, insurance companies and their close relatives the
pension funds would rise to become some of the biggest investors in the world -
the so-called institutional investors who today dominate global financial
markets. When, after the Second World War, insurance companies were allowed to
start investing in the stock market, they quickly snapped up huge chunks of the
British economy, owning around a third of major UK companies by the mid 1950s.24 Today
Scottish Widows alone has over £100 billion under management. Insurance
premiums have risen steadily as a proportion of gross domestic product in
developed economies, from around 2 per cent on the eve of the First World War
to just under 10 per cent today.
As Robert
Wallace realized more than 250 years ago, size matters in insurance because the
more people who pay into a fund the easier it becomes, by the law of averages,
to predict what will have to be paid out each year. Although no individual’s
date of death can be known in advance, actuaries can calculate the likely life
expectancies of a large group of individuals with astonishing precision using
the principles first applied by Wallace, Webster and Maclaurin. In addition to
how long the policy-holders are likely to live, insurers also need to know what
the investment of their funds will bring in. What should they buy with the
premiums their policy-holders pay? Relatively safe bonds, as recommended by
Victorian authorities such as A. H. Bailey, head actuary of the London
Assurance Corporation? Or riskier but probably higher yielding stocks?
Insurance, in other words, is where the risks and uncertainties of daily life
meet the risks and uncertainties of finance. To be sure, actuarial science
gives insurance companies an in-built advantage over policy-holders. Before the
dawn of modern probability theory, insurers were the gamblers; now they are the
casino. The case can be made, as it was by Dickie Scruggs before his fall from
grace, that the odds are now stacked unjustly against the punters/policy-holders.
But as the economist Kenneth Arrow long ago pointed out, most of us prefer a
gamble that has a 100 per cent chance of a small loss (our annual premium) and
a small chance of a large gain (the insurance payout after disaster) to a
gamble that has a 100 per cent chance of a small gain (no premiums) but an
uncertain chance of a huge loss (no payout after a disaster). That is why the
guitarist Keith Richards insured his fingers and the singer Tina Turner her
legs. Only if insurance companies systematically fail to pay out to those who
have placed their bets will their long-standing reputation for Scottish
prudence become a reputation for stinginess and lack of scruple.
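Daniel Bernoulli's utility insight (breakthrough 5 above) is what makes Arrow's observation rational rather than merely psychological. A toy calculation - every figure invented for illustration - using logarithmic utility:

    # Why a risk-averse person buys actuarially 'unfair' insurance.
    # Every figure is invented; log utility stands in for Bernoulli's
    # diminishing marginal utility of wealth.
    import math

    WEALTH = 100_000.0
    PREMIUM = 1_200.0       # the certain small loss
    LOSS = 80_000.0         # the uninsured catastrophe
    P_DISASTER = 0.01       # annual chance of the catastrophe

    insured = math.log(WEALTH - PREMIUM)    # payout makes the agent whole
    uninsured = ((1 - P_DISASTER) * math.log(WEALTH)
                 + P_DISASTER * math.log(WEALTH - LOSS))
    print(f"expected utility, insured:   {insured:.5f}")
    print(f"expected utility, uninsured: {uninsured:.5f}")

The expected monetary loss uninsured is only $800 (1 per cent of $80,000), less than the $1,200 premium - the casino keeps its edge - and yet expected utility is higher with the policy, because the last $80,000 of wealth is worth far more than the first. The punters queue up, rationally.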
Sir Walter Scott's life insurance policy
Yet there
remains a puzzle. It may seem appropriate that, as the inventors of modern
insurance, the British remain the world’s most insured people, paying more than
12 per cent of GDP in premiums, roughly a third more than Americans spend on
insurance and nearly twice what the Germans spend.25 A
moment’s reflection, however, prompts the question, why should that be? Unlike
the United States, Britain rarely suffers extreme weather events; the nearest
thing to a hurricane in my lifetime was the storm of October 1987. No British
city stands on a fault-line, as San Francisco does. And, compared with Germany,
Britain’s history since the foundation of Scottish Widows has been one of
almost miraculous political stability. Why, then, do the British take out so
much insurance?
The
answer lies in the rise and fall of an alternative form of protection against
risk: the welfare state.
FROM WARFARE TO WELFARE
No matter
how many private funds like Scottish Widows were set up, there were always
going to be people beyond the reach of insurance, who were either too poor or
too feckless to save for that rainy day. Their lot was a painfully hard one:
dependence on private charity or the austere regime of the workhouse. At the
large Marylebone Workhouse on London’s Northumberland Street, the ‘poor being
lame impotent old and blind’ numbered up to 1,900 in hard times. When the
weather was bitter, work scarce and food dear, men and women ‘casuals’ would
submit to a prison-like regime. As the Illustrated London News described
it in 1867:
They are washed with plenty of hot and cold
water and soap, and receive six ounces of bread and a pint of gruel for supper;
after which, their clothes being taken to be cleaned and fumigated, they are
furnished with warm woollen night-shirts and sent to bed. Prayers are read by
Scripture-readers; strict order and silence are maintained all night in the
dormitory . . . The bed consists of a mattress stuffed with coir, a flock
pillow, and a pair of rugs. At six o’clock in the morning in summer, and at
seven in winter, they are aroused and ordered to work. The women are set to
clean the wards, or to pick oakum; the men to break stones, but none are
detained longer than four hours after their breakfast which is of the same kind
and quantity as their supper. Their clothes, disinfected and freed of vermin,
being restored to them in the morning, those who choose to mend their ragged
garments are supplied with needles, thread, and patches of cloth for that
purpose. If any are ill, the medical officer of the workhouse attends to them;
if too ill to travel, they are admitted into the infirmary.
The
author of the report concluded that ‘the “Amateur Casual” would find nothing to
complain of . . . A board of Good Samaritans could do no more.’26 By
the later nineteenth century, however, a feeling began to grow that life’s
losers deserved better. The seeds began to be planted of a new approach to the
problem of risk - one that would ultimately grow into the welfare state. These
state systems of insurance were designed to exploit the ultimate economy of scale,
by covering literally every citizen from birth to death.
We tend
to think of the welfare state as a British invention. We also tend to think of
it as a socialist or at least liberal invention. In fact, the first system of
compulsory state health insurance and old age pensions was introduced not in
Britain but in Germany, and it was an example the British took more than twenty
years to follow. Nor was it a creation of the Left; rather the opposite. The
aim of Otto von Bismarck’s social insurance legislation, as he himself put it
in 1880, was ‘to engender in the great mass of the unpropertied the
conservative state of mind that springs from the feeling of entitlement to a
pension.’ In Bismarck’s view, ‘A man who has a pension for his old age is . . .
much easier to deal with than a man without that prospect.’ To the surprise of
his liberal opponents, Bismarck openly acknowledged that this was ‘a
state-socialist idea! The generality must undertake to assist the
unpropertied.’ But his motives were far from altruistic. ‘Whoever embraces this
idea’, he observed, ‘will come to power.’27 It was not
until 1908 that Britain followed the Bismarckian example, when the Liberal
Chancellor of the Exchequer David Lloyd George introduced a modest and
means-tested state pension for those over 70. A National Health Insurance Act
followed in 1911. Though a man of the Left, Lloyd George shared Bismarck’s
insight that such measures were vote-winners in a system of rapidly widening
electoral franchises. The rich were outnumbered by the poor. When Lloyd George
raised direct taxes to pay for the state pension, he relished the label that
stuck to his 1909 budget: ‘The People’s Budget.’
If the
welfare state was conceived in politics, however, it grew to maturity in war.
The First World War expanded the scope of government activity in nearly every
field. With German submarines sending no less than 7,759,000 gross tons of
merchant shipping to the bottom of the ocean, there was clearly no way that war
risk could be covered by the private marine insurers. The standard Lloyd’s
policy had in fact already been modified (in 1898) to exclude ‘the consequences
of hostilities or warlike operations’ (the so-called f.c.s. clause: ‘free of
capture and seizure’). But even those policies that had been altered to remove
that exclusion were cancelled when war broke out.28 The
state stepped in, virtually nationalizing merchant shipping in the case of the
United States,29 and (predictably) enabling insurance
companies to claim that any damage to ships between 1914 and 1918 was a
consequence of the war.30 With the coming of peace,
politicians in Britain also hastened to cushion the effects of demobilization
on the labour market by introducing an Unemployment Insurance Scheme in 1920.31 This
process repeated itself during and after the Second World War. The British
version of social insurance was radically expanded under the terms of the 1942
Report of the Inter-Departmental Committee on Social Insurance and Allied
Services, chaired by the economist William Beveridge, which recommended a broad
assault on ‘Want, Disease, Ignorance, Squalor and Idleness’ through a variety
of state schemes. In a March 1943 broadcast, Churchill summarized these as:
‘national compulsory insurance for all classes for all purposes from the cradle
to the grave’; the abolition of unemployment by government policies which would
‘exercise a balancing influence upon development which can be turned on or off
as circumstances require’; ‘a broadening field for State ownership and
enterprise’; more publicly provided housing; reforms to public education and
greatly expanded health and welfare services.32
The
arguments for state insurance extended beyond mere social equity. First, state
insurance could step in where private insurers feared to tread. Second,
universal and sometimes compulsory membership removed the need for expensive
advertising and sales campaigns. Third, as one leading authority observed in
the 1930s, ‘the larger numbers combined should form more stable averages for
the statistical experience’.33 State insurance exploited
economies of scale, in other words; so why not make it as comprehensive as
possible? The enthusiasm with which the Beveridge Report was greeted not just
in Britain but around the world helps explain why the welfare state is still
thought of as having ‘Made in Britain’ stamped on it. However, the world’s
first welfare superpower, the country that took the principle furthest and with
the greatest success, was not Britain but Japan. Nothing illustrates more
clearly than the Japanese experience the intimate links between the welfare
state and the warfare state.
Disaster
kept striking Japan in the first half of the twentieth century. On 1 September
1923, a huge earthquake (7.9 on the Richter scale) struck the Kantō region, devastating
the cities of Yokohama and Tokyo. More than 128,000 houses completely
collapsed, around the same number half-collapsed, 900 were swept away by the
sea and nearly 450,000 were burnt down in fires that broke out almost
immediately after the quake.34 The Japanese were
insured; between 1879 and 1914 their insurance industry had grown from nothing
into a vibrant sector of the economy, offering cover against loss at sea,
death, fire, conscription, transport accident and burglary, to name just some
of the thirteen distinct forms of insurance sold by more than thirty companies.
In the year of the earthquake, for example, Japanese citizens had purchased
¥699,634,000 ($328 million) worth of new life insurance for 1923, with an
average policy amount of ¥1,280 ($600).35 But the total
losses caused by the earthquake were in the region of $4.6 billion. Six years
later the Great Depression struck, pushing some rural areas to the brink of
starvation (at this time 70 per cent of the population was engaged in agriculture,
of whom 70 per cent tilled an average of just one and a half acres).36 In
1937 the country embarked on an expensive and ultimately futile war of conquest
in China. Then, in December 1941, Japan went to war with the world’s economic
colossus, the United States, and eventually paid the ultimate price at
Hiroshima and Nagasaki. Quite apart from the nearly three million lives lost in
Japan’s doomed bid for empire, by the end in 1945 the value of Japan’s entire
capital stock seemed to have been reduced to zero by American bombers. In
aggregate, according to the US Strategic Bombing Survey, at least 40 per cent
of the built-up areas of more than sixty cities had been destroyed; 2.5 million
homes had been lost, leaving 8.3 million people homeless.37 Practically
the only city to survive intact (though not wholly unscathed) was Kyoto, the
former imperial capital - a city which still embodies the ethos of pre-modern
Japan, as it is one of the last places where the traditional wooden townhouses
known as machiya can still be seen. One look at these long,
thin structures, with their sliding doors, paper screens, polished beams and
straw mats, makes it clear why Japanese cities were so vulnerable to fire.
In Japan,
as in most combatant countries, the lesson was clear: the world was just too
dangerous a place for private insurance markets to cope with. (Even in the
United States, the federal government took over 90 per cent of the risk for war
damage through the War Damage Corporation, one of the most profitable public
sector entities in history for the obvious reason that no war damage befell the
mainland United States.)38 With the best will in the
world, individuals could not be expected to insure themselves against the US
Air Force. The answer adopted more or less everywhere was for the government to
take over, in effect to nationalize risk. When the Japanese set out to devise a
system of universal welfare in 1949, their Advisory Council for Social Security
acknowledged a debt to the British example. In the eyes of Bunji Kondo, a
convinced believer in universal welfare coverage, it was time to have bebariji
no nihonhan: Beveridge for the Japanese.39 But they
took the idea even further than Beveridge had intended. The aim, as the report
of the Advisory Council put it, was to create a system 'in which measures are taken for economic security for sickness, injury, childbirth, disability, death, old age, unemployment, large families and other causes of impoverishment through . . . payment by governments . . . [and] in which the needy will be guaranteed the minimum standard of living by national assistance'.40
From now
on, the welfare state would cover people against all the vagaries of modern
life. If they were born sick, the state would pay. If they could not afford
education, the state would pay. If they could not find work, the state would
pay. If they were too ill to work, the state would pay. When they retired, the
state would pay. And when they finally died, the state would pay their
dependants. This certainly chimed with one of the objectives of the post-war
American occupation: ‘To replace a feudal economy by a welfare economy’.41 Yet
it would be wrong to assume (as a number of post-war commentators did) that
Japan’s welfare state was ‘imposed wholesale by an alien power’.42 In
reality, the Japanese set up their own welfare state - and they began to do so
long before the end of the Second World War. It was the mid twentieth-century
state’s insatiable appetite for able-bodied young soldiers and workers, not
social altruism, that was the real driver. As the American political scientist
Harold D. Lasswell put it, Japan in the 1930s became a garrison state.43 But
it was one which carried within it the promise of a ‘warfare-welfare state’,
offering social security in return for military sacrifice.
There had
been some basic social insurance in Japan before the 1930s: factory accident
insurance and health insurance (introduced for factory workers in 1927). But
this covered less than two fifths of the industrial workforce.44 Significantly,
the plan for a Japanese Welfare Ministry (Kōseishō) was approved by
Japan’s imperial government on 9 July 1937, just two months after the outbreak
of war with China.45 Its first step was to introduce a new
system of universal health insurance to supplement the existing programme for
industrial employees. Between the end of 1938 and the end of 1944, the number
of citizens covered by the scheme increased nearly a hundred-fold, from just
over 500,000 to over 40 million. The aim was explicit: a healthier populace
would ensure healthier recruits to the Emperor’s armed forces. The wartime
slogan of ‘all people are soldiers’ (kokumin kai hei) was adapted to
become ‘all people should have insurance’ (kokumin kai hoken). And to
ensure universal coverage, the medical profession and pharmaceutical industry
were essentially subordinated to the state.46 The war
years also saw the introduction of compulsory pension schemes for seamen and
workers, with the state covering 10 per cent of the costs, while employers and
employees each contributed 5.5 per cent of the latter’s wages. The first steps
towards the large-scale provision of public housing were also taken. So what
happened after the war in Japan was in large measure the extension of the
warfare-welfare state. Now ‘all people should have pensions’, kokumin
kai nenkin. Now there should be unemployment insurance, rather than the
earlier paternalistic practice of keeping workers on payrolls even in lean
times. Small wonder some Japanese tended to think of welfare in nationalistic
terms, a kind of peaceful mode of national aggrandisement. The 1950 report,
with its British-style recommendations, was in fact rejected by the government.
Only in 1961, long after the end of American control, were most of its
recommendations adopted. By the late 1970s a Japanese politician, Nakagawa
Yatsuhiro, could boast that Japan had become 'The Welfare Super-Power' (fukushi
chōdaikoku), precisely because its system was different from (and superior
to) Western models.47
There was
nothing institutionally unique about Japan's system, of course. Most
welfare states aimed at universal, cradle-to-grave coverage. Yet the Japanese
welfare state seemed to be a miracle of effectiveness. In terms of life
expectancy, the country led the world. In education, too, it was ahead of the
field. Around 90 per cent of the population had graduated from high school in
the mid seventies, compared with just 32 per cent in England.48 Japan
was also a much more equal society than any in the West, with the sole
exception of Sweden. And Japan had the largest state pension fund in the world,
so that every Japanese who retired could count on a generous bonus as well as a
regular income throughout his (generally rather numerous) years of well-earned
rest. The welfare superpower was also a miracle of parsimony. In 1975 just 9
per cent of national income went on social security, compared with 31 per cent
in Sweden.49 The burden of tax and social welfare was
roughly half that in England. Run on this basis, the welfare state seemed to
make perfect sense. Japan had achieved security for all - the elimination of
risk - while at the same time its economy grew so rapidly that by 1968 it was
the second largest in the world. A year before, Herman Kahn had predicted that
Japan’s per capita income would overtake America’s by 2000. Indeed, Nakagawa
Yatsuhiro argued that, when fringe benefits were taken into account, ‘the
actual income of the Japanese worker [was already] at least three times more
than that of the American’.50 Warfare had failed to make
Japan Top Nation, but welfare was succeeding. The key turned out to be not a
foreign empire, but a domestic safety net.51
Yet there
was a catch, a fatal flaw in the design of the post-warfare welfare state. The
welfare state might have worked smoothly enough in 1970s Japan. But the same
could not be said of its counterparts in the Western world. Despite their
superficial topographical and historical resemblances (archipelagos off
Eurasia, imperial pasts, buttoned-up behaviour when sober) the Japanese and the
British had quite different cultures. Outwardly, their welfare systems might
seem similar: state pensions financed out of taxation on the old pay-as-you-go
model; standardized retirement ages; universal health insurance; unemployment
benefits; subsidies to farmers; quite heavily restricted labour markets. But
these institutions worked in quite different ways in the two countries. In
Japan egalitarianism was a prized goal of policy, while a culture of social
conformism encouraged compliance with the rules. English individualism, by
contrast, inclined people cynically to game the system. In Japan, firms and
families continued to play substantial supporting roles in the welfare system.
Employers offered supplementary benefits and were reluctant to fire workers. As
recently as the 1990s, two thirds of Japanese older than 64 lived with their
children.52 In Britain, by contrast, employers did not
hesitate to slash payrolls in hard times, while people were much more likely to
leave elderly parents to the tender mercies of the National Health Service. The
welfare state might have made Japan an economic superpower, but in the 1970s it
appeared to be having the opposite effect in Britain.
According
to British conservatives, what had started out as a system of national
insurance had degenerated into a system of state handouts and confiscatory
taxation which disastrously skewed economic incentives. Between 1930 and 1980,
social transfers in Britain had risen from just 2.2 per cent of gross domestic
product to 10 per cent in 1960, 13 per cent in 1970 and nearly 17 per cent in
1980, more than six percentage points higher than in Japan.53 Health
care, social services and social security were consuming three times more than
defence as a share of total managed government expenditure. Yet the results
were dismal. Increased expenditure on UK welfare had been accompanied by low
growth and inflation significantly above the developed world average. A
particular problem was chronically slow productivity growth (real GDP per
person employed grew by just 2.8 per cent a year between 1960 and 1979, compared with
8.1 per cent in Japan),54 which in turn seemed closely
related to the bloody-minded bargaining techniques of British trade unions (‘go
slows’ being a favourite alternative to outright ‘downing tools’). Meanwhile,
marginal tax rates in excess of 100 per cent on higher incomes and capital
gains discouraged traditional forms of saving and investment. The British
welfare state, it seemed, had removed the incentives without which a capitalist
economy simply could not function: the carrot of serious money for those who
strove, the stick of hardship for those who slacked. The result was
‘stagflation’: stagnant growth plus high inflation. Similar problems were
afflicting the US economy, where expenditure on health, Medicare, income
security and social security had risen from 4 per cent of GDP in 1959 to 9 per
cent in 1975, outstripping defence spending for the first time. In America,
too, productivity was scarcely growing and stagflation was rampant. What was to
be done?
One man,
and his pupils, thought they knew the answer. Thanks in large measure to their
influence, one of the most pronounced economic trends of the past twenty-five
years has been for the Western welfare state to be dismantled, reintroducing
people, with a sharp shock, to the unpredictable monster they thought they had
escaped: risk.
THE BIG CHILL
In 1976 a
diminutive professor working at the University of Chicago won the Nobel Prize
in economics. Milton Friedman’s reputation as an economist rested in large
measure on his reinstatement of the idea that inflation was due to an excessive
increase in the supply of money. As we have seen, he co-wrote perhaps the
single most important book on US monetary policy of all time, firmly laying the
blame for the Great Depression on mistakes by the Federal Reserve.55 But
the question that had come to preoccupy him by the mid-seventies was: what had
gone wrong with the welfare state? In March 1975, Friedman flew from Chicago to
Chile to answer that question.
Only
eighteen months earlier, in September 1973, tanks had rolled through the
capital Santiago to overthrow the government of the Marxist President Salvador
Allende, whose attempt to turn Chile into a Communist state had ended in total
economic chaos and a call by the parliament for a military takeover. Air force
jets bombed the presidential Moneda Palace, watched from the balcony of the
nearby Carrera Hotel by opponents of Allende, who celebrated with champagne.
Inside the palace, the president himself fought a hopeless rearguard action
armed with an AK-47 - a gift from Fidel Castro, the man he had sought to
emulate. As the tanks rumbled towards him, Allende realized it was all over
and, cornered in what was left of his quarters, shot himself.
The coup
epitomized a world-wide crisis of the post-war welfare state and posed a stark
choice between rival economic systems. With output collapsing and inflation
rampant, Chile’s system of universal benefits and state pensions was essentially
bankrupt. For Allende, the answer had been full blown Marxism, a complete
Soviet-style takeover of every aspect of economic life. The generals and their
supporters knew they were against that. But what were they actually for, since
the status quo was clearly unsustainable? Enter Milton Friedman. Amid his
lectures and seminars, he spent three quarters of an hour with the new
president General Pinochet and later wrote him an assessment of the Chilean
economic situation, urging him to reduce the government deficit that he had
identified as the main cause of the country’s sky-high inflation, then running
at an annual rate of 900 per cent.56 A month after Friedman’s
visit, the Chilean junta announced that inflation would be stopped ‘at any
cost’. The regime cut government spending by 27 per cent and set fire to
bundles of banknotes. But Friedman was offering more than his patent monetarist
shock therapy. In a letter to Pinochet written after his return to Chicago, he
argued that ‘this problem’ of inflation arose ‘from trends toward socialism
that started forty years ago, and reached their logical - and terrible - climax
in the Allende regime’. As he later recalled, ‘The general line I was taking .
. . was that their present difficulties were due almost entirely to the
forty-year trend toward collectivism, socialism, and the welfare state . . .’57 And
he assured Pinochet: ‘The end of inflation will lead to a rapid expansion of
the capital market, which will greatly facilitate the transfer of enterprises
and activities still in the hands of the government to the private sector.’58
For
tendering this advice Friedman found himself denounced by the American press.
After all, he was acting as a consultant to a military dictator responsible for
the executions of more than two thousand real and suspected Communists and the
torture of nearly 30,000 more. As the New York Times asked: ‘.
. . if the pure Chicago economic theory can be carried out in Chile only at the
price of repression, should its authors feel some responsibility?’aj
Chicago’s
role in the new regime consisted of more than just one visit by Milton
Friedman. Since the 1950s, there had been a regular stream of bright young
Chilean economists studying at Chicago on an exchange programme with the
Universidad Católica in Santiago, and they went back convinced of the need to
balance the budget, tighten the money supply and liberalize trade.59 These
were the so-called Chicago Boys, Friedman’s foot-soldiers: Jorge Cauas,
Pinochet’s finance minister and later economics ‘superminister’, Sergio de
Castro, his successor as finance minister, Miguel Kast, labour minister and
later central bank chief, and at least eight others who studied in Chicago and
went on to serve in government. Even before the fall of Allende, they had
devised a detailed programme of reforms known as El Ladrillo (The
Brick) because of the thickness of the manuscript. The most radical measures,
however, would come from a Catholic University student who had opted to study
at Harvard, not Chicago. What he had in mind was the most profound challenge to
the welfare state in a generation. Thatcher and Reagan came later. The backlash
against welfare started in Chile.
For José
Piñera, just 24 when Pinochet seized power, the invitation to return to Chile from
Harvard posed an agonizing dilemma. He had no illusions about the nature of
Pinochet’s regime. Yet he also believed there was an opportunity to put into
practice ideas that had been taking shape in his mind ever since his arrival in
New England. The key, as he saw it, was not just to reduce inflation. It was
also essential to foster that link between property rights and political rights
which had been at the heart of the successful North American experiment with
capitalist democracy. There was no surer way to do this, Piñera believed, than
radically to overhaul the welfare state, beginning with the pay-as-you-go
system of funding state pensions and other benefits. As he saw it:
What had begun as a system of large-scale
insurance had simply become a system of taxation, with today’s contributions
being used to pay today’s benefits, rather than to accumulate a fund for future
use. This ‘pay-as-you-go’ approach had replaced the principle of thrift with
the practice of entitlement . . . [But this approach] is rooted in a false
conception of how human beings behave. It destroys, at the individual level,
the link between contributions and benefits. In other words, between effort and
reward. Wherever that happens on a massive scale and for a long period of time,
the final result is disaster.60
Between
1979 and 1981, as minister of labour (and later minister of mining), Piñera
created a radically new pension system for Chile, offering every worker the
chance to opt out of the state pension system. Instead of paying a payroll tax,
they would put an equivalent amount (10 per cent of their wages) into an
individual Personal Retirement Account, to be managed by private and competing
companies known as Administradoras de Fondos de Pensiones (AFPs).61 On
reaching retirement age, a participant would withdraw his money and use it to
buy an annuity; or, if he preferred, he could keep working and contributing. In
addition to a pension, the scheme also included a disability and life insurance
premium. The idea was to give the Chilean worker a sense that the money being
set aside was really his own capital. In the words of Hernán Büchi (who helped
Piñera draft the social security legislation and went on to implement the
reform of health care), ‘Social programmes have to include some incentive for
individual effort and for persons gradually to be responsible for their own
destiny. There is nothing more pathetic than social programmes that encourage
social parasitism.’62
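The mechanics of such an account are easy to make concrete. Here is a
minimal sketch in Python of how a 10 per cent payroll contribution
compounds into retirement capital; the wage, horizon and rate of return
are hypothetical inputs chosen for illustration, not historical AFP data.

    # Sketch of a defined-contribution account on the Chilean model:
    # 10 per cent of wages saved each year, compounding at some return.
    # All inputs are hypothetical; this is not actual AFP data.
    def retirement_balance(annual_wage, years, annual_return,
                           contribution_rate=0.10):
        balance = 0.0
        for _ in range(years):
            balance = balance * (1 + annual_return) + annual_wage * contribution_rate
        return balance

    # A worker contributing 10 per cent of a 6,000,000-peso wage for forty
    # years at a 10 per cent average return accumulates roughly 265 million
    # pesos - about forty-four years' worth of that wage.
    print(round(retirement_balance(6_000_000, 40, 0.10)))

The arithmetic makes the design visible: every peso contributed remains the
worker’s own capital, preserving the link between effort and reward that
Piñera emphasized.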
Piñera
gambled. He gave workers a choice: stick with the old system of pay-as-you-go,
or opt for the new Personal Retirement Accounts. He cajoled, making regular
television appearances to reassure workers that ‘Nobody will take away your
grandmother’s cheque’ (from the old state system). He held firm, sarcastically
dismissing a proposal that the country’s trade unions, rather than individual
workers, should be responsible for choosing their members’ AFPs. Finally, on 4
November 1980, the reform was approved, coming into effect at Piñera’s
mischievous suggestion on 1 May, International Labour Day, the following year.63 The
public response was enthusiastic. By 1990 more than 70 per cent of workers had
made the switch to the private system.64 Each one
received a shiny new book in which the contributions and investment returns
were recorded. By the end of 2006, around 7.7 million Chileans had a Personal
Retirement Account; 2.7 million were also covered by private health schemes,
under the so-called ISAPRE system, which allowed workers to opt out of the
state health insurance system in favour of a private provider. It may not sound
like it, but - along with the other Chicago-inspired reforms implemented under
Pinochet - this represented as big a revolution as anything the Marxist Allende
had planned back in 1973. Moreover, the reform had to be introduced at a time
of extreme economic instability, a consequence of the ill-judged decision to
peg the Chilean currency to the dollar in 1979, when the inflation dragon
appeared to have been slain. When US interest rates rose shortly afterwards,
the deflationary pressure plunged Chile into a recession that threatened to
derail the Chicago-Harvard express altogether. The economy contracted 13 per
cent in 1982, seemingly vindicating the left-wing critics of Friedman’s ‘shock
treatment’. Only towards the end of 1985 could the crisis really be regarded as
over. By 1990 it was clear that the reform had been a success: welfare reforms
were responsible for fully half the decline of total government expenditure
from 34 per cent of GDP to 22 per cent.
Was it
worth it? Was it worth the huge moral gamble that the Chicago and Harvard boys
made, of getting into bed with a murderous, torturing military dictator? The
answer depends on whether or not you think these economic reforms helped pave
the way back to a sustainable democracy in Chile. In 1980, just seven years
after the coup, Pinochet conceded a new constitution that prescribed a ten-year
transition back to democracy. In 1990, having lost a referendum on his
leadership, he stepped down as president (though he remained in charge of the
army for a further eight years). Democracy was restored, and by that time the
economic miracle was under way that helped to ensure its survival. For the
pension reform not only created a new class of property-owners, each with his
own retirement nest egg. It also gave the Chilean economy a massive shot in the
arm, since the effect was significantly to increase the savings rate (to 30 per
cent of GDP by 1989, the highest in Latin America). Initially, a cap was imposed
that prevented the AFPs from investing more than 6 per cent (later 12 per cent)
of the new pension funds outside Chile.65 The effect of
this was to ensure that Chile’s new source of savings was channelled into the
country’s own economic development. In January 2008 I visited Santiago and
watched brokers at the Banco de Chile busily investing the pension
contributions of Chilean workers in their own stock market. The results have
been impressive. The annual rate of return on the Personal Retirement Accounts
has been over 10 per cent, reflecting the soaring performance of the Chilean
stock market, which has risen by a factor of 18 since 1987.
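Those two figures are broadly consistent with one another, as a one-line
cross-check shows (assuming the 18-fold rise spans the roughly twenty
years from 1987 to 2007):

    # Compound annual growth rate implied by an 18-fold rise over an
    # assumed twenty-year span (1987-2007).
    years = 20
    print(18 ** (1 / years) - 1)  # about 0.155, i.e. roughly 15.5% a year

A market compounding at around 15 per cent a year comfortably supports
accounts returning ‘over 10 per cent’ after costs.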
There is
a shadow side to the system, to be sure. The administrative and fiscal costs of
the system are sometimes said to be too high.66 Since
not everyone in the economy has a regular full-time job, not everyone ends up
participating in the system. The self-employed were not obliged to contribute
to Personal Retirement Accounts, and the casually employed do not contribute
either. That leaves a substantial proportion of the population with no pension
coverage at all, including many of the people living in La Victoria, once a
hotbed of popular resistance to the Pinochet regime - and still the kind of
place where Che Guevara’s face is spray-painted on the walls. On the other
hand, the government stands ready to make up the difference for those whose
savings do not suffice to pay a minimum pension, provided they have done at
least twenty years of work. And there is also a Basic Solidarity pension for
those who do not qualify for this.67 Above all, the
improvement in Chile’s economic performance since the Chicago Boys’ reforms is
very hard to argue with. The growth rate in the fifteen years before Friedman’s
visit was 0.17 per cent. In the fifteen years that followed, it was 3.28 per
cent, nearly twenty times higher. The poverty rate has declined dramatically to
just 15 per cent, compared with 40 per cent in the rest of Latin America.68 Santiago
today is the shining city of the Andes, easily the continent’s most prosperous
and attractive city.
It is a
sign of Chile’s success that the country’s pension reforms have been imitated
all across the continent, and indeed around the world. Bolivia, El Salvador and
Mexico copied the Chilean scheme to the letter. Peru and Colombia introduced
private pensions as an alternative to the state system.69 Kazakhstan,
too, has followed the Chilean example. Even British MPs have beaten a path from
Westminster to Piñera’s door. The irony is that the Chilean reform was far more
radical than anything that has been attempted in the United States, the
heartland of free market economics. Yet welfare reform is coming to North
America, whether anyone wants it or not.
When
Hurricane Katrina struck New Orleans, it laid bare some realities about the
American system that many people had been doing their best to ignore. Yes,
America had a welfare state. No, it didn’t work. The Reagan and Clinton
administrations had implemented what seemed like radical welfare reforms,
reducing unemployment benefits and the periods for which they could be claimed.
But no amount of reform could insulate the system from the ageing of the
American population and the spiralling cost of private health care.
The US
has a unique welfare system. Social Security provides a minimal state pension
to all retirees, while at the same time the Medicare system covers all the
health costs of the elderly and disabled. Income support and other health
expenditures push up the total cost of federal welfare programmes to 11 per
cent of GDP. American healthcare, however, is almost entirely provided by the
private sector. At its best it is state-of-the-art, but it is very far from
cheap. And, if you want treatment before you retire, you need a private insurance
policy - something an estimated 47 million Americans do not have, since such
policies tend to be available only to those in regular, formal employment. The
result is a welfare system which is not comprehensive, is much less
redistributive than European systems, but is still hugely expensive. Since 1993
Social Security has been more expensive than National Security. Public
expenditure on education is higher as a percentage of GDP (5.9 per cent) than
in Britain, Germany or Japan. Public health expenditures are equivalent to
around 7 per cent of GDP, the same as in Britain; but private health care
spending accounts for more (8.5 per cent, compared with a paltry 1.1 per cent
in Britain).70
Such a
welfare system is ill prepared to cope with a rapid increase in the number of
claimants. But that is precisely what Americans face as the members of the
so-called ‘Baby Boomer’ generation, born after the Second World War, begin to
retire.71 According to the United Nations, between now
and 2050 male life expectancy in the United States is likely to rise from 75 to
80. Over the next forty years, the share of the American population that is
aged 65 or over is projected to rise from 12 per cent to nearly 21 per cent.
Unfortunately, many of the soon-to-be-retired have made inadequate provision
for life after work. According to the 2006 Retirement Confidence Survey, six in
ten American workers say they are saving for retirement and just four in ten
say they have actually calculated how much they should be saving. Many of those
without sufficient savings imagine that they will compensate by working for
longer. The average worker plans to work until age 65. But it turns out that he
or she actually ends up retiring at 62; indeed, around four in ten American
workers end up leaving the workforce earlier than they planned.72 This
has grave implications for the federal budget, since those who make these
miscalculations are likely to end up a charge on taxpayers in one way or
another. Today the average retiree receives Social Security, Medicare and
Medicaid benefits totalling $21,000 a year. Multiply this by the current 36
million elderly and you see why these programmes already consume such a large
proportion of federal tax revenues. And that proportion is bound to rise, not
only because the number of retirees is going up but also because the costs of
benefits like Medicare are out of control, rising at double the rate of
inflation. The 2003 extension of Medicare to cover prescription drugs only made
matters worse. According to one projection, by the aptly named Medicare Trustee
Thomas R. Saving, the cost of Medicare alone will absorb 24 per cent of all
federal income taxes by 2019. Current figures also suggest that the federal
government has much larger unfunded liabilities than the official data admit. The
Government Accountability Office’s latest estimate of the implicit ‘exposures’
arising from unfunded future Social Security and Medicare benefits is $34
trillion.73 That is nearly four times the size of the
official federal debt.
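The scale of the problem is easy to check on the back of an envelope. A
short Python sketch follows; the growth assumptions in the second part
are illustrative, not official projections.

    # Back-of-the-envelope check of the benefit figures quoted above.
    retirees = 36_000_000      # current elderly beneficiaries
    benefit = 21_000           # Social Security + Medicare + Medicaid, $/year
    print(retirees * benefit)  # 756,000,000,000 - about three quarters of
                               # a trillion dollars a year already

    # If per-head benefits grow at 6 per cent a year (double an assumed
    # 3 per cent inflation rate) while the retiree count rises to a
    # hypothetical 60 million over 25 years, the annual bill becomes:
    print(round(21_000 * 1.06 ** 25 * 60_000_000 / 1e12, 1))  # ~$5.4 trillion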
Ironically,
there’s only one country where the problem of an ageing population has more
serious economic implications than the United States. That country is Japan. So
successful was the Japanese ‘welfare superpower’ that by the 1970s life
expectancy in Japan had become the longest in the world. But that, combined
with a falling birth rate, has produced the world’s oldest society, with more
than 21 per cent of the population already over the age of 65. According to
Nakamae International Economic Research, the elderly population will be equal
to that of the working population by 2044.74 As a
result, Japan is now grappling with a profound structural crisis of its welfare
system, which was not designed to cope with what the Japanese call the
longevity society (chôju shakai).75 Despite
raising the retirement age, the government has not yet resolved the problems of
the state pension system. (Matters are not helped by the fact that many
self-employed people and students - not to mention some eminent politicians -
are failing to make their required social security contributions.) Public
health insurers, meanwhile, have been in deficit since the early 1990s.76 Japan’s
welfare budget is now equal to three quarters of tax revenues. Its debt exceeds
one quadrillion yen, around 170 per cent of GDP.77 Yet
private sector institutions are in no better shape. Life insurance companies
have been struggling since the 1990 stock market crash; three major insurers
failed between 1997 and 2000. Pension funds are in equally dire straits. As
most countries in the developed world are moving in the same direction, it
gives a new meaning to that old 1980s pop song about ‘turning Japanese’. Assets
at the world’s largest pension funds (which include the Japanese government’s
own fund, its Dutch counterpart and the California Public Employees’ fund) now
exceed $10 trillion, having risen by 60 per cent between 2004 and 2007.78 But
are their liabilities ultimately going to grow so large that perhaps even these
huge sums will not suffice?
The demographics of a welfare crisis: Japan, 1950-2050
(percentage shares of population by age group)
Longer
life is good news for individuals, but it is bad news for the welfare state and
the politicians who have to persuade voters to reform it. The even worse news
is that, even as the world’s population is getting older, the world itself may
be getting more dangerous.79
THE
HEDGED AND THE UNHEDGED
What if
international terrorism strikes more frequently and/or lethally, as Al Qaeda
continues its quest for weapons of mass destruction? There is in fact good
reason to fear this. Given the relatively limited impact of the 2001 attacks,
Al Qaeda has a strong incentive to attempt a ‘nuclear 9/11’.80 The
organization’s spokesmen do not deny this; on the contrary, they openly boast
of their ambition ‘to kill 4 million Americans - 2 million of them children -
and to exile twice as many and wound and cripple hundreds of thousands’.81 This
cannot be dismissed as mere rhetoric. According to Graham Allison, of Harvard
University’s Belfer Center, ‘if the US and other governments just keep doing
what they are doing today, a nuclear terrorist attack in a major city is more
likely than not by 2014’. In the view of Richard Garwin, one of the designers
of the hydrogen bomb, there is already a ‘20 per cent per year probability of a
nuclear explosion with American cities and European cities included’. Another
estimate, by Allison’s colleague Matthew Bunn, puts the odds of a nuclear
terrorist attack over a ten-year period at 29 per cent.82 Even
a small 12.5-kiloton nuclear device would kill up to 80,000 people if detonated
in an average American city; a 1.0 megaton hydrogen bomb could kill as many as
1.9 million. A successful biological attack using anthrax spores could be
nearly as lethal.83
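The quoted estimates can be put on a common footing. Under the crude
assumption that each year is an independent draw with a constant attack
probability, per-year and per-decade figures convert into one another as
follows:

    # Convert a constant per-year probability into a cumulative
    # probability over a horizon, assuming independent years - a crude
    # simplification used only to compare the estimates quoted above.
    def cumulative(p_per_year, years):
        return 1 - (1 - p_per_year) ** years

    print(cumulative(0.20, 10))        # Garwin's 20%/year -> ~89% in a decade
    print(1 - (1 - 0.29) ** (1 / 10))  # Bunn's 29%/decade -> ~3.4% a year

On this reading Garwin’s estimate is by far the more alarming of the two.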
What if
global warming is increasing the incidence of natural disasters? Here, too,
there are some grounds for unease. According to the scientific experts on the
Intergovernmental Panel on Climate Change ‘the frequency of heavy precipitation
events has increased over most areas’ as a result of man-made global warming.
There is also ‘observational evidence of an increase in intense tropical
cyclone activity in the North Atlantic since about 1970’. The rising sea levels
forecast by the IPCC would inevitably increase the flood damage caused by
storms like Katrina.84 Not all scientists accept the notion
that hurricane activity along the US Atlantic coast is on the increase (as
claimed by Al Gore in his film An Inconvenient Truth). But it would
clearly be a mistake blithely to assume that this is not the case, especially
given the continued growth of residential construction in vulnerable states.
For governments that are already tottering under the weight of ever-increasing
welfare commitments, an increase in the frequency or scale of catastrophes
could be fiscally fatal. The insurance (and reinsurance) losses arising from
the 9/11 attacks were in the region of $30-58 billion, close to the insurance
losses due to Katrina.85 In both cases, the US federal
government had to step in to help private insurers meet their commitments,
providing emergency federal terrorism insurance in the aftermath of 9/11, and
absorbing the bulk of the costs of emergency relief and reconstruction along
the coast of the Gulf of Mexico. In other words, just as happened during the
world wars, the welfare state steps in when the insurers are overwhelmed. But
this has a perverse result in the case of natural disasters. In effect,
taxpayers in relatively safer parts of the country are subsidizing those who
choose to live in hurricane-prone regions. One possible way of correcting this
imbalance would be to create a federal reinsurance programme to cover
mega-catastrophes. Rather than looking to taxpayers to pick up the tab for big
disasters, insurers would charge differential premiums (higher for those
closest to hurricane zones), laying off the risk of another Katrina by
reinsuring the risk through the government.86 But there
is another way.
Insurance
and welfare are not the only way of buying protection against future shocks.
The smart way to do it is by being hedged. Everyone today has heard of hedge
funds like Kenneth C. Griffin’s Chicago-based Citadel. As founder of the
Citadel Investment Group, now one of the twenty biggest hedge funds in the
world, Griffin currently manages around $16 billion in assets. Among them are
many so-called distressed assets, which Griffin picks up from failed companies
like Enron for knock-down prices. It would not be too much to say that Ken
Griffin loves risk. He lives and breathes uncertainty. Since he began trading
convertible bonds from his Harvard undergraduate dormitory, he has feasted on
‘fat tails’. Citadel’s main offshore fund has generated annual returns of 21
per cent since 1998.87 In 2007, when other financial
institutions were losing billions in the credit crunch, he personally made more
than a billion dollars. Among the artworks that decorate his penthouse
apartment on North Michigan Avenue is Jasper Johns’s False Start,
for which he paid $80 million, and a Cézanne which cost him $60 million. When
Griffin got married, the wedding was at Versailles (the French château, not the
small Illinois town of the same name).88 Hedging is
clearly a good business in a risky world. But what exactly does it mean, and
where did it come from?
The
origins of hedging, appropriately enough, are agricultural. For a farmer
planting a crop, nothing is more crucial than the price it will fetch after it
has been harvested and taken to market. But that could be lower than he expects
or higher. A futures contract allows him to protect himself by committing a
merchant to buy his crop when it comes to market at a price agreed when the
seeds are being planted. If the market price on the day of delivery is lower
than expected, the farmer is protected; the merchant who sells him the contract
naturally hopes it will be higher, leaving him with a profit. As the American
prairies were ploughed and planted, and as canals and railways connected them
to the major cities of the industrial Northeast, they became the nation’s
breadbasket. But supply and demand, and hence prices, fluctuated wildly.
Between January 1858 and May 1867, partly as a result of the Civil War, the
price of wheat soared from 55 cents to $2.88 per bushel, before plummeting back
to 77 cents in March 1870. The earliest forms of protection for farmers were
known as forward contracts, which were simply bilateral agreements between
seller and buyer. A true futures contract, however, is a standardized
instrument issued by a futures exchange and hence tradable. With the
development of a standard ‘to arrive’ futures contract, along with a set of
rules to enforce settlement and, finally, an effective clearinghouse, the first
true futures market was born. Its birthplace was the Windy City: Chicago. The
creation of a permanent futures exchange in 1874 - the Chicago Produce
Exchange, the ancestor of today’s Chicago Mercantile Exchange - created a home
for ‘hedging’ in the US commodity markets.89
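The farmer’s position is easiest to see as a payoff table. A minimal
sketch in Python, using hypothetical prices that echo the wheat swings
described above:

    # Revenue per bushel for a farmer who fixed his price at planting
    # time, compared with one who waited for the spot market at delivery.
    def revenue(agreed_price, spot_at_delivery, hedged):
        return agreed_price if hedged else spot_at_delivery

    agreed = 1.50  # hypothetical price agreed when the seeds went in
    for spot in (0.77, 1.50, 2.88):  # delivery-day prices, $ per bushel
        print(f"spot {spot:.2f}: hedged {revenue(agreed, spot, True):.2f}, "
              f"unhedged {revenue(agreed, spot, False):.2f}")

The hedged farmer gives up the windfall at $2.88 in exchange for
protection at 77 cents; the merchant’s outcomes are the mirror image.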
A pure
hedge eliminates price risk entirely. It requires a speculator as a
counter-party to take on the risk. In practice, however, most hedgers tend to
engage in some measure of speculative activity, looking for ways to profit from
future price movements. Partly because of public unease about this - the
feeling that futures markets were little better than casinos - it was not until
the 1970s that futures could also be issued for currencies and interest rates;
and not until 1982 that futures contracts on the stock market became possible.
At
Citadel, Griffin has brought together mathematicians, physicists, engineers,
investment analysts and advanced computer technology. Some of what they do is
truly the financial equivalent of rocket science. But the underlying principles
are simple. Because they are all derived from the value of underlying assets,
all futures contracts are forms of ‘derivative’. Closely related, though
distinct from futures, are the financial contracts known as options. In
essence, the buyer of a call option has the right, but not the obligation, to
buy an agreed quantity of a particular commodity or financial asset from the
seller (‘writer’) of the option at a certain time (the expiration date) for a
certain price (known as the strike price). Clearly, the buyer of a call option
expects the price of the commodity or underlying instrument to rise in the
future. When the price passes the agreed strike price, the option is ‘in the
money’ - and so is the smart guy who bought it. A put option is just the
opposite: the buyer has the right, but not the obligation, to sell an agreed
quantity of something to the seller of the option. A third kind of derivative
is the swap, which is effectively a bet between two parties on, for example,
the future path of interest rates. A pure interest rate swap allows two parties
already receiving interest payments literally to swap them, allowing someone
receiving a variable rate of interest to exchange it for a fixed rate, in case
interest rates decline. A credit default swap, meanwhile, offers protection
against a company’s defaulting on its bonds. Perhaps the most intriguing kind
of derivative, however, is the weather derivative, such as natural catastrophe
bonds, which allow insurance companies and others to offset the effects of
extreme temperatures or natural disasters by selling the so-called tail risk to
hedge funds like Fermat Capital. In effect, the buyer of a ‘cat bond’ is
selling insurance; if the disaster specified in the bond happens, the buyer has
to pay out an agreed sum or forfeit his principal. In return, the seller pays
an attractive rate of interest. In 2006 the total notional value of
weather-risk derivatives was around $45 billion.
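For calls and puts, the mechanics just described reduce to a pair of
payoff formulas at expiration (ignoring the premium paid to the writer).
A sketch with hypothetical numbers:

    # Payoffs at expiration for the buyer of an option, before deducting
    # the premium paid to the writer; 'spot' is the market price then.
    def call_payoff(spot, strike):
        return max(spot - strike, 0.0)  # right, not obligation, to buy

    def put_payoff(spot, strike):
        return max(strike - spot, 0.0)  # right, not obligation, to sell

    strike = 100.0
    for spot in (80.0, 100.0, 130.0):
        print(spot, call_payoff(spot, strike), put_payoff(spot, strike))
    # The call is 'in the money' only once the spot price passes the strike.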
There was
a time when most such derivatives were standardized instruments produced by
exchanges like the Chicago Mercantile, which has pioneered the market for
weather derivatives. Now, however, the vast majority are custom-made and sold
‘over-the-counter’ (OTC), often by banks which charge attractive commissions
for their services. According to the Bank for International Settlements, the
total notional amounts outstanding of OTC derivative contracts - arranged on an
ad hoc basis between two parties - reached a staggering $596 trillion in
December 2007, with a gross market value of just over $14.5 trillion.ak Though
they have famously been called financial weapons of mass destruction by more
traditional investors like Warren Buffett (who has, nonetheless, made use of
them), the view in Chicago is that the world’s economic system has never been
better protected against the unexpected.
The fact
nevertheless remains that this financial revolution has effectively divided the
world in two: those who are (or can be) hedged, and those who are not (or
cannot be). You need money to be hedged. Hedge funds typically ask for a
minimum six- or seven-figure investment and charge a management fee of at least
2 per cent of your money (Citadel charges four times that) and 20 per cent of
the profits. That means that most big corporations can afford to be hedged
against unexpected increases in interest rates, exchange rates or commodity
prices. If they want to, they can also hedge against future hurricanes or
terrorist attacks by selling cat bonds and other derivatives. By comparison,
most ordinary households cannot afford to hedge at all and would not know how
to even if they could. We lesser mortals still have to rely on the relatively
blunt and often expensive instrument of insurance policies to protect us
against life’s nasty surprises; or hope for the welfare state to ride to the
rescue.
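The cost of admission is simple to quantify. Here is a sketch of one
year under a ‘2 and 20’ fee schedule; the fund size and gross return are
made-up inputs.

    # Net return to an investor after a management fee on assets and a
    # performance fee on gains. All figures are hypothetical.
    def net_return(assets, gross_return, mgmt_fee=0.02, perf_fee=0.20):
        gain = assets * gross_return
        fees = assets * mgmt_fee + max(gain, 0.0) * perf_fee
        return (gain - fees) / assets

    print(net_return(1_000_000, 0.15))        # 2-and-20: 15% gross -> 10% net
    print(net_return(1_000_000, 0.15, 0.08))  # 8% management fee -> 4% net

Fees on that scale are tolerable only for investors large enough to treat
hedging as a line item - which is precisely the dividing line this
section describes.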
There is,
of course, a third and much simpler strategy: the old one of simply saving for
that rainy day. Or, rather, borrowing to buy assets whose future appreciation
in value will supposedly afford a cushion against calamity. For many families in
recent years, making provision for an uncertain future has taken the very
simple form of an investment (usually leveraged, that is debt-financed) in a
house, the value of which is supposed to keep increasing until the day the
breadwinners need to retire. If the pension plan falls short, never mind. If
you run out of health insurance, don’t panic. There is always home, sweet home.
As an
insurance policy or a pension plan, however, this strategy has one very obvious
flaw. It represents a one-way, totally unhedged bet on one market: the property
market. Unfortunately, as we shall see in the next chapter, a bet on bricks and
mortar is very far from being as safe as houses. And you do not need to live in
New Orleans to find that out the hard way.
NOTES
1 Rawle O. King,
‘Hurricane Katrina: Insurance Losses and National Capacities for Financing
Disaster Risks’, Congressional Research Service Report for Congress, 31 January
2008, table 1.
2 Joseph B.
Treaster, ‘A Lawyer Like a Hurricane: Facing Off Against Asbestos, Tobacco and
Now Home Insurers’, New York Times, 16 March 2007.
3 For details, see
Richard F. Scruggs, ‘Hurricane Katrina: Issues and Observations’, American
Enterprise Institute-Brookings Judicial Symposium, ‘Insurance and Risk
Allocation in America: Economics, Law and Regulation’, Georgetown Law Center,
20-22 September 2006.
4 Details
from http://www.usa.gov/Citizen/Topics/PublicSafety/Hurricane_Katrina_Recovery.shtml, http://katrina.louisiana.gov/index.html and http://www.ldi.state.la.us/HurricaneKatrina.htm.
5 Peter Lattman,
‘Plaintiffs Lawyer Scruggs is Indicted on Bribery Charges’, Wall Street
Journal, 29 November 2007; Ashby Jones and Paulo Prada, ‘Richard Scruggs
Pleads Guilty’, ibid., 15 March 2008.
6 King, ‘Hurricane
Katrina’, p. 4.
7 Naomi
Klein, The Shock Doctrine: The Rise of Disaster Capitalism (New
York, 2007).
8 http://www.nhc.noaa.gov/pastdec.shtml.
9 John Schwartz,
‘One Billion Dollars Later, New Orleans is Still at Risk’, New York
Times, 17 August 2007.
10 Michael Lewis,
‘In Nature’s Casino’, New York Times Magazine, 26 August 2007.
11 National Safety
Council, ‘What are the Odds of Dying?’: http://www.nsc.org/lrs/statinfo/odds.htm.
For the cancer statistic, see the National Cancer Institute, ‘SEER Cancer
Statistics Review, 1975-2004’, table I-17: http://srab.cancer.gov/devcan/.
The precise lifetime probability of dying from cancer in the United States
between 2002 and 2004 was 21.29 per cent, with a 95 per cent confidence
interval.
12 Florence Edler de
Roover, ‘Early Examples of Marine Insurance’, Journal of Economic
History, 5, 2 (November 1945), pp. 172-200.
13 Ibid., pp. 188f.
14 A. H. John, ‘The
London Assurance Company and the Marine Insurance Market of the Eighteenth
Century’, Economica, New Series, 25, 98 (May 1958), p. 130.
15 Paul A.
Papayoanou, ‘Interdependence, Institutions, and the Balance of Power’, International
Security, 20, 4 (Spring 1996), p. 55.
16 Roover, ‘Early
Examples of Marine Insurance’, p. 196.
17 M. Greenwood,
‘The First Life Table’, Notes and Records of the Royal Society of
London, 1, 2 (October 1938), pp. 70-2.
18 The preceding
paragraph owes a great debt to Peter L. Bernstein, Against the Gods:
The Remarkable Story of Risk (New York, 1996).
19 Gregory
Clark, A Farewell to Alms: A Brief Economic History of the World (Princeton,
2007).
20 See the essays in
A. Ian Dunlop (ed.), The Scottish Ministers’ Widows’ Fund, 1743-1993 (Edinburgh,
1992) for details.
21 The key documents
are to be found in the Robert Wallace papers, National Archives of Scotland:
CH/9/17/6-13.
22 G. W. Richmond,
‘Insurance Tendencies in England’, Annals of the American Academy of
Political and Social Science, 161 (May 1932), p. 183.
23 A. N.
Wilson, A Life of Walter Scott: The Laird of Abbotsford (London:
Pimlico, 2002), pp. 169-71.
24 G. Clayton and W.
T. Osborne, ‘Insurance Companies and the Finance of Industry’, Oxford
Economic Papers, New Series, 10, 1 (February 1958), pp. 84-97.
25 ‘American
Exceptionalism’, Economist, 10 August 2006.
26 http://www.workhouses.org.uk/index.html?StMarylebone/StMarylebone.shtml.
27 Lothar
Gall, Bismarck: The White Revolutionary, vol. II: 1879-
1898, trans. J. A. Underwood (London, 1986), p. 129.
28 H. G. Lay, Marine
Insurance: A Text Book of the History of Marine Insurance, including the
Functions of Lloyd’s Register of Shipping (London, 1925), p. 137.
29 Richard Sicotte,
‘Economic Crisis and Political Response: The Political Economy of the Shipping
Act of 1916’, Journal of Economic History, 59, 4 (December 1999),
pp. 861-84.
30 Anon.,
‘Allocation of Risk between Marine and War Insurer’, Yale Law Journal,
51, 4 (February 1942), p. 674; C., ‘War Risks in Marine Insurance’, Modern
Law Review, 10, 2 (April 1947), pp. 211-14.
31 Alfred T.
Lauterbach, ‘Economic Demobilization in Great Britain after the First World
War’, Political Science Quarterly, 57, 3 (September 1942), pp.
376-93.
32 Correlli
Barnett, The Audit of War (London, 2001), pp. 31f.
33 Richmond,
‘Insurance Tendencies’, p. 185.
34 Charles Davison,
‘The Japanese Earthquake of 1 September’, Geographical Journal, 65,
1 (January 1925), pp. 42f.
35 Yoshimichi Miura,
‘Insurance Tendencies in Japan’, Annals of the American Academy of
Political and Social Science, 161 (May 1932), pp. 215-19.
36 Herbert H. Gowen,
‘Living Conditions in Japan’, Annals of the American Academy of
Political and Social Science, 122 (November 1925), p. 163.
37 Kenneth Hewitt,
‘Place Annihilation: Area Bombing and the Fate of Urban Places’, Annals
of the Association of American Geographers, 73 (1983), p. 263.
38 Anon., ‘War
Damage Insurance’, Yale Law Journal, 51, 7 (May 1942), pp. 1160-1.
It made $210 million, having collected premiums from 8 million policies and
paid out only a modest amount.
39 Kingo Tamai,
‘Development of Social Security in Japan’, in Misa Izuhara (ed.), Comparing
Social Policies: Exploring New Perspectives in Britain and Japan (Bristol,
2003), pp. 35-48. See also Gregory J. Kasza, ‘War and Welfare Policy in
Japan’, Journal of Asian Studies, 61, 2 (May 2002), p. 428.
40 Recommendation of
the Council of Social Security System (1950).
41 W. Macmahon Ball,
‘Reflections on Japan’, Pacific Affairs, 21, 1 (March 1948), pp.
15f.
42 Beatrice G.
Reubens, ‘Social Legislation in Japan’, Far Eastern Survey,
18, 23 (16 November 1949), p. 270.
43 Keith L. Nelson,
‘The “Warfare State”: History of a Concept’, Pacific Historical Review,
40, 2 (May 1971), pp. 138f.
44 Kasza, ‘War and
Welfare Policy’, pp. 418f.
45 Ibid., p. 423.
46 Ibid., p. 424.
47 Nakagawa
Yatsuhiro, ‘Japan, the Welfare Super-Power’, Journal of Japanese
Studies, 5, 1 (Winter 1979), pp. 5-51.
48 Ibid., p. 21.
49 Ibid., p. 9.
50 Ibid., p. 18.
51 For comparative
studies, see Gregory J. Kasza, One World of Welfare: Japan in
Comparative Perspective (Ithaca, 2006) and Neil Gilbert and Ailee Moon,
‘Analyzing Welfare Effort: An Appraisal of Comparative Methods’, Journal
of Policy Analysis and Management, 7, 2 (Winter 1988), pp. 326-40.
52 Kasza, One
World of Welfare, p. 107.
53 Peter H.
Lindert, Growing Public: Social Spending and Economic Growth since the
Eighteenth Century (Cambridge, 2004), vol. I, table I.2.
54 Hiroto
Tsukada, Economic Globalization and the Citizens’ Welfare State (Aldershot
/ Burlington / Singapore / Sydney, 2002), p. 96.
55 Milton Friedman
and Anna J. Schwartz, A Monetary History of the United States,
1867-1960 (Princeton, 1963).
56 Milton Friedman
and Rose D. Friedman, Two Lucky People: Memoirs (Chicago /
London, 1998), p. 399.
57 Ibid., p. 400.
58 Ibid., p. 593.
59 Patricio Silva,
‘Technocrats and Politics in Chile: From the Chicago Boys to the CIEPLAN
Monks’, Journal of Latin American Studies, 23, 2 (May 1991), pp.
385-410.
60 Bill Jamieson,
‘25 Years On, Chile Has a Pensions Message for Britain’, Sunday
Business, 14 December 2006.
61 Rossana
Castiglioni, ‘The Politics of Retrenchment: The Quandaries of Social Protection
under Military Rule in Chile, 1973-1990’, Latin American Politics and
Society, 43, 4 (Winter 2001), pp. 39ff.
62 Ibid., p. 55.
63 José Piñera,
‘Empowering Workers: The Privatization of Social Security in Chile’, Cato
Journal, 15, 2-3 (Fall / Winter 1995/96), pp. 155-166.
64 Ibid., p. 40.
65 Teresita Ramos,
‘Chile: The Latin American Tiger?’, Harvard Business School Case 9-798-092 (21
March 1999), p. 6.
66 Laurence J.
Kotlikoff, ‘Pension Reform as the Triumph of Form over Substance’, Economists’
Voice (January 2008), pp. 1-5.
67 Armando
Barrientos, ‘Pension Reform and Pension Coverage in Chile: Lessons for Other
Countries’, Bulletin of Latin American Research, 15, 3 (1996), p.
312.
68 ‘Destitute No
More’, Economist, 16 August 2007.
69 Barrientos,
‘Pension Reform’, pp. 309f. See also Raul Madrid, ‘The Politics and Economics
of Pension Privatization in Latin America’, Latin American Research
Review, 37, 2 (2002), pp. 159-82.
70 All figures are
for 2004, the latest comparative data available from the World Bank’s World
Development Indicators database.
71 I am indebted
here to Laurence J. Kotlikoff and Scott Burns, The Coming Generational
Storm: What You Need to Know about America’s Economic Future (Cambridge,
2005). See also Peter G. Peterson, Running on Empty: How the Democratic
and Republican Parties Are Bankrupting Our Future and What Americans Can Do
about It (New York, 2005).
72 Ruth Helman,
Craig Copeland and Jack VanDerhei, ‘Will More of Us Be Working Forever? The
2006 Retirement Confidence Survey’, Employee Benefit Research Institute Issue
Brief, 292 (April 2006).
73 Gene L. Dodaro,
Acting Comptroller General of the United States, ‘Working to Improve
Accountability in an Evolving Environment’, address to the 2008 Maryland
Association of CPAs’ Government and Not-for-profit Conference (18 April 2008).
74 James Brooke, ‘A
Tough Sell: Japanese Social Security’, New York Times, 6 May 2004.
75 See Mutsuko
Takahashi, The Emergence of Welfare Society in Japan (Aldershot
/ Brookfield / Hong Kong / Singapore / Sydney, 1997), pp. 185f. See also
Kasza, One World of Welfare, pp. 179-82.
76 Alex Kerr, Dogs
and Demons: The Fall of Modern Japan (London, 2001), pp. 261-66.
77 Gavan
McCormack, Client State: Japan in the American Embrace (London,
2007), pp. 45-69.
78 Lisa Haines,
‘World’s Largest Pension Funds Top $10 Trillion’, Financial News, 5
September 2007.
79 ‘Living
Dangerously’, Economist, 22 January 2004.
80 Philip
Bobbitt, Terror and Consent: The Wars for the Twenty-first Century (New
York, 2008), esp. pp. 98-179.
81 Suleiman abu
Gheith, quoted in ibid., p. 119.
82 Graham Allison,
‘Time to Bury a Dangerous Legacy, Part 1’, Yale Global, 14 March
2008. Cf. idem, Nuclear Terrorism: The Ultimate Preventable
Catastrophe (Cambridge, MA, 2004).
83 Michael D.
Intriligator and Abdullah Toukan, ‘Terrorism and Weapons of Mass Destruction’,
in Peter Katona, Michael D. Intriligator and John P. Sullivan (eds.), Countering
Terrorism and WMD: Creating a Global Counter-terrorism Network (New
York, 2006), table 4.1A.
84 See IPCC, Climate
Change 2007: Synthesis Report (Valencia, 2007).
85 Robert Looney,
‘Economic Costs to the United States Stemming from the 9/11 Attacks’, Center
for Contemporary Conflict Strategic Insight (5 August 2002).
86 Robert E. Litan,
‘Sharing and Reducing the Financial Risks of Future Mega-Catastrophes’, Brookings
Issues in Economic Policy, 4 (March 2006).
87 William
Hutchings, ‘Citadel Builds a Diverse Business’, Financial News, 3
October 2007.
88 Marcia Vickers,
‘A Hedge Fund Superstar’, Fortune, 3 April 2007.
89 Joseph Santos, ‘A
History of Futures Trading in the United States’, South Dakota University MS,
n.d.