The most basic financial impulse of all is to save for the future, because the future is so unpredictable. The world is a dangerous place. Not many of us get through life without having a little bad luck. Some of us end up having a lot. Often, it's just a matter of being in the wrong place at the wrong time: like the Mississippi delta in the last week of August 2005, when Hurricane Katrina struck not once but twice. First there was the howling 140-mile-an-hour wind that blew many of the area's wooden houses clean off their concrete foundations. Then, two hours later, came the thirty-foot storm surge that breached three of the levees that protect New Orleans from Lake Pontchartrain and the Mississippi, pouring millions of gallons of water into the city. Wrong place, wrong time. Like the World Trade Center on 11 September 2001. Or Baghdad on pretty much any day since the US invasion of 2003.

The history of risk management is one long struggle between our vain desire to be financially secure - and the hard reality that there really is no such thing as 'the future', singular. There are only multiple, unforeseeable futures, which will never lose their capacity to take us by surprise.

In the case of Katrina, nearly all the survivors lost property in the disaster, since nearly three quarters of the city's total housing stock was damaged. There were no fewer than 1.75 million property and casualty claims, with estimated insurance losses in excess of $41 billion, making Katrina the costliest catastrophe in modern American history.1 But Katrina not only submerged New Orleans.

It also laid bare the defects of a system of insurance that divided responsibility between private insurance companies, which offered protection against wind damage, and the federal government, which offered protection against flooding, under a scheme that had been introduced after Hurricane Betsy in 1965. In the aftermath of the 2005 disaster, thousands of insurance company assessors fanned out along the Louisiana and Mississippi coastline. According to many residents, their job was not to help stricken policy-holders but to avoid paying out to them by asserting that the damage their properties had suffered was due to flooding and not to wind. The insurance companies did not reckon with one of their policy-holders, former US Navy pilot and celebrity lawyer Richard F. Scruggs, the man once known as the King of Torts.

'Dickie' Scruggs first hit the headlines in the 1980s, when he represented shipyard workers whose lungs had been fatally damaged by exposure to asbestos, winning a $50 million settlement. But that was small change compared with what he later made the tobacco companies pay: over $200 billion to Mississippi and forty-five other states as compensation for Medicaid costs arising from tobacco-related illnesses. The case (immortalized in the film The Insider) made Scruggs a rich man. His fee in the tobacco class action is said to have been $1.4 billion, or $22,500 for every hour his law firm worked. It was money he used to acquire a waterfront house on Pascagoula's Beach Boulevard, a short commute (by private jet, naturally) from his Oxford, Mississippi, offices. All that remained of that house after Katrina was a concrete base plus a few ruined walls so badly damaged that they had to be bulldozed. Although his insurance company (wisely) paid out, Scruggs was dismayed to hear of the treatment of other policy-holders. Among those he offered to represent were his brother-in-law Trent Lott, the former Republican majority leader in the Senate, and his friend Mississippi Congressman Gene Taylor, both of whom had also lost homes to Katrina and had received short shrift from their insurers.2 In a series of cases on behalf of policy-holders, Scruggs alleged that the insurers (principally State Farm and Allstate) were trying to renege on their legal obligations.3 He and his 'Scruggs Katrina Group' conducted detailed meteorological research to show that nearly all the damage in places like Pascagoula was caused by the wind, hours before the floodwaters struck. Scruggs was also approached by two whistle-blowing insurance adjusters, who claimed the company they worked for had altered reports in order to attribute damage to flooding rather than wind. The insurance companies' record profits in 2005 and 2006 only whetted Scruggs's appetite for redress. As he told me when we met in the wasteland where his house used to stand: 'This [town] was home for fifty years; where I raised my family; what I was proud of. It makes me somewhat emotional when I see this.' By that time, State Farm had already settled 640 cases brought by Scruggs on behalf of clients whose claims had initially been turned down, paying out $80 million; and had agreed to review 36,000 other claims.4 It seemed as if the insurers were retreating. Scruggs's campaign against them collapsed in November 2007, however, when he, his son Zachary and three associates were indicted on charges of trying to bribe a state court judge in a case arising from a dispute over Katrina-related legal fees. Scruggs now faces a prison sentence of up to five years.5

It may sound like just another story of Southern moral laxity - or proof that those who live by the tort, die by the tort. Yet, regardless of Scruggs's descent from good fellow to bad felon, the fact remains that both State Farm and Allstate have now declared a large part of the Gulf of Mexico coast a 'no insurance' zone. Why risk renewing policies here, where natural disasters happen all too often and where, after the disaster, companies have to contend with the likes of Dickie Scruggs? The strong implication would seem to be that providing coverage to the inhabitants of places like Pascagoula and Saint Bernard is no longer something the private sector is prepared to do. Yet it is far from clear that American legislators are ready to take on the liabilities implied by a further extension of public insurance. Total non-insured damages arising from hurricanes in 2005 are likely to end up costing the federal government at least $109 billion in post-disaster assistance and $8 billion in tax relief, nearly three times the estimated insurance losses.6 According to Naomi Klein, this is symptomatic of a dysfunctional 'Disaster Capitalism Complex', which generates private profits for some, but leaves taxpayers to foot the true costs of catastrophe.7 In the face of such ruinous bills, what is the right way to proceed? When insurance fails, is the only alternative, in effect, to nationalize all natural disasters - creating a huge, open-ended liability for governments?

Of course, life has always been dangerous. There have always been hurricanes, just as there have always been wars, plagues and famines. And disasters can be small private affairs as well as big public ones. Every day, men and women fall ill or are injured and suddenly can no longer work. We all get old and lose the strength to earn our daily bread. An unlucky few are born unable to fend for themselves. And sooner or later we all die, often leaving one or more dependants behind us. The key point is that few of these calamities are random events. The incidence of hurricanes has a certain regularity, like the incidence of disease and death. In every decade since the 1850s the United States has been struck by between one and ten major hurricanes (defined as a storm with wind speeds above 110 mph and a storm surge above 8 feet). It is not yet clear that the present decade will beat the record of the 1940s, which saw ten such hurricanes.8 Because there are data covering a century and a half, it is possible to attach probabilities to the incidence and scale of hurricanes. The US Army Corps of Engineers described Hurricane Katrina as a 1-in-396 storm, meaning that there is a 0.25 per cent chance of such a large hurricane striking the United States in any given year.9 A rather different view was taken by the company Risk Management Solutions, which judged a Katrina-sized hurricane to be a once-in-forty-years event just a few weeks before the storm struck.10 These different assessments indicate that, like earthquakes and wars, hurricanes may belong more in the realm of uncertainty than of risk properly understood. Such probabilities can be calculated with greater precision for most of the other risks that people face, mainly because they are more frequent, so statistical patterns are easier to discern. The average American's lifetime risk of death from exposure to forces of nature, including all kinds of natural disaster, has been estimated at 1 in 3,288. The equivalent figure for death due to a fire in a building is 1 in 1,358. The odds of the average American being shot to death are 1 in 314. But he or she is even more likely to commit suicide (1 in 119); more likely still to die in a fatal road accident (1 in 78); and most likely of all to die of cancer (1 in 5).11
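
What a 'return period' implies is easy to make concrete. The sketch below (in Python; the thirty-year horizon is an illustrative assumption, not a figure from the text) converts the two competing estimates for Katrina into annual and multi-year probabilities:

```python
# Convert a storm's 'return period' into annual and multi-year probabilities.
# Figures from the text: the Corps of Engineers called Katrina a 1-in-396
# storm; Risk Management Solutions treated it as a 1-in-40 event.

def annual_probability(return_period_years: float) -> float:
    """A 1-in-N event has a probability of 1/N in any given year."""
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years: float, horizon_years: int) -> float:
    """Chance of at least one occurrence over a horizon, assuming each
    year is independent (a simplification)."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

for rp in (396, 40):
    print(f"1-in-{rp} storm: {annual_probability(rp):.2%} per year, "
          f"{prob_at_least_one(rp, 30):.1%} over thirty years")
# 1-in-396 storm: 0.25% per year, 7.3% over thirty years
# 1-in-40 storm:  2.50% per year, 53.2% over thirty years
```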

In pre-modern agricultural societies, nearly everyone was at substantial risk from premature death due to malnutrition or disease, to say nothing of war. People in those days could do much less than later generations in the way of prophylaxis. They relied much more on seeking to propitiate the gods or God who, they conjectured, determined the incidence of famines, plagues and invasions. Only slowly did men appreciate the significance of measurable regularities in the weather, crop yields and infections. Only very belatedly - in the eighteenth and nineteenth centuries - did they begin systematically to record rainfall, harvests and mortality in a way that made probabilistic calculation possible. Yet, even before they did so, they understood the wisdom of saving: putting money aside for the proverbial (and in agricultural societies literal) extreme rainy day. Most primitive societies at least attempt to hoard food and other provisions to tide them over hard times. And our tribal species intuitively grasped from the earliest times that it makes sense to pool resources, since there is genuine safety in numbers. Appropriately, given our ancestors' chronic vulnerability, the earliest forms of insurance were probably burial societies, which set aside resources to guarantee a tribe member a decent interment. (Such societies remain the only form of financial institution in some of the poorest parts of East Africa.) Saving in advance of probable future adversity remains the fundamental principle of insurance, whether it is against death, the effects of old age, sickness or accident. The trick is knowing how much to save and what to do with those savings to ensure that, unlike in New Orleans after Katrina, there is enough money in the kitty to cover the costs of catastrophe when it strikes.

Enter the Scottish Ministers' Widows' Fund. It established a model not just for Scottish clergymen, but for everyone who aspired to provide against premature death. Within the next twenty years similar funds sprang up on the same model all over the English-speaking world, including the Presbyterian Ministers' Fund of Philadelphia (1761) and the English Equitable Company (1762), as well as the United Incorporations of St Mary's Chapel (1768), which provided for the widows of Scottish artisans. By 1815 the principle of insurance was so widespread that it was adopted even for those men who lost their lives fighting against Napoleon. A soldier's odds of being killed at Waterloo were roughly 1 in 4. But if he was insured, he had the consolation of knowing, even as he expired on the field of battle, that his wife and children would not be thrown out onto the streets (giving a whole new meaning to the phrase 'take cover'). By the middle of the nineteenth century, being insured was as much a badge of respectability as going to Church on a Sunday. Even novelists, not generally renowned for their financial prudence, could join. Sir Walter Scott took out a policy in 1826 to reassure his creditors that they would still get their money back in the event of his death. (See A. N. Wilson, A Life of Walter Scott: The Laird of Abbotsford, London: Pimlico, 2002, pp. 169-71.) A fund that had originally been intended to support the widows of a few hundred clergymen grew steadily to become the general insurance and pension fund we know today as Scottish Widows. Although it is now just another financial services provider, having been taken over by Lloyds Bank in 1999, Scottish Widows is still seen as exemplifying the benefits of Calvinist thrift, thanks in no small measure to one of the most successful advertising campaigns in financial history.

But no matter how many private funds like Scottish Widows were set up, there were always going to be people beyond the reach of insurance, who were either too poor or too feckless to save for that rainy day. Their lot was a painfully hard one: dependence on private charity or the austere regime of the workhouse. At the large Marylebone Workhouse on London's Northumberland Street, the 'poor being lame impotent old and blind' numbered up to 1,900 in hard times. When the weather was bitter, work scarce and food dear, men and women 'casuals' would submit to a prison-like regime. As the Illustrated London News described it in 1867:

They are washed with plenty of hot and cold water and soap, and receive six ounces of bread and a pint of gruel for supper; after which, their clothes being taken to be cleaned and fumigated, they are furnished with warm woolen night-shirts and sent to bed. Prayers are read by Scripture-readers; strict order and silence are maintained all night in the dormitory ... The bed consists of a mattress stuffed with coir, a flock pillow, and a pair of rugs. At six o'clock in the morning in summer, and at seven in winter, they are aroused and ordered to work. The women are set to clean the wards, or to pick oakum; the men to break stones, but none are detained longer than four hours after their breakfast which is of the same kind and quantity as their supper. Their clothes, disinfected and freed of vermin, being restored to them in the morning, those who choose to mend their ragged garments are supplied with needles, thread, and patches of cloth for that purpose. If any are ill, the medical officer of the workhouse attends to them; if too ill to travel, they are admitted into the infirmary.

The author of the report concluded that 'the "Amateur Casual" would find nothing to complain of ... A board of Good Samaritans could do no more.'12 By the later nineteenth century, however, a feeling began to grow that life's losers deserved better. The seeds began to be planted of a new approach to the problem of risk - one that would ultimately grow into the welfare state. These state systems of insurance were designed to exploit the ultimate economy of scale, by covering literally every citizen from birth to death.

We tend to think of the welfare state as a British invention. We also tend to think of it as a socialist or at least liberal invention.

In fact, the first system of compulsory state health insurance and old age pensions was introduced not in Britain but in Germany, and it was an example the British took more than twenty years to follow. Nor was it a creation of the Left; rather the opposite. The aim of Otto von Bismarck's social insurance legislation, as he himself put it in 1880, was 'to engender in the great mass of the unpropertied the conservative state of mind that springs from the feeling of entitlement to a pension.' In Bismarck's view, 'A man who has a pension for his old age is ... much easier to deal with than a man without that prospect.' To the surprise of his liberal opponents, Bismarck openly acknowledged that this was 'a state-socialist idea! The generality must undertake to assist the unpropertied.' But his motives were far from altruistic. 'Whoever embraces this idea', he observed, 'will come to power.'13 It was not until 1908 that Britain followed the Bismarckian example, when the Liberal Chancellor of the Exchequer David Lloyd George introduced a modest and means-tested state pension for those over 70. A National Health Insurance Act followed in 1911. Though a man of the Left, Lloyd George shared Bismarck's insight that such measures were vote-winners in a system of rapidly widening electoral franchises. The rich were outnumbered by the poor. When Lloyd George raised direct taxes to pay for the state pension, he relished the label that stuck to his 1909 budget: 'The People's Budget.'

If the welfare state was conceived in politics, however, it grew to maturity in war. The First World War expanded the scope of government activity in nearly every field. With German submarines sending no less than 7,759,000 gross tons of merchant shipping to the bottom of the ocean, there was clearly no way that war risk could be covered by the private marine insurers. The standard Lloyd's policy had in fact already been modified (in 1898) to exclude 'the consequences of hostilities or warlike operations' (the so-called f.c.s. clause: 'free of capture and seizure'). But even those policies that had been altered to remove that exclusion were cancelled when war broke out.14 The state stepped in, virtually nationalizing merchant shipping in the case of the United States,15 and (predictably) enabling insurance companies to claim that any damage to ships between 1914 and 1918 was a consequence of the war.16 With the coming of peace, politicians in Britain also hastened to cushion the effects of demobilization on the labor market by introducing an Unemployment Insurance Scheme in 1920.17 This process repeated itself during and after the Second World War. The British version of social insurance was radically expanded under the terms of the 1942 Report of the Inter-Departmental Committee on Social Insurance and Allied Services, chaired by the economist William Beveridge, which recommended a broad assault on 'Want, Disease, Ignorance, Squalor and Idleness' through a variety of state schemes. In a March 1943 broadcast, Churchill summarized these as: 'national compulsory insurance for all classes for all purposes from the cradle to the grave'; the abolition of unemployment by government policies which would 'exercise a balancing influence upon development which can be turned on or off as circumstances require'; 'a broadening field for State ownership and enterprise'; more publicly provided housing; reforms to public education and greatly expanded health and welfare services.18

The arguments for state insurance extended beyond mere social equity. First, state insurance could step in where private insurers feared to tread. Second, universal and sometimes compulsory membership removed the need for expensive advertising and sales campaigns. Third, as one leading authority observed in the 1930s, 'the larger numbers combined should form more stable averages for the statistical experience'.19 State insurance exploited economies of scale, in other words; so why not make it as comprehensive as possible? The enthusiasm with which the Beveridge Report was greeted not just in Britain but around the world helps explain why the welfare state is still thought of as having 'Made in Britain' stamped on it. However, the world's first welfare superpower, the country that took the principle furthest and with the greatest success, was not Britain but Japan. Nothing illustrates more clearly than the Japanese experience the intimate links between the welfare state and the warfare state.

Disaster kept striking Japan in the first half of the twentieth century. On 1 September 1923, a huge earthquake (7.9 on the Richter scale) struck the Kanto region, devastating the cities of Yokohama and Tokyo. More than 128,000 houses completely collapsed, around the same number half-collapsed, 900 were swept away by the sea and nearly 450,000 were burnt down in fires that broke out almost immediately after the quake.20 The Japanese were insured; between 1879 and 1914 their insurance industry had grown from nothing into a vibrant sector of the economy, offering cover against loss at sea, death, fire, conscription, transport accident and burglary, to name just some of the thirteen distinct forms of insurance sold by more than thirty companies. In the year of the earthquake, for example, Japanese citizens had purchased ¥699,634,000 ($328 million) worth of new life insurance for 1923, with an average policy amount of ¥1,280 ($600).21 But the total losses caused by the earthquake were in the region of $4-6 billion. Six years later the Great Depression struck, pushing some rural areas to the brink of starvation (at this time 70 per cent of the population was engaged in agriculture, of whom 70 per cent tilled an average of just one and a half acres).22 In 1937 the country embarked on an expensive and ultimately futile war of conquest in China. Then, in December 1941, Japan went to war with the world's economic colossus, the United States, and eventually paid the ultimate price at Hiroshima and Nagasaki. Quite apart from the nearly three million lives lost in Japan's doomed bid for empire, by the end in 1945 the value of Japan's entire capital stock seemed to have been reduced to zero by American bombers. In aggregate, according to the US Strategic Bombing Survey, at least 40 per cent of the built-up areas of more than sixty cities had been destroyed; 2.5 million homes had been lost, leaving 8.3 million people homeless.23 Practically the only city to survive intact (though not wholly unscathed) was Kyoto, the former imperial capital - a city which still embodies the ethos of pre-modern Japan, as it is one of the last places where the traditional wooden townhouses known as machiya can still be seen. One look at these long, thin structures, with their sliding doors, paper screens, polished beams and straw mats, makes it clear why Japanese cities were so vulnerable to fire.

In Japan, as in most combatant countries, the lesson was clear: the world was just too dangerous a place for private insurance markets to cope with. (Even in the United States, the federal government took over 90 per cent of the risk for war damage through the War Damage Corporation, one of the most profitable public sector entities in history, for the obvious reason that no war damage befell the mainland United States.25) With the best will in the world, individuals could not be expected to insure themselves against the US Air Force. The answer adopted more or less everywhere was for the government to take over, in effect to nationalize risk. When the Japanese set out to devise a system of universal welfare in 1949, their Advisory Council for Social Security acknowledged a debt to the British example. In the eyes of Bunji Kondo, a convinced believer in universal welfare coverage, it was time to have bebariji no nihonhan: Beveridge for the Japanese.26 But they took the idea even further than Beveridge had intended. The aim, as the report of the Advisory Council put it, was to create a system in which measures are taken for economic security for sickness, injury, childbirth, disability, death, old age, unemployment, large families and other causes of impoverishment through ... payment by governments ... [and] in which the needy will be guaranteed the minimum standard of living by national assistance.27

From now on, the welfare state would cover people against all the vagaries of modern life. If they were born sick, the state would pay. If they could not afford education, the state would pay. If they could not find work, the state would pay. If they were too ill to work, the state would pay. When they retired, the state would pay. And when they finally died, the state would pay their dependants. This certainly chimed with one of the objectives of the post-war American occupation: 'To replace a feudal economy by a welfare economy'.28 Yet it would be wrong to assume (as a number of post-war commentators did) that Japan's welfare state was 'imposed wholesale by an alien power'.29 In reality, the Japanese set up their own welfare state - and they began to do so long before the end of the Second World War. It was the mid-twentieth-century state's insatiable appetite for able-bodied young soldiers and workers, not social altruism, that was the real driver. As the American political scientist Harold D. Lasswell put it, Japan in the 1930s became a garrison state.30 But it was one which carried within it the promise of a 'warfare-welfare state', offering social security in return for military sacrifice.

There had been some basic social insurance in Japan before the 1930s: factory accident insurance and health insurance (introduced for factory workers in 1927). But this covered less than two fifths of the industrial workforce.31 Significantly, the plan for a Japanese Welfare Ministry (Koseisho) was approved by Japan's Imperial government on 9 July 1937, just two days after the outbreak of war with China.32 Its first step was to introduce a new system of universal health insurance to supplement the existing programme for industrial employees. Between the end of 1938 and the end of 1944, the number of citizens covered by the scheme increased nearly a hundred-fold, from just over 500,000 to over 40 million. The aim was explicit: a healthier populace would ensure healthier recruits to the Emperor's armed forces. The wartime slogan of 'all people are soldiers' (kokumin kai hei) was adapted to become 'all people should have insurance' (kokumin kai hoken). And to ensure universal coverage, the medical profession and pharmaceutical industry were essentially subordinated to the state.33 The war years also saw the introduction of compulsory pension schemes for seamen and workers, with the state covering 10 per cent of the costs, while employers and employees each contributed 5.5 per cent of the latter's wages. The first steps towards the large-scale provision of public housing were also taken. So what happened after the war in Japan was in large measure the extension of the warfare-welfare state. Now 'all people should have pensions', kokumin kai nenkin. Now there should be unemployment insurance, rather than the earlier paternalistic practice of keeping workers on payrolls even in lean times. Small wonder some Japanese tended to think of welfare in nationalistic terms, a kind of peaceful mode of national aggrandisement. The 1950 report, with its British-style recommendations, was in fact rejected by the government. Only in 1961, long after the end of American control, were most of its recommendations adopted. By the late 1970s a Japanese politician, Nakagawa Yatsuhiro, could boast that Japan had become 'The Welfare Super-Power' (fukushi chodaikoku), precisely because its system was different from (and superior to) Western models.34

There was, of course, nothing institutionally unique about Japan's system. Most welfare states aimed at universal, cradle-to-grave coverage. Yet the Japanese welfare state seemed to be a miracle of effectiveness. In terms of life expectancy, the country led the world. In education, too, it was ahead of the field. Around 90 per cent of the population had graduated from high school in the mid-seventies, compared with just 35 per cent in England.36 Japan was also a much more equal society than any in the West, with the sole exception of Sweden. And Japan had the largest state pension fund in the world, so that every Japanese who retired could count on a generous bonus as well as a regular income throughout his (generally rather numerous) years of well-earned rest. The welfare superpower was also a miracle of parsimony. In 1975 just 9 per cent of national income went on social security, compared with 31 per cent in Sweden.37 The burden of tax and social welfare was roughly half that in England. Run on this basis, the welfare state seemed to make perfect sense. Japan had achieved security for all - the elimination of risk - while at the same time its economy grew so rapidly that by 1968 it was the second largest in the world. A year before, Herman Kahn had predicted that Japan's per capita income would overtake America's by 2000. Indeed, Nakagawa Yatsuhiro argued that, when fringe benefits were taken into account, 'the actual income of the Japanese worker [was already] at least three times more than that of the American'.35 Warfare had failed to make Japan Top Nation, but welfare was succeeding. The key turned out to be not a foreign empire, but a domestic safety net.38

Yet there was a catch, a fatal flaw in the design of the post-war welfare state. The welfare state might have worked smoothly enough in 1970s Japan. But the same could not be said of its counterparts in the Western world. Despite their superficial topographical and historical resemblances (archipelagos off Eurasia, imperial pasts, buttoned-up behavior when sober) the Japanese and the British had quite different cultures. Outwardly, their welfare systems might seem similar: state pensions financed out of taxation on the old pay-as-you-go model; standardized retirement ages; universal health insurance; unemployment benefits; subsidies to farmers; quite heavily restricted labor markets. But these institutions worked in quite different ways in the two countries. In Japan egalitarianism was a prized goal of policy, while a culture of social conformism encouraged compliance with the rules. English individualism, by contrast, inclined people cynically to game the system. In Japan, firms and families continued to play substantial supporting roles in the welfare system. Employers offered supplementary benefits and were reluctant to fire workers. As recently as the 1990s, two thirds of Japanese older than 64 lived with their children.39 In Britain, by contrast, employers did not hesitate to slash payrolls in hard times, while people were much more likely to leave elderly parents to the tender mercies of the National Health Service. The welfare state might have made Japan an economic superpower, but in the 1970s it appeared to be having the opposite effect in Britain.

According to British conservatives, what had started out as a system of national insurance had degenerated into a system of state handouts and confiscatory taxation which disastrously skewed economic incentives. Between 1930 and 1980, social transfers in Britain had risen from just 2.2 per cent of gross domestic product to 10 per cent in 1960, 13 per cent in 1970 and nearly 17 per cent in 1980, more than 6 per cent higher than in Japan.40 Health care, social services and social security were consuming three times more than defense as a share of total managed government expenditure. Yet the results were dismal. Increased expenditure on UK welfare had been accompanied by low growth and inflation significantly above the developed world average. A particular problem was chronically slow productivity growth (real GDP per person employed grew by just 2.8 per cent between 1960 and 1979, compared with 8.1 per cent in Japan),41 which in turn seemed closely related to the bloody-minded bargaining techniques of British trade unions ('go-slows' being a favorite alternative to outright 'downing tools'). Meanwhile, marginal tax rates in excess of 100 per cent on higher incomes and capital gains discouraged traditional forms of saving and investment. The British welfare state, it seemed, had removed the incentives without which a capitalist economy simply could not function: the carrot of serious money for those who strove, the stick of hardship for those who slacked. The result was 'stagflation': stagnant growth plus high inflation. Similar problems were afflicting the US economy, where expenditure on health, Medicare, income security and social security had risen from 4 per cent of GDP in 1959 to 9 per cent in 1975, outstripping defense spending for the first time. In America, too, productivity was scarcely growing and stagflation was rampant. What was to be done?

One man, and his pupils, thought they knew the answer. Thanks in large measure to their influence, one of the most pronounced economic trends of the past twenty-five years has been for the Western welfare state to be dismantled, reintroducing people, with a sharp shock, to the unpredictable monster they thought they had escaped: risk.

 

The Big Chill

In 1976 a diminutive professor working at the University of Chicago won the Nobel Prize in economics. Milton Friedman’s reputation as an economist rested in large measure on his reinstatement of the idea that inflation was due to an excessive increase in the supply of money. He co-wrote perhaps the single most important book on US monetary policy of all time, firmly laying the blame for the Great Depression on mistakes by the Federal Reserve.42 But the question that had come to preoccupy him by the mid-seventies was: what had gone wrong with the welfare state? In March 1975, Friedman flew from Chicago to Chile to answer that question.

Only eighteen months earlier, in September 1973, tanks had rolled through the capital, Santiago, to overthrow the government of the Marxist President Salvador Allende, whose attempt to turn Chile into a Communist state had ended in total economic chaos and a call by the parliament for a military takeover. Air force jets bombed the presidential Moneda Palace, watched from the balcony of the nearby Carrera Hotel by opponents of Allende who celebrated with champagne. Inside the palace, the president himself fought a hopeless rearguard action armed with an AK47 – a gift from Fidel Castro, the man he had sought to emulate. As the tanks rumbled towards him, Allende realized it was all over and, cornered in what was left of his quarters, shot himself.

The coup epitomized a world-wide crisis of the post-war welfare state and posed a stark choice between rival economic systems. With output collapsing and inflation rampant, Chile’s system of universal benefits and state pensions was essentially bankrupt. For Allende, the answer had been full-blown Marxism, a complete Soviet-style takeover of every aspect of economic life. The generals and their supporters knew they were against that. But what were they actually for, since the status quo was clearly unsustainable? Enter Milton Friedman. Amid his lectures and seminars, he spent three quarters of an hour with the new president General Pinochet and later wrote him an assessment of the Chilean economic situation, urging him to reduce the government deficit that he had identified as the main cause of the country’s sky-high inflation, then running at an annual rate of 900 per cent.43

A month after Friedman’s visit, the Chilean junta announced that inflation would be stopped ‘at any cost’. The regime cut government spending by 27 per cent and set fire to bundles of banknotes. But Friedman was offering more than his patent monetarist shock therapy. In a letter to Pinochet written after his return to Chicago, he argued that ‘this problem’ of inflation arose ‘from trends toward socialism that started forty years ago, and reached their logical – and terrible – climax in the Allende regime’. As he later recalled, ‘The general line I was taking ... was that their present difficulties were due almost entirely to the forty-year trend toward collectivism, socialism, and the welfare state ...’44 And he assured Pinochet: ‘The end of inflation will lead to a rapid expansion of the capital market, which will greatly facilitate the transfer of enterprises and activities still in the hands of the government to the private sector.’45

For tendering this advice Friedman found himself denounced by the American press. After all, he was acting as a consultant to a military dictator responsible for the executions of more than two thousand real and suspected Communists and the torture of nearly 30,000 more. As the New York Times asked: ‘ … if the pure Chicago economic theory can be carried out in Chile only at the price of repression, should its authors feel some responsibility?’

(Friedman noted in 1988 that he had given much the same advice on inflation to the Chinese government, yet found that he received no ‘avalanche of protests for [his] having been willing to give advice to so evil a government’, despite the fact that it ‘has been and still is more repressive than the Chilean military junta’.)

Chicago’s role in the new regime consisted of more than just one visit by Milton Friedman. Since the 1950s, there had been a regular stream of bright young Chilean economists studying at Chicago on an exchange programme with the Universidad Católica in Santiago, and they went back convinced of the need to balance the budget, tighten the money supply and liberalize trade.46 These were the so-called Chicago Boys, Friedman’s foot soldiers: Jorge Cauas, Pinochet’s finance minister and later economics ‘super minister’, Sergio de Castro, his successor as finance minister, Miguel Kast, labor minister and later central bank chief, and at least eight others who studied in Chicago and went on to serve in government. Even before the fall of Allende, they had devised a detailed programme of reforms known as El Ladrillo (The Brick) because of the thickness of the manuscript. The most radical measures, however, would come from a Catholic University student who had opted to study at Harvard, not Chicago. What he had in mind was the most profound challenge to the welfare state in a generation. Thatcher and Reagan came later. The backlash against welfare started in Chile.

For José Piñera, just 24 when Pinochet seized power, the invitation to return to Chile from Harvard posed an agonizing dilemma. He had no illusions about the nature of Pinochet's regime. Yet he also believed there was an opportunity to put into practice ideas that had been taking shape in his mind ever since his arrival in New England. The key, as he saw it, was not just to reduce inflation. It was also essential to foster that link between property rights and political rights which had been at the heart of the successful North American experiment with capitalist democracy. There was no surer way to do this, Piñera believed, than radically to overhaul the welfare state, beginning with the pay-as-you-go system of funding state pensions and other benefits. As he saw it:

What had begun as a system of large-scale insurance had simply become a system of taxation, with today's contributions being used to pay today's benefits, rather than to accumulate a fund for future use. This 'pay-as-you-go' approach had replaced the principle of thrift with the practice of entitlement ... [But this approach] is rooted in a false conception of how human beings behave. It destroys, at the individual level, the link between contributions and benefits. In other words, between effort and reward. Wherever that happens on a massive scale and for a long period of time, the final result is disaster.47

Between 1979 and 1981, as minister of labour (and later minister of mining), Piñera created a radically new pension system for Chile, offering every worker the chance to opt out of the state pension system. Instead of paying a payroll tax, they would put an equivalent amount (10 per cent of their wages) into an individual Personal Retirement Account, to be managed by private and competing companies known as Administradoras de Fondos de Pensiones (AFPs).48 On reaching retirement age, a participant would withdraw his money and use it to buy an annuity; or, if he preferred, he could keep working and contributing. In addition to a pension, the scheme also included a disability and life insurance premium. The idea was to give the Chilean worker a sense that the money being set aside was really his own capital. In the words of Hernán Büchi (who helped Piñera draft the social security legislation and went on to implement the reform of health care), 'Social programmes have to include some incentive for individual effort and for persons gradually to be responsible for their own destiny. There is nothing more pathetic than social programmes that encourage social parasitism.'49
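
The arithmetic of such an account is simple compounding. The sketch below uses the 10 per cent contribution rate from the text, but the wage, the real return and the length of the working life are entirely hypothetical assumptions, chosen only to illustrate the mechanism:

```python
# Illustrative accumulation in a defined-contribution account of the
# Personal Retirement Account type: 10 per cent of wages (the rate in the
# text) compounding at an assumed real return. All other numbers invented.

def account_balance(annual_wage: float, contribution_rate: float,
                    real_return: float, years: int) -> float:
    """Balance after contributing at each year-end for a number of years."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + real_return) + annual_wage * contribution_rate
    return balance

# A worker contributing 10% of a $10,000 wage for 40 years at a 5% real return
print(f"${account_balance(10_000, 0.10, 0.05, 40):,.0f}")  # ~$120,800
```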

Piñera gambled. He gave workers a choice: stick with the old system of pay-as-you-go, or opt for the new Personal Retirement Accounts. He cajoled, making regular television appearances to reassure workers that 'Nobody will take away your grandmother's cheque' (from the old state system). He held firm, sarcastically dismissing a proposal that the country's trade unions, rather than individual workers, should be responsible for choosing their members' AFPs. Finally, on 4 November 1980, the reform was approved, coming into effect at Piñera's mischievous suggestion on 1 May, International Labour Day, the following year.50 The public response was enthusiastic. By 1990 more than 70 per cent of workers had made the switch to the private system.51 Each one received a shiny new book in which the contributions and investment returns were recorded. By the end of 2006, around 7.7 million Chileans had a Personal Retirement Account; 2.7 million were also covered by private health schemes, under the so-called ISAPRE system, which allowed workers to opt out of the state health insurance system in favor of a private provider. It may not sound like it, but - along with the other Chicago-inspired reforms implemented under Pinochet - this represented as big a revolution as anything the Marxist Allende had planned back in 1973. Moreover, the reform had to be introduced at a time of extreme economic instability, a consequence of the ill-judged decision to peg the Chilean currency to the dollar in 1979, when the inflation dragon appeared to have been slain. When US interest rates rose shortly afterwards, the deflationary pressure plunged Chile into a recession that threatened to derail the Chicago-Harvard express altogether. The economy contracted 13 per cent in 1982, seemingly vindicating the left-wing critics of Friedman's 'shock treatment'. Only towards the end of 1985 could the crisis really be regarded as over. By 1990 it was clear that the reform had been a success: welfare reforms were responsible for fully half the decline of total government expenditure from 34 per cent of GDP to 22 per cent.

Was it worth it? Was it worth the huge moral gamble that the Chicago and Harvard boys made, of getting into bed with a murderous, torturing military dictator? The answer depends on whether or not you think these economic reforms helped pave the way back to a sustainable democracy in Chile. In 1980, just seven years after the coup, Pinochet conceded a new constitution that prescribed a ten-year transition back to democracy. In 1990, having lost a referendum on his leadership, he stepped down as president (though he remained in charge of the army for a further eight years). Democracy was restored, and by that time the economic miracle was under way that helped to ensure its survival. For the pension reform not only created a new class of property owners, each with his own retirement nest egg. It also gave the Chilean economy a massive shot in the arm, since the effect was significantly to increase the savings rate (to 30 per cent of GDP by 1989, the highest in Latin America). Initially, a cap was imposed that prevented the AFPs from investing more than 6 per cent (later 12 per cent) of the new pension funds outside Chile.52

The effect of this was to ensure that Chile's new source of savings was channeled into the country's own economic development. For example, the annual rate of return on the Personal Retirement Accounts has been over 10 per cent, reflecting the soaring performance of the Chilean stock market, which has risen by a factor of 18 since 1987.
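
That 'factor of 18' can be translated into an annualized growth rate. The sketch below assumes a twenty-year horizon (1987 to 2007, a guess about when the figure was taken, not a date given in the text):

```python
# Implied compound annual growth rate from the 18-fold rise in the Chilean
# stock market 'since 1987' quoted above. The 20-year horizon is an assumption.
factor, years = 18, 20
cagr = factor ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~15.5% per year
```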

There was a shadow side to the system, to be sure. The administrative and fiscal costs of the system are sometimes said to be too high.53 Since not everyone in the economy has a regular full-time job, not everyone ends up participating in the system. The self-employed were not obliged to contribute to Personal Retirement Accounts, and the casually employed do not contribute either. That leaves a substantial proportion of the population with no pension coverage at all, including many of the people living in La Victoria, once a hotbed of popular resistance to the Pinochet regime - and still the kind of place where Che Guevara's face is spray-painted on the walls. On the other hand, the government stands ready to make up the difference for those whose savings do not suffice to pay a minimum pension, provided they have done at least twenty years of work. And there is also a Basic Solidarity pension for those who do not qualify for this.54 Above all, the improvement in Chile's economic performance since the Chicago Boys' reforms is very hard to argue with. The growth rate in the fifteen years before Friedman's visit was 0.17 per cent. In the fifteen years that followed, it was 3.28 per cent, nearly twenty times higher. The poverty rate has declined dramatically to just 15 per cent, compared with 40 per cent in the rest of Latin America.55 Santiago today is the shining city of the Andes, easily the continent's most prosperous and attractive city.

It is a sign of Chile's success that the country's pension reforms have been imitated all across the continent, and indeed around the world. Bolivia, El Salvador and Mexico copied the Chilean scheme to the letter. Peru and Colombia introduced private pensions as an alternative to the state system.56 Kazakhstan, too, has followed the Chilean example. Even British MPs have beaten a path from Westminster to Piñera's door. The irony is that the Chilean reform was far more radical than anything that has been attempted in the United States, the heartland of free market economics. Yet welfare reform is coming to North America, whether anyone wants it or not.

When Hurricane Katrina struck New Orleans, it laid bare some realities about the American system that many people had been doing their best to ignore. Yes, America had a welfare state. No, it didn't work. The Reagan and Clinton administrations had implemented what seemed like radical welfare reforms, reducing unemployment benefits and the periods for which they could be claimed. But no amount of reform could insulate the system from the ageing of the American population and the spiraling cost of private health care.

The US has a unique welfare system. Social Security provides a minimal state pension to all retirees, while at the same time the Medicare system covers all the health costs of the elderly and disabled. Income support and other health expenditures push up the total cost of federal welfare programmes to 11 per cent of GDP. American health care, however, is almost entirely provided by the private sector. At its best it is state-of-the-art, but it is very far from cheap. And, if you want treatment before you retire, you need a private insurance policy - something an estimated 47 million Americans do not have, since such policies tend to be available only to those in regular, formal employment. The result is a welfare system which is not comprehensive, is much less redistributive than European systems, but is still hugely expensive. Since 1993 Social Security has been more expensive than National Security. Public expenditure on education is higher as a percentage of GDP (5.9 per cent) than in Britain, Germany or Japan. Public health expenditures are equivalent to around 7 per cent of GDP, the same as in Britain; but private health care spending accounts for more (8.5 per cent, compared with a paltry 1.1 per cent in Britain).57

Such a welfare system is ill prepared to cope with a rapid increase in the number of claimants. But that is precisely what Americans face as the members of the so-called 'Baby Boomer' generation, born after the Second World War, begin to retire.58 According to the United Nations, between now and 2050 male life expectancy in the United States is likely to rise from 75 to 80. Over the next forty years, the share of the American population that is aged 65 or over is projected to rise from 12 per cent to nearly 21 per cent. Unfortunately, many of the soon-to-be-retired have made inadequate provision for life after work.

According to the 2006 Retirement Confidence Survey, six in ten American workers say they are saving for retirement, but just four in ten say they have actually calculated how much they should be saving. Many of those without sufficient savings imagine that they will compensate by working for longer. The average worker plans to work until age 65. But it turns out that he or she actually ends up retiring at 62; indeed, around four in ten American workers end up leaving the workforce earlier than they planned.59 This has grave implications for the federal budget, since those who make these miscalculations are likely to end up a charge on taxpayers in one way or another. Today the average retiree receives Social Security, Medicare and Medicaid benefits totalling $21,000 a year. Multiply this by the current 36 million elderly and you see why these programmes already consume such a large proportion of federal tax revenues. And that proportion is bound to rise, not only because the number of retirees is going up but also because the costs of benefits like Medicare are out of control, rising at double the rate of inflation. The 2003 extension of Medicare to cover prescription drugs only made matters worse. According to one projection, by the aptly named Medicare Trustee Thomas R. Saving, the cost of Medicare alone will absorb 24 per cent of all federal income taxes by 2019. Current figures also suggest that the federal government has much larger unfunded liabilities than official data imply. The Government Accountability Office's latest estimate of the implicit 'exposures' arising from unfunded future Social Security and Medicare benefits is $34 trillion.60 That is nearly four times the size of the official federal debt.
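
That multiplication is worth doing explicitly; both inputs are the figures quoted above:

```python
# Rough fiscal arithmetic from the figures in the text: $21,000 in annual
# Social Security, Medicare and Medicaid benefits per retiree, times the
# current 36 million elderly.
per_retiree = 21_000
elderly = 36_000_000
total = per_retiree * elderly
print(f"${total / 1e9:,.0f} billion per year")  # ~$756 billion per year
```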

Ironically, there's only one country where the problem of an ageing population has more serious economic implications than the United States. That country is Japan. So successful was the Japanese 'welfare superpower' that by the 1970s life expectancy in Japan had become the longest in the world. But that, combined with a falling birth rate, has produced the world's oldest society, with more than 21 per cent of the population already over the age of 65. According to Nakamae International Economic Research, the elderly population will be equal to that of the working population by 2044. As a result, Japan is now grappling with a profound structural crisis of its welfare system, which was not designed to cope with what the Japanese call the longevity society (chōju shakai).61 Despite raising the retirement age, the government has not yet resolved the problems of the state pension system. (Matters are not helped by the fact that many self-employed people and students - not to mention some eminent politicians - are failing to make their required social security contributions.) Public health insurers, meanwhile, have been in deficit since the early 1990s.62 Japan's welfare budget is now equal to three quarters of tax revenues. Its debt exceeds one quadrillion yen, around 170 per cent of GDP.63 Yet private sector institutions are in no better shape. Life insurance companies have been struggling since the 1990 stock market crash; three major insurers failed between 1997 and 2000. Pension funds are in equally dire straits. As most countries in the developed world are moving in the same direction, it gives a new meaning to that old 1980s pop song about 'turning Japanese'. Assets at the world's largest pension funds (which include the Japanese government's own fund, its Dutch counterpart and the California Public Employees' fund) now exceed $10 trillion, having risen by 60 per cent between 2004 and 2007.64 But are their liabilities ultimately going to grow so large that perhaps even these huge sums will not suffice?

Longer life is good news for individuals, but it is bad news for the welfare state and the politicians who have to persuade voters to reform it. The even worse news is that, even as the world's population is getting older, the world itself may be getting more dangerous.65

What if international terrorism strikes more frequently and/or lethally, as Al Qaeda continues its quest for weapons of mass destruction? There is in fact good reason to fear this. Given the relatively limited impact of the 2001 attacks, Al Qaeda has a strong incentive to attempt a 'nuclear 9/11'.66 The organization's spokesmen do not deny this; on the contrary, they openly boast of their ambition 'to kill 4 million Americans - 2 million of them children - and to exile twice as many and wound and cripple hundreds of thousands'.67 This cannot be dismissed as mere rhetoric. According to Graham Allison, of Harvard University's Belfer Center, 'if the US and other governments just keep doing what they are doing today, a nuclear terrorist attack in a major city is more likely than not by 2014'. In the view of Richard Garwin, one of the designers of the hydrogen bomb, there is already a '20 per cent per year probability of a nuclear explosion with American cities and European cities included'. Another estimate, by Allison's colleague Matthew Bunn, puts the odds of a nuclear terrorist attack over a ten-year period at 29 per cent.68
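
These estimates become easier to compare when converted to a common time horizon. The sketch below does so under the crude simplifying assumption that each year's risk is independent of the last:

```python
# Comparing the risk estimates quoted above on a common footing, assuming
# (crudely) that each year's risk is independent of every other year's.

garwin_annual = 0.20   # Garwin: '20 per cent per year'
print(f"Garwin over 10 years: {1 - (1 - garwin_annual) ** 10:.0%}")
# ~89% - close to a certainty on his estimate

bunn_decade = 0.29     # Bunn: 29% over a ten-year period
implied_annual = 1 - (1 - bunn_decade) ** (1 / 10)
print(f"Bunn's figure implies {implied_annual:.1%} per year")  # ~3.4%
```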

Even a small 12.5-kiloton nuclear device would kill up to 80,000 people if detonated in an average American city; a 1.0 megaton hydrogen bomb could kill as many as 1.9 million. A successful biological attack using anthrax spores could be nearly as lethal.69

What if global warming is increasing the incidence of natural disasters? Here, too, there are some grounds for unease. According to the scientific experts on the Intergovernmental Panel on Climate Change, 'the frequency of heavy precipitation events has increased over most areas' as a result of man-made global warming. There is also 'observational evidence of an increase in intense tropical cyclone activity in the North Atlantic since about 1970'. The rising sea levels forecast by the IPCC would inevitably increase the flood damage caused by storms like Katrina.70 Not all scientists accept the notion that hurricane activity along the US Atlantic coast is on the increase (as claimed by Al Gore in his film An Inconvenient Truth). But it would clearly be a mistake blithely to assume that this is not the case, especially given the continued growth of residential construction in vulnerable states. For governments that are already tottering under the weight of ever-increasing welfare commitments, an increase in the frequency or scale of catastrophes could be fiscally fatal. The insurance (and reinsurance) losses arising from the 9/11 attacks were in the region of $30-58 billion, close to the insurance losses due to Katrina.71 In both cases, the US federal government had to step in to help private insurers meet their commitments, providing emergency federal terrorism insurance in the aftermath of 9/11, and absorbing the bulk of the costs of emergency relief and reconstruction along the coast of the Gulf of Mexico. In other words, just as happened during the world wars, the welfare state steps in when the insurers are overwhelmed. But this has a perverse result in the case of natural disasters. In effect, taxpayers in relatively safer parts of the country are subsidizing those who choose to live in hurricane-prone regions. One possible way of correcting this imbalance would be to create a federal reinsurance programme to cover mega-catastrophes. Rather than looking to taxpayers to pick up the tab for big disasters, insurers would charge differential premiums (higher for those closest to hurricane zones), laying off the risk of another Katrina by reinsuring the risk through the government.72 But there is another way.

Insurance and welfare are not the only way of buying protection against future shocks. The smart way to do it is by being hedged. Everyone today has heard of hedge funds like Kenneth C. Griffin's Chicago-based Citadel. As founder of the Citadel Investment Group, now one of the twenty biggest hedge funds in the world, Griffin currently manages around $16 billion in assets. Among them are many so-called distressed assets, which Griffin picks up from failed companies like Enron for knock-down prices. It would not be too much to say that Ken Griffin loves risk. He lives and breathes uncertainty. Since he began trading convertible bonds from his Harvard undergraduate dormitory, he has feasted on 'fat tails'. Citadel's main offshore fund has generated annual returns of 21 per cent since 1998.73 In 2007, when other financial institutions were losing billions in the credit crunch, he personally made more than a billion dollars. Among the artworks that decorate his penthouse apartment on North Michigan Avenue are Jasper Johns's False Start, for which he paid $80 million, and a Cézanne, which cost him $60 million. When Griffin got married, the wedding was at Versailles (the French chateau, not the small Illinois town of the same name).74 Hedging is clearly a good business in a risky world. But what exactly does it mean, and where did it come from?

The origins of hedging, appropriately enough, are agricultural. For a farmer planting a crop, nothing is more crucial than the price it will fetch after it has been harvested and taken to market. But that price could turn out lower than he expects, or higher. A futures contract allows him to protect himself by committing a merchant to buy his crop when it comes to market at a price agreed when the seeds are being planted. If the market price on the day of delivery is lower than expected, the farmer is protected; the merchant who has agreed to buy the crop naturally hopes it will be higher, leaving him with a profit. As the American prairies were ploughed and planted, and as canals and railways connected them to the major cities of the industrial Northeast, they became the nation's breadbasket. But supply and demand, and hence prices, fluctuated wildly. Between January 1858 and May 1867, partly as a result of the Civil War, the price of wheat soared from 55 cents to $2.88 per bushel, before plummeting back to 77 cents in March 1870. The earliest forms of protection for farmers were known as forward contracts, which were simply bilateral agreements between seller and buyer. A true futures contract, by contrast, is a standardized instrument issued by a futures exchange and hence tradable. With the development of a standard 'to arrive' futures contract, along with a set of rules to enforce settlement and, finally, an effective clearinghouse, the first true futures market was born. Its birthplace was the Windy City: Chicago. The creation of a permanent futures exchange in 1874 - the Chicago Produce Exchange, the ancestor of today's Chicago Mercantile Exchange - created a home for 'hedging' in the US commodity markets.75
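
To see how such a hedge works, consider a minimal sketch (in Python, with all prices and quantities invented for illustration; no particular exchange settles contracts this way). A farmer who sells futures at an agreed price ends up with the same revenue whatever the spot market does on delivery day:

```python
# Hypothetical futures hedge: the farmer 'sells forward' his expected crop.
def hedged_revenue(bushels: float, agreed_price: float, spot_at_delivery: float) -> float:
    """Revenue for a farmer who sold futures at agreed_price.
    He sells the crop at the spot price, and the short futures position
    pays (or costs) the difference back to the agreed price."""
    sale = bushels * spot_at_delivery
    futures_settlement = bushels * (agreed_price - spot_at_delivery)
    return sale + futures_settlement  # always equals bushels * agreed_price

# Invented numbers, echoing the 1858-70 price swings: 5,000 bushels of
# wheat sold forward at $2.00 a bushel.
for spot in (0.77, 2.00, 2.88):
    print(f"spot ${spot:.2f}/bu -> revenue ${hedged_revenue(5000, 2.00, spot):,.2f}")
```

Whichever of the three delivery-day prices materializes, the hedged farmer's revenue is $10,000; the price risk has passed to the buyer of the contract.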

A pure hedge eliminates price risk entirely. It requires a speculator as a counter-party to take on the risk. In practice, however, most hedgers tend to engage in some measure of speculative activity, looking for ways to profit from future price movements. Partly because of public unease about this - the feeling that futures markets were little better than casinos - it was not until the 1970s that futures could also be issued for currencies and interest rates; and not until 1982 that futures contracts on the stock market became possible.

At Citadel, Griffin has brought together mathematicians, physicists, engineers, investment analysts and advanced computer technology. Some of what they do is truly the financial equivalent of rocket science. But the underlying principles are simple. Because they are all derived from the value of underlying assets, all futures contracts are forms of 'derivative'. Closely related, though distinct from futures, are the financial contracts known as options. In essence, the buyer of a call option has the right, but not the obligation, to buy an agreed quantity of a particular commodity or financial asset from the seller ('writer') of the option at a certain time (the expiration date) for a certain price (known as the strike price).
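
A minimal sketch may make the vocabulary concrete. The strike, premium and spot prices below are invented; the point is only the asymmetry of the contract: the buyer's loss is capped at the premium, while his gain is not.

```python
# Hypothetical call option at expiration: the right, but not the
# obligation, to buy at the strike price.
def call_buyer_net(spot: float, strike: float, premium: float) -> float:
    """Exercise only if spot > strike; otherwise the option lapses
    and the buyer's loss is limited to the premium paid."""
    return max(spot - strike, 0.0) - premium

strike, premium = 100.0, 5.0  # invented figures
for spot in (90.0, 100.0, 120.0):
    print(f"spot {spot:6.1f}: buyer's net {call_buyer_net(spot, strike, premium):+7.1f}")
```

Below the strike the buyer simply walks away, losing only the 5.0 premium; above it, the payoff rises one for one with the price of the underlying asset.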

Clearly, the buyer of a call option expects the price of the commodity or underlying instrument to rise in the future. When the price passes the agreed strike price, the option is 'in the money' - and so is the smart guy who bought it. A put option is just the opposite: the buyer has the right, but not the obligation, to sell an agreed quantity of something to the seller of the option. A third kind of derivative is the swap, which is effectively a bet between two parties on, for example, the future path of interest rates. A pure interest rate swap allows two parties already receiving interest payments literally to swap them, allowing someone receiving a variable rate of interest to exchange it for a fixed rate, in case interest rates decline. A credit default swap, meanwhile, offers protection against a company's defaulting on its bonds. Perhaps the most intriguing derivatives, however, are weather derivatives like natural catastrophe bonds, which allow insurance companies and others to offset the effects of extreme temperatures or natural disasters by selling the so-called tail risk to hedge funds like Fermat Capital. In effect, the buyer of a 'cat bond' is selling insurance: if the disaster specified in the bond happens, the buyer has to pay out an agreed sum or forfeit his principal. In return, the seller pays an attractive rate of interest. In 2006 the total notional value of weather-risk derivatives was around $45 billion.
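
The swap is just as easy to sketch. In the hypothetical figures below (a $10 million notional and made-up interest rates), the two parties exchange only the net difference between the fixed and floating legs, so the party who swapped into a fixed rate is protected when rates fall:

```python
# Hypothetical plain-vanilla interest rate swap on a $10m notional.
notional = 10_000_000
fixed_rate = 0.05                        # the fixed leg: 5 per cent a year
floating = [0.055, 0.048, 0.042, 0.039]  # invented yearly resets

for year, f in enumerate(floating, start=1):
    # The fixed receiver pays the floating rate and receives the fixed rate;
    # only the net amount changes hands.
    net_to_fixed_receiver = notional * (fixed_rate - f)
    verb = "receives" if net_to_fixed_receiver > 0 else "pays"
    print(f"year {year}: fixed receiver {verb} ${abs(net_to_fixed_receiver):,.0f} net")
```

As the invented floating rate declines from 5.5 to 3.9 per cent, the net payments swing in favour of the party who locked in the fixed rate - which is precisely the protection being bought.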

There was a time when most such derivatives were standardized instruments produced by exchanges like the Chicago Mercantile, which has pioneered the market for weather derivatives. Now, however, the vast majority are custom-made and sold 'over-the-counter' (OTC), often by banks, which charge attractive commissions for their services. According to the Bank for International Settlements, the total notional amounts outstanding of OTC derivative contracts - arranged on an ad hoc basis between two parties - reached a staggering $596 trillion in December 2007, with a gross market value of just over $9.5 trillion. Though derivatives have famously been called financial weapons of mass destruction by more traditional investors like Warren Buffett (who has, nonetheless, made use of them), the view in Chicago is that the world's economic system has never been better protected against the unexpected.
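
It is worth pausing on the distinction between the two BIS figures, since 'notional' numbers are easily misread. The notional amount is the face value on which payments are calculated; the gross market value is what it would cost to replace all outstanding contracts at current prices. A two-line check with the figures quoted above:

```python
# The two BIS measures for December 2007, as cited in the text.
notional = 596e12            # $596 trillion notional outstanding
gross_market_value = 9.5e12  # just over $9.5 trillion replacement value
print(f"market value = {gross_market_value / notional:.1%} of notional")
```

On these figures the amount actually at risk at current prices is roughly 1.6 per cent of the headline notional sum - still an enormous number, but a very different one.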

The fact nevertheless remains that this financial revolution has effectively divided the world in two: those who are (or can be) hedged, and those who are not (or cannot be). You need money to be hedged. Hedge funds typically ask for a minimum six- or seven-figure investment and charge a management fee of at least 2 per cent of your money (Citadel charges four times that) and 20 per cent of the profits. That means that most big corporations can afford to be hedged against unexpected increases in interest rates, exchange rates or commodity prices. If they want to, they can also hedge against future hurricanes or terrorist attacks by selling cat bonds and other derivatives. By comparison, most ordinary households cannot afford to hedge at all and would not know how to even if they could. We lesser mortals still have to rely on the relatively blunt and often expensive instrument of insurance policies to protect us against life's nasty surprises; or hope for the welfare state to ride to the rescue.
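
To see why hedging is reserved for the moneyed, it helps to work through the standard fee arithmetic. The sketch below assumes a hypothetical $1 million stake, a hypothetical 21 per cent gross year, and the '2 and 20' terms described above (Citadel's reported management fee being some four times the standard 2 per cent):

```python
# Hypothetical '2 and 20' hedge fund fee arithmetic on a $1m stake.
investment = 1_000_000
gross_return = 0.21                     # assume a 21 per cent gross year

management_fee = 0.02 * investment      # 2 per cent of assets, profit or not
gross_profit = gross_return * investment
performance_fee = 0.20 * gross_profit   # 20 per cent of the profits
net_to_investor = gross_profit - management_fee - performance_fee

print(f"gross profit    ${gross_profit:,.0f}")
print(f"total fees      ${management_fee + performance_fee:,.0f}")
print(f"net to investor ${net_to_investor:,.0f}")
```

On these assumed numbers the manager takes $62,000 of a $210,000 gross gain - a toll most households have neither the means nor the appetite to pay.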

There is, of course, a third and much simpler strategy: the old one of simply saving for that rainy day. Or, rather, borrowing to buy assets whose future appreciation in value will supposedly afford a cushion against calamity. For many families in recent years, making provision for an uncertain future has taken the very simple form of a leveraged (that is, debt-financed) investment in a house, the value of which is supposed to keep increasing until the day the breadwinners need to retire. If the pension plan falls short, never mind. If you run out of health insurance, don't panic. There is always home, sweet home.

As an insurance policy or a pension plan, however, this strategy has one very obvious flaw. It represents a one-way, totally unhedged bet on one market: the property market. Unfortunately, as we shall see in the next chapter, a bet on bricks and mortar is very far from being as safe as houses. And you do not need to live in New Orleans to find that out the hard way.
 

1. Rawle O. King, 'Hurricane Katrina: Insurance Losses and National Capacities for Financing Disaster Risks', Congressional Research Service Report for Congress, 1 January 2008, table 1.

2. Joseph B. Treaster, 'A Lawyer Like a Hurricane: Facing Off Against Asbestos, Tobacco and Now Home Insurers', New York Times, 16 March 2007.

3. For details, see Richard F. Scruggs, 'Hurricane Katrina: Issues and Observations', American Enterprise Institute-Brookings Judicial Symposium, 'Insurance and Risk Allocation in America: Economics, Law and Regulation', Georgetown Law Center, 20-22 September 2006.

4. Details from http://www.usa.gov/Citizen/Topics/PublicSafety/Hurricane_Katrina_Recovery.shtml, http://katrina.louisiana.gov/index.html and http://www.ldi.state.la.us/HurricaneKatrina.htm.

5. Peter Lattman, 'Plaintiffs Lawyer Scruggs is Indicted on Bribery Charges', Wall Street Journal, 29 November 2007; Ashby Jones and Paulo Prada, 'Richard Scruggs Pleads Guilty', ibid., 15 March 2008.

6. King, 'Hurricane Katrina', p. 4.

7. Naomi Klein, The Shock Doctrine: The Rise of Disaster Capitalism (New York, 2007).

8. http://www.nhc.noaa.gov/pastdec.shtml.

9. John Schwartz, 'One Billion Dollars Later, New Orleans is Still at Risk', New York Times, 17 August 2007.

10. Michael Lewis, 'In Nature's Casino', New York Times Magazine, 26 August 2007.

11. National Safety Council, 'What are the Odds of Dying?': http://www.nsc.org/lrs/statinfo/odds.htm. For the cancer statistic, see the National Cancer Institute, 'SEER Cancer Statistics Review, 1975-2004', table 1-17: http://srab.cancer.gov/devcan/. The precise lifetime probability of dying from cancer in the United States between 2002 and 2004 was 21.29 per cent, with a 95 per cent confidence interval.

12. http://www.workhouses.org.uk/index.html?StMarylebone/StMarylebone.shtml.

13. Lothar Gall, Bismarck: The White Revolutionary, vol. II: 1879-1898, trans. J. A. Underwood (London, 1986), p. 129.

14. H. G. Lay, Marine Insurance: A Text Book of the History of Marine Insurance, including the Functions of Lloyd's Register of Shipping (London, 1925), p. 137.

15. Richard Sicotte, 'Economic Crisis and Political Response: The Political Economy of the Shipping Act of 1916', Journal of Economic History, 59, 4 (December 1999), pp. 861-84.

16. Anon., 'Allocation of Risk between Marine and War Insurer', Yale Law Journal, 51, 4 (February 1942), p. 674; C., 'War Risks in Marine Insurance', Modern Law Review, 10, 2 (April 1947), pp. 211-14.

17. Alfred T. Lauterbach, 'Economic Demobilization in Great Britain after the First World War', Political Science Quarterly, 57, 3 (September 1942), pp. 376-93.

18. Correlli Barnett, The Audit of War (London, 2001), pp. 31f.

19. Richmond, 'Insurance Tendencies', p. 185.

20. Charles Davison, 'The Japanese Earthquake of 1 September', Geographical Journal, 65, 1 (January 1925), pp. 42f.

21. Yoshimichi Miura, 'Insurance Tendencies in Japan', Annals of the American Academy of Political and Social Science, 161 (May 1932), pp. 215-19.

22. Herbert H. Gowen, 'Living Conditions in Japan', Annals of the American Academy of Political and Social Science, 122 (November 1925), p. 163.

23. Kenneth Hewitt, 'Place Annihilation: Area Bombing and the Fate of Urban Places', Annals of the Association of American Geographers, 73 (1983), p. 263.

24. Anon., 'War Damage Insurance', Yale Law Journal, 51, 7 (May 1942), pp. 1160-61. It made $210 million, having collected premiums from 8 million policies and paid out only a modest amount.

25. Kingo Tamai, 'Development of Social Security in Japan', in Misa Izuhara (ed.), Comparing Social Policies: Exploring New Perspectives in Britain and Japan (Bristol, 2003), pp. 35-48. See also Gregory J. Kasza, 'War and Welfare Policy in Japan', Journal of Asian Studies, 61, 2 (May 2002), p. 428.

26. Recommendation of the Council of Social Security System (1950).

27. W. Macmahon Ball, 'Reflections on Japan', Pacific Affairs, 21, 1 (March 1948), pp. 15f.

28. Beatrice G. Reubens, 'Social Legislation in Japan', Far Eastern Survey, 18, 23 (16 November 1949), p. 270.

29. Keith L. Nelson, 'The "Warfare State": History of a Concept', Pacific Historical Review, 40, 2 (May 1971), pp. 138f.

30. Kasza, 'War and Welfare Policy', pp. 418f.

31. Ibid., p. 423.

32. Ibid., p. 424.

33. Nakagawa Yatsuhiro, 'Japan, the Welfare Super-Power', Journal of Japanese Studies, 5, 1 (Winter 1979), pp. 5-51.

34. Ibid., p. 21.

35. Ibid., p. 9.

36. Ibid., p. 18.

37. For comparative studies, see Gregory J. Kasza, One World of Welfare: Japan in Comparative Perspective (Ithaca, 2006) and Neil Gilbert and Ailee Moon, 'Analyzing Welfare Effort: An Appraisal of Comparative Methods', Journal of Policy Analysis and Management, 7, 2 (Winter 1988), pp. 326-400.

38. Kasza, One World of Welfare, p. 107.

39. Peter H. Lindert, Growing Public: Social Spending and Economic Growth since the Eighteenth Century (Cambridge, 2004), vol. I, table 1.2.

40. Hiroto Tsukada, Economic Globalization and the Citizens' Welfare State (Aldershot / Burlington / Singapore / Sydney, 2002), p. 96.

41. Milton Friedman and Anna J. Schwartz, A Monetary History of the United States, 1867-1960 (Princeton, 1963).

42. Milton Friedman and Rose D. Friedman, Two Lucky People: Memoirs (Chicago / London, 1998), p. 399.

43. Ibid., p. 400.

44. Ibid., p. 593.

45. Patricio Silva, 'Technocrats and Politics in Chile: From the Chicago Boys to the CIEPLAN Monks', Journal of Latin American Studies, 23, 2 (May 1991), pp. 385-410.

46. Bill Jamieson, '25 Years On, Chile Has a Pensions Message for Britain', Sunday Business, 14 December 2006.

47. Rossana Castiglioni, 'The Politics of Retrenchment: The Quandaries of Social Protection under Military Rule in Chile, 1973-1990', Latin American Politics and Society, 43, 4 (Winter 2001), pp. 39ff.

48. Ibid., p. 55.

49. José Piñera, 'Empowering Workers: The Privatization of Social Security in Chile', Cato Journal, 15, 2-3 (Fall / Winter 1995/96), pp. 155-66.

50. Ibid., p. 40.

51. Teresita Ramos, 'Chile: The Latin American Tiger?', Harvard Business School Case 9-798-092 (21 March 1999), p. 6.

52. Laurence J. Kotlikoff, 'Pension Reform as the Triumph of Form over Substance', Economists' Voice (January 2008), pp. 1-5.

53. Armando Barrientos, 'Pension Reform and Pension Coverage in Chile: Lessons for Other Countries', Bulletin of Latin American Research, 15, 3 (1996), p. 312.

54. 'Destitute No More', Economist, 16 August 2007.

55. Barrientos, 'Pension Reform', pp. 309f. See also Raul Madrid, 'The Politics and Economics of Pension Privatization in Latin America', Latin American Research Review, 37, 2 (2002), pp. 159-82.

56. Comparative data available from the World Bank's World Development Indicators database.

57. Laurence J. Kotlikoff and Scott Burns, The Coming Generational Storm: What You Need to Know about America's Economic Future (Cambridge, 2005). See also Peter G. Peterson, Running on Empty: How the Democratic and Republican Parties Are Bankrupting Our Future and What Americans Can Do about It (New York, 2005).

58. Ruth Helman, Craig Copeland and Jack VanDerhei, 'Will More of Us Be Working Forever? The 2006 Retirement Confidence Survey', Employee Benefit Research Institute Issue Brief, 292 (April 2006).

59. Gene L. Dodaro, Acting Comptroller General of the United States, 'Working to Improve Accountability in an Evolving Environment', address to the 2008 Maryland Association of CPAs' Government and Not-for-profit Conference (18 April 2008).

60. James Brooke, 'A Tough Sell: Japanese Social Security', New York Times, 6 May 2004.

61. See Mutsuko Takahashi, The Emergence of Welfare Society in Japan (Aldershot / Brookfield / Hong Kong / Singapore / Sydney, 1997), pp. 185f. See also Kasza, One World of Welfare, pp. 179-82.

62. Alex Kerr, Dogs and Demons: The Fall of Modern Japan (London, 2001), pp. 261-66.

63. Gavan McCormack, Client State: Japan in the American Embrace (London, 2007), pp. 45-69.

64. Lisa Haines, 'World's Largest Pension Funds Top $10 Trillion', Financial News, 5 September 2007.

65. 'Living Dangerously', Economist, 22 January 2004.

66. Philip Bobbitt, Terror and Consent: The Wars for the Twenty-first Century (New York, 2008), esp. pp. 98-179.

67. Suleiman abu Gheith, quoted in ibid., p. 119.

68. Graham Allison, 'Time to Bury a Dangerous Legacy, Part I', Yale Global, 14 March 2008. Cf. idem, Nuclear Terrorism: The Ultimate Preventable Catastrophe (Cambridge, MA, 2004).

69. Michael D. Intriligator and Abdullah Toukan, 'Terrorism and Weapons of Mass Destruction', in Peter Katona, Michael D. Intriligator and John P. Sullivan (eds.), Countering Terrorism and WMD: Creating a Global Counter-terrorism Network (New York, 2006), table 4.1A.

70. See IPCC, Climate Change 2007: Synthesis Report (Valencia, 2007).

71. Robert Looney, 'Economic Costs to the United States Stemming from the 9/11 Attacks', Center for Contemporary Conflict Strategic Insight (5 August 2002).

72. Robert E. Litan, 'Sharing and Reducing the Financial Risks of Future Mega-Catastrophes', Brookings Issues in Economic Policy, 4 (March 2006).

73. William Hutchings, 'Citadel Builds a Diverse Business', Financial News, 3 October 2007.

74. Marcia Vickers, 'A Hedge Fund Superstar', Fortune, 3 April 2007.

75. Joseph Santos, 'A History of Futures Trading in the United States', South Dakota State University MS, n.d.


