30 December 2008

Solving a Three-Good Utility Function

According to the principles of Neoclassical economics, we would turn to a utility function of three variables to investigate.1 Usually, the concept is explained with two goods so it can be illustrated (x and y being goods, and z—the vertical axis—standing for utility). But we can't illustrate this one fully because we are interested in cases where there are actually more than two goods determining utility.

Let U be utility as a function of x, y, and z, where x refers to everything one buys other than software, y is cheap software, and z is costly software (α, β, and γ are arbitrary constants; x0, y0, and z0 are threshold levels of consumption).

$$U(x, y, z) = \alpha \ln(x - x_0) + \beta \ln(y - y_0) + \gamma \ln(z - z_0)$$

subject to
$$I = p_x x + p_y y + p_z z$$

where I is income and p refers to the price of the respective good.

The Lagrangian will be

$$\mathcal{L} = \alpha \ln(x - x_0) + \beta \ln(y - y_0) + \gamma \ln(z - z_0) + \lambda\,(I - p_x x - p_y y - p_z z)$$

and the first-order conditions will be

$$\frac{\partial \mathcal{L}}{\partial x} = \frac{\alpha}{x - x_0} - \lambda p_x = 0, \qquad \frac{\partial \mathcal{L}}{\partial y} = \frac{\beta}{y - y_0} - \lambda p_y = 0, \qquad \frac{\partial \mathcal{L}}{\partial z} = \frac{\gamma}{z - z_0} - \lambda p_z = 0,$$

together with the budget constraint itself.

First we solve for x, y, and z in terms of the constants (and λ):

$$x = x_0 + \frac{\alpha}{\lambda p_x}, \qquad y = y_0 + \frac{\beta}{\lambda p_y}, \qquad z = z_0 + \frac{\gamma}{\lambda p_z}$$

and then we solve for the Lagrangian multiplier λ by substituting these into the budget constraint:

$$\lambda = \frac{\alpha + \beta + \gamma}{I - p_x x_0 - p_y y_0 - p_z z_0}.$$

Substituting this value of λ back into the expressions above yields the demand functions; for example,

$$x^* = x_0 + \frac{\alpha}{\alpha + \beta + \gamma} \cdot \frac{I - p_x x_0 - p_y y_0 - p_z z_0}{p_x}.$$

Now, so far this has just been a generic solution of a symmetric 3-good constrained optimization problem, and it can be made even more general for a very large number of goods:

$$g_j^* = g_{j0} + \frac{c_j}{p_j} \cdot \frac{I - \sum_{i=1}^{n} p_i g_{i0}}{\sum_{i=1}^{n} c_i}$$

where g_j is any good, g_{j0} is its threshold level of consumption, c_j is the corresponding constant I've been representing with Greek letters, I is income, and i is the counter for summation within the equation. (So, for example, p_j refers to the price of the good g_j whose optimal amount g_j* you're trying to determine, while p_i g_{i0} refers to the threshold expenditure on each individual good in the summation from 1 to n goods.)
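As a sanity check on the algebra, here's a minimal numerical sketch of the three-good case. All of the parameter values are made-up illustrations, and the brute-force optimizer is there only to confirm the closed-form demands:

```python
# Check the linear-expenditure-system demands against a numerical optimizer.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

c = np.array([1.0, 2.0, 3.0])    # utility weights (alpha, beta, gamma)
g0 = np.array([5.0, 2.0, 1.0])   # threshold consumption levels (x0, y0, z0)
p = np.array([1.0, 4.0, 10.0])   # prices (px, py, pz)
I = 100.0                        # income

# Closed form: g_j* = g_j0 + (c_j / p_j) * (I - sum_i p_i g_i0) / sum_i c_i
g_star = g0 + (c / p) * (I - p @ g0) / c.sum()

# Brute force: maximize utility subject to the budget constraint.
def neg_utility(g):
    return -np.sum(c * np.log(g - g0))

res = minimize(neg_utility, g0 + 1.0,
               constraints={"type": "eq", "fun": lambda g: I - p @ g},
               bounds=[(t + 1e-6, None) for t in g0])

print("closed form:", g_star)  # [17.833  8.417  4.85 ] with these values
print("optimizer:  ", res.x)   # should agree to several decimal places
```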

(Discussion of Findings)


1 Regarding the utility function: I prefer the linear expenditure model to the Cobb-Douglas model everyone else uses, because the Cobb-Douglas utility function fixes the share of income spent on each of x, y, and z. If a researcher wanted to perform regression analysis of "observed preferences" to establish what the coefficients were, the threshold levels of consumption would show up as y-intercepts for each good.
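A minimal sketch of what that regression would look like, using simulated households facing fixed prices; all the parameter values are assumptions for illustration:

```python
# Under the linear expenditure system, expenditure on each good is linear in
# income with a nonzero intercept; Cobb-Douglas (all thresholds zero) would
# force the intercept to zero. All parameter values are illustrative.
import numpy as np

c = np.array([1.0, 2.0, 3.0])    # alpha, beta, gamma
g0 = np.array([5.0, 2.0, 1.0])   # thresholds x0, y0, z0
p = np.array([1.0, 4.0, 10.0])   # prices, held fixed across households

incomes = np.linspace(50.0, 200.0, 100)
# Expenditure on good j: p_j g_j* = p_j g_j0 + (c_j / sum c)(I - sum_i p_i g_i0)
spend = p * g0 + np.outer(incomes - p @ g0, c / c.sum())

for j, name in enumerate(["x", "y", "z"]):
    slope, intercept = np.polyfit(incomes, spend[:, j], 1)
    print(f"{name}: slope = {slope:.3f}, intercept = {intercept:.3f}")
```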


29 December 2008

Pay-per-use your own computer?

Gregg Keizer, "Microsoft specs out 'pay as you go' PC scheme," Computerworld

The idea is something that might have been a story problem in a class on welfare economics: assuming the cost of metering computer usage is negligible, discuss the merits of such a proposal. MS filed a patent for a proposal to sell computers (presumably well below the cost of production), then bill customers for both the use of installed programs and the use of computer power. From the article:
Microsoft's plan would instead monitor the machine to track things such as disk storage space, processor cores and memory used, then bill the user for what was consumed during a set period.
So you would be billed x per MIPS-hour, even though you'd have the highest-performing processor installed all along. It would also allow you to use premium software briefly at hourly (?) rates.
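Here's a minimal sketch of how a metered bill might be computed under such a scheme. The rates, categories, and app names are hypothetical; the patent filing specifies none of this:

```python
# Toy metered-billing calculator. All rates and names are hypothetical.
MIPS_HOUR_RATE = 0.02    # dollars per MIPS-hour of compute (assumed)
GB_MONTH_RATE = 0.10     # dollars per GB of disk per month (assumed)
APP_HOUR_RATES = {"office_basic": 0.50, "premium_game": 1.25}  # assumed

def monthly_bill(mips_hours, gb_stored, app_hours):
    """One period's bill: compute + storage + per-app usage charges."""
    compute = mips_hours * MIPS_HOUR_RATE
    storage = gb_stored * GB_MONTH_RATE
    software = sum(APP_HOUR_RATES[app] * hrs for app, hrs in app_hours.items())
    return compute + storage + software

# 1200*0.02 + 80*0.10 + 40*0.50 + 10*1.25 = 64.50
print(monthly_bill(1200.0, 80.0, {"office_basic": 40, "premium_game": 10}))
```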

At first blush, this does sound a lot like MS is at it again, trying to squeeze more revenue out of customers for software that is costlier and buggier. A major benefit for MS would be stimulating computer sales by offering pay-per-use options, especially with a severe recession approaching. With respect to hardware, there would be an obvious relative increase in the incentive to get the most powerful devices, since there would be no price premium... except on the occasions when you used their full capability. Semiconductor fabricators like AMD might grumble about the price squeeze that value-added retailers like Dell were imposing, but really, they'd only need to ship a larger number of top-of-the-line chips, rather than a mix of different premium chips.

Where the idea gets interesting is software, since the object would be to create a market for much higher-end programs (most likely games, but also business applications). MS could allow users to download newly developed programs "risk-free," collect revenues, and perhaps stimulate demand. That opens the question: what exactly would this scheme do for software demand?

Solving a Three-Good Utility Function
Section excised and put in another post


Usually discussions of utility functions present them as indifference curves between two similar goods. I prefer to think of utility functions as part of a firm's production function, in the sense that there's more money to be made with an optimal expenditure on different items. But in the case of an actual business strategy, it makes sense to begin with the understanding that customers can spend money on
  1. high-end software (z)
  2. low-end software (y)
  3. everything else (x).
Usually I use the x-axis to represent "everything else" (example). Textbook writers, sometimes in an effort at humor, will select two very similar items (pizza versus hamburgers), but assume consumers' expenditures on the two items together will remain the same regardless. I remain curious, though, as to what would happen if you're looking at a market for two similar items, in which most income will be spent on neither. If the price for one goes down, demand for the other may not necessarily go down (as it would if there were only two items).

Another deviation from usual practice is to use the linear expenditure function instead of a Cobb-Douglas function. The Cobb-Douglas utility function is unappealing to me because, while it's easy to use mathematically, it results in a fixed share of income being spent on each good. Logically, if the price of a thing is sharply reduced, you would expect people to spend a larger share of their income on that thing: spending the same amount of money as before now yields more satisfaction, so people will find more occasions to use it and will spend more money on it, not merely buy more units. For some products the opposite may be true (health care), in which case the threshold level of consumption can be made negative.
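To see the contrast concretely: maximizing Cobb-Douglas utility subject to the same budget constraint gives

$$U = \alpha \ln x + \beta \ln y + \gamma \ln z \quad \Longrightarrow \quad p_x x^* = \frac{\alpha}{\alpha + \beta + \gamma}\, I,$$

so the share of income spent on x is a constant, no matter what happens to prices. The threshold terms in the linear expenditure model are what allow expenditure shares to respond to prices at all.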

The threshold level of consumption is a phrase I made up to refer to what x0, y0, and z0 represent: a minimum level of consumption of these respective goods. Consumption of x < x0 means that x ties up income but contributes nothing to utility. As is often the case, extreme conditions are seldom relevant: we aren't usually interested in situations where x < x0. Instead, we're interested in situations where x >> x0, and we're making a modest shift in position. Technically, a negative threshold level of consumption implies that even negative consumption of a thing contributes to utility, to say nothing of no consumption at all. That's absurd. On the other hand, the curve created by a negative threshold may realistically describe conditions in which an increase in prices leads to an increase in total expenditures.

I set up the equation so that threshold levels of consumption were positive for all goods; the price of "everything else" was fixed; high-end software yielded a higher utility per unit, and software generally had a higher utility per unit than "everything else." I found that increasing the price of y actually reduced spending on (demand for) z, albeit much more weakly than a reduction in the price of z would raise it.
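Here's a minimal sketch of that experiment, using the closed-form demand from the previous post. The parameter values are stand-ins I made up to match the description above (β and γ well above α, positive thresholds, price of x fixed):

```python
# How spending on high-end software (z) responds to the price of low-end
# software (y) under the linear expenditure system. Illustrative values only.
alpha, beta, gamma = 1.0, 4.0, 6.0   # beta, gamma >> alpha
x0, y0, z0 = 10.0, 2.0, 1.0          # positive thresholds
px, pz, I = 1.0, 20.0, 200.0         # price of x fixed; income I

def z_star(py):
    supernumerary = I - px * x0 - py * y0 - pz * z0
    return z0 + (gamma / (alpha + beta + gamma)) * supernumerary / pz

for py in [2.0, 4.0, 8.0, 16.0]:
    print(f"py = {py:5.1f}   z* = {z_star(py):.3f}   spending on z = {pz * z_star(py):.2f}")
# As py rises, the threshold expenditure py*y0 eats more income, so both z*
# and spending on z fall: raising the price of y reduces demand for z.
```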

A lot of this has to do with the parameters of the utility function: α, β, γ, x0, y0, z0, and I. The values of α, β, and γ determine the gradient of the utility function at I. When creating the graph above, I chose values for β and γ that were much higher than α; that reflects an assumption that ongoing expenditures on low-end software (not to mention high-end software) provide more bang for the buck than money spent on "everything else." That's an intensely controversial proposition, but I doubt it would face controversy at Microsoft.

The values for threshold spending (x0, y0, and z0) are naturally a mystery; high values for x0 and y0 (i.e., for both "everything else" and low-end software) lower z*, while high values for z0 increase z*. All this means is that, if thresholds are high, a price reduction causes expenditures on the good to increase. If the threshold is zero, then a price reduction leaves expenditures unchanged. For computers generally, there is strong historical evidence that falling prices have sharply increased expenditures, leading to the conclusion that the threshold value is large but is offset by a high utility coefficient.

Microsoft's proposed business scheme would shift software expenditure from low-end software to high-end, and stimulate spending on software generally. The logic is intuitive: high-end functionality would be on tap, without users having to commit to owning the whole package. This would increase the overall utility of software per se.


21 December 2008

Parenthetic Note on the Use of the Atomic Bomb

I recently wrote a post on the Manhattan Project in which I wrote:
I've been disappointed by the way the issue has been generally exploited by partisans to support opinions on other subjects—a football, so to speak, in an ongoing propaganda war. As an amateur student of history, I personally have learned that it's vain and self-deceiving to make judgments on these matters because one cannot (or will not) ever make a valid reconstruction of the understanding historical actors had of the events in which they acted...

The Manhattan Project emerged from under this avalanche of history as the prototypical project to use a massive drive to develop a "magic bullet," a technology that would end the War. Somehow, that technology has been divorced from any context. I guess people want to conjure up the amazing technical feat of not only achieving a nuclear bomb in only three years, but achieving the first nuclear bomb in only three years; and applying this to some unknown new technology, similar to the A-bomb in its revolutionary character, but reversing the moral polarity.
My post neglected to explain why I was disappointed; it was not by the fact that people try to make judgments on "these matters." It was how they go about it.

In American Hiroshima (Trafford Publishing, 2006; p. 108ff), David J. Dionisi makes a fairly compelling argument that the US government was well aware of the inevitability of Japanese capitulation well before the use of the bomb. The customary evidence for this (the argument has been made many times before) is the 1946 US Strategic Bombing Survey, "Japan's Struggle to End the War":
The time lapse between military impotence and political acceptance of the inevitable might have been shorter had the political structure of Japan permitted a more rapid and decisive determination of national policies. Nevertheless, it seems clear that, even without the atomic bombing attacks, air supremacy over Japan could have exerted sufficient pressure to bring about unconditional surrender and obviate the need for invasion.

Based on a detailed investigation of all the facts, and supported by the testimony of the surviving Japanese leaders involved, it is the Survey's opinion that certainly prior to 31 December 1945, and in all probability prior to 1 November 1945, Japan would have surrendered even if the atomic bombs had not been dropped, even if Russia had not entered the war, and even if no invasion had been planned or contemplated.
(Emphasis added—JRM)

While this seems like the most plausible conclusion for historians, there are a few caveats I want to make here:
  1. In hindsight, historical events always seem inevitable. More precisely put, when undertaking a speculative counterfactual (e.g., what if the US had failed to develop an atomic bomb by 1 January 1946?), an analyst can invariably alter nearly all incidentals and still expect the same outcome. Certainly leaders of the defeated government would naturally want to ingratiate themselves with the SCAP authorities by affirming that there was little support for the regime among ordinary Japanese, or even intense opposition among professional commanders and managers. This would contribute to the Survey's sense that Japanese capitulation was "inevitable."
  2. The main reason cited for certainty of Japan's imminent surrender is "air power." But aside from massive conventional bombing (which failed to induce Vietnamese capitulation in the 1970s), what form would that air power take? Aerial campaigns against the major cities of Japan had already produced gigantic death tolls (100,000 deaths in Tokyo alone), and even after the bombing of Nagasaki, stalwarts in the army planned a coup to seize the Imperial Palace rather than allow the Emperor to accept capitulation.
The point is that revisionist historians have tried to make the case that not using the atomic bombs would have been virtually cost-free. This is not a plausible claim; the bombing survey, which concluded that neither atomic bomb use nor an attempted invasion would have been necessary to defeat Japan, does assume continued air power (meaning continued use of conventional bombs against the civilian population). Precision bombing most emphatically did not exist, and the Japanese military machine did not rely on large, easily identifiable plants such as those supplying the Third Reich armies.

Another point that needs to be made is that it would have taken preternatural political courage to refuse to use a weapon, now available, to hasten the end of such a war. I don't think this theme needs further development.

Now, I believe I have stripped away the various efforts to game the argument. Had the USA not used indiscriminate terror bombing—incendiaries and atomic bombs against civilians—then the war would have lasted longer and killed more Usonians. Use of incendiaries is not dramatically different from the use of atomic bombs; both were used in the Pacific War to kill many thousands of noncombatants. It would seem that neither was illegal in 1945, although I could be mistaken.1

The United States government made the decision in 1942 to launch a major, secretive program to create a nuclear bomb. It is here that the moral judgment was, in my view, made. After 1944, a suspension of the program was probably impossible; after 16 July 1945, non-use of the atomic bomb would have required unheard-of heroic moral courage. It was in 1942, in other words, that the temptation was embraced. During the following year, moreover, the Allies began greatly intensified bombing of Axis countries; Japan, in particular, was subjected to massively lethal incendiary campaigns aimed at the entire population, not explicit military capabilities.

The problem was that the Allies, at the core of their strategic planning, adopted a scheme to win the War by enormous random massacres. Among the non-ruling populations of Axis countries—particularly Japan—this was to undermine the moral legitimacy of the Allies and their postwar order. While the Japanese Militarist regime had committed crimes against humanity, peace, and the laws of war, the Allies committed crimes against the peace on three continents, crimes against humanity, and crimes against the laws of war.2 The Nuremberg Trial Proceedings defined war crimes to include "wanton destruction of cities, towns or villages, or devastation not justified by military necessity," which fed into the 1949 Geneva Conventions and, later, Protocol I. But the question of "military necessity" was an extremely dangerous loophole. The Allies, in particular, had designed their entire strategy—their calculus of "military necessity"—around the concept of remote bombing. Wanton destruction was intrinsically a military necessity. Its logical culmination was the nuclear bombing of Hiroshima and Nagasaki.

The very purpose of destroying entire cities to win wars is categorically immoral; the Allied powers agreed to this when they undertook the first wave of trials for crimes against the laws of war. They adopted the premise that this immorality ought to have been obvious to Axis military personnel, even though international law had not yet been promulgated to say so. There was never any risk of Japanese destruction of Usonian cities, so the Usonians could not plead that they were avenging or protecting their own cities; and even if they had been, the terms of the Convention were not to be waived even if the enemy broke them. Even if the bombing were needed to induce total Japanese surrender, it was a war crime.

The effect of the bombing was a disaster for human liberty and the moral standing of democracy. Its worst spiritual impact was on the Usonians themselves, who now accepted leaders who blithely proposed to use the atomic bomb in most conflicts to which the US was a party. The atomic bomb appears to have strongly influenced the ideology of the Cold War belligerents, not only in their view of the possibilities and dangers posed by warfare, but in the very ideologies themselves. For example, Usonian ideology morphed from an authentically conservative doctrine of caution, continuity, and particularism into something new and strange. Before, there was at least some serious regard for Mill-style liberalism, which was already understood to impose absolute limits on the portability of any new progressive social development. According to Tocqueville, the Usonian outlook was particularist insofar as Usonians regarded their historical legacy (short though it was) as rare, extraordinary, and endowing them with unique virtues, virtues that enabled democracy.

Now "democracy" was used as a symbol for something else; it referred not to any system of political rights-cum-electoral rule, but to the Usonian-led team. Hereafter, few would find it odd that Finland or India, with their authentically democratic institutions, were not in the "democratic" camp, but military juntas in Pakistan, Thailand, Greece, and Venezuela were. "Democracy" was now understood as instrumental to the survival and expansion of capitalism, because it was taken for granted that capitalism was uniquely suited to democracy, not because actual democracies favored capitalism. Added to this was the doctrine that unlimited force could be brought to bear to preserve capitalist political orders in any country of the world, regardless of their local unpopularity and their illegitimacy.

The role of the atomic/nuclear bomb in this ideological shift lay in its extraordinary harnessing of natural forces. Now political triumph had nothing to do with legitimacy and free will; it was entirely an industrial problem, solved by superior application of Taylorism and enlightened management techniques. Even though the atomic bomb would not be used again in warfare, Usonian leaders openly and soberly contemplated its use in every military action they took. The failure of bombing to actually influence social outcomes was always assumed to be the result of a failure to bomb adequately. So, for example, Laos was bombed more profusely than Japan and Germany (put together) were during WW2, despite having a far smaller population, area, and economy than those two countries. Yet the legend persists, amounting to conventional wisdom, that Laos fell to Communism because the United States military held back. Always at the back of the demagogues' minds was the failure to use nuclear bombs in each theater; this was always taken as evidence that the US had "fought with one hand tied behind its back."

In assuming that sufficient resolve could mold the political order of every nation on earth, the "democratic" ideals of the Usonian political establishment dispensed with any appeal whatever to free will or freedom. The only freedom that mattered, clearly, was free enterprise. The Manhattan Project was launched with a view to molding human will like a recalcitrant mountain top. What is astonishing is how its close cousin, the modern "conventional" bomb, failed utterly in its purpose. Instead of molding the Vietnamese, Lao, or Afghans, the atomic bomb molded Usonians. Our view of other communities became solipsistic; they could be made to actually be what our leaders wanted, if only we had sufficient resolve—where "resolve" was defined as indifference to suffering.

It would be nice to be able to speak of the atomic shadow on Usonian thought as something receding: a past historical anomaly, now yielding to a saner view of other humans. Surely, it would seem, exposure to flesh-and-blood Arabs or Southeast Asians would arouse our natural empathy and affection for them. I don't really see signs of this, however; I think we're still in the process of collectively forgetting that we once viewed the world as only partly susceptible to technological control. In our efforts to come to grips with the calamities wrought by excessive fossil fuel consumption, urban sprawl, and digital waste, we still see the problem as one requiring more ecological redemption to fix.

  1. Area bombing ("pattern bombing"), in which a number of clearly separated military objectives are treated as a single military objective amid a similar concentration of civilians or civilian objects, is a violation of Protocol I, Art. 51, Sec. 5a. However, Protocol I itself was not adopted until 1977; the Geneva Conventions of August 1949 first codified general treaty protections for civilians. Obviously, the "counterinsurgency" tactics used in the Second Indochina War (1955-1975) were profusely contrary to PI.51.

    Most rules governing the conduct of combat operations (e.g., the prohibition of chemical weapons) are to be found in the Hague Conventions of 1899 & 1907. The use of incendiary weapons against civilians was restricted by the Incendiary Weapons Convention of 1980.

    It is problematic to me that the use of area bombing became illegal shortly after World War II. It certainly ought to be illegal, and the government of the United States has violated Protocol I copiously since it was adopted (under the pretext that it was assisting in the suppression of an "insurgency" rather than conducting an actual war). One could grumble that the Allies (a) passed the 1949 Geneva Conventions after they themselves had committed the most egregious violations of their principles and (b) nonetheless prepared to commit still greater violations. The Japanese and the Third Reich waged war savagely, but they lacked the means to wage air campaigns comparable to those of the Allies, and did not make significant preparations to do so. Obviously, NATO and the Warsaw Pact accumulated stockpiles of nuclear weapons that would have made a complete mockery of Protocol I.

    The Allies could not have failed to know that they were encoding principles, not safeguards: had the Nazis made adequate preparations for air warfare, no protocol would have spared the Allies from marauding fleets of German superbombers; and the Allies knew they were mostly safe from Axis bombing after 1943 (after which Allied bombing of Japan and Germany was stepped up immensely). The Usonians and the British, in other words, were not really retaliating for Shanghai and Coventry; the Usonians were retaliating for an attack on a military target, while the British had actually initiated use of bombing campaigns against Germany, rather than vice versa.

    Moreover, the fact remains that morals change gradually. Clearly, if area bombing was outlawed in 1949, then the Allied powers must have known that it was wrong before that time. Unlike conventions passed prior to the War, the 1949 Geneva Conventions were binding on signatories regardless of compliance by the enemy. This implies that the Allies believed terror bombings were so immoral that no civilized belligerent could engage in them even if they were effective. In six years, the Allies and neutral powers had gone from massive use of area bombing to outlawing something that had been both their primary weapon in the previous conflict and their intended primary weapon in the next.
  2. See Nuremberg Trial Proceedings Vol. 1, "Charter of the International Military Tribunal" (Avalon Project, Yale Law School); the distinct categories "Crimes against Peace," "War Crimes," and "Crimes against Humanity" are defined in Article 6. The Axis Powers committed crimes against peace by invading countries in Europe and East Asia; the Allies had earlier waged aggressive wars against nations of South Asia, Africa, and Latin America (creation of French, British, and Belgian empires; US interventions in Latin America). The Axis Powers had committed crimes against the laws of war by murdering 12 million civilians in scores of countries, sacking all of Europe, and seizing hostages. The Allies had treated prisoners quite badly, such as forcing many thousands to clear mines, or being lackadaisical about providing for them. See Wikipedia, "Disarmed Enemy Forces"; see also modern literature on French, British, Usonian, and Netherlander treatment of combatants in wars of national liberation prior to WW2 (e.g., Sven Lindqvist and Joan Tate, Exterminate All the Brutes, New Press, 1997). Finally, while the Nazi Holocaust and enslavement of foreigners practically defined the crime against humanity, Allied powers such as the United States had a system of apartheid for African Usonians, sequestered Native Usonians on reservations, and allowed employers to maintain armed gangs to terrorize employees; the British Empire inflicted devastatingly awful conditions on African and South Asian subject peoples; and the Soviet Union, I presume, needs no introduction as an egregious violator of human rights.

Sources & Additional Reading

Avalon Project Listing of International Conventions on the Laws of War

Marilyn B. Young, excerpt, Bombing Civilians from the Twentieth to the Twenty-First Centuries, New Press (2009)


03 December 2008

Counting the Cost: the Financial Crisis

Disclaimer: I am not an expert in this field; these are my notes as I research these topics using the usual internet/public library resources. In many cases, links have been added to subsequent posts in this blog. Apologies in advance for any mistakes of interpretation.

How much will the financial crisis cost the US taxpayer? Most of the attention has focused on the Troubled Asset Relief Program (TARP), a $700 billion package initially designed to restore financial markets by buying up troubled assets. That's understandable, but it mainly reflects Congressional debate over a smallish share of the overall government response. According to Barry Ritholtz et al., that government response was predominantly administered by the Federal Reserve System, and is best measured by the size of the commitments involved: loans, guarantees, and capital stakes.

An additional component of the bailout, also far surpassing TARP, is outlays by the Federal Deposit Insurance Corporation (FDIC). The FDIC approved the Temporary Liquidity Guarantee Program (TLGP) in October; it provides a guarantee of non-interest-bearing deposits up to $250,000, instead of the usual $100,000.

TARP and the FHA "Hope for Homeowners" programs were designed mainly to inject a stream of payments into the huge pool of obligations taken on by federally guaranteed agencies.

Here follows a review of the items on Ritholtz's list.

Federal Reserve System

The Federal Reserve System took on potentially $5.8 trillion in liabilities in response to the initial wave of banking failures. Here are the programs it has created for coping with the catastrophe.

  • Commercial Paper Funding Facility: created 7 October 2008, shortly after a disastrous meltdown of the commercial paper markets (Bloomberg; see chart). Purchases newly issued 3-month unsecured and asset-backed CP from eligible issuers.
  • Term Auction Facility: created 12 December 2007. Loans of up to 90 days to depository institutions (thrifts, savings banks, credit unions) for emergency reserves, supplementing the usual interbank reserve lending. Not a permanent institution, but a series of auctions of funds held every two weeks. The banks bidding successfully must provide suitable securities as collateral.
  • Money Market Investor Funding Facility: created 21 October 2008 to provide liquidity to money market funds; the object is to prevent forced sales of assets into a falling market (debt deflation).
  • MBS Purchase Program: created November 2008 to buy mortgage-backed securities from FNMA and FHLMC (Fannie Mae and Freddie Mac, known collectively as the GSEs). This was intended to take the place of the suddenly defunct market for MBS collateralized debt obligations (CDOs). It is not the same thing as the project of re-absorbing the GSEs themselves. In terms of financial risk, this is probably qualitatively riskier than the other programs.
  • Term Securities Lending Facility (TSLF): created 11 March 2008 & renewed 3 December; lends Federal Reserve holdings of US Treasury securities to NY Fed primary dealers.
  • Term ABS Lending Facility (TALF): created 25 November 2008; loans to entities buying asset-backed securities (ABS); borrowers required to not be originators of the ABS.
  • Credit Extensions (mostly AIG): a large number of different interventions to salvage a network of CDS counterparty liabilities; includes at least $122 billion (apparently not the same money as the $150 billion of TARP funds authorized as of 9 November specifically for AIG). The most money ever directed by the USG to any single enterprise (NY Times).

Federal Deposit Insurance Corporation

The Temporary Liquidity Guarantee Program (TLGP) was rated as potentially costing $1.4 trillion, although it has cost nothing so far; it is an extremely new program, and we don't know how many banks are likely to default on their largest deposits.

The FDIC also provided Citigroup with $306 billion in loan guarantees (Bloomberg), with Citigroup required to absorb the first $29 billion in losses and 10% of losses after that, for a potential maximum liability to the FDIC of $249.3 billion. The FDIC provided a similar loan guarantee of $139 billion to General Electric (Bloomberg). Ritholtz lists the current amount for this line item as 100% of the maximum, which presumes an implausibly disastrous collapse of Citigroup and General Electric asset values.
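The arithmetic behind that maximum, with Citigroup absorbing the first $29 billion and 10% of everything beyond it:

$$(\$306\text{B} - \$29\text{B}) \times 0.90 = \$249.3\text{B}$$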

US Treasury

It may come as a surprise to learn that the Treasury Department's role in the federal government's response to the financial crisis was small potatoes. There are two main programs: the Troubled Asset Relief Program (TARP) and the GSE bailout (separate and distinct from the MBS purchase program).


This is where I was supposed to carefully review all of the programs and their interplay, and make some assessments. One meta-assessment: James Hamilton of Econbrowser (7 October) was unable to say, and he really is an expert (I'm just trying to learn about this). The reason is that the financial system poses certain metaphysical obstacles to consequential analysis: there's no agreement on what this relationship of debt to economic output really means.

Having examined several of the programs set up by the Federal Reserve and the Treasury, I think I can understand why the approach taken was so complex: each industry had to be treated differently. For example, while money market funds usually rely on commercial paper markets for investor returns, they have constraints and problems distinct enough to need a different approach. But the complicated arrangement of credit triage supplied by the Fed to its [still more] complicated patient means unpredictable results and unpredictable market response.

When I was studying theories of monetary policy, like the debate over "commitment versus discretion" (see Dotsey 2008), the papers and textbooks described a world of macroeconomic stability. A bad monetary policy might lead to a monotonic increase in bond yields, but it was a well-behaved catastrophe. I understood that this was an introduction to the idea, meant to help students understand the basic terms of the debate, but it did not occur to me that the real controversy would erupt during a multi-dimensional crisis in which many different financial markets were flying apart. Here there were no rules to commit to: the contribution of economic theory to debate in this crisis has been to grumble about moral hazard.

What we are really discussing here is something that economic theories do not describe in any meaningful sense of the word "describe"; it is more akin to a turbofan engine. Except that a turbofan engine was designed and produced by a single firm, and its operating principles are well known to its designers and mechanics. The financial system was designed by no single entity, and its relationship to the real economy is mired in doubt.

Having said that, it seems to me that a big part of the rescue program sketched above addresses multiple layers of liability. A homeowner borrows money from a mortgage originator, who sells the mortgage to an investment bank. The investment bank creates an SIV and sells the mortgage to it, in exchange for a stream of payments to depositors. The mortgage may be insured against default with a credit default swap, whose counterparty is a hedge fund. The hedge fund may hedge its CDS liabilities with put options on the same CDO, and the counterparty to the puts may be another investment bank that finances itself with commercial paper. That commercial paper, finally, ends up in the original homeowner's money market fund. The same liability is repackaged, leveraged, and perhaps even multiplied through options and credit default swaps.

The Fed's program, in effect, rescues each layer. Party A uses some federal funds to repay B, who uses that plus more federal funds to repay C, and so on. This increases the Fed's exposure but reduces the unit risk of that exposure.
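A toy sketch of the layering just described. The chain, parties, and notionals are all illustrative assumptions, not real balance sheets:

```python
# One mortgage, repackaged at each layer of the chain described above.
# Parties and notionals are illustrative assumptions.
mortgage = 200_000.0

chain = [  # (holder, instrument, notional as a multiple of the mortgage)
    ("originator",        "mortgage",             1.0),
    ("investment bank",   "MBS held in an SIV",   1.0),
    ("hedge fund",        "CDS written on CDO",   1.0),
    ("dealer bank",       "puts hedging the CDS", 1.0),
    ("money market fund", "commercial paper",     1.0),
]

gross = sum(mult * mortgage for _, _, mult in chain)
print(f"net underlying risk: ${mortgage:,.0f} (one house)")
print(f"gross exposure:      ${gross:,.0f} across {len(chain)} layers")
# A rescue that makes each layer whole must touch every link in the chain,
# which is why total federal exposure can dwarf the underlying losses.
```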

However, it seems clear that this post can never be more than a bookmark, which I will need to revisit as the situation unfolds.

Sources & Additional Reading

Barry Ritholtz, "Calculating the Total Bailout Costs" and "$7.8 Trillion Total Bailout Commitment," The Big Picture (November 2008)
Econbrowser (James Hamilton's blog)

Federal Reserve System sites
Marco Arnone & George Iden, "Primary Dealers in Government Securities: Policy Issues and Selected Countries' Experience," Working Paper, International Monetary Fund (March 2003)

"Financial Crisis – News and Resources" page at Morrison & Foerster, LLP website (outstanding!)

Andrew Ross Sorkin & Mary Williams Walsh, "AIG May Get More in Bailout," New York Times (9 Nov 2008)

Joe Weisenthal, "Explaining The AIG Black Hole," Business Insider (30 October 2008)

Neil Irwin, "Fed Prepared to Prop Up Money-Market Funds," Washington Post (22 October 2008)
