24 June 2006

Telecommunications Act of 2006 (Part 1)

(Table of Contents)

When legislators announce the need to revise regulatory legislation, it's vital to understand what they believe is wrong with the old legislation. For the last 28 years, such legislation has been reliably geared toward making it easier for firms to maintain a profitable relationship with customers while removing obligations to the community. Public statements by legislators have emphasized that media firms such as Time Warner, NBC Universal, and Viacom/CBS* have been pointlessly thwarted in their endeavor to acquire control of other media channels.

It's an astonishing thing to say, but the National Association of Broadcasters (NAB) claims that the 1996 Act did not go far enough in deregulating the industry. The basis is not, after all, so surprising: there are rival media formats (radio versus newspaper), and the newspapers enjoy a near-total immunity from antitrust regulation. The name of this legal concept is "regulatory parity," and it insists that existing regulations discriminate among various types of media. In fact, the notion that press, broadcast radio and TV, telephony, broadband, and cable TV are all so similar that they are entitled to identical regulation is a product of academic abstraction.
The simple underlying premise behind regulatory parity is that because telephone, cable, and broadcast companies are competing to provide consumers with communications products, they should be treated equally. Why should broadcasters suffer the burdens of license obligations? The cable companies don’t have this problem. Why should cable companies suffer the burdens of local franchise commitments? The broadcasters don’t have to report to local communities. Why should telephone companies have to share their infrastructure with competitors? The cable companies don’t have to. And, of course, what the industry means by regulatory parity is getting rid of rules it doesn’t like.

Any serious examination of the laws applied to telephone companies, broadcasters, and cable companies will reveal a complex set of rules arising from very different political and legal histories, different economic structures, different engineering challenges, and different jurisdictional relationships. As Sherille Ismail demonstrated in his article "Mapping Regulatory Treatment of Similar Services," Katz is not comparing apples to apples; there are so many important differences between broadcast, telephone, and cable companies that the simple notion of similarity quickly falls apart. Intelligent policymaking does not mean ignoring real distinctions and treating everything that seems similar as though it were the same.
The alleged danger, according to Katz, is that failure to achieve regulatory parity will result in regulation determining consumption patterns. The problem with Katz's reasoning is that he is arbitrarily applying an academic category--"media"--to what are, in reality, very different enterprises. The real goal of regulatory parity is to provide a legal foundation for challenging any regulation whatsoever.

Ironically, while media firms would avail themselves of a bizarre legal-cum-economic doctrine to win final and complete release from regulation, consumers would face profound restrictions on the technology available to them: satellite radio sets, for example, would have to be modified to prevent users from recording songs off the radio.

(part 2)
*Time Warner: formerly AOL Time Warner; merger in 2001 was a financial and technical debacle, resulting in one-year loss of $99 billion (2002-CNET).
NBC Universal: General Electric owns 80% of NBC Universal; Vivendi owns the rest.
Viacom/CBS: Viacom is the film-production and cable side of the old Viacom (before the late 2005 split); CBS Corporation owns the CBS and UPN broadcasting networks. Both Viacom and CBS are controlled by the National Amusements chain of movie theaters. I would argue that, following the business philosophy espoused by the financial press since around the 1970s, the breakup of Viacom into two firms controlled by the same holding company has enhanced the efficiency of both at dominating mindshare.
Sources for this post:
Wikipedia, "Telecommunications Act of 2005" [sic].

"The People's Guide" (download PDF), Center for Media and Democracy; via "Making Sense of the Telecommunications Act of 2006 -- Introducing "The People's Guide" (for You & Me)," saschameinrath.com

As of posting, this bill had not yet become law. Here was the draft (via Wiki). In Nov. '05, US Rep. Joe Barton, R-Texas, chairman of the House Energy and Commerce Committee, issued a statement outlining the motives of the overhaul.


23 June 2006

Telecommunications Act of 1996 (Part 2)

(Table of Contents--Part 1)

Common Cause has published a research report on the consequences of the Telecommunications Act of 1996 ("Unintended Consequences and Lessons Learned," PDF, by Celia Viggo Wexler), which I just now discovered. Hence, part 2. Common Cause is a very important organization that warns citizens about the hidden agendas of special interest groups in sweeping legislation. Here is its assessment:
Over 10 years, the legislation was supposed to save consumers $550 billion, including $333 billion in lower long-distance rates, $32 billion in lower local phone rates, and $78 billion in lower cable bills. But cable rates have surged by about 50 percent, and local phone rates went up more than 20 percent. Industries supporting the new legislation predicted it would add 1.5 million jobs and boost the economy by $2 trillion. By 2003, however, telecommunications companies’ market value had fallen by about $2 trillion, and they had shed half a million jobs.

And study after study has documented that profit-driven media conglomerates are investing less in news and information, and that local news in particular is failing to provide viewers with the information they need to participate in their democracy. Why did this happen? In some cases, industries agreed to the terms of the Act and then went to court to block them. By leaving regulatory discretion to the Federal Communications Commission, the Act gave the FCC the power to issue rules that often sabotaged the intent of Congress. Control of the House passed from Democrats to Republicans, more sympathetic to corporate arguments for deregulation.

And while corporate special interests all had a seat at the table when this bill was being negotiated, the public did not. Nor were average citizens even aware of this legislation’s great impact on how they got their entertainment and information, and whether it would foster or discourage diversity of viewpoints and a marketplace of ideas, crucial to democratic discourse.
Wexler points out that, prior to 1996, there was a 40-station limit on how many radio stations one firm could own. Within a few years, a new media giant arose from the wholesale consolidation of local radio: Clear Channel Communications, owner of 1200 radio stations and 40 TV stations.1 Viacom/CBS, Disney, News Corp., and General Electric now (along with CCC) dominate TV broadcasting.2 Wexler also alludes to consolidation in the telephony business, which is now dominated by 2 non-cellular phone companies and 3 cellular companies.3 Prohibitions on cross-media mergers, such as holding companies owning both newspapers and TV stations in the same regional markets, were abolished by the 1996 Act; this has led to a veritable information matrix, in which media firms use one outlet to promote products in another.

The industry's claims that consolidation would lead to increased profitability, and thence to increased growth and jobs, were patently absurd. At the back of this claim was the notion that different mass communications and entertainment media would challenge each other's markets, so that, for example, wireless broadband would force cable/DSL providers to innovate. The frenzy of change in the market would stimulate technology improvements and transform the industry. Not only did Congress profess to believe this, it readily trumpeted the industry's claims that such competition was on the brink of happening (so antitrust legislation was obsolete), and yet not happening fast enough (so America was losing an opportunity to develop its technological lead).

That competition did not occur was demonstrated by the fact that rates increased while services were curtailed. In many areas, such as where I live, it was a carefully cultivated myth that DSL competed with cable for broadband customers. Only one is usually available, and cable TV customers experience the same mythical competition with satellite. The industry restructured not to implement new technologies and services, but to withdraw them and gut local content from TV stations. Firms like Sinclair Broadcasting actually acquired 65 stations, only to turn them into latchkey stations that shut other entrants out of the market.

Finally, the US Congress abdicated its responsibility when it gave away the digital broadcasting spectrum to broadcasters, a $70 billion reward, in exchange for accelerated development of HDTV. Instead, broadcasters used it as cheap, high-throughput capacity for still more generic "content." Decades of FCC mandates on public service were abolished, leaving television broadcasters with zero responsibility to the public and zero accountability to the state.

In telephony, the scheme was to allow the Baby Bells to offer long distance outside their own zones immediately, while barring them from doing so within their zones until their local markets were open to competition. In fact, the Baby Bells rapidly merged into two national companies, while using the courts to block entrants into their own local markets.

Conclusion: the 1996 Act was passed in a spirit of giving technology firms free rein to compete and innovate. This ignored a century of antitrust legislation and court activity, and even the fundamental logic of market economics: that you cannot have competition without competitors. You cannot create jobs by allowing firms monopoly power, because they will consolidate operations and curtail customer service, while suppressing demand with increased prices. You cannot defend the commons by bestowing it upon the baron.

1 Clear Channel Communications: the fact that CCC owns 1200 radio stations massively understates its influence over the radio market. In addition to owning 19% of all radio stations in the USA, CCC owns Katz Media, which sells advertising on 2,000 other radio stations (another 32% of the national market) and 360 TV stations. I cannot overstate the importance of ad spot sales to a radio station. Additionally, CCC owns Clear Channel Outdoor Holdings, with 870,000 "display properties" worldwide. CCC wears its ideological heart on its sleeve: during the 2002-2003 build-up and launch of the Iraq invasion, CCC secretly financed pro-war (or, if you like, anti-liberal) rallies and then covered them on its media empire. CCC, which effectively owns the entire broadcasting infrastructure for country western formats, punished the Dixie Chicks for criticizing George W. Bush by banning their music. So did Cumulus Media, which owns or operates another 301 radio stations (5% of the national total); Cumulus also sponsored Dixie Chick CD-bashing rallies.

2 There are a total of 1366 commercial television stations in the USA (FCC). CBS owns 39 TV stations and 185 radio stations; News Corp. (owner of Fox) owns 60 TV stations; GE (owner of NBC) owns 28 TV stations; Disney (owner of ABC) owns 10 stations. Clear Channel Communications owns 40 TV stations and 19% of all US radio stations by number; its closest competitor, Cumulus, owns 400 radio stations (CorpWatch). This sums to 177 TV stations, but affiliate agreements cover the vast majority of the rest. For example, about 215 TV stations in the USA are affiliated with CBS News. Changing their format is very costly.

3 Verizon controls 27% of local phone services and 24% of long distance; AT&T (SBC & BellSouth) controls 34%/40% [*]. Verizon controls 30% of wireless revenue, Cingular 28%, and Sprint/Nextel 17%. My source on these changes, Oligopoly Watch, notes that the situation changes monthly. Most likely one of these three will presently acquire Alltel.


22 June 2006

Telecommunications Act of 1996 (Part 1)

(Table of Contents)

For many years, radio and television were governed by the Communications Act of 1934. During this time, repeated modifications and legal judgments altered the effect and purpose of the legislation. For example, in the early years of its operation, the FCC (a product of the act) was preoccupied with regulating the content, including the political content, of broadcast licensees. By 1996, the staff of the FCC devoted to content matters was as large as it had been in 1934, yet there were fifty times as many licensees.

For a discussion of the Telecommunications Act of 1996 (TCA-96), it is useful to look at the philosophy of the drafters. From the point of view of an admirer,
Economides: The 1996 Act envisions a network of interconnected networks that are composed of complementary components and generally provide both competing and complementary services. The 1996 Act uses both structural and behavioral instruments to accomplish its goals. The Act attempts to reduce regulatory barriers to entry and competition. It outlaws artificial barriers to entry in local exchange markets, in its attempt to accomplish the maximum possible competition. Moreover, it mandates interconnection of telecommunications networks, unbundling, non-discrimination, and cost-based pricing of leased parts of the network, so that competitors can enter easily and compete component by component as well as service by service.

The 1996 Act imposes conditions to ensure that de facto monopoly power is not exported to vertically-related (complementary) markets. Thus, the Act requires that competition be established in local markets before the incumbent local exchange carriers are allowed in long distance service.

The Act preserves subsidized local service to achieve "Universal Service," that is, the provision of basic local service to the widest possible number of customers. However, the Act imposes the requirement that subsidization is transparent and that subsidies are raised in a competitively neutral manner. Thus, the Act leads the way to the elimination of subsidization of Universal Service through the traditional method of high access charges.
This was the technology-of-regulation side of the legislation; content regulation was mostly covered under the Communications Decency Act (CDA), Title V of TCA-96. The CDA regulated internet content, which was problematic since the actual subjects of the regulation in this case would be internet service providers (ISPs) or web hosting services. The former are exempt from liability for indecent/obscene content from 3rd parties (e.g., if I post something vile at my other blog, and your child accesses it through AOL, AOL is not liable). Oddly, efforts to prosecute spammers (including obscene spammers) and phishers have not been consequential, although most spam originates from a small number of US-located servers. This seems like it would be an uncontroversial use of FCC resources. The CDA did give ISPs complete permission to block sites they deemed risky; an ISP with strong political motives could presumably have chosen to block domains associated with views the proprietor found objectionable. I am not aware of legal challenges to this. As for the rest of the CDA, its indecency provisions have been struck down. Actions against non-internet communications providers have been comparatively rare, despite the increasing tendency toward prurient advertising.

According to Economides, the TCA-96 responded successfully to a new dimension of the TCIT industry: greatly enhanced competition. For example, the networks were now on the ropes in their competition with cable; cable also competed with local carriers and wireless for broadband internet. The main industry opposition to the TCA would have been the incumbent local exchange carriers (ILECs), the services connecting the telephone user to the global network of telephony. This sector was exposed to new and direct competition from long-distance providers.

Among the most decisive changes wrought by the TCA-96 was a lifting of prohibitions on further consolidation of media firms, another concession to the putative reality of increased competition among media.

(part 2)

phishing: a scam that involves sending huge numbers of emails purporting to be from a financial services firm and requesting that recipients respond with financial information such as their account user name and password. Only a small percentage of those who receive such messages will even open them, let alone take the bait, but the amount lost to fraud is large (about $1 billion annually; Wiki). Not related to the 1980's band Phish.
Sources for this post:
Wikipedia, "Telecommunications Act of 1996"

"The Telecommunications Act of 1996 and its Impact," Nicholas Economides, September 1998

Oligopoly Watch, "Oligopoly brief: Clear Channel," July 12, 2003; "Industry brief: US phone industry (Part I)," December 13, 2003; "Phone concentration," August 03, 2004

Text of Law (1996)


21 June 2006

Communications Act of 1934

(Table of Contents)

The Communications Act was passed in 1934 to replace the Radio Act of 1927. As a result, the Federal Communications Commission replaced the FRC. As the new name suggests, the FCC differed from the FRC in that the new entity regulated both radio and wire communications. The Act naturally reflected the general tone of New Deal legislation; however, its progressive measures were fairly subtle. One was section 315, the "equal time" rule (later associated with the Fairness Doctrine), which was usually interpreted as compelling broadcasters to offer equal time to political candidates when differing points of view were broadcast. The Fairness Doctrine was an early casualty of the Reagan Administration, officially on the grounds that it actually promoted mediocrity and conformity. Another was a prohibition on commissioners serving as executives of telecommunications firms.

The 1934 Act was mainly a product of prolonged use and amendment. It was, for example, modified in 1959 to exempt news broadcasters from the "equal time" doctrine. In 1943, FCC action against network consolidation forced NBC to divest one of its networks, which became ABC. In 1970, the FCC adopted the Prime Time Access Rule (PTAR), which required networks to grant half an hour each night to local broadcasters. Local broadcasters merely used this windfall to broadcast reruns and game shows.

In regulating the telephone industry, the FCC was far weaker. According to the thesis of regulatory capture,* one would expect the FCC to fall only gradually under the control of the radio and television lobby, given the relatively large number of firms in that industry c. 1934; the telephone industry, in contrast, was already a monopoly in 1934, and there the FCC played a very weak role from the start. It issued an 8,000-page study of the industry establishing that tariffs were too high, but even its slight efforts to break up the vertical integration of the enterprise failed until the 1980's. As everyone knows, a mere quarter century later the monopoly is now a duopoly of AT&T (SBC) and Verizon (with Sprint a runner-up in wireless). This reflects the fact that the FCC's capture by the telephone industry faced no obstacle, because that industry had a single interest: to avoid regulation.

Several factors stimulated the end of the "1934 regime" in telecommunications: the near-extirpation of the old structure of the industry, with clear divisions between mass communications and telephony now gone. Another was the de facto merger of information technology (IT) with telecom (TC), creating a new and now-prestigious TCIT sector. Another, evidently, was that the process of regulatory capture had so totally run its course that Congress was now infected.
* Regulatory capture: firms tend to win control over the government bodies intended to regulate them. Hence, the FCC will gradually become a handmaid of the media monopolies. The idea was anticipated by Adam Smith and later developed by historian Gabriel Kolko. A really good summary of the matter is found at Writer of Fortune:
"Regulatory capture" is the name Kolko and others applied to a particular phenomenon: when regulators serve the interests of those they're allegedly regulating in the general public interest. It was known before Kolko's work, but regarded as a dysfunctional aberration that sound policy reliably enforced could take care of. Kolko put the heyday of Progressive regulation under close scrutiny and argued that in fact regulatory capture wasn't just common, it was the norm. He found no important exception to it emerging, and usually emerging very early on in the history of a regulatory agency. As the phrase "triumph of conservatism" suggests, Kolko argued that whatever liberal reformers may have intended and whatever the public may have believed, business interests took control of the actual regulatory process early on and made it work for them.

The basic mechanics of regulatory capture are straightforward. You give more attention to a particular law or agency if you feel that you have something at stake - you're more likely to know about the laws and policies that affect your work, your hobbies, and issues of particular concern to you. And if you're someone important in a business that's about to come under regulation, you have a lot at stake.

Regulators may start off hostile to their subjects, and in some cases this is very much deserved. Libertarians may grouse about, for instance, government imposition of standards for food safety, but even setting Progressive rhetoric aside, the Pure Food and Drug Act came into being in response to real concerns that business was not addressing. Whether it might have addressed them in time is another matter, and one has to be fairly detached to say that people should have waited patiently in the face of diseased meat, food contaminated by offal, bugs, and anything else that fell into the vats, and so on. And freshly regulated businesses often start off hostile to their regulators - also often with good reason, since a lot of regulators start off with a "damn them all, and hang them high" attitude.

Sources for this post: Wikipedia: "Communications Act of 1934"

"The Communications Act of 1934 was a Mistake," unsigned, The Ethical Spectacle

"The Public Interest Standard: the Elusive Search for the Holy Grail" Erwin G. Krasnow, 1997

"Radio and Television"; "The Telephone Industry"; Prof. Marc A. Triebwasser, 1998; Internet & Multimedia Studies, Central Connecticut University, New Britain, CT

Courts of Specialized Jurisdiction

Text of the Act (1934)


19 June 2006

Radio Act of 1927

(Table of Contents)

Prior to 1927, radio communications were regulated by spotty and ineffectual legislation, mainly by the Department of Commerce. Legislation of the day made it virtually impossible to deny a broadcasting license, which led to competing radio stations effectively jamming each other's broadcasts. Hence, a primary purpose of the Act was to regulate transmission frequencies, power, and purity (fidelity) of the broadcasts.

According to one analyst (Goodman, 1999), one of the important motives of the Act was to modulate the flow of ideas on the radio. The language of the bill protects free speech, but this meant only that indirect methods would be needed to resist the menace posed by political radicalism. At the time, the Progressive Movement was fading into a narrowly populist, bureaucratizing force in American life. It had split into factions, with White Protestants determined to restrict the political options of the comparatively new population of Southern/Central European Catholics and Jews. On the other side of the [old] political divide were industry advocates, who wished to use the regulation to build up a lucrative cluster of firms with the financial incentives to invest in new technology. Attempting to bridge this gap was then-Secretary of Commerce Herbert Hoover, who was (by the standards of the time) fairly close to the later Progressives in spirit.*

The legal strategy employed by Hoover and the authors of the 1927 Act was to declare that radio was a peculiar variety of public utility insofar as it entered the domain of the household and hence imposed on the listener. The right of the listener to be protected from obscenity was then taken to extend not only to profane speech, but to speech that challenged racial segregation. The Blease Amendment, which proposed to ban mention of evolution on the radio, was mercifully rejected; but Sen. Clarence Dill (D-WA) and Rep. Wallace White (R-ME), the experts responsible for crafting the bill and guiding it through Congress, reassured Congress by bluntly explaining that radio was not to be a forum for public debate. Communication via radio was declared a privilege requiring access to public property (the airwaves); hence, broadcasters obtained the right of access to the airwaves by serving the public interest. In testimony before Congress, White (the legal expert in the bill's drafting) assured Sen. Mayfield that the language of the bill would allow the exclusion of "fringe candidates" from "legal qualification" for participation in national debates. This was intended to prevent individuals from using radio as the foundation of a political power base.**

SCOTUS had already ruled that federal regulators had the power to restrict free speech if it violated the national interest:
Congress' limited concept of free speech was consistent with a decade of U.S. Supreme Court rulings, including opinions written by Justices Oliver Wendell Holmes and Louis Brandeis, two Progressives on the Court. Beginning with Schenck v. United States (39 S.Ct. 247) and Frohwerk v. United States (39 S.Ct. 249) and then in Debs v. United States (249 Federal 211) and Abrams v. U.S. (40 S.Ct. 17), the Supreme Court in 1919 upheld the constitutionality of the Espionage Act as a tool to quiet discontent against the U.S. effort in World War I. Such speech was illegal because its intent was to obstruct the draft and the war effort, Holmes argued in Schenck, Frohwerk, and Debs. The decision in Frohwerk shows the limits that the Supreme Court was willing to place on free speech, even if the danger created by free speech was only possibly a threat to the U.S. Holmes agreed the German-language newspaper that Jacob Frohwerk wrote for had made no special effort to reach draftees. He noted its circulation was so small that the paper had no means to obstruct recruiting. Holmes warned, however, that the paper represented a little breath that could "kindle a flame" in the "tinder box" of the German community, and, therefore, Frohwerk's writings were a threat to national security.
Speech believed to imply an incitement to revolutionary action was also regarded as overstepping the First Amendment guarantee. The conception that radical speech was an actionable violation of "the public interest, convenience, and necessity," was based on Progressive Movement conceptions of the latter:
Godfrey looked in his dissertation at Progressive influence on the Radio Act. He makes particular note of the Progressive roots of Sen. Clarence Dill, one of the architects of the Radio Act, and his identification with Progressive senators James Watson, William Borah, Robert La Follette, Hiram Johnson, and Burton Wheeler. According to Godfrey, one of the areas of Progressive influence was in the selection of the language "public interest, convenience, and necessity." These words were a way of balancing industrial control of radio against the potential for government censorship. To Godfrey, "The founders of the legislation sought to provide a degree of regulation that would preserve industrial freedom and the public interest."
Additionally, the independent commission (one sixth of the membership of which was appointed each year) was a format associated with Populist/Reformist legislation of the time. It was reflected, for example, in the design of the Federal Reserve Board.

Krasnow (1997) suggests that the main effect of the 1927 Act was felt in implementation. The FRC ruled that station programming was a crucial factor in establishing whether the station served "the public interest, convenience, or necessity."
In 1930, the FRC denied renewal of the license of KFKB, Milford, Kansas, on the ground that the station was being controlled and used by Dr. John Brinkley, the "goat-gland doctor," to further his personal interest. The "Medical Question Box," a program aired in three half-hour segments daily, featured Dr. Brinkley answering questions from listeners on health and medicine. In response to listener questions, Dr. Brinkley usually recommended several of his own prescriptions from his own pharmaceutical supply house. The FRC held that Brinkley's practice of diagnosing patients who he had not seen contravened the public health and safety and therefore, the public interest. It also found that he operated KFKB solely for his own private interests.
It's difficult to find fault with this decision. However, a disturbing aspect was the FRC's statement in its ruling that rejecting a renewal after the fact, i.e., because of the appellant's conduct before renewal, was prima facie not censorship.
The FRC also denied the application for renewal of the license of KGEF, Los Angeles, a station used primarily to broadcast sermons by Trinity Methodist Church's pastor Reverend Bob ("Fighting Bob") Shuler, who attacked Jews, the Roman Catholic Church, law enforcement officials in Los Angeles, and many others. Shuler based his appeal on constitutional grounds, namely, that the FRC decision violated his First Amendment right to free speech and his Fifth Amendment right to due process of law.

One of the issues before the Court of Appeals was whether the FRC's refusal to renew Shuler's license was a prior restraint under the then recently decided case of Near v. Minnesota, 283 U.S. 697 (1931), or just a post-publication punishment. The Court of Appeals concluded that while a citizen has in the first instance the right to utter or publish his sentiments, it is "upon condition that he is responsible for any abuse of the right."
Hence, the courts confirmed the FRC's authority to use its discretion in regard to programming.

Content-related issues dominated the regulatory action of the FCC; many educational broadcasters were forced off the air or compelled to shift to such low power and restricted schedules as to make them useless for adult education. This would appear to have reflected the franchise interests of the "successful elements of the industry" that Pres. Coolidge had advised Hoover to turn to for advice on how to regulate communications.
* Readers unfamiliar with American politics need to understand that, beginning in the late 19th century, a conflict raged among Progressives, social democrats, and the hard left (Communists and Anarchists). The Progressives favored clean government and small enterprise; they usually tended to oppose social welfare legislation. Progressive initiatives sought to make capitalism work by preventing the growth of monopoly and by facilitating efficient public services. The Progressives eventually merged with the Republican and Democratic Parties (primarily the latter). The social democrats (a generic term of political art) included Eugene Debs' Socialist Party; years later, it would include the political wing of the AFL-CIO and Max Shachtman's SWP. The hard left favored a revolutionary overthrow of capitalism, and included the Communist and Anarchist Parties; a part of the hard left was the Industrial Workers of the World (IWW), whose members' lives were ruined by severe persecution, including lynchings and deportations.

The socialist-communist split in Western Europe and North America was supposed to have been bridged in the mid-1960s with the "New Left" seeking to create a united front against US foreign policy and racism. This was not terribly successful in North America; the hard left has, if anything, become mainly obsessed with its loathing of the social democratic tendencies of American politics (AKA "liberals"), while liberals sometimes attempt to throw the hard left to the wolves. Examples include the AFL-CIO's eagerness to purge Communists from its own ranks, and its collusion with the US State Department in purging Communists from non-US labor unions that it advised.

**In defense of White, et al., it must be noted that in Germany a virulent demagogue named Adolf Hitler had done precisely that. For contemporary attitudes about the demagogic powers of the airwaves, I recommend the George Cukor movie Keeper of the Flame, starring Spencer Tracy and Katharine Hepburn (1942).
Sources for this post: Wikipedia: "Federal Radio Commission"

Text of the Radio Act of 1927 (PDF)

"The Radio Act of 1927 as a Product of Progressivism," Mark Goodman, Media History Monographs, 1999

"The Public Interest Standard: The Elusive Search for the Holy Grail," Erwin G. Krasnow, 1997


Communications Law: USA

List of Entries

  1. Radio Act of 1927

  2. Communications Act of 1934

  3. Telecommunications Act of 1996 (Part 1)

  4. Telecommunications Act of 1996 (Part 2)

  5. Telecommunications Act of 2006 (Part 1)

  6. Telecommunications Act of 2006 (Part 2)


11 June 2006

Databases for Laypeople: Part 6 in a series

The Frontend
(Table of Contents--Part 5)

The component of the DBMS devoted to the user interface is known as the frontend. In a CMS, the frontend is known as the content delivery application (CDA). Both have closely related functions. The more formal, proper term is "client," which dates from the days when databases resided on a remote computer. The distinction is that, today, in a web-delivered DBMS, the client often resides with the server on the same host.

The client/frontend/CDA, then, is responsible for an immense proportion of the database's usefulness. It is responsible for managing queries, which the user enters in a form of some kind (e.g., entering a search for "database" in Wikipedia; entering an Amazon search for used DVDs titled Solaris and directed by Tarkovsky), and for replying to the query with data exported from the backend in a readable form.
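As a sketch of this division of labor, here is a toy frontend and backend using Python's sqlite3 module. Everything here (the table, its columns, the titles and prices) is invented for illustration:

```python
import sqlite3

# Backend: the data store itself (here, an in-memory SQLite database).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dvds (title TEXT, director TEXT, price REAL)")
conn.executemany("INSERT INTO dvds VALUES (?, ?, ?)",
                 [("Solaris", "Tarkovsky", 12.50),
                  ("Stalker", "Tarkovsky", 9.99),
                  ("Solaris", "Soderbergh", 4.25)])

# Frontend: collects the user's search terms (say, from a web form)
# and translates them into a query against the backend.
def search(title, director):
    rows = conn.execute(
        "SELECT title, director, price FROM dvds "
        "WHERE title = ? AND director = ?", (title, director))
    return rows.fetchall()

print(search("Solaris", "Tarkovsky"))
```

The user never sees the query language; the frontend turns the form fields into a query and hands the results back in a readable form.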

While the backend includes features to protect the security of data transferred between it and the frontend, the frontend is responsible for making sure data is both accessible and protected. An obvious and familiar example is the rather large CMS known as Blogger. As the author of a website on Blogger, I have to log in before I can add, edit, or delete entries; change the template; or give permission to friends to edit this particular blog. The reader can access the website, but may not edit entries. Blogger, my gracious host, may suspend my account if I use it to post pornography or participate in racketeering. The Blogger site requires a very complex, high-capacity CDA to allow millions of visitors access to hundreds of thousands of blogs; that CDA must also discriminate among bloggers editing their own sites, Blogger account-holders leaving comments at sites not their own, and readers with no Blogger account, who may not leave comments.*

I use CMS applications as examples, although readers should understand they're used mainly to introduce some ideas. Past a certain point, however, the analogy has to break down; there's really not much resemblance between SQL, for example, and most CMS applications. I hope you'll keep in mind the more literal case of a local area network (LAN) in a workplace, with the database not available to the Internet.

*Some sites allow anonymous commenting; Blogger allows bloggers to turn that feature on and off.
SOURCES & ADDITIONAL READING: "Your First Database," Webmonkey;

C.J. Date, An Introduction to Database Systems, 7th Edition, 2000; in Date's book, "backend" refers to the server, while "frontend" refers to the client. The book tends to deal with the formal, abstract side of the DBMS topic.


08 June 2006

Databases for Laypeople: Part 5 in a series

The Backend
(Table of Contents--Part 4)

The component of the DBMS that manages the stored data is referred to generically as the backend. In a CMS, the backend is known as the content management application (CMA). The two concepts are closely related. The backend is responsible for allowing orderly import and export of data; it must permit restructuring of the data into new structures as the database grows in complexity; and it must provide for security of access. Likewise, the CMA must be able to pull up content in response to queries, and it must be able to address a large and diversely formatted pool of data. Sites that are maintained for a long time and updated regularly, such as online news sites, typically have archives that grow very fast.
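Those backend duties can be sketched in miniature with SQLite (every name and record below is invented): a batch import, a restructuring as the archive grows, and an export:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (headline TEXT, body TEXT)")

# Orderly import: load a batch of records in a single transaction.
with conn:
    conn.executemany("INSERT INTO articles VALUES (?, ?)",
                     [("Dam breaks", "..."), ("Election held", "...")])

# Restructuring as the archive grows: add a new field without
# disturbing the existing data.
conn.execute("ALTER TABLE articles ADD COLUMN published TEXT")

# Orderly export: pull the data back out in a defined order.
rows = conn.execute("SELECT headline FROM articles ORDER BY headline").fetchall()
print(rows)
```

A real news archive would of course involve far more structure, but the three responsibilities are the same ones at any scale.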

The more formal, proper term is "server," which dates from the days when databases resided on a remote computer. The distinction is that, today, in a web-delivered DBMS, the server is likely to share space on the host with the client (or frontend).

(Part 6)

SOURCES & ADDITIONAL READING: "Your First Database," Webmonkey; C.J. Date, An Introduction to Database Systems, 7th Edition, 2000


07 June 2006

Databases for Laypeople: Part 4 in a series

Hypertext and the Relational Database
(Table of Contents--Part 3)

It's somewhat unorthodox to introduce the relational database as a form of hypertext.* But consider the parallels: in order to allow a user to access data in a relational database, the RDBMS designates key fields for each record, which also serve as referring fields someplace else. In each hypertext document, part of the document is its address (analogous to the key field), which will appear in the referring hypertext document as part of a hotlink.
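In SQL terms, the parallel looks like this. In the hypothetical sketch below (all names invented), `ministers.id` plays the role of the page's address (the key field), and `terms.minister_id` plays the role of the hotlink (the referring field):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ministers (id INTEGER PRIMARY KEY, name TEXT);
-- 'ministers.id' is the key field, like a page's address;
-- 'terms.minister_id' is the referring field, like a hotlink to it.
CREATE TABLE terms (year INTEGER, minister_id INTEGER REFERENCES ministers(id));
INSERT INTO ministers VALUES (1, 'Example Minister');
INSERT INTO terms VALUES (1999, 1);
""")

# Following the "link": join the referring field to the key field.
row = conn.execute("""
    SELECT ministers.name, terms.year
    FROM terms JOIN ministers ON terms.minister_id = ministers.id
""").fetchone()
print(row)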

The structure of information in a relational database may be understood, for the time being, as if it were a page in Wikipedia. Each record may be compared to a Wikipedia entry, with some of the fields (like words in the article) linking to other records in other tables. This analogy is not perfect; for example, an article about prime ministers of Peru will link to other "records," or articles, about other prime ministers of Peru. In contrast, the object of a relation between two different fields is to link two different types of record--records kept in different tables. That's a flaw in the analogy, and in fact in cases where data records are used professionally--like patents or legal rulings--the hypertext online references do employ distinct tables for structurally different forms of data.

Search portal for the US Patent Office
In order to overcome this flaw in the analogy, we need to imagine something more professionally focused, but still easy to intuit: a hyperlinking, searchable, US Patent Office-style web page. Like a wiki, it is edited constantly, in this case by users/editors who have applied for a patent, or who are responsible for authorizing those patents. Unlike the most famous wiki, viz., Wikipedia, the records are fundamentally different in character and are stored in different "tables" depending on whether they are Utility (i.e., a new technical innovation that fulfills an identified purpose), Design (i.e., just a refinement of applied art), Plant (e.g., a flower), Reissue, Defensive Publication, or something else. Like many multi-user databases, this one allows different users different types of permission: everyone may read Wikipedia or the patent data available above, but only a small number may edit the information. Applicants may edit their application; the USPO staff may edit the status of the application, based on office rulings.

Finally, as with all wikis, a database is edited by many people concurrently, who may sometimes have updates that conflict directly with each other. In some cases, this can have disastrous consequences!

In a database, a record with exactly one related record in another table is pretty rare. One example is the office where each employee has one and only one computer, which is associated strictly with that employee. In database design, the possible exceptions to this have to be considered: a computer used by potentially several different employees, or an employee who needs access to several workstations in the office, will make a one-to-one relation impossible for that particular pair of tables. On the other hand, if a true one-to-one relation does exist, then the fields for "computer" are likely to be folded into the employee record.

Typically we think of the page that links to many other pages as being the home page, or site-map. For example, imagine a simple web page with a list of links to articles (example). In database design, the relationship might involve contacts for one's vendors: our imaginary event-arranging company might have half a dozen contacts at Aramark, but each contact will be for only one firm. Aramark has many employees, but each employee works only for one firm.

Much of the time, one encounters "promiscuous" relations: again, with our imaginary event-management company, clients and events can have a many-to-many relationship as well. For example, a book show might be held several times each year. Each event features scores of firms selling books. For purposes of billing, the event-management company has to be able to locate information about each client (or each vendor) related to each event, and for planning each event the staff needs to access information about each event related to each client.

A chart showing the relationship between the entities "Clients" and "Events" might look like this:
The problem of relating an unknowable number of clients to a corresponding, but unknowable, number of events is solved by having each client on the client table linked to a client:event table. In this way, the "real" master table includes only the minimal information about each client:event record.
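That client:event arrangement can be sketched directly. A hypothetical example (firm and event names invented), again using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE events  (id INTEGER PRIMARY KEY, name TEXT);
-- The client:event table: one row per (client, event) pairing.
CREATE TABLE client_event (client_id INTEGER, event_id INTEGER);
INSERT INTO clients VALUES (1, 'Acme Books'), (2, 'Beta Press');
INSERT INTO events  VALUES (10, 'Spring Book Show'), (11, 'Fall Book Show');
INSERT INTO client_event VALUES (1, 10), (2, 10), (1, 11);
""")

# Billing question: which clients participated in the Spring Book Show?
rows = conn.execute("""
    SELECT clients.name FROM client_event
    JOIN clients ON clients.id = client_event.client_id
    WHERE client_event.event_id = 10 ORDER BY clients.name
""").fetchall()
print(rows)
```

The same small table answers the planning question in reverse (all events for a given client), simply by filtering on `client_id` instead of `event_id`.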

(Part 5)
* Somewhat unorthodox--but not unheard of. See "From Database to Web Site: Transforming a PC Relational Database to a World Wide Web Resource," by Jane A. Keefer, West Chester University:
Once the discographic file name parameters are established, active hypertext linking is available between all performers and titles simply by using the primary key for each record as the value of the HTML Anchor Name tag. Since each primary key is unique, the correct reference is assured and this device illustrates the functionality of the relational file structure as an early form of hypertext technology, much as the card catalog with its main and added entries can now be seen as a crude effort at relational data structuring.
The blog and the wiki are both examples of CGI-administered databases.


06 June 2006

Databases for Laypeople: Part 3 in a series

Why Relational?
(Table of Contents--Part 2)

In order to organize data in a way that it can be arranged freely by the computer user, programmers developed the relational DBMS (RDBMS). In an RDBMS, the data is organized into tables based on the type of information the data represents. For example, suppose you have a company that arranges events like weddings, corporate picnics, or fundraisers. Your company will have a set of customers, some employees, some suppliers, and some contractors. In a non-relational database, you need to have a table, rather like an Excel spreadsheet, with a row for each record (or example of whatever is on the list) and a column for each field (e.g., name, phone number). In addition to all the contacts, you will want to have a table for events. The events, of course, will have one or more customers who paid for them, plus one or more contractors (caterers, florists, et al.). You might have another table that says which employee is working what event, and some tracking of payments to suppliers for what event.

If you want to look up the names of contractors for an event--say, Rosa Luxembourg's wedding--then you want to have a table that includes the names of events, plus three or four fields for possible contractors who might serve at the event. If you want to call those vendors and tell them Rosa is getting married a week later than initially planned, you have to open another table for the vendors--unless, naturally, your master event list is so immense it can include all the vendors' phone numbers as well. Supposing you need to include a cell phone number and an office number for each vendor--which seems likely--it's possible a change in the area code would leave you frantically updating many different tables. The master event table would have a stupendous number of fields, causing it to require a long time to open or search, and the information might be wrong.
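The area-code headache is exactly what a relational design avoids: store each vendor's phone number once, in a vendor table, and let every event refer to it. A sketch with invented names and numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vendors (id INTEGER PRIMARY KEY, name TEXT, phone TEXT);
CREATE TABLE events  (name TEXT, vendor_id INTEGER REFERENCES vendors(id));
INSERT INTO vendors VALUES (1, 'Flowers-R-Us', '213-555-0100');
INSERT INTO events VALUES ('Luxembourg wedding', 1), ('Company picnic', 1);
""")

# The area code changes: one update, and every event that refers
# to this vendor sees the new number.
conn.execute("UPDATE vendors SET phone = '323-555-0100' WHERE id = 1")
rows = conn.execute("""
    SELECT events.name, vendors.phone
    FROM events JOIN vendors ON events.vendor_id = vendors.id
    ORDER BY events.name
""").fetchall()
print(rows)
```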

Here, a hypothetical listing of vendors for our hypothetical firm.

For that reason, you would want a DBMS that simply pulls up the data you request, and displays it all conveniently. That's what an RDBMS does.

But how does an RDBMS work?

At the core of the RDBMS is the concept of hypertext. Hypertext is the technology of organizing text through links, like the World Wide Web (WWW). If you start off researching the concept of a Riemann surface in Wikipedia, for example, you will find an entry like this:
In mathematics, particularly in complex analysis, a Riemann surface, named after Bernhard Riemann, is a one-dimensional complex manifold. Riemann surfaces can be thought of as "deformed versions" of the complex plane: locally near every point they look like patches of the complex plane, but the global topology can be quite different. For example, they can look like a sphere or a torus or a couple of sheets glued together.

The main point of Riemann surfaces is that holomorphic functions may be defined between them. Riemann surfaces are nowadays considered the natural setting for studying the global behavior of these functions, especially multi-valued functions such as the square root or the logarithm.

Every Riemann surface is a two-dimensional real analytic manifold (i.e., a surface), but it contains more structure (specifically a complex structure) which is needed for the unambiguous definition of holomorphic functions. A two-dimensional real manifold can be turned into a Riemann surface (usually in several inequivalent ways) if and only if it is orientable. So the sphere and torus admit complex structures, but the Möbius strip, Klein bottle and projective plane do not.

Each bit of blue underlined text links, obviously, to another article in Wikipedia. The relational database works the same way. Rather than actually have all of the data, such as detailed information about each vendor, event, employee, supplier, and customer appear on some colossal event table, many of the fields for each event are linked to a corresponding record on another table. The vendor field will be linked to a record in the vendor table; the customer field will be linked to a record in the customer table.

Likewise, just as our example Wikipedia entry on Riemann surfaces links to complex manifold, that entry links back to Riemann surface. In our database example, this would include the link from the event record to the customer; say, the event is a convention of textbook publishers, so the customers include several publishers. Each publisher, in turn, needs to be linked to the events it has participated in. In my next entry I'll discuss the way relational databases handle this.

(Part 4)


03 June 2006

David Friedman on Japan's Economic Miracle

      The Misunderstood Miracle:
      Industrial Development and Political Change in Japan
      David Friedman, Cornell University, 1988

Generally speaking, explanations of Japan's extraordinary economic performance (1955-1975) fall into two categories. One is the market regulation hypothesis, while the other is the bureaucratic regulation hypothesis.1 According to the first, the industrial "take-off" of Japan was the result of sound, free market policies. According to the second, Japan's successes reflect the wisdom of a symbiotic relationship uniting state bureaucracy, industrial management, and finance.

(A third common explanation, common after WW2, was that the Japanese had certain ethnographic traits that were ideal for industrial efficiency. Social scientists no longer take these ideas seriously.)

Friedman quotes from Johnson frequently; in part this is because, while he is critical of Johnson's conclusions, he is influenced by his methods and body of evidence on MITI. Johnson characterized postwar Japan as "a developmental state," in which the state and polity are primarily absorbed in the task of development. In Japan, it is generally accepted that the leadership was concerned heavily with economic strength, and suffused the whole of society with its goals. Since the object was to catch up to other nations, like the United States, on certain measurable accomplishments, a market-planning system was not entirely necessary; for one thing, much of the price mechanism was determined by Japan's open economy.

Friedman examines the actual record of intervention by the Japanese bureaucracy, beginning with the prewar version of MITI, MCI.

In the late 1920's several measures were taken to stimulate industry. The country was known to suffer from military vulnerabilities, and the scope of the country's industrial output was alarmingly small. In 1929, the Ministry of Commerce and Industry was founded; it would be revived after WW2 as MITI (1949), with similar policies. With the passage of the Major Industries Control Law (1931), MCI staff tried to induce the industrial firms to merge. It was widely assumed at the time that the future was in mass production of homogenous goods using large, costly, specialized machinery.

To put matters simply, Friedman describes the endeavor as a failure. He provides an exhaustive review of the machine tool industry, with all of the legislation and all of the industrial responses. In no case did the MCI policy work as planned. In each case, the bureaucracy was determined to cause mergers and acquisitions. The bureaucracy was determined to get firms to select a product and focus on that. The bureaucracy was determined to restrict market entry to prevent cutthroat competition in which firms sold at below cost. At no point did any of these efforts succeed.

For example, by 1941, the Japanese authorities—in essence, the military ideologues—were desperate to improve the ability of domestic industry to produce heavy equipment. Their vision was US-style Fordism in trucks, tanks and armored vehicles, artillery, and shipping.2 To this end, the government instituted the töseikai (control councils) as mandatory "cartels." The government set objectives, and the töseikai enforced those objectives. Please recall that Japan already had a significant share of enterprise under the control of zaibatsu, or family-dominated conglomerates. The zaibatsu were blamed by the militarists for having managed to avoid building up heavy industry during the earlier years of Japan's industrialization, preferring instead either to import heavy machinery from Europe or North America, or else to channel investment into light manufacturing and industrial services. Organizationally, they consisted of great hierarchies of firms, with elaborate holding arrangements culminating in the family trading company (sögö shösha). The zaibatsu, in other words, were imagined to have created these giant cartels with government assistance (early in the Meiji Restoration) and squandered their market power on short-term profit. Yet the töseikai merely detached the lower-tier zaibatsu subsidiaries. Otherwise, they were entirely thwarted by the existence of several powerful business associations formed by the industrial managers of small and mid-sized firms.

Consequently, there was little correlation between any of the military's wartime objectives and actual execution. And Japan was, indeed, a totalitarian society during the period 1941-1945!

Friedman includes a case study of regional contributions to the success of the small, flexible manufacturing firms. In particular, he examines the case of Sakaki, a small town in the highlands of Nagano Prefecture. Sakaki is actually one of those famous success stories that one occasionally hears about; while visually not as prepossessing as either the anime character of the same name, or bullet trains whizzing past Mount Fuji, it is a very durable and admirable achievement.3 The town is filled with literally hundreds of the smallest possible industrial firms, some with as few as three employees, yet these firms possess the highest levels of technical sophistication anywhere. Friedman explains the role of the shökökai (chambers of commerce), which have been responsible for securing government loans for fledgling manufacturers, and for insuring those firms against business downturns.

One of the more interesting conclusions that one can draw from Friedman's study is that Japan's industrial successes (closer in character to those of Taiwan and the Republic of Korea than to the US model) were the result of unintended, but nevertheless benign, consequences of voluntary industrial cooperation (within regions) and MITI's persistent efforts to "rationalize" industry. Japan has succeeded by learning to imitate a school of sardines rather than a whale.

(Cross-posted at Hobson's Choice)
NOTES: 1 Market regulation hypothesis: David Friedman's terminology. Proponents of this (list furnished by Friedman) include William W. Lockwood, The Economic Development of Japan (Princeton Univ., 1954); Elizabeth Schumpeter and G.C. Allen, The Industrialization of Japan and Manchukuo (Macmillan, 1940); and more recently, Gary Saxonhouse and Kozo Yamamura, Law and Trade Issues of the American Economy (Univ. of Washington Press, 1986).

Bureaucratic regulation hypothesis: the main proponent of this is Chalmers Johnson, in MITI and the Japanese Miracle (Stanford Univ., 1982). Another version of the theory appears in T.J. Pempel's "The Bureaucratization of Policy Making in Postwar Japan," American Journal of Political Science, 1974. This argues that the Japanese bureaucracy used cartels as the key instrument of industrial policy, thereby leading industry to rationalize production.

Finally: to the best of my knowledge, author David Friedman is not the same person as David D. Friedman, the anarcho-capitalist.

2 It's interesting to note that, according to Clive Ponting (Armageddon, Random House 1996), Allied shipping losses to Axis submarines were around 6 million tonnes, whereas Japanese shipping losses to Allied subs were around 8 million tonnes. The OKW initially expected their sinking of Allied ships would exhaust the ability of US and British shipyards to keep up. Obviously, the comparative burden on Japan was much greater.

3 For those interested in how this phenomenon has played out in other locales, Mark Lazerson has published a study "A new phoenix?: modern putting-out in the Modena knitwear industry" (1995).


02 June 2006

Databases for Laypeople: Part 2 in a series

(Table of Contents--Part 1)

Most working databases include information in arrays that are far too complex to be viewed as a table, chart, or diagram. Database management software (DBMS) is there to retrieve very specific data on request. An example of this in an actual industrial setting would be the chemistry lab, which collects an immense amount of data on the chemical composition of samples. With potentially hundreds of thousands of chemical samples on file, the lab's client will require the specific results for a particular site collected on a particular day, plus mandatory records showing that quality control was applied to the relevant sample batch.

So the DBMS has to be capable of retrieving data. The method of doing this is known as a query. The program will ask you specific questions about the data you want, then fetch it.
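A minimal sketch of such a query, using SQLite with invented sample data standing in for a real lab system:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (site TEXT, collected TEXT, lead_ppm REAL)")
conn.executemany("INSERT INTO samples VALUES (?, ?, ?)",
                 [("Site A", "2006-05-01", 12.0),
                  ("Site A", "2006-05-02", 14.5),
                  ("Site B", "2006-05-01", 3.2)])

# The query: "results for a particular site collected on a particular day."
rows = conn.execute(
    "SELECT lead_ppm FROM samples WHERE site = ? AND collected = ?",
    ("Site A", "2006-05-02")).fetchall()
print(rows)
```

The two question marks are the "specific questions about the data you want"; the DBMS fills them in and fetches only the matching records.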

Once the DBMS has gotten the information you want, however, it has to be in a useful form. Earlier, in order to illustrate the 3D database, I went to the website of the International Labour Organisation (ILO). The data comes back in a gigantic Excel spreadsheet, which requires some aptitude with manipulating and transposing tables. In contrast, the St. Louis Federal Reserve Bank includes a table and chart, different formats for downloading, and so on. Other programs offer detailed reports at the push of a button: MS Access, for example, can be programmed to churn out a boilerplate report complete with elegantly formatted tables. Apache Medical Systems uses patient databases to aid in diagnostics and clinical trials.
Open Clinical: The APACHE I system was developed by William A. Knaus, an intensive-care physician at George Washington University Hospital, Washington DC, and colleagues from 1978 on. They began collecting and computerizing the experience of intensive care patients from dozens of hospitals. The computer considered each patient as a complicated sum of several variables: diagnosis and physiological abnormalities on admission to the ICU, age, pre-existing medical problems, etc. The system was designed as a way to judge how the hospitals were doing in terms of the mortality rate of its patients.

A physician can give the computer system 27 easily obtained facts, and the program predicts that patient's risk of dying in the hospital. The system is also useful in answering the question: is treatment making a difference? Studies have shown that about half the deaths in American Intensive Care Units now occur after a deliberate decision has been made to stop "heroic" measures. While APACHE does not make such decisions, its advocates say it helps those who must make them ponder the issues in the fairest and most realistic way.
For our purposes, this is a glimpse ahead. It sheds some light on the value of databases: they can make information, especially large amounts of complex information, work for the organization.

In addition to queries to locate data and reports to display them in a usable form, the DBMS must allow the data to be input and updated, even while somebody else is using the database from another terminal. This is part of the problem of database security.
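One common way a DBMS protects against half-finished updates is the transaction: a group of changes either happens completely or not at all. A sketch (account names and figures invented; real multi-user locking is considerably more involved):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("events", 100.0), ("catering", 0.0)])

# A transfer wrapped in a transaction: if any statement fails,
# the whole change is rolled back, so no other user ever sees
# the money subtracted from one account but not added to the other.
try:
    with conn:
        conn.execute("UPDATE accounts SET balance = balance - 40 "
                     "WHERE name = 'events'")
        conn.execute("UPDATE accounts SET balance = balance + 40 "
                     "WHERE name = 'catering'")
except sqlite3.Error:
    pass  # the rollback has already been performed

balances = conn.execute(
    "SELECT name, balance FROM accounts ORDER BY name").fetchall()
print(balances)
```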

As we shall shortly see, database security is an extremely important issue; it trumps nearly every other consideration from the point of view of DB managers.

(Part 3)


01 June 2006

Databases for Laypeople: Part 1 in a series

(Table of Contents)

A database is a collection of information that needs to be managed in some way. This could be as simple as a list of names with phone numbers. You can also have several values, or fields, per record. So, for example, here is a two-dimensional table in which each country is a record, and each year is a field. Needless to say, this table is concerned with only one matter: the total human population in a particular year.

In the chart below, the database includes population for several different age categories, for each year, for each country. This requires a 3D database because each value is indexed along three dimensions: year, age, and country. Obviously, I could add any number of further dimensions.
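One way to picture the three dimensions in code is as nested lookups, one per dimension. In this sketch every figure is invented:

```python
# population[country][year][age_group]: one value for each combination
# of the three dimensions (all numbers invented for illustration).
population = {
    "Peru": {
        2000: {"0-14": 8.9, "15-64": 15.2, "65+": 1.2},
        2005: {"0-14": 8.6, "15-64": 16.9, "65+": 1.4},
    },
}

# Retrieving one value means specifying all three dimensions:
print(population["Peru"][2005]["15-64"])
```

Adding a fourth dimension (say, sex) would simply mean one more level of nesting, which is why the number of possible dimensions is endless.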

The organization of data can be far more complicated than this; there can be a need to find, for example, the population of working women in Europe aged 40-44 years.

Complexity in the data organization can arise in another way. Another standard organization of data is the hierarchy (like a family tree; see chart below). As it happens, the family tree is the standard illustration of a hierarchical database: each relationship is a parent-child relationship. Each parent may have several children, but each child has only one parent (in database terminology).

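The parent-child rule can be sketched as a simple child-to-parent mapping (labels invented, borrowing the lab scenario discussed in this post):

```python
# A hierarchy stored as child -> parent: each child has exactly one
# parent, but a parent may appear as the value for many children.
parent = {
    "Batch 7":  "Test: pH",
    "Batch 8":  "Test: pH",
    "Sample 1": "Batch 7",
    "Sample 2": "Batch 7",
}

def ancestors(node):
    """Walk up the tree from a node toward the root."""
    chain = []
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

print(ancestors("Sample 1"))
```

Because each child maps to exactly one parent, the walk upward never has to choose between branches; that is precisely the hierarchical-database constraint.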

However, in real-life applications, it's not unusual to have many interlocking hierarchies. Consider the case of a chemistry lab that performs tests on soil. The lab runs three different tests on each sample, but the samples for a particular client are distributed among different runs of each test. The instruments, in other words, have information about the contents of each sample in each batch, and the database management software (DBMS) has to link the samples to different customers:

This chart is extremely simple compared to an actual lab, where "Company A" may be a sampling firm that collects samples for different companies and requires separate lab reports for its clients. In this case, the challenge for the database designer is getting several interlocking trees (or hierarchies) of data to connect, with each member sometimes occupying different levels in the hierarchy.

(Part 2)


Databases for Laypeople: Handy Links

(Table of Contents)

Relational Database Management Systems (Christopher Browne)—Mostly Linux-oriented

Philip and Alex's Guide to Web Publishing (Philip Greenspun)

University of Florida—Department of Pathology: DBMS Page (defunct)

University of Texas—Information Technology Services (ITS) "Introduction to Data Modeling"

Webmonkey—"Your First Database" (Jay Greenspan)

Wikipedia entries: Database; Database Models; Relational Model; RDBMS; Comparison of relational database management systems; Comparison of object-relational database management systems;

BOOKS: Gavin Powell, Beginning Database Design, Wrox (2006); Ryan Stephens & Ron Plew, Teach Yourself SQL in 24 Hours, SAMS (2003)


Databases for Laypeople: Table of Contents

  1. How Data is usually structured
  2. Things that Database Management Software (DBMS) needs to do
  3. Why Relational DBMS?
  4. Hypertext and the Relational Database
  5. The Backend
  6. The Frontend

Sources & Handy Links