20 July 2007

Ruby

Ruby is an open-source, object-oriented, general-purpose programming language. It is implemented with an interpreter, although compilers have recently become available for Ruby [*].

The main selling point of Ruby is that it is exceptionally easy to learn. It also has the big advantage of the Ruby on Rails framework, which has made the language especially useful for developing interactive websites for organizations. Another popular package that competes with Rails (after a fashion) is Drupal. One difference I've noticed, though, is that Ruby on Rails has the Ruby language itself as its main selling point: the whole point is that one has a very simple language together with a framework that automates most of the coding process anyway. In contrast, Drupal is a finished CMS; there's no need to know PHP in order to use it.

Ruby was developed by Yukihiro "Matz" Matsumoto, a Japanese national. The basic design concept was to have the interpreter do most of the work of programming, and to make the syntax as predictable as possible (the "principle of least surprise"). One strategy, for example, was to make everything an object. Ruby is mostly used to manage domain-specific databases, where the definition of variables or data items is sometimes a judgment call; treating everything uniformly as an object, including integers (normally "primitives" in object-oriented languages), simplifies those decisions.
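As a quick illustration (a minimal sketch; the output comments assume Ruby 1.8), even an integer literal is a full object that responds to method calls:

puts 42.class                      # => Fixnum (Integer on newer versions of Ruby)
puts(-7.abs)                       # => 7: abs is an ordinary method call on the number
3.times { |i| puts "pass #{i}" }   # even iteration is just a method on the integer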

Another distinctive feature of Ruby is that the class structure (taxonomy) of objects does not support multiple inheritance; each class may inherit from only a single parent class. Procedural syntax is supported, but all methods defined outside the scope of a particular class are actually methods of the universal Object class. Since every other class is a subclass of Object, such methods are inherited by objects of every class.
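A minimal sketch of both points (the method and class names here are hypothetical, invented for illustration):

# A top-level, "procedural" method definition...
def shout(text)
  text.upcase + "!"
end

# ...is actually stored as a private instance method of Object, so every
# object, even an integer, inherits it:
puts 5.respond_to?(:shout, true)   # => true (the second argument includes private methods)
puts shout("hello")                # => HELLO!

# Single inheritance: every class names exactly one superclass.
class Aria < String                # hypothetical subclass, just for illustration
end
puts Aria.superclass               # => String
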
SOURCES & ADDITIONAL READING: Ruby, "What's Ruby?"; Wikipedia, Ruby (programming language); Ruby-on-Page; "Why's Poignant Guide to Ruby" (highly unorthodox);

Developer Shed, "Web Development: Ruby on Rails";


19 July 2007

XML-2

(Part 1)

Disclaimer—I am not an expert in this topic, but a student. I am hoping that my notes may be useful to others researching the subject. Those of you who notice errors, please feel free to either contact me or leave a comment below.

The extensible markup language (XML) is a type of markup language used for creating web pages or, in some cases, GUI displays for non-web applications. XML is an extremely commonly used example of a domain-specific language: it allows a domain, such as a website, to include definitions of its own terms, so the same markup will have different definitions, and hence different results, in different domains. By contrast, the hypertext markup language (HTML) used for coding very simple static web pages is not domain-specific; anywhere you use it, it will produce the same results.

One of the peculiarities of XML that makes it so attractive for all manner of GUI views is that programmers define the tags. So, for example, suppose you are creating a chart that displays information about classical music concerts. You need to include the name of the work, the composer, librettist (if there is one), the performing ensemble, the conductor, soloists, venue, date, and sponsor. You might also need to include the price of tickets. You could have a tag marked <work>, another marked <composer>, and so on (I've included an invisible period to prevent Blogger from converting them to actual tags). You would then have a stylesheet specifying what the tags mean.
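As a sketch of the idea (using Ruby's bundled REXML library; the element names are just the hypothetical concert vocabulary above, and the values are invented), a program can build such a document tag by tag:

require 'rexml/document'

doc = REXML::Document.new
concert = doc.add_element("concert")
concert.add_element("work").add_text("Symphony No. 9")    # illustrative values only
concert.add_element("composer").add_text("Beethoven")
concert.add_element("venue").add_text("Town Hall")
concert.add_element("date").add_text("2007-09-01")

doc.write($stdout, 2)    # pretty-prints the nested <concert>...</concert> markup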

Another advantage of this, naturally, is that a computerized filing system can identify the data automatically. You may think having different typefaces for all those things is too busy and visually displeasing; but of course you can then define the tags to all look the same. The program will still be able to search for those items. Another advantage is that some systems of information, like chemistry or music, have peculiar systems of notation that arise in publications. As one might expect, there are implementations of XML for specific trades: XHTML, RSS, MathML, GraphML, Scalable Vector Graphics, MusicXML (all links to Wikipedia). The first one, XHTML (Extensible HTML), is a melding of HTML functions under XML rules, allowing more efficient processing. XHTML has more demanding rules for use than HTML does, but it can serve as a universal code for creating GUI's because it can be parsed more strictly and predictably than ordinary HTML.

When all the tags and their meanings have been defined in the stylesheet, we say the programmer has created an XML application. Part one mentioned some commonly used applications; XHTML is the best known, but there are naturally hundreds of others recognized by the W3C standards group, and presumably tens of thousands created by organizations for proprietary use. Creating a new XML application is scarcely more difficult than locating an existing one that's right for any particular organization, especially when XSL exists to facilitate transformation into something the browser can display.

XML does impose some rules about how data is represented. There are two senses in which this is meant: well-formedness and validity. Well-formedness rules apply to the definition of attributes and elements (XML predefines none). First, when an HTML file contains errors, the browser tries to establish what the programmer meant to say. This actually encourages wildcat HTML, in which coders write HTML based on how a particular browser will display it. Other browsers are designed to display it a different way, so the page will not display properly everywhere. XML nips this wildcat forking in the bud by insisting that malformed pages trigger an error.

Well-formedness rules include specifications about what tags are legal. All tags must close and, in XML, tags are case-sensitive; tags must also be closed in the reverse of the order in which they were opened. (See here and here for some basic rules of well-formed tags.) The rules for correct coding in XML are much more exacting than in HTML, which allows XML documents to be processed as rigorously structured data rather than as loosely interpreted scripts; hence the requirement for rigorous syntax.
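A minimal sketch of that strictness, assuming Ruby's bundled REXML parser: properly nested tags parse, while tags closed out of order are rejected outright:

require 'rexml/document'

good = "<greeting><b>Hello</b></greeting>"     # closed in reverse order of opening
bad  = "<greeting><b>Hello</greeting></b>"     # overlapping tags: not well-formed

REXML::Document.new(good)                      # parses without complaint
begin
  REXML::Document.new(bad)
rescue REXML::ParseException
  puts "rejected: the document is not well-formed"
end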

Markup languages are about more than merely tags; there are also elements, which are the basic building blocks of a document. In HTML, it's possible to create a document with almost no element structure at all; for example, one could simply open Notepad, type
<html>Hello World! </html>
and save as .html, and that could be a web page. You could even include a few tags, e.g.,
<font color="#9966FF"></font>
and the result would be a perfectly splendid web page. In XML, this is not a valid way to create a document. Everything in the page must belong to an element, and there is a hierarchy of elements. An element may be a listing of some kind, or the body of text, or footnote text, or a title, or salutation. An element is opened by a tag, and must always be closed by one, unless it's an empty element (in which case it has the closing slash thus, <hr/>). Incidentally, those using Blogger may have noticed, in the Edit Html view, that <img src="*"> is always changed to <img src="*" /> after switching views or after saving; the reason is that Blogger is programmed to convert HTML 4 code to XHTML.


XML elements have to take a hierarchical form with a single root.

Elements, like objects, have to be organized into a single hierarchy of classes and subclasses, with each document element nested within higher-order elements. Attributes are the distinguishing features of elements; they are enclosed in quotation marks within the element tags (e.g., <font size="2">). For certain elements, such as pictures, some attributes have to be defined. XML defines no elements, but XML applications do.
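For example (a sketch with REXML again; the element and attribute names are hypothetical), attributes ride along inside the opening tag and can be read back by name:

require 'rexml/document'

doc  = REXML::Document.new('<work catalog="Op. 125" language="German">Ode to Joy</work>')
work = doc.root
puts work.attributes["catalog"]    # => Op. 125
puts work.attributes["language"]   # => German
puts work.text                     # => Ode to Joy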

Entities are named units of storage in XML. Conceivably, the entire document may be an entity, including associated files defining the XML elements; however, this is a trivial example of an entity. Internal entities are defined in the document and may include something as simple as a symbol, or perhaps a footer. External entities include a uniform resource identifier (URI), which identifies precisely where the content of the entity is found. In some cases, the advantage of using an entity is that it may be used as a variable; changing the value in one place changes it everywhere it appears in the resulting document. Also, internal parameter entities can be used in the associated files to change what is a legal element.

Another important component of XML documents is the document type definition (DTD), which includes a list of the elements required in a valid document; or, if there are multiple kinds of documents defined, specifies what must be included in each. DTD's are not the same thing as style sheets that define the tags; DTD's define the elements that exist, what attributes are allowed, and what entities are accepted. A valid XML document declares its DTD and conforms to it. The DTD is sometimes included within the page it describes, but more frequently it makes sense for the DTD to be a separate file (*.dtd) that serves the whole domain. Examples of DTD code can be found here.

Notes
wildcat:
something which is spontaneous and in violation of discipline. An example would be an illegal mine, created by miners secretly digging a side tunnel off the main pit. Wildcat mines are a major problem in China, and are involved in hundreds of deaths each year. Other examples include "wildcat strikes," which are not authorized by the trade union, but may be merely a mutiny by a group of workers in one location. Here, obviously, wildcat HTML is just a form of HTML that is not recognized as an HTML standard and which creates an effect not designed into the browser.

forking: the creation of a new, modified version of a piece of software by a group other than the original publishers. For example, if someone takes GNU Emacs, alters it to recognize more character sets, and then launches it as (say) "Water Buffalo Emacs," that would be a fork. Thereafter, WB Emacs might compete for developer time and energy with GNU Emacs. See hacking.
________________________________________________
SOURCES & ADDITIONAL READING: Wikipedia, XML; W3C Tutorials, XML; Jan Egil Refsnes, XML files;

BOOK: Elliotte Rusty Harold, XML 1.1 Bible, Wiley Publishing (2004)


18 July 2007

Drupal

Drupal is an open-source content management system (CMS) used to create blogs, wikis, or news sites. It was developed in Belgium as "Drop" (from dorp, for village) and was later released as free software under the GNU General Public License. It is written in PHP, a programming language very commonly used for CGI web applications, and it uses a database backend in which content and template settings are stored in tables.

Other CMS applications, such as bitweaver, have the same technical features as Drupal listed above: open source, PHP, CGI, database storage, and multi-draft storage. However, Drupal is much more commonly used; for example, the Air America Radio and Onion websites are powered by Drupal. In fact, Drupal sites typically tend to be associated with community activism, non-profits, or political campaigns. Partly this is because Drupal is free software yet well-suited to professional implementation. It's somewhat difficult to install and modify, which discourages non-IT people from tinkering with it, but for those seeking employment as web designers, it can readily be used to create an attractive and dynamic site. Drupal is scalable to include e-commerce and multi-function websites; for example, a news site that sells promotional material (mugs, tickets, etc.) but also includes lengthy archives of past articles and a comment section. Drupal sites can be very large, i.e., with a very large number of pages and editors, and they can be very robust.

Part of Drupal's great strength is its good choice of technology: PHP has overtaken Perl as the preferred language for CMS applications (Perl applications often face severe installation problems on web hosts running Apache [*]). While there are many CMS applications written in PHP, the majority generate static pages, because that's the easy way to go; it's only when a dynamic website is mature, and the organization discovers it needs more features, that it realizes it needs to switch to a database-backed design. And finally, Drupal has a modular architecture, which means that an installation of Drupal can be enhanced with plug-in modules.
ADDITIONAL READING & SOURCES: Lewis-Bowen, Evanchick, & Weitzman, "Getting started with Drupal" (2006);

Drupal blog (many contributors—deals with responses to problems, issues, & questions);


14 July 2007

Ruby on Rails (RoR)

(Sometimes known as "Rails")

As the name implies, Ruby on Rails is an open-source web application framework whose core is the Ruby language; it incorporates other program elements to assist in the creation of Ruby-coded, database-oriented websites. It can be roughly described as a special type of CMS, except that, while a CMS is essentially a dynamic website creator, Ruby on Rails is supposed to allow a designer to add substantial website functions that the developers may not have thought of. Nearly all web applications are CRUD applications—they create, read, update, and delete information from a database. RoR automates this as much as possible.

The architectural principle of RoR is the model-view-controller (MVC) pattern, which is somewhat similar to the basic database concept itself. The MVC schema divides the structure of the application (and the data it manages) into a model, a view, and a controller. The model can be described as a map of the domain-specific data. Put another way, the database is a body of data stored in a select location or group of locations (the domain), and the model maps that data to a diagram (the way a map has a fixed relation between points on the earth's surface and points on the paper).

The view component of the software is responsible for rendering the model (for navigating the data), getting updates from it, passing user commands to the controller, and fulfilling the controller's view selection. The controller translates user interactions with the view into work performed by the model: the view passes user gestures to the controller, which then maps those actions onto the model. The MVC schema is used with object-oriented databases, in which the view module may need to change a lot. Another important point is that MVC standardizes much of the OO-database management system so that naming conventions are highly predictable.

Ruby on Rails uses this system to define its object classes: the base class for the model is ActiveRecord::Base, and each of the tables in the database is represented by a subclass.[*] This actually allows one to automatically generate, with a single command, a new web application with at least one view and one controller (with four methods: add, delete, login, and logout).
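A hypothetical sketch of what that looks like in code (the class, table, and column names are invented for illustration; the finder syntax is the Rails 1.x style):

# The "concerts" table is represented by a class inheriting from ActiveRecord::Base;
# Rails infers the table name from the class name.
class Concert < ActiveRecord::Base
  belongs_to :venue                 # hypothetical link to a "venues" table
end

concert = Concert.find(:first)      # each row comes back as an object
puts concert.composer               # columns appear as methods automatically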

Another interesting feature is that RoR does not require a separate code file for each URL; Rails uses an architecture in which the controller and a view (an ERb template in an .rhtml file) together serve a number of actions. Each action handles some of the URLs in the modeled domain. These defaults are so comprehensive that Rails is known as opinionated software, and they can generate most of the look and feel of the web application automatically.
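A hypothetical controller sketch (again with invented names): one class, plus its .rhtml templates, answers several URLs, one action per URL pattern, such as /concerts/list and /concerts/show/3:

class ConcertsController < ApplicationController
  def list
    @concerts = Concert.find(:all)       # rendered by list.rhtml unless told otherwise
  end

  def show
    @concert = Concert.find(params[:id])
  end
end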

Rails is a fairly recent product; it was first released in 2004 and, like GNU/Linux, can be substantially modified by the distributor. One of these distributors will, in the future, be Apple: Rails is slated to ship as a standard component of the Mac OS X operating system.

SOURCES & ADDITIONAL READING: Ruby, "What's Ruby?"; Wikipedia, Ruby on Rails and Model-View-Controller (MVC);

Developer Shed, "Web Development: Ruby on Rails";


10 July 2007

Interpreters

Over a year ago I posted about integrated development environments (IDE's), and mentioned in passing what interpreters were:
An "interpreter" is a program that executes another program line by line, rather than translating it into assembly or machine code. When an interpreter runs a program, it responds to errors differently than would a computer running the compiled version of the same program. Since the compiled program is merely a translation, an error would potentially cause the computer running the program to crash. An interpreter can stop and send an error message.
That was, of course, profoundly inadequate. I'd like to post just a little bit more about what interpreters do.

Strictly speaking, any programming language may be implemented with either an interpreter or a compiler, although some exceptions may apply. Disagreement exists over whether Java is an interpreted language or a compiled language, with some (Lowe, p.11) saying it is compiled, and others (Cohn, et al.) saying it is interpreted. Perl may be implemented with a compiler; so may PHP, Lisp, and Python. I don't pretend to be an authority on the subject, but there's a basic concept in logic that the statement "All x is y" is false if even a single x that is not y can be found. I am vulnerable here on the grounds that disagreement may exist over whether a given thing is a compiler or an interpreter.

There are several reasons why a language may be implemented with an interpreter rather than a compiler. First, the obvious reason is that you may want a development environment that allows you to debug the program. With a compiler, you simply get a message, "Runtime error." It might tell you more, but a really sophisticated interpreter can help you find the actual spot where the error occurred, and even help correct your spelling. Since the compiler's function is to translate the entire program into machine code "in one shot," as it were, debugging with a true compiler is a little like finding a needle in a haystack.

Another reason is that an interpreter may be easier to update and be creative with. Ruby was developed with certain syntax innovations ("principle of least surprise"—POLS), and of course it was a lot easier to create an interpreter that could run an increasing number of commands than a compiler with a fully revised complement of libraries, ported to each specific model of microprocessor. Also, a compiler generates machine code, the ones and zeros that the microprocessor actually understands. In contrast, an interpreter can be written entirely in a high-level programming language like C, without any knowledge of machine code.
________________________________________________
How do Interpreters/Compilers Work?

There are several similarities between compilers and interpreters at the operational level. The code that is sent to the compiler/interpreter for execution is called the source file; sometimes, programs written explicitly for use with an interpreter are called scripts. Both interpreters and compilers include a scanner and lexer. The scanner module reads the source file one character at a time. The lexer module divides the source file into tiny chunks of one or more characters, called tokens, and specifies the token type to which they belong; the effect is rather like diagramming a sentence. Suppose the source file is as follows
cx = cy + 324;
print "value of cx is ", cx;
The lexer would produce this:
cx  --> Identifier (variable)
= --> Symbol (assignment operator)
cy --> Identifier (variable)
+ --> Symbol (addition operator)
324 --> Numeric constant (integer)
; --> Symbol (end of statement)
print --> Identifier (keyword)
"value of cx is " --> String constant
, --> Symbol (string concatenation operator)
cx --> Identifier (variable)
; --> Symbol (end of statement)
The ability of the lexer to do this depends on the ability of the scanner to document exactly where each token occurs in the source file, and on its ability to scan backwards and forwards. Sometimes the precise meaning of a token depends on its position with respect to other tokens. For example, operators may contain more than a single character (e.g., < as opposed to <=). The lexer may have to pass a message to the scanner to back up and check the identity of neighboring characters.
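Here is a toy lexer for the fragment above, sketched in Ruby with the standard StringScanner class (the token categories are simplified, and the error handling is minimal):

require 'strscan'

def tokenize(source)
  s = StringScanner.new(source)
  tokens = []
  until s.eos?
    next if s.skip(/\s+/)                                  # ignore whitespace
    if    t = s.scan(/\d+/)          then tokens << [:integer, t]
    elsif t = s.scan(/"[^"]*"/)      then tokens << [:string, t]
    elsif t = s.scan(/[A-Za-z_]\w*/) then tokens << [:identifier, t]
    elsif t = s.scan(/[=+,;]/)       then tokens << [:symbol, t]
    else  raise "unrecognized character at position #{s.pos}"
    end
  end
  tokens
end

tokenize('cx = cy + 324;').each { |type, text| puts "#{text} --> #{type}" }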

The parser receives the tokens and their token types from the lexer and applies the syntax of the language. The parser actually requests the tokens and assesses their appropriateness with respect to the syntax of the language, and sometimes demands additional information from the lexer module:
Parser: Give me the next token
Lexer: Next token is "cx" which is a variable.
Parser: Ok, I have "cx" as a declared integer variable. Give me next token
Lexer: Next token is "=", the assignment operator.
Parser: Ok, the program wants me to assign something to "cx". Next token
Lexer: The next token is "cy" which is a variable.
Parser: Ok, I know "cy" is an integer variable. Next token please
Lexer: The next token is '+', which is an addition operator.
Parser: Ok, so I need to add something to the value in "cy". Next token please.
Lexer: The next token is "324", which is an integer.
Parser: Ok, both "cy" and "324" are integers, so I can add them. Next token please:
Lexer: The next token is ";" which is end of statement.
    Parser: Ok, I will evaluate "cy + 324" and get the answer
    Parser: I'll take the answer from "cy + 324" and assign it to "cx"
Indents are used here to indicate a subroutine. This illustrates what the interpreter/compiler must do in order to add cy and 324. If the parser gets a token that violates the syntax, it will stop processing and send an error message.
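A toy parser/evaluator for that same dialogue, sketched in Ruby (the token list is the one the lexer sketch above would produce, hard-coded here so the example stands alone):

# assignment := identifier '=' term ( '+' term )* ';'
def interpret(tokens, variables)
  term = lambda do
    type, text = tokens.shift
    type == :integer ? text.to_i : variables.fetch(text)   # literal or known variable
  end

  target = tokens.shift[1]                # "cx", the variable being assigned to
  tokens.shift                            # the "=" assignment operator
  value = term.call                       # "cy"
  while tokens.first == [:symbol, "+"]
    tokens.shift
    value += term.call                    # evaluate "cy + 324"
  end
  raise "syntax error: expected ';'" unless tokens.shift == [:symbol, ";"]
  variables[target] = value               # assign the answer to "cx"
end

variables = { "cy" => 10 }
tokens = [[:identifier, "cx"], [:symbol, "="], [:identifier, "cy"],
          [:symbol, "+"], [:integer, "324"], [:symbol, ";"]]
interpret(tokens, variables)
puts variables["cx"]                      # => 334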

The next module is the interpreter, which actually executes the code, or, with compilers, the code generator. With interpreters (as opposed to compilers), this stage is sometimes part of the parser: the parser interprets the statements directly, or converts them into bytecode (an intermediate language passed off to a virtual machine). In the case of the compiler, the code generator produces machine code that can be executed by the microprocessor.

(Special thanks to Scorpions4ever)
SOURCES & ADDITIONAL READING: Wikipedia, Interpreter (computing), Interpreted Language;

BOOKS: Doug Lowe, Java for Dummies, Wiley Publishing (2005); Cohn, et al., Java Developer's Reference (1996)


05 July 2007

The FTC & net neutrality

(Series on Communications Law, USA)

I was alerted by a series of posts alleging that the Federal Trade Commission (FTC) had abandoned net neutrality. This is not exactly accurate, and I'd like to start over and explain to readers (in brief) the meaning and status of net neutrality. Top FTC officials are hostile to the concept of net neutrality, since the FTC regards its primary mission as devising and promoting legal standards that favor export revenue. Should Internet neutrality be enshrined in law, the FTC believes it would undermine the telecommunications sector's ability to capture rents from its infrastructure. This issue has come up during a period when the FTC is entirely beholden to the telecoms, and it has continuously advertised its position that what is good for AT&T is not only good for America, but the epitome of justice as well.

However, as an arbiter, the FTC needs to validate its position in trade law; it may not rule by whim. Hence, it has published endless "studies," which amount to editorials that might have been written by attorneys for the major telecom firms. It is Congress that must decide.

The Internet uses a system of packet switching to transmit very large amounts of digital information over existing telephone, coaxial cable, and DSL lines. In the past, when people used telephone lines solely to communicate orally, the effect was analogous to a train, which occupied the track (so to speak) for the entire duration of the phone conversation. If we pretend that all roads consist of one lane, and that they are interchangeable with railroads (so that, for example, it were possible for trains to use—and tie up—the highway for half an hour at a stretch), then the analogy is nearly perfect. Only one train headed for one destination may occupy one track at a given time. This suits the immense loads that trains—or telephone conversations—carry: a 100-car train may carry about ten thousand tons of freight; a telephone conversation, a continuous stream of rich audio data. In contrast, digital transmissions need only communicate a finite string of bits. This is equivalent to thousands of little Vespas buzzing onto and off of the highway. Even when an internet connection is active, its connection to the server can be analogized to intermittent traffic of a few hundred scooters; naturally, other scooters can fit in between them with ease. During the course of an internet session of (say) two hours, the volume of data transmitted may well be equivalent to a telephone conversation of one or two minutes. That means that several score internet connections may impose the same load as a single telephone call.

Of course, the TCP/IP protocol makes this possible by arranging the data into discrete packets, which then flow through like cars through a busy city center. The Internet Protocol acts like a system of traffic codes and signals that coordinates the packets so that they flow smoothly. The interesting thing about this is that, with improvements in data compression technologies, it is (ironically) possible to use the Internet to transmit audio as packets of bits more efficiently than as analogue streams—the thing that telephone lines were created to do. Another point to bear in mind is that the Internet and the IP protocol have evolved over time so that most traffic now occurs over broadband connections, in which data is transmitted hundreds of times more rapidly. The physical constraints of 1990's-era telephone lines have been superseded by ethernet and coaxial cable, but this merely means that the load of potential data that can be transmitted has physically increased, without a drastic shift in the prevailing rules.

Now, in the past the IP system has been utterly, relentlessly neutral. The analogy to cars moving through a vastly busier traffic network, with vastly increased capacity, still holds. Stoplights don't award faster access to the cars of wiser and busier people at the expense of cruisers and idle wastrels. In fact, the physics of vehicular traffic is somewhat different from that of internet connections; so in my TRON-like universe, packets marked "Priority A" would be awarded closer spacing (and hence greater volume) than packets marked "Priority B."

The vast majority of internet connections are provided by telephone companies, which created the system of "pipes" based on the presumed mixture of conventional telephone calls and broadband internet connections. The premium on telephone service is so huge that it makes telephone monopolies immensely profitable; from a financial/business perspective, internet service is just a way of getting additional revenue at little marginal cost. The problem is that services such as voice-over-internet protocol (VoIP) would mean that the main revenue stream for telephone companies would be cut off. Instead, people would use the phone company for internet connections, if that. A huge number of Usonians already get their phone service through their wireless service anyway. With technologies like Bluetooth and Wi-Fi, a Skype subscriber can actually use her cell phone at work or at home, and get VoIP service as if she were communicating through a headset plugged into her computer. That would effectively cut out the PCS companies as well as the landline telephone companies.

One way of getting their money back is for telephone companies to use a strategy known in basic economics as "discriminatory pricing." This means that people using the Internet more heavily (as, for example, those who get movies or VoIP through it) would pay a premium, but get better service. Better service, in this case, would mean VoIP packets would get a higher priority.

It's interesting to note that a mass migration of telephony and television services to IP channels of delivery would lead to a new business model under which broadcast networks, cable TV providers, and telephone companies would all become virtually indistinguishable businesses, all providing a mixture of IP-related services. In the same way that all financial services in the United States were allowed to merge into nationwide supermarkets of finance, it seems likely that all entertainment and communications services in the country would become a single amorphous sector of the economy. And just as supporters of the financial supermarket concept argued that there would be increased competition among financial service providers, so the telephone companies argue that they face increased competition from VoIP (which is usually delivered over phone company lines), and that some form of discriminatory pricing will be required in order to support innovation in Internet multimedia.

Where this is alarming is when the ISP's and search engines collude to apply a discriminatory pricing model to websites. We're already accustomed to search results offering the lowest prices on "The Damned of the Earth."* In the future, search engines like Google or Yahoo! would have to tweak their search algorithms to reflect the new primacy given to bandwidth-intensive services, like... CNN, Fox News, and so on. In effect, the Internet would become extremely skewed towards commercial media, and net-based activism would be vastly more difficult.

However, I am not aware of any recent developments on potential legislation at this time.
SEE ALSO: Video on net neutrality (RSNL&A)

*Not a reliable result; The Damned of the Earth is a notorious book by Frantz Fanon, and also a line from the French version of "The Internationale." I recall conducting a search in the hopes of finding the text online, and instead was offered "The lowest prices on..." OK, I thought it was really funny.
SOURCES & ADDITIONAL READING: "We Still Need Net Neutrality Legislation," David DeJean (4 July '07); "FTC Net Neutrality Report Tortures Logic," Net Neutrality, Policy Blog; "FTC abandons net neutrality," Vnunet.com; "Navigating Between Dystopian Worlds on Network Neutrality" (PDF), speech by FTC Commissioner Jon Leibowitz (Feb 2007) & "FTC Chairman Addresses Issue of 'Net Neutrality'," FTC (Aug 2006); Oligopoly Watch, "Oligopoly and network neutrality" (21 Jan 2006);


03 July 2007

Competition and Homologization

"Homologization" is a term I have coined to refer to a fairly common trend in technological change. The term is derived from "homology," a term in logic in which a person argues that two things are not merely analogous—i.e., sharing similar patterns—but share a common identity or root. So, for example, someone might draw an analogy between the Internet and the system of roads; but everyone understands the two things are so dissimilar that the analogy only illustrates a peculiar pattern common to both. On the other hand, the same person might go much further in comparing the Industrial Revolution and the Internet Revolution, arguing that the two were essentially the same phenomenon occurring twice (that someone would not be me).

The homologization of two or more businesses consists of them becoming essentially the same business, albeit through somewhat different media.

Finance, Insurance, & Real Estate (FIRE)
The most famous example is the Usonian financial services sector, which had been split into several separate businesses after the financial crash of 1929. Even before the Crash, the banking sector had been partitioned geographically, which of course divided capital markets from local savings banks. Between 1933 and 1999, commercial banking, stock brokerage, investment banking, insurance, and real estate were barred from mixing into each other. A commercial bank like Citibank was barred from selling insurance or underwriting issues of new equities, nor could it offer brokerage services to customers. In the years between 1996 and 2003 there was a frenzy of M&A activity as the entire financial sector effectively merged into a few financial supermarkets.

Homologization of the US FIRE sector posed an interesting paradox: each company within that sector was now free to enter other businesses heretofore closed off to it. Commercial banks could now sell insurance; investment bankers could offer underwriting services to a wholly different clientèle. Looking at this from another angle, this reflected increased competition: in theory, each bank was now facing competition from all insurance companies, all brokerages, and vice versa. Here is the paradox: virtually all firms in the FIRE sector were in favor of changing the regulations to allow homologization. It was the brass ring of pro-business legislation. All of the financial press praised the repeal of the Glass-Steagall Act as if it were the sine qua non of happiness. Yet the same businesses and the same business press argued, at the exact same time, that the new competition created by homologization imposed extraordinary new burdens on that same victorious FIRE sector. That meant that still more regulatory tweaking was required.

Financial services in the USA and other industrial nations tend to share certain state-like powers and benefits that make them utterly different from non-bank firms. For one thing, commercial banks have the power to create money. Investment banks have the unique power to underwrite capital issues under limited liability laws. Brokerages have exclusive access to capital markets, which are—in turn—made possible by limited liability laws. Financial services, perhaps most importantly of all, are governed by accounting rules different from those that govern the rest of us; they are allowed to bear far greater leverage against capitalization than non-financial firms. The last feature, common to the whole sector, reflects its role as a premier state surrogate: it can borrow so much money because it guarantees the greater part of the nation's sovereign debt.

I point all this out because the financial sector already is a part of the national polity; with the events of 1999, it was absolved of two layers of social responsibility. It was liberated from prudential restrictions on what businesses it could undertake, and it was absolved of [most] community banking regulations.

The increased-competition side of financial homologization has been, in my view, an obvious bust. The banks tended to merge with each other, and bought insurance companies, brokers, and investment banks. They did not "invade" each other's businesses with enhanced services. Mostly, they increased convenience through internet automation or ATM's. However, savings/commercial banks withdrew from the auto loan market in favor of home equity loans. In other words, the actual bundle of services offered to customers was shuffled about between banks and third parties (like auto dealers). A branch bank does offer services unavailable in the early 1990's, but mostly they are not services a consumer ought to use. It has abandoned useful services as well, creating new monopolies.

Telephony, PCS, ISP, Cable, & Network Broadcasting
The Internet Revolution has led inevitably to the homologization of media. Like the banking sector under the McFadden Act, the old media was regionally segmented; modern media is internationally homogeneous. However, another curious development was the short history of the ISP. In the early 1990's, the number of genuinely autonomous internet service providers (ISP's) was immense, because running an ISP required capital and skills that could be supplied anywhere. Telephone companies still regarded their business as telephony, and were concerned mainly with the booming personal communications service (PCS) industry. By 2000 or so, telephony and PCS were mostly united in odd international cartels, with occasionally overlapping service areas. The vast majority of people used their telephone company as their ISP, although a few specialized firms like Earthlink continued to survive as autonomous ISP's.

Cable television remained divided from the telephony/PCS/ISP part of communications, as did network broadcasting from both. This changed somewhat as cable companies were snapped up by computer companies like Microsoft, and as laws on cross-holdings or market consolidation in media were abolished. Fox News has enjoyed favorable treatment by Congress, and Clear Channel Communications has transformed the media delivery system in this country. Today these two companies have merged cable, radio, and television "content" production, while MSN* has merged software, ISP, and other media categories.

The same paradox has arisen here: all of the media firms involved insist they are experiencing greater competition. Telephony faces competition from VoIP; DSL faces competition from cable; cable faces competition from YouTube; network television faces competition from content-producing cable conglomerates; and so on. Yet market concentration by single firms has exploded in all markets concurrently. All participants insist, and the FTC insists on their behalf, that competition is much greater and there is no longer any reason for public interest regulation.

In the case of the media industry, it's fair to point out that the process of homologization was driven by technology. It's harder to make this claim in the FIRE businesses, where no technological breakthrough comparable to the Internet has occurred. In FIRE, there has been far less acrimony among the industry members; rather, the abolition of Glass-Steagall has permitted polyamory in the sector. In the new media industry, there is clearly a struggle between rival commercial interests, with each purporting to defend the public interest.

A final note: homologization is occurring in other industries and has already occurred in still others. It leads to an interesting aftermath, in which the entire mix of products available is changed. Initially, the customer suffers through deception: even very well-informed customers are vulnerable to deception by gigantic institutions interested in getting them to make sucker bets. Whether the situation improves depends in large measure on whether the customer remembers that she is also a citizen.
*MSN is now known as Windows Live.
