30 September 2005

Firefox, Opera, and Internet Explorer

Generally I've used either Netscape Navigator or Mozilla Firefox as my preferred browser. Both are free downloads; Navigator comes with a suite of other applications that are occasionally useful, such as Composer (an HTML editor). Firefox is similar, but includes a number of interface improvements I hold in high esteem.

The Mozilla Foundation is an organization that promotes open-source software. Oddly, the non-profit foundation owns a for-profit* corporation, the Mozilla Corporation (since Aug '05), which actually develops the Firefox browser and the Thunderbird e-mail client. Firefox was released Nov '04 and has been downloaded 90 million times. Like Netscape's Communicator suite, Mozilla's suite includes an HTML editor. The source code for the editor, browser, e-mail client, and news reader was all taken from Netscape's products, although Mozilla has substantially improved it. In fact, "Mozilla" was the original code name for Navigator, a contraction of "Mosaic killer" (Mosaic being the original NCSA browser). The Mozilla staff are essentially the old crew of Netscape, which has been dissolved. Mozilla source code is used not only in Firefox, but also in Camino (for the Macintosh).

Microsoft has not released a new version of Internet Explorer for some time; the main reason people like me have decided to abandon IE is its "ActiveX" component, which automatically installs programs when prompted by the web page one happens to be visiting. Naturally, this exposes one to both obsolescence and viruses. Avant Force offers Avant, an interface layer on top of IE that adds features like tabbed browsing; it is not a true browser. In addition, there are browsers based on the KDE Project's KHTML rendering engine (Apple's Safari) and on the Presto engine (Opera Software's Opera). Opera is designed to accommodate small screens, such as those on PDAs.

I was motivated to research these programs by "Mozilla suffers growing pains" and "Firefox loses momentum" (The Register). The latter article has a totally misleading headline, IMO. It declares that Mozilla is running out of MS-IE "haters," but the text of the article explains that most of the competition is coming from Opera. The reason Opera is so successful, in turn, is that Opera 8.0 (released April '05) was targeted at the W-CDMA-compatible cell phones being adopted en masse in the EU and NE Asia, which practically require the Opera browser. Moreover, beginning 20 Sep '05, Opera was available free without ads.
*Actually, Mozilla Corp. is not for-profit, but taxable. This article at the American Bar Association's website is pretty helpful in explaining the distinction. Basically, the difference is that a non-profit may have certain revenue-generating activities that it will farm out to a conventional corporation. The conventional corporation, being wholly owned by the non-profit, cannot be characterized as a "for-profit" since it necessarily shares the non-profit's mission. But it is taxable.

Labels: ,

28 September 2005

What Really Matters: Batteries

While it's the new capabilities that make headlines,...

(BCWS Consulting/3G Newsroom) Despite the hype about mobile data services and internet access on the go, what users really want is longer life for their mobile batteries. At least that is what market research firm TNS found in their recent study of almost 7,000 mobile users in 15 countries. The vast majority (75%) of respondents replied that better battery life is the main feature they want from a future converged device.

If we have a future in the USA that doesn't look like that of Argentina, then that future probably involves the production and global export of massively more sophisticated, powerful energy delivery systems AT THE CORE of our economy. This will require development of a lot of stuff that doesn't even have conceptual competition.

An Exposition of the Problem of Technology

One of the reasons philosophy is a useful topic of research is that it teaches one to compare ideas on dissimilar subjects and see whether any important innovation has been made. It also allows one to analyze what the solution to a particular problem is, partly by creating an abstract system for rendering a quandary as a set of statements, from which one can deduce the resolution to that quandary as another set of statements. For an illustration of this, I recommend this extremely handy exposition of logical fallacies (the link goes directly to an example of precisely what I mean). Notice the example illustrates how one can examine a statement to see if it supports another idea.

Because of the abstract nature of philosophy, it's possible to examine a broad range of ideas to see if they have any originality. Likewise, it's possible to establish whether something suggested as an example of a future dilemma is, or is not, just a play on words. So, for example, a recurring question in political science is the comparative merit of a multi-party system: is it worthwhile for voters or political activists to seek a political system that allows multiple parties to flourish, or are such systems unstable and unproductive? Since a political system is a technology of organization, what are the frontiers of that technology, i.e., what advances on this technology can we look forward to? And since most improvements in the technology of one human activity come from cross-pollination with the technologies of other human activities, what are some things we can expect will influence our "technology" of economic organization?

In future posts I'll be looking at the efforts of various people to "cross-pollinate" technologies from the realm of telecommunications and information technology (TCIT) into the redress of social problems. For now, this post has been very abstract, because we're in the phase of outlining the method I'll use:
  1. Outline the nature of the social problem

  2. Express this as a philosophical proposition

  3. Deduce the "resolution" "symbolically"

  4. Discuss the tangible forms this might take
In the process, I'll be referring to Nicholas Georgescu-Roegen's The Entropy Law and the Economic Process (1971) and Mathematical Optimization and Economic Theory, by Michael Intriligator (1971). These might seem odd choices, but I'll explain why in a bit.


A Farewell

When I first set up this weblog I included in the links another weblog called "Slowbiz" because it included articles that actually reflected on the philosophical significance of new technologies. Here's an extract from one of them:
I was overjoyed to discover (thanks to Elsie Maio) that someone has extended the Slow concept to money; and to the concept of capital, no less. Woody Tasch's piece on Slow Money argues that the venture capital space needs a more appropriate way to think about the potential returns of investments in sustainable and renewable industries.

Equally interesting is the Investor's Circle, of which Tasch is CEO: a venture capital fund that puts the "slow money" where its mouth is.

Since 1992, Investor's Circle has been providing start-up capital to ventures in the fields of renewable energy, sustainable agriculture, organic food businesses and other "green"-oriented businesses.

While Tasch has specifically presented the idea of slow money in the context of funding food businesses which do not fit the "fast" returns of traditional venture capital investment, it is easy to extend his argument that investment needs to be context sensitive to the notion that both capital and profit need to be conceived more broadly.

We need a notion that ties together investment capital, human capital, intellectual capital and social capital, not as separate goods, but as interwoven strands of a stronger conception of "return on investment". Slow Money seems like a great place to start.

I was interested in this especially because the author is an entrepreneur, and his views on the normative future of capitalism are worth some respect.

I'd like to address the topic of this essay in a wee bit. However, since that blog was last updated in May, I'm removing it from the sidebar.

Twilight of Palm OS?

The Palm OS was the first PDA* OS to win widespread recognition (in 1996). Since then, the Palm's fortunes have gradually declined relative to RIM's Blackberry and the widely-adopted Pocket PC format. Besides Palm, Sony also produced devices that used the Palm OS until 2004. Now that Palm has announced a Treo based on the Pocket PC format, there is widespread fear that the Palm OS itself is doomed:
Possibly called the Treo 700w (or maybe the Treo 670 - details are scarce!), the new phone will initially only be available from Verizon Wireless, running on the carrier's EV-DO broadband network.

A series of 'first look' photos on Engadget shows the new Treo to be slightly narrower than the Treo 650, but at the cost of what looks like a smaller 240x240 pixel display, instead of the usual 320x320.

Confirmed specs include Windows Mobile 5.0, a one megapixel camera, EV-DO, Bluetooth and 64MB of memory.

Palm users still waiting for the Wi-Fi card categorically promised at the Treo 650 UK launch in April will be mightily miffed to learn that a SD Wi-Fi card worked straight out of the box with the Windows Treo - a classic example perhaps of why people are leaving the Palm OS.
The Pocket PC is usually regarded as a format set up for cheapness, while Palm consumers expect first-cabin treatment.

This is an interesting reversal of the situation with Apple, where it was the distinctive hardware that got cancelled. In fact, it looks like a test of the concept of a company introducing new competition for its other divisions.
UPDATE: Engadget (via PDA Buzz) has a page of detailed photos. Man, those guys are obsessed!

*PDA: personal digital assistant; OS: operating system. Some readers may be irritated because I linked to the Wikipedia article on Palm "Pilots"; here is the Palm OS article. There are lots of rival claimants to the Palm Pilot's title of first successful PDA, such as the Sharp Zaurus (in 1993, in Japan); however, these devices did not enjoy the exponentially greater sales of the Pilot.

Labels: , , ,

26 September 2005

ICHIM (Part 2)

Another item at ICHIM I thought was especially interesting was Me-Ror v.2 (Vadim Bernard). Here, the user sits in a booth and evidently moves a hand about: This installation presents research on a prospective treatment of the video image. By combining an infra-red camera and an infra-red source placed on the same axis, it is possible, by filtering out the visible light, to obtain a grayscale image in which the light intensity is a function of the depth of the scene: the closer the filmed object is, the more brightly it is lit. One also films the same scene in color. By combining the two captures into a single video stream, one obtains a new video format with four layers: three layers for color (RGB) and one layer for depth (3D). This technology was, IIRC, used by the USAF back in the early 1980s to analyze stationary images for depth.
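The four-layer format described in the quote, three color channels plus one depth channel, amounts to a per-pixel merge of a color frame with a grayscale depth frame. A toy Python sketch (the pixel data here is invented, not the installation's actual pipeline):

```python
# Merge a color frame and a depth frame into "RGBD" pixels.
# color_frame: list of (r, g, b) tuples; depth_frame: list of ints
# 0-255, where brighter means closer (as in the IR technique above).
def merge_rgbd(color_frame, depth_frame):
    return [(r, g, b, d) for (r, g, b), d in zip(color_frame, depth_frame)]

color = [(255, 0, 0), (0, 255, 0)]   # two toy pixels
depth = [200, 15]                    # first pixel near, second far
print(merge_rgbd(color, depth))      # [(255, 0, 0, 200), (0, 255, 0, 15)]
```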

This was an idea I had had long ago, of future interfaces in which a user moves a hand through a hologram, such as a Munsell color space, and the computer reads the motion of hand and eye. There are many implications of this sort of interface, one of which is the inversion of space: instead of the future PDA user being spatially confined to the tiny device she is using, the device projects (using, say, holographic technology) both objects and response-observation into whatever space the user needs.


25 September 2005

New Interface Developments

Via We Make Money, Not Art, I learned about the ICHIM 05 Digital Culture and Heritage Conference and Exhibit in Paris, France. Since this blog is devoted to the evolution of interface, I was naturally very interested in many of the exhibits.

WMMNA was most enthusiastic about the z7 rocking computer (see fig. 1; the screen shows the computer in use). Well, yes, that's certainly novel in the sense that I, too, sometimes like to adopt absurd, fidgety positions while trying to read on the back seat of the bus, but... I see there being a lot of buyer's remorse associated with this product.

"Body-as-Interface" had several projects on interface/display methods. Only the third, "Character," really interested me: The text reacts like a flabby and organic shape, according to the body movements of the writer.

The typing speed defines the letters' width, the pressure on the key sets the letters' thickness, and the typing rate defines the strength of the curves (the more regular the typing, the more squared the letters). The values of the thick and thin strokes are given by the average length of the words (the longer the words are, the more contrast there is between the thin and thick strokes).
While just a gimmick in itself, I can see future versions of this incorporating hand-eye monitors and innovations in displays, especially 3-D displays such as holograms. This brings me to the next exhibit.

In "Screen/Surface," artists display transitory media, such as bubble surfaces and globally distributed web cams. This display (fig. 2) is the Bit-Fall Simulator: a computer screen created by falling drops of water standing in for pixels. While it is visually compelling, it's not the most interesting work of its creator, Julius Popp. That honor goes, no doubt, to the self-conscious robots Adam and Eva, mechanical representations of cognitive studies. Both robots, circular in design and different only in their inner complexity, are placed in a reduced environment. The robots are limited to one degree of freedom: the (double meaning) rotation about themselves. The robots' motion, rolling on two wheels mounted to a wall, is achieved by moving an inward-facing actuator. This changes the robot's balancing point, forcing the body to turn to a new balanced position. The robot's turning and rolling is a visualization of the controller's learning progress -- a picture of the emerging body-consciousness.

23 September 2005

What is IPTV?

Email alert from In-Stat:

IPTV in China will experience only moderate adoption before it takes off in 2008, reports In-Stat. With the emergence of salient applications and a maturing technology, the market is expected to get a boost from the 2008 Olympics hosted by Beijing, the high-tech market research firm says. This is expected to result in 4.5 million subscribers and US$231.3 million in set top box (STB) revenue in 2008.

Internet Protocol Television (IPTV) is television whose content is delivered using internet protocols, regardless of the actual transmission medium. At this point in history, a very large share of internet users get online access via a coaxial cable, just like cable TV. Often I watch segments of the Daily Show online; it arrives via a TV cable that I use exclusively for transmitting digital signals, not the analog signals used in conventional TV broadcasting (or transmission of cable TV). The difference, in other words, lies not in the wire but in the format of the signal it carries.

Television delivered through the internet is not necessarily IPTV, however. Internet television relies entirely on an ordinary internet connection (e.g., a conventional modem) to transmit programs; IPTV requires a set-top box to handle the higher throughput of real-time transmission.

Technically, what the box does is run the signal through a set of microprocessors to obtain a more refined interpretation of it. One way of using a coaxial cable, or Ethernet, to transmit the far larger volumes of data required in real-time TV broadcasting is to have a computer subject the signal to far greater "scrutiny" (my word), allowing the service provider to embed more data in it.

The main provider of IPTV in Asia is presently "now TV," based in Hong Kong.

Labels: , ,

22 September 2005

Operating System (2)

(Part 1)

There are actually a very large number of available operating systems, although the number is generally shrinking. OS's specialized to run on a particular model of hardware naturally fold when the computer architecture becomes obsolete. Those with a small or negligible body of useful applications, of course, don't last either. The success or failure of individual OS's is frequently determined by the corporate strategy of big players; for example, Microsoft's conquest of the market was due not to technical superiority, but to selection by IBM. BeOS, a highly acclaimed system engineered to maximize the potential of the PowerPC chip used in post-1995 Apple Macintoshes, was thwarted as a result of Apple's acquisition of the breakaway software developer Next. Restored CEO Steve Jobs insisted on a unitary Macintosh platform, with one vendor, one system architecture, one OS, one API, no clones, etc. There was no compelling technical reason why Apple chose NextStep over BeOS, or even why it did not allow the two to coexist. The decision was political.

Operating systems fall into different categories based on designer philosophy. In a few cases, those that were too limited in their versatility or adaptability simply became obsolete as their hardware ceased to be made, or else dragged down the entire system by being uncompetitive. But OS's at either extreme have survived. One extreme involves the ideal of the computer as an appliance so easy to use that even the most tech-averse person could use it happily: the computer as toaster. Such computers involve no choices for their user, come ready to use out of the package, and offer little scope for user maintenance (e.g., micro-ITRON). At the opposite extreme are open systems, which require a tremendous amount of comprehension on the part of the user, including shell-script editing; which offer a myriad of choices, such as in data security utilities; and which have many different user interfaces (e.g., for Unix: the X Window System, OpenWindows on Solaris, NextStep, the Common Desktop Environment, the K Desktop Environment, and so on).

Major Issues of Consideration
The tendency of an OS to become widespread or to survive depends upon platform availability (are there machines that can run the system?), platform durability (are the machines running the system going to be around for a while?), and institutional selection (will vendors or institutions choose the system? Will they make the investment of time and money to make it run as needed?).

The most obvious example of this is MS-DOS, which was designed for a specific microprocessor and chipset, and packaged into a system distributed by the world's largest computer company. IBM's awesome market power ensured that anything it released would dominate the market. Naturally, this inspired a lot of wistful speculation: what if IBM had emphasized connectivity instead of speedy development? Its decision imposed a path of technical development that sidelined many promising technologies.

Other systems, like the PDP-11 and S/360, have been influential by virtue of their great duration. The immense cost of database maintenance, accumulated over many decades, has led many institutions to soldier on with one of these systems 40 years after they were originally built (although I expect anyone still using a PDP-11 is either a hobbyist or a curator). Systems that survive a long time have the fascinating feature of spawning programs that are later adapted ("ported") to other technologies; for example, the previously-mentioned PDP-11, with its popularity among universities, became a testbed for numerous operating systems including Unix, VMS,* and lots of computer science Ph.D. dissertations. Of course many of these OS's matured on subsequent systems; that's my point. The ability of OS's to leap from architecture to architecture led to a parallel zoology of exotic software that existed in emulation shells, but also would re-appear as the basis for something very prominent, such as the Java programming language.

The third principle—the importance of strong institutional support—is illustrated by the demise of BeOS. BeOS had some great talent and brilliant ideas behind it. Originally developed for the AT&T Hobbit processor (the Hobbit BeBox prototype), it was quickly ported by its team to the PowerPC (the PPC 603 BeBox).** This made it a candidate for the new Mac OS, and in fact, Apple and Be signed a deal allowing Be to distribute clones of the Mac bundled with BeOS. Later, when Apple acquired Next and Steve Jobs resumed his leadership at Apple, all licenses were terminated and Be was no longer allowed to bundle its software onto Macintoshes. Be then ported its OS a second time, to the Pentium chips (since they allowed ready dual-processor implementation). Still, without any larger institution committed to the BeOS, the company folded in 2001; it was acquired by Palm for $11 million, then launched a lawsuit against Microsoft that bagged another $25 million; subsequently, an applications developer for the BeOS platform, Gobe, took over the publication of BeOS itself. This lasted less than a year before control of the BeOS license passed to a German start-up, yellowTab, and then to Magnussoft. While BeOS users are a passionate, committed lot, they are not an institution, and are therefore prone to fractiousness. Some other BeOS enthusiasts created Haiku, which dilutes the user community.

* VMS is now known as OpenVMS (since 1991). Since it was released at the same time as the VAX-11/780, i.e., in 1978, it had to be developed on a PDP-11. Here is a comprehensive list.

** Be abandoned the Hobbit because it had problems interfacing with the digital signal processor. AT&T later ditched the Hobbit, which was evidently developed to run PDAs.

ADDITIONAL READING & SOURCES: "Overcoming a Standard Bearer: Challenges to NEC's Personal Computer in Japan" (PDF), David Methe, Will Mitchell, Junichiro Miyabe, & Ryoko Toyama; "Competing through Standards: DOS/V and Japan's PC Market" (PDF), Joel West and Jason Dedrick; "How open is open enough? Melding proprietary and open source platform strategies" (PDF), Joel West;

"What is Mac OS X?" Amit Singh.

Technical Specifications for Mac Pro; save any work you have before opening this page, because the page crashed my sessions of Mozilla Firefox.

Labels: ,

20 September 2005

Look Back in Curiosity: After Napster

I was pleased to find a report at the Free Expression Policy Project on the history of audio file sharing in general and Napster in particular.
Since the late 1990s, “peer-to-peer” sharing of popular music has been the copyright industry’s most visible concern, and the Napster case was its first big attempt to stop it. As with DeCSS, the industry decided, at least initially, to go after the technology that enables filesharing rather than users of the technology who actually engage in copyright infringement.
This technology was Napster, a program-cum-service conceived by Shawn Fanning, whose idea was to allow owners of recorded music to share songs online. I happen to be the sort of obnoxious bore who loves to inflict his musical tastes on guests ("Here, check this out...this is Pharoah Sanders with Trilok Gurtu"), and Napster allowed one to meet people who share such freakish tastes and passions.

Napster was promptly sued by a most perverse manifestation of "the suits," namely, the nihilist musical group Metallica. Metallica's lawyers convinced the court that Napster was wholly responsible for any copyright infringement that might have been perpetrated by Napster users, although courts had long before ruled that VCR usage was legitimate. Even on appeal, the court issued an injunction requiring Napster to guarantee that no copyrighted material was being traded through its service.

However, the legal volley against Napster backfired. On the one hand, Napster's centralized system of indexing files for exchange was easy to "take out" through legal action; but networks like Grokster, Morpheus, and KaZaA, which simply linked persons wishing to share files, were not. On the other, in April 2003 the federal courts ruled in favor of MusicCity (the developer of Morpheus), holding that it was illogical to hold file-sharing software liable for copyright infringement. In response to the lawsuit brought by 28 media companies, the federal court ruled that file-sharing software is indeed analogous to VCRs and photocopy machines.

At the risk of being obvious, another obstacle to the media companies was the fact that the internet is universal, not merely American; the file servers for KaZaA, for example, are believed to be located in Denmark, and are in any case not essential to file sharing. Moreover, hardware options for file sharing are also proliferating; the most serious challenge is, of course, the advent of widespread 3G telephony. So, as a crowning irony, even if Morpheus et al. were banned by the courts, programs like these are already obsolete.


What is a Media Adapter?

The digital media adapter is simply a device that allows you to take digital files from your computer and play them on, say, your stereo or TiVo. (Your "computer," of course, can be an iPod.) Now that a lot of home entertainment systems include an MP3 player, it's not unusual for people to want to transfer songs from their computer to their home stereo, or to display snapshots on the big-screen TV.

This one, on the left, is the LinkSys Wireless-B media adapter. It links television sets and radio receivers to your computer. CNet points out that it "can’t handle video, Internet radio, or audio CDs; requires Windows XP and .Net framework; hogs host PC’s resources; incompatible with firewall..." Readers kicked it to the curb. Far superior was the Apple AirPort. (Obvious question: then why, JRM, did you include a photo of the inferior product? Because the LinkSys looks like what it does, thereby helping readers intuit what a media adapter is. The AirPort, while the better device, looks like just another small beige box.)

Moving right along, one feature of intimate computing that interests me is the multifarious uses to which users adapt these devices. One is the adoption of consumer electronics to enhance productivity; another is the use of media playback devices, such as MP3 players, as new forms of artistic creation. One plausible outcome of the category mergers to which the media adapter belongs is increased use of "found" images (via, say, Flickr or PBase) and found audio.

Closely related to the media adapter is the media receiver, a stereo receiver that handles a wide variety of media formats (MP3, etc.). The difference between an adapter and a receiver is that you can plug speakers or (rarely) a television set directly into the media receiver. Both the adapter and the receiver are likely to employ Wi-Fi to link entertainment components.

Most reports I've read, understandably, conclude that this family of gizmos is unlikely to see the massive adoption enjoyed by other media formats, like CDs, DVDs, and MP3s, because people are more likely to get the same features built into mainstream equipment. Also, it may be several years before people get accustomed to sending Grokster, Morpheus, or KaZaA downloads directly to their home stereo system. More likely, they will prefer to download to an iPod or, for older users with legacy equipment in mind, to a CD. This, being widely anticipated by original equipment manufacturers (OEMs), will keep prices high.

Labels: , ,

19 September 2005

Multitasking
Multitasking is the ability of a computer operating system (OS) to perform multiple tasks at the same time. Whether a computer can multitask is an attribute of its kernel. A multiuser OS is not the same thing as a multitasking OS; a true multiuser OS must support multiple user privilege domains, as well as multiplexed input and output devices.

Cooperative multitasking requires that application programmers anticipate interruptions and allow for them; it was used on smaller processors, such as those running the early releases of Windows. The running programs must send "calls" (messages) to the kernel announcing, in effect, that the program can now yield to another running program. There are obvious problems with cooperative multitasking, such as the fact that programs may neither know nor "want" to share resources. It is now considered an obsolete technology.
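The yield-based scheme can be sketched with Python generators, where each task must explicitly hand control back to a scheduler. This is a toy illustration of the principle, not how early Windows actually implemented it:

```python
# Toy cooperative multitasking: each task is a generator that must
# explicitly yield control back to the scheduler, mirroring the
# "calls" to the kernel described above.
def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name}{i}")  # do one unit of work
        yield                     # voluntary yield point

def run_cooperative(tasks):
    ready = list(tasks)
    while ready:
        for t in list(ready):
            try:
                next(t)              # resume task until its next yield
            except StopIteration:
                ready.remove(t)      # task finished

log = []
run_cooperative([task("A", 2, log), task("B", 3, log)])
print(log)  # round-robin interleaving: ['A0', 'B0', 'A1', 'B1', 'B2']
```

A misbehaving task would simply do all its work before its first yield, starving the others; nothing in this scheme can stop it, which is exactly the weakness described above.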

Pre-emptive multitasking is where the operating system allocates time slices among the programs running on the processor. All memory is addressed in the same way, so that no application can "know" whether a memory address refers to physical RAM or to data paged out to a hard drive; the kernel, with its monopoly on this sort of information, can easily thwart any effort by an individual application to preempt rival programs. (Memory addressing that uses the same address format for all memory types is called "flat memory.") Today, nearly all systems support preemptive multitasking.
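Time-slicing can be simulated in a few lines: here each program is reduced to a count of work units, and the scheduler preempts it after a fixed quantum whether it cooperates or not (the task names are made up):

```python
# Sketch of preemptive time-slicing (a simulation, not real kernel
# code): the scheduler charges each task a fixed quantum and switches
# away regardless of the task's wishes; tasks never yield voluntarily.
from collections import deque

def schedule(tasks, quantum):
    """tasks: dict of name -> units of work remaining. Returns run order."""
    queue = deque(tasks.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)
        timeline.append((name, ran))
        if remaining - ran > 0:
            queue.append((name, remaining - ran))  # preempted, requeued
    return timeline

print(schedule({"editor": 5, "compiler": 3}, 2))
# [('editor', 2), ('compiler', 2), ('editor', 2), ('compiler', 1), ('editor', 1)]
```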

Multithreading is another form of multitasking in which the tasks running concurrently share information by sharing a memory space (i.e., a range of memory addresses). Threads are a method by which a program splits its operation into multiple concurrent streams of execution. The processor may actually switch among threads (time-slicing), or it may delegate different threads to different processor cores, so that they run literally simultaneously. Either way, the output of the threads is written to the same memory space. Moreover, threads carry very little state information of their own.

Threads represent a minimal type of program fork (i.e., a splitting off); all of the threads forked off a single process share state information. Individual threads communicate with each other through shared memory, ports, virtual sockets, message queues, and distributed objects. Since the threads share memory addresses and state information, implementing these methods of communication requires techniques different from those of multiprocessing.
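Shared-memory communication between threads can be sketched with Python's threading module: several workers update one shared object directly, with a lock providing the coordination that shared state requires.

```python
# Threads share one address space: every worker updates the same
# counter object directly, using a lock to serialize access (one of
# the shared-memory communication methods mentioned above).
import threading

counter = {"value": 0}
lock = threading.Lock()

def worker(increments):
    for _ in range(increments):
        with lock:                 # protect the shared memory
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])  # 4000: all four threads wrote to the same memory
```

Without the lock, the read-increment-write on the shared counter could interleave and lose updates, which is why thread communication needs techniques of its own.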

Multiprocessing is an attribute of CPUs that imposes special demands on the OS. It involves the use of multiple processors in a single computer; today, many microprocessors are actually designed to contain multiple processing units on the same chip. However, even if a CPU has many cores and can sustain many threads in a truly concurrent mode, there is still a distinction between processes and threads. While threads are essentially very similar subroutines which share a memory space and carry little state information, processes are distinct from each other, address different memory spaces, and carry considerable state information. A process includes a loaded version of the executable file, its stack, and kernel data structures.

Processes represent a fork of the running program (i.e., a splitting off) that contains most of the essential features of the program itself. A process is ended with an exit system call, and while it is running it is known to the operating system by a unique process identifier (PID). Processes communicate with each other using named pipes, sockets, shared memory, and message queues.

There is a logical structure here. In the first case, cooperative multitasking, the kernel and the host CPU have abdicated responsibility for allocating resources among the programs running on them. This was because the hardware lacked a periodic clock interrupt, or else didn't have a memory management unit (MMU). Software engineers adapted to customers' demands for software that could share a processor without monopolizing resources. The kernel still had to create the call stack and assign memory addresses, but it did so on the assumption that the program it was working on at that particular instant would run in perpetuity; the program itself had to yield.

Newer processors were designed to multitask, which meant they now had complex MMUs. The kernel now had asymmetric information and took no cues from the programs it was running; the way in which the processor arranged the call stack or assigned flat memory was no longer visible to the programs themselves.

At a still higher level of sophistication, the CPU may be running a program that contains mutually related threads of execution, such as a graphics program. These multiple threads are arranged so that their data inputs and outputs actually live in the same memory space; the kernel's job is to know which belongs to what.
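Because the threads of one process share a single memory space, they can write into the same data structure directly, but they must coordinate their access; a lock is the simplest mechanism:

```python
import threading

shared = []                # one address space: every thread sees this list
lock = threading.Lock()

def worker(name):
    for i in range(3):
        with lock:         # coordinate writes to the shared structure
            shared.append((name, i))

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(shared))  # 6: both threads wrote into the same list
```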

ADDITIONAL SOURCES: Wikipedia entries for multitasking and pre-emptive multitasking, message queuing, and thread; eLook Computer Reference, "Pre-emptive multitasking" vs. "Cooperative multitasking"; IBM, "Parallel Environment for Linux—Glossary"; Liemur "Threads: Inter-Thread Communication"; [Apple] Developer Connection, "Thread Communication";

Operating System (1)

Operating systems are often described as the layer of computer software between the physical circuitry of the machine itself, and the programs that the user wants the computer to "run." Essentially, the operating system consists of a set of programs that manage the physical resources of the computer, such as logic, memory, and processors. Part of the operating system must also orchestrate the running of the programs on the system resources.

In order to understand how an OS works, it's handy to consider what happens when one turns on a computer. Some set of instructions is required so that the computer processor can locate the other programs required to function. So we have the OS kernel, which loads when the computer is turned on and tells the processor to load additional programs such as device drivers for the graphics hardware and printers. As these programs are loaded and the processor begins to run them, they carry out a larger and larger number of tasks until, after a minute or two, the computer is fully operational. This process is known as bootstrapping.

(This comes from an old expression, "to lift oneself up by one's bootstraps." It is used to refer to a small, weak, or simple process systematically bringing about a big, powerful, or complex process. This is what happens during bootstrapping. The kernel of the OS launches larger, more complex programs, which can, in turn, allocate progressively more and more of the computer's resources.)

When a computer is turned on, the first code to run is a basic input-output system (BIOS) stored in special, permanent memory chips on the motherboard. Permanent memory is expensive, and computer engineers like to leave a lot of options available to the computer retailer; one of these is where the hardware should go for that all-important kernel. In the 1980's, it was usually the floppy disk drive; later, in the 1990's, the CD-ROM drive. Most recently, in the work environment, it's been the network interface card (NIC). The BIOS is sometimes described as a layer of software underneath the operating system; it's also known as firmware, because it lies between software and hardware.

As mentioned above, the kernel is the core of the operating system (hence, the name—as in, a "kernel" of wheat). By this, we mean, it's the part of the operating system that actually organizes the tasks that the computer is supposed to do. The complexity of the modern operating system is determined in large measure by the variety of resources that the OS must manage—e.g., multiple processors, graphics cards, multiple types of system memory—and the variety of tasks that it is likely to manage, such as the drivers for all of these resources. The kernel typically responds to system calls, or messages that an application sends to the kernel requesting a system resource.
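System calls are easy to observe from Python, whose os module exposes thin wrappers over them; each of the calls below (open, write, read, close, unlink) crosses from the application into the kernel:

```python
import os
import tempfile

# Each os.* call here is a thin wrapper over a kernel system call.
path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")
fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"hello, kernel")
os.close(fd)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)
os.close(fd)
os.unlink(path)     # another system call: remove the file
print(data)
```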

Kernels used in modern (post-1975) operating systems are capable of running more processes than the computer is "officially" capable of, by prioritizing system memory and processor time. Hence, abundant secondary memory is used for the overflow from overtaxed primary memory, a scheme known as virtual memory; without it, the OS would have to disallow more than a few application windows. The method used by an OS to handle multiple tasks is a basic feature of its architecture. In some cases, the kernel interrupts running programs on a schedule and arbitrates among them to award slices of processor time ("pre-emptive multitasking"); otherwise, each program must voluntarily yield control to its peers ("cooperative multitasking"). The purpose of the kernel is to allocate the memory and CPU resources to each of the programs running on the computer. The kernel is also responsible for arranging tasks and communications between them. In order to run a program, the kernel is responsible for establishing an area of available memory (known as an "address space," or range of memory addresses) and also a "call stack," or serial arrangement of instructions.
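The virtual memory idea can be sketched as a toy address translation: the MMU (consulting a page table the kernel maintains) maps each virtual page to a physical frame, and a missing entry is a page fault that sends the kernel to secondary memory. The page size and mapping below are made up for illustration.

```python
PAGE_SIZE = 4096

# A toy page table: virtual page number -> physical frame number.
# The numbers are hypothetical.
page_table = {0: 7, 1: 3, 2: 9}

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:
        # In a real OS this is a page fault: the kernel would fetch
        # the page from secondary memory (disk) and retry.
        raise MemoryError(f"page fault at page {page}")
    return page_table[page] * PAGE_SIZE + offset

print(translate(4100))  # virtual page 1, offset 4 -> frame 3
```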

The design of a kernel is not only the most technically demanding aspect of systems programming; it is also fraught with controversy. The multitasking strategy can have unforeseen consequences, while the method the kernel employs to set up call stacks is decisive for its ability to run multiple processes effectively.

Compilers & Assemblers
Compilers and assemblers exist to translate programming languages into instructions that can be followed by the computer hardware. Compilers translate high-level programming languages into machine code, while assemblers translate low-level programming languages into machine code. High-level programming languages include Basic, Fortran, Pascal, and others, all of which use words from a natural language (usually English) and familiar mathematical operators. Assembly code is a symbolic representation of machine code, the binary code of 1's and 0's that is actually intelligible to the computer. Assembly (or low-level) programming languages typically are specific to the machine they were designed to run on, and hence tend to require highly specialized expertise to use.
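The translation step is easy to peek at in Python: the dis module shows the lower-level instructions a function compiles to. (CPython emits bytecode for a virtual machine rather than real machine code, so this is an analogy for compilation, not the thing itself.)

```python
import dis

def add(a, b):
    return a + b

# List the low-level instruction names that `add` compiles down to.
instructions = [ins.opname for ins in dis.get_instructions(add)]
print(instructions)
```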

A compiler is a different program from the operating system kernel (especially under a microkernel architecture), but it is so essential to the functioning of the OS that the two are closely related. In some cases, such as MS-DOS, compiled programs access the system hardware directly; since DOS programs nearly always run on Intel 80x86 derivatives, the question of which machine code the compiler must target scarcely comes up. In other cases, such as Unix, the compiled program communicates with the kernel through system calls, and it is the kernel that must communicate with the hardware.

An operating system usually requires many compilers, since programs are typically written in a language optimized for a particular activity; hence, ordinary computers must recognize many different programming languages. Assemblers, in contrast, are typically specific to a processor type.

I'm going to be discussing shells in far greater detail in the sequel to this post, but the shell is a "layer" of programming code that interfaces with the kernel and with applications the computer is running. In essence, the shell displays the program output, and cues the user for input.
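The shell's essential cycle, reading a command, running it, and displaying its output, can be sketched in a few lines of Python; this is a toy for illustration (POSIX assumed, since it invokes the echo command), not a real shell:

```python
import shlex
import subprocess

def run_line(line):
    """Parse one command line and run it, returning its output."""
    argv = shlex.split(line)                 # split the line like a shell would
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout

# Read a command, run it, display the program's output to the user.
output = run_line("echo hello from the shell")
print(output, end="")
```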

During the 1980's it became common for shells to incorporate a graphical user interface (GUI), as in Mac OS, MS Windows NT, and Solaris. This served to make the OS easier to use, and also to protect the kernel from user error. In contrast, command-line interfaces (CLIs), such as DOS, require the user to interact via typed commands. Both have their enthusiasts, and it ought to be pointed out that the most common GUI, MS Windows, has a special DOS "shell" (actually not a true shell so much as a DOS emulator inside the Windows shell). GUIs represent the file structure and data contents of files using graphical analogies, such as little pictures of manila file folders or sheets of paper. CLIs, in contrast, use alphanumeric characters.

In common usage, "shell" almost always refers to a command-line interface in which one interacts directly with the OS. It is almost never used to refer to GUIs, except by way of analogy (as I have done here).

Libraries are collections of standard subroutines used by many or all of the applications a computer is likely to run. OS's with comprehensive libraries can sharply reduce the required size of applications, since an application need only invoke a subroutine rather than carry its own copy of it.
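Calling into a shared system library, rather than reimplementing a routine, can be demonstrated with Python's ctypes loading the C runtime; the library's filename varies by platform (e.g. libc.so.6 on Linux), so we ask ctypes to locate it:

```python
import ctypes
import ctypes.util

# Locate and load the C runtime shared library for this platform.
libc_path = ctypes.util.find_library("c")
libc = ctypes.CDLL(libc_path)

# Reuse the library's abs() rather than shipping our own copy.
print(libc.abs(-5))
```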

Utilities are programs that help the user manage the hardware, software, network connections, and files. For example, a compression utility allows the user to save files of any format as smaller files; it does not allow one to open or edit the files, but it does allow one to decompress them. Another type of utility is the Windows task manager, which allows the user to terminate non-responding processes.
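The compression-utility behavior described above, smaller on disk, not editable in place, but losslessly recoverable, is easy to see with Python's zlib module:

```python
import zlib

data = b"the same phrase repeated " * 100
packed = zlib.compress(data)

# The compressed form is smaller on disk...
assert len(packed) < len(data)
# ...and decompression restores the original exactly (lossless).
assert zlib.decompress(packed) == data
print(len(data), len(packed))
```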

Typically such utilities begin as third-party software and are later integrated into the OS.

ADDITIONAL READING & SOURCES: Wikipedia entries for operating systems: OS/360, VM/CMS & MVS (IBM mainframes); SDS 940 (Scientific Data Systems); CP/M (most early microcomputers); MCP (Burroughs); EXEC 8 (Univac); OpenVMS (Digital Equipment Corp.); Unix; MS-DOS, OS/2, Windows, NT (Microsoft); Mac OS, Mac OS X (Apple); Mach kernel (NeXT); X Window System, Solaris (Sun Microsystems)

Wikipedia entries for OS concepts: BIOS; bootstrapping; command line interface (entry could use a clean-up); graphical user interface; kernel; NTLDR (Windows NT); nonvolatile BIOS memory; library; shell; utility;

IBM Journal of Research & Development; Triangle Digital Support, "Pre-emptive Multitasking Explained"; "Shared Libraries," Linkers and Loaders, John R. Levine

Reshaping Narrow Law & Art: post about IDE's discusses compilers and assemblers


17 September 2005

Massive Losses to Identity Fraud

From the Register:
Reported ID theft losses represent only the tip of an iceberg, dwarfed by fraudulent losses run up by crooks assuming completely fictitious identities, according to analysts Gartner.

It reckons ID theft will claim 10m US in 2005 resulting in losses of around $15bn from 50m accounts. By comparison "victimless" fraud - bad debt run up in the name of non-entities - will hit $50bn this year. [...] US banks are so keen to recruit new customers they will open up accounts on the basis of identification from only a pay-as-you-go mobile phone bill (a type of account that is even easier to open) without checks on the validity of supplied social security numbers. Once a bank account is open crooks will pay bills religiously, eventually earning enough trust to obtain credit cards with higher and higher limits.

A lot of populist readers out there may be tempted to run out and try this. It strikes me as an example of the moral hazard I feared might arise from the latest bankruptcy bill. For those of you unfamiliar with recent US congressional activity, in March our Congress passed a bill that made our bankruptcy laws much tougher on debtors, while making it easier for banks to collect on unsecured debt. Any honest economist (as opposed to the prostitutes employed by the financial sector) would advise against this, on the grounds that it encourages banks and other financial intermediaries to make it easier for unworthy borrowers to borrow, since the lenders know they will have an easier time collecting, on the government's dime, of course.

In order for an economy to function, it is necessary for borrowers to be financially responsible. But under the new legislation, all of the onus was placed on the borrower, while lenders send out almost 2,000 pieces of mail for each new account they open. That this would lead to endemic fraud seems like something one could reasonably expect most economists to anticipate. Didn't happen.

About the iPod

By now, I presume nearly all readers have heard of the iPod. Here's a page at Apple illustrating the device.

This is essentially an MP3 storage device that stores up to 10,000 songs (assuming a Pop music format; I suppose for classical fans like me, that doesn't translate to 10,000 symphonies). The iPod was released in 2001 and made an impact partly through its immense popularity as a musical playback medium, partly through its popularization of MP3's as a medium for propagating music (in essence, allowing hobbyist-bands to enter the market with free samples of their songs; also, a revival of the political ballad, recorded to influence opinions rather than make money for the artist), and partly through its groundbreaking commercials.

However, the iPod was also more expensive than existing alternatives; and while it allowed one to carry one's entire music collection everywhere, it also required one to digitize that collection first. Since then, the iPod's price has declined, and Apple has introduced a smaller model, the iPod Nano (Daniel's News Space).

Unlike the iPod and the iPod mini it replaces, the iPod Nano employs flash memory rather than the tiny hard disk drives developed as PC cards for laptops. The Nano stores only 1,000 songs, reflecting the finding that about a thousand is the typical size of a user's collection. I've been a little startled at the strong emphasis on reduced size; it seems to me that there will perhaps be joint efforts between Apple and digital camera producers to incorporate the iPod into cameras, or perhaps other PDA devices. At this point, I'd think it's handier to have a single device that does many things than to have half a dozen gizmos, all quite small, with different features.

Besides, cross-marketing could get a lot of marginal users to become accustomed to using "the other half," i.e., if you get an iPod-Olympus digital camera, and you normally don't listen to music so much, having the iPod there all the time might get you habituated to using it.


14 September 2005

This is Me 2 Years Ago

In regard to my previous post, this writer expresses some extremely idealistic views about the internet and its power to challenge bureaucracies. One difference is that the linked author has better writing skills, but I presume that's obvious.
This partisan practice of waging malicious and unprincipled disinformation campaigns is so far beyond bad journalism that it resembles mental illness more than it resembles rational political discourse. Although we must exercise extreme caution before characterizing the behavior of any political group as pathological, there are situations in which such a characterization is sadly accurate. In this particular instance, there is a well-documented pathology, complete with classic psychological manifestations like the idealization of self, the demonization of others, the gross distortion and outright denial of factual realities, and the projection of one's own unacknowledged motives and behaviors onto others.
Me c. 2003 says, right on! But the blogger network has proven more a bureaucratic intranet (i.e., a tool for transmitting memos and corporate mission statements, or career assignments) than a tool for dissent.

Technology and Bureaucracy (Part 3)

(Part 1)

In his excellent book, The Soul of a New Machine, Tracy Kidder remarks that the computer is a "status quo" technology because it facilitates the running of bureaucracies; additional uses, such as matching the power of bureaucracies to do mass mailings or archive research data, are relatively unthreatening to the bureaucracy they are used against. Kidder, unfortunately, doesn't dwell on this point; the review I linked to, while generally favorable, correctly objects that the book is by and large a heroic portrait of the people who introduce new technology products. The point about technologies helping either bureaucracies or their opponents, though, has fascinated me for decades.

While the stand-alone minicomputer that Kidder wrote about was obviously suited mainly to making bureaucracy more efficient, it's a little harder to see how the internet, let alone 3G and PDAs, would do so. For the last three years I've tended to harbor a furtive hope that blogs would strike a mighty blow against bureaucracy's steady encroachment on all forms of reality. At night, as I lay in bed waiting to fall asleep, I wondered what utter defeat would look like: a cyberspace in which useful websites used technology that required costly development tools to implement, shutting out people who did not make money from their sites; or else, internet service that furnished users with "smartbrowsers," browsers with search engines that confined the user/subscriber to selected sites.

I have to say that I think the PDA, so far, looks like it is turning into this. The cost of implementing new 3G technology has been so great, and PCS providers have sunk so much into market entry, that I suspect those providers are the devils in a Faustian bargain with the governments that sold them wireless channels. They're going to recoup that money through commercials, not user subscriptions. That, and consumerist quietism, are strangling us.


Technology and Bureaucracy (Part 2)

(Part 1)

Blogs, independent news gathering, and the connected PDA are all challenges to bureaucratic power. Bureaucracies, whether belonging to firms, NGOs, or governments, are essential tools for the services those organizations render, but they have to be accountable. If not, they become a complete menace to human freedom and safety.

This is so obvious I feel embarrassed having to say it. Put abstractly, it is non-threatening enough, even vapid. Applied to specific situations, like the military conduct at Guantanamo Bay, one is likely to be called a traitor by high ranking government officials. Likewise, libertarians don't win my respect when they essentially reason that firms that ruthlessly suppress independent information about themselves, are somehow entitled to because "they're private." In other words, libertarians seem anxious to establish that, in the power struggle between the corporation, state, and individual, if the corporation wins absolute control, then it's absolutely OK. No matter how the corporation uses that power. Such prima facie defenses, in my opinion, prove immutably that the people who propound them are fanatics, as dangerous to freedom as fascists are.

And the fact is that this is one of the reasons why "intimate computing" has failed to pose a significant challenge to bureaucracy. Bureaucracies seem to have recruited robotic defenders, not merely of them, which should be understandable, but of their worst excesses. The tiny minority of blogs or independent media sources that actually do show genuine signs of intellectual curiosity, are bogged down by human "bots" that rabidly attack them and slime them. The affair of "Rathergate" is quite illustrative; while companies like Clearchannel can set up AM radio stations to broadcast the "movement conservative" message everywhere in the country, they require a steady stream of petty scandals to demonize their enemies. I occasionally monitor AM radio stations, and I've noticed that they don't actually promote a positivist or normative message at all—not anymore. Nowadays, it's all hate all the time—hatred not merely of "leftists," "feminazis," "the homosexual agenda," or the "liberal elites," but also of conservatives who depart, however briefly, from the "movement." In order to personally slime even the most impeccably credentialed conservatives, "movement conservatives" require a machine that digs up dirt or circulates slander that isn't even true. In this respect, the power of the internet to hunt down minions and duplicate talking points, seems an insuperable weapon of bureaucrats to protect themselves.

(Part 3)


13 September 2005

Technology and Bureaucracy (Part 1)

Bureaucracies, whether belonging to corporations or governments (or to NGOs), are far more similar than most ideologues imagine them to be. One of the objections to anarcho-capitalism (usually referred to by its adherents as "libertarianism") is that it regards only government power as insidious; corporate power, enhanced as it often is by a weak state, is regarded as harmless or "natural," and therefore legitimate. Self-identified "libertarians" are likely to claim relations with a corporation are voluntary, whereas those with a state are not. But employers in a labor market nearly always have identical employment policies, and the ability of frustrated employees to "seek employment elsewhere" is usually academic at best. It's certainly easier for a determined millionaire to evade taxes on all but a token share of his income than it is for most wage employees to do anything whatsoever to countervail the power of their employer over their lives.

This is not a leftist rant (for one thing, I'm not a "leftist" and I suspect this asymmetry of power is simply inevitable). However, I do want to alert readers to the fact that we live in times when the mere observation of facts is presumed to expose one to the allegation of thought-crime. It's disturbing to me, even now, that about two thirds or so of all blogs with any interest in public matters, display precisely zero intellectual curiosity. Rather than serve as a journal of the personal explorations of the writer, they serve as billboards for entirely Manichean world views. This is so even for bloggers who are professors or college students; the blogs don't express a point of view, so much as a body of beliefs that the reader must "take or leave." Facts that, however superficially, deviate from this belief system are accepted as evidence that whoever presented them is actually a "troll" or a stooge of "the other side." I'll be referring to this point in my next post.

12 September 2005

Some notes on the Katrina Aftermath

Over the last month I've been extremely reluctant to post because I prefer to perform some research on the items I post, and compose a short essay. And over this time, my attention was directed intensively elsewhere. Another matter, of course, was the enterprise of finding meaningful things to post about. I'm hoping that, starting today, I'll be able to resume regular posting.

Like nearly everyone with a blog in the USA, I've been preoccupied by the recent tragic events in the Gulf States of Louisiana, Alabama, and Mississippi. Since then, I've done a substantial amount of research into hurricanes, emergency preparedness, and urban development in the region in order to bring myself up to speed, and I have observed several things that I believe are relevant to intimate computing.

The first is that intimate devices played a surprisingly minor role in the post-hurricane phase of Katrina's onslaught. Aside from a few anecdotal exceptions, I looked in vain for evidence that PDAs or 3G phones had significantly affected rescue operations. Boing Boing, for example, had a series of posts on the hurricane and the suffering it inflicted.
Boing Boing, like Indymedia, links to a story by Jacob Appelbaum about setting up a radio station in the area. The point, of course, was that radio transmission technology that had been miniaturized to lunchbox size before the Vietnam War was the technology required for independent communications.

Another point was the complete indifference of PDA developers to the calamity. During the tsunami that struck Banda Aceh and Sri Lanka last winter, journalists filed relatively lame stories to the effect of "People with cell phones are using text messages to fill the gap left by destroyed landlines and (in some cases) the Indonesian military's crackdown." However, cell phones and pagers have been such a stopgap since at least 1986, when pagers were used to coordinate the demonstrations that ousted Ferdinand Marcos. Since then, the security police in countries like Uzbekistan (where the Ceaușescu-like dictator Islam Karimov perpetrated his Timişoara-style massacre in Andijan) have learned to monitor and shut down those same networks. The jack-booted thugs and heavy-handed tactics of mid-20th-century repression are no longer needed, and we need to understand that, so far, gains in technology made since the cell phone became widespread have on balance abetted bureaucracies, not the forces of accountability over those bureaucracies. And I'm sorry to have to say that, but it seems like an immutable reality.
