21 January 2008

Intimate Computer: Description and Analysis

"Intimate computing" is a term I began using to refer to personal digital assistants (PDA's) and 3G cellular phones. PDA's are electronic devices that are designed chiefly for storing or retrieving printed information, which are also very small. The most common are the RIM Blackberry, the PalmOne Tungsten and (formerly Handspring, now PalmOne) Treo, and the PocketPC (all links are to image searches). Initially PDAs amounted to beefed up calculators (Sharp Wizard), but now your PDA is likely to come with cellular functionality. Likewise, 3G cell phones refer to cell phones with a lot of the features commonly encountered on PDAs.

What I find interesting about intimate computing is not the amazing features that have been incorporated into these devices, but their social impact. My training is in economics, so naturally I've been interested in the way the contemporary 3G/PDA/GPS device affects people's behavior. An incidental feature of this particular branch of technology is that intimate computers are experiencing amazing technical ferment. In the 1990s, Palm Pilots were essentially organizers; since then, they and their competitors have incorporated cellular telephony, digital cameras, GPS, music playback, and television. Market penetration has grown enormously, and the cultural influence of intimate computing has become significant.

The same usage appears in Lamming & Flynn (1994):
Our vision of the PDA is not confined to a device the size of a notebook, but includes something much, much smaller, perhaps the size of a watch or piece of jewellery — a device that can be worn and taken everywhere. Indeed, we expect to see elements of the PDA embedded in most current portable devices — cell phones, for example. These tiny PDAs will include wireless communication facilities allowing them to collaborate with other similar devices and nearby services. Our interest is in understanding the opportunities presented by a world in which we can rely on a large proportion of our users having a powerful computer with them at all times.

One possible consequence of wearing a computer is that it can be much more useful to you personally. Since it always accompanies you, and nobody else, it makes special sense to tailor its behaviour to your own special needs. Moreover, because it will be involved in many of your activities, it can become intimately familiar with them, and adapt to them like a personal assistant.
Crucial to this concept is an understanding of the term "intimate." The word is often used as a surrogate for "sexual," but it is also used for "personal" and "knowledgeable." A person can have a very personal relationship with another, in the sense that the two people share details of their private lives that are normally hidden from public view; or, conversely, someone can possess detailed and comprehensive information about someone/something else. A polymath well advanced in years might have a nurse who attends to her decrepit bodily functions, and yet not know that she developed a new branch of mathematics when she was in her thirties. She might have an admirer who knows all about the latter, but not about the former.

"Intimate" can mean either or both. Presumably an intimate computer can manage your triple life (as pianist, courtesan, and bank robber), because it can not only store information about performances, trysts, and swag, but because it is physically present at all times, small, and contextual. The GPS detects when you're in the Warsaw Philharmonic Hall, when you're in the Blue Angel Motel, and when you're in the basement of the bank, and provides you with relevant data accordingly. The data storage is designed around episodes rather than conventional branches. When the user is close to certain devices (e.g., a particular computer, printer, cellphone) the intimate computer records salient details of each operation the user performs using the device. For example, if the user makes a phone call, the calling and called numbers would be recorded along with the start time and call duration.

In the survey by Bell et al. (2003), several writers are cited who describe a less benign image of intimate computing:
"Intimate computing" has also occasionally been used to describe a different kind of intimacy – that of closeness to the physical body. In 2002, the term appears in the International Journal of Medical Informatics along with grid computing and micro-laboratory computing to produce "The fusion of above technologies with smart clothes, wearable sensors, and distributed computing components over the person will introduce the age of intimate computing." Here "intimate computing" is conflated with wearable computing; elsewhere intimate computing is even subsumed under the label of wearable computing. Crossing the boundary of skin, Kurzweil paints a vision of the future that centralizes a communication network of nanobots in the body and brain. He states "We are growing more intimate with our technology. Computers started out as large remote machines in air-conditioned rooms tended by white-coated technicians. Subsequently, they moved onto our desks, then under our arms, and now in our pockets. Soon, we'll routinely put them inside our bodies and brains. Ultimately we will become more nonbiological than biological."
Here, of course, the quotation from Kurzweil also conflates machinery with technology (RNL&A).

Bell (2004, p.2) makes the argument that what makes computing intimate is not the physical potential of the machinery involved, but the modality of its use: she cites the use of the internet for dating, erotica, and research of religious beliefs. She alludes to the familiar cliché that people often divulge personal details in public spaces that they would hide from those who know them. But of course, people browsing for erotica online expect to do so anonymously; it's precisely the extreme (if illusory) anonymity of cyberspace that makes it so attractive. The other attraction is, of course, that search engines are both efficient and emotionally inert. It's a bit analogous to inserting prayers written on scraps of paper between the courses of the Wailing Wall. The prayer is anonymous in the sense that anyone who sees it will almost certainly be a total stranger, with no investment in the information it contains. Bell also cites cell phones available to Southwest Asian subscribers that help the user locate the direction of Mecca for prayer (one example is the F7100 Qiblah). This is actually something Muslims have done for as long as there have been Muslims: use the latest technology to assist in their religious rites (aircraft for the hajj, astronomical instruments for establishing the moment Ramadan begins, and so on). It seems highly likely that there are indeed culturally peculiar ways in which computing becomes intimate, but examples are hard to find.
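Incidentally, the computation behind the qibla feature is modest: an initial great-circle bearing from the phone's GPS fix to the Kaaba. A minimal Python sketch of my own (certainly not the F7100's actual firmware):

    import math

    KAABA_LAT, KAABA_LON = 21.4225, 39.8262  # coordinates of the Kaaba, Mecca

    def qibla_bearing(lat, lon):
        """Initial great-circle bearing from (lat, lon) to the Kaaba,
        in degrees clockwise from true north."""
        phi1, phi2 = math.radians(lat), math.radians(KAABA_LAT)
        dlon = math.radians(KAABA_LON - lon)
        x = math.sin(dlon) * math.cos(phi2)
        y = (math.cos(phi1) * math.sin(phi2)
             - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
        return math.degrees(math.atan2(x, y)) % 360

    print(qibla_bearing(52.23, 21.01))  # from Warsaw: about 148 degrees (SE)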

(She cites other, more compelling examples of differing cultural norms. In some societies, people enjoy pornographic magazines in trains and cybercafés because doing so at home would insult their families and violate their homes. In the USA, of course, social sanctions against possession, not to mention public consumption, of pornographic magazines are quite severe.)

I've mentioned the idea of computers being intimate by
  • being useful for a broad range of personal functions (camera, phone, organizer, navigator);
  • using detailed context about the user to make themselves more useful;
  • and being culturally adopted for private functions that usually are hidden from public scrutiny.
Other senses of intimacy might pertain to
  • the increased use of haptic interfaces for PDAs and cell phones, not only for data input but also for output and pleasure;
  • the implementation of ecologically sustainable/biologically appropriate technology, so that the bureaucracy associated with design, production, and sale of technically advanced products is not so immediately dependent on coercion.

Sources & Additional Reading:

Genevieve Bell, Tim Brooke, Elizabeth Churchill, & Eric Paulos, "Intimate (Ubiquitous) Computing" (PDF), Intel Research (2003)

Genevieve Bell, "Intimate Computing?" (PDF), IEEE Internet Computing (2004)

Mik Lamming & Mike Flynn, "'Forget-me-not': Intimate Computing in Support of Human Memory" (PDF), Rank Xerox Research Centre, Cambridge Univ., UK (1994)


Virtual Geographic Environments

This post connects several ideas related to virtual geographic environments (VGEs), mainly having to do with motivations for creating them. Notice that the word "creating" can refer to the initial setup of a VGE, like Google Earth, and also to the creation of "posts" or uploads of data by unrelated parties: in the case of Google Earth, this includes the posting of photographs of places by visitors.


CAPTION FROM GOODCHILD (2007): "A Google Earth mash-up of the area of Soho, London. The contemporary imagery base has been obscured by an 1843 map from the David Rumsey collection. Superimposed on this are the deaths (green) from cholera in the outbreak of 1854, and the water sources."
Google Earth is actually a replica of an existing landscape that can be used for navigation or for sharing information. "Second Life" is an imaginary landscape, analogous to the maps of Middle Earth found in copies of The Lord of the Rings. Much of this post refers to a paper by Prof. Michael F. Goodchild, "Citizens as Sensors: the World of Volunteered Geography" (PDF).1 Goodchild excludes VGEs that do not correspond to literal geographies, although in the future this distinction may be hard to maintain.

Nevertheless, the events of 1507 [viz., the naming of two continents "America"] provide an early echo of a remarkable phenomenon that has become evident in recent months: the widespread engagement of large numbers of private citizens, often with little in the way of formal qualifications, in the creation of geographic information, a function that for centuries has been reserved to official agencies. They are largely untrained and their actions are almost always voluntary, and the results may or may not be accurate. But collectively, they represent a dramatic innovation that will certainly have profound impacts on geographic information systems (GIS) and more generally on the discipline of geography and its relationship to the general public. I term this volunteered geographic information (VGI), a special case of the more general Web phenomenon of user generated content.[...]
What's extraordinary here is the sheer flow of contributed data; voluntary and amateur research has been used by geographers all along.2 But that consisted of thematically specialized data; now we're talking about factual information about a gigantic number of points on the map, such as business reviews embedded in a digital map (allowing you to find relevant services).

Projects such as OpenStreetMap are wiki equivalents of Google Earth. Having used both, I've noticed that OpenStreetMap has a different mix of information, which makes it useful for, e.g., people investigating real estate development in a selected area.

Digital maps allow the creation of "mash-ups" that incorporate a wide range of data from different sources. Goodchild mentions a map of the 1854 London cholera epidemic (p.6; links added):
For example, Figure 5 shows a Google Earth mash-up of the Soho area of London during the 1854 cholera outbreak made famous by Dr John Snow (Johnson, 2006). It combines a street map of London from 1843 (from the online private collection of David Rumsey, a San Francisco map collector) with online data on the water sources and cholera deaths from my own Web site. Readily available software makes this kind of mash-up remarkably easy (see, for example, Brown, 2006).
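Goodchild is right about how easy this is. As an illustration of the mechanics (a sketch of my own, with placeholder coordinates rather than Snow's actual data), a few lines of Python suffice to write point records out as KML, the overlay format Google Earth reads:

    # Placeholder records near Soho, London: (longitude, latitude, label).
    deaths = [(-0.1367, 51.5134, "cholera death"),
              (-0.1370, 51.5130, "cholera death")]
    pumps = [(-0.1366, 51.5132, "water pump")]

    def placemark(lon, lat, name):
        return ("  <Placemark><name>%s</name><Point>"
                "<coordinates>%f,%f,0</coordinates></Point></Placemark>"
                % (name, lon, lat))

    kml = ['<?xml version="1.0" encoding="UTF-8"?>',
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>']
    kml += [placemark(*row) for row in deaths + pumps]
    kml.append('</Document></kml>')

    with open("soho_1854.kml", "w") as f:  # open the result in Google Earth
        f.write("\n".join(kml))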
The advantages of this for epidemiological research or geology seem pretty obvious. Other applications include site planning by businesses and vacation planning by households. Emergency relief--whether by NGOs or by governments--will be another big beneficiary.

Likewise, smartphones can be equipped with GPS so they can assign coordinates to photos.  VGEs like Flickr Map can plot these on a map, allowing visitors to see photos of an area they want to visit (is it snowy on Mountain Loop Highway this time of year?).
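The geotag itself is just a pair of coordinates the phone writes into the photo's EXIF header, and pulling it back out is equally simple. A sketch using the Pillow imaging library (the filename is hypothetical):

    from PIL import Image  # the Pillow imaging library

    def photo_coordinates(path):
        """Return the (lat, lon) a GPS-equipped camera stamped into a
        JPEG's EXIF header, or None if the photo is not geotagged."""
        exif = Image.open(path)._getexif() or {}
        gps = exif.get(34853)  # 34853 is the EXIF GPSInfo tag
        if not gps:
            return None
        def to_degrees(dms, ref):
            d, m, s = (float(v) for v in dms)
            deg = d + m / 60 + s / 3600
            return -deg if ref in ("S", "W") else deg
        return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])

    print(photo_coordinates("mountain_loop.jpg"))  # hypothetical photo

A site like Flickr Map needs nothing more than these two numbers to pin the photo onto its map.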

While these developments are encouraging, they face a situation similar to journalism-cum-blogging:
It is easy to believe that the world is well mapped. Most countries have national mapping agencies that produce and update cartographic representations of their surfaces, and remote-sensing satellites provide regularly updated images. But in reality world mapping has been in decline for several decades (Estes and Mooneyhan, 1994). The U.S. Geological Survey no longer attempts to update its maps on a regular basis, and many developing countries no longer sustain national mapping enterprises. The decline of mapping has many causes (Goodchild, Fu, and Rich, 2007). Governments are no longer willing to pay the increasing costs of mapping, and often look to map users as sources of income. Remote sensing has replaced mapping for many purposes, but satellites are unable to sense many of the phenomena traditionally represented on maps, including the names of places.
So it remains to be seen whether "citizen mapping" can fill a role unavailable to "citizen journalists," who cannot take the place of actual journalists with actual bureaus and actual budgets.

This is more than an arch observation by someone often skeptical of the big projected windfalls from intimate electronics. Digital mapping includes a lot of new forms of data that need to be current and documented in order to be useful.
In the mid 1990s the U.S. Federal Geographic Data Committee published its Content Standards for Digital Geospatial Metadata, a format for the description of geographic data sets. The project was very timely, given the rapid increase in the availability of geographic information via the Internet that occurred at that time. Metadata were seen as the key to effective processes of search, evaluation, and use of geographic information. Nevertheless, and despite numerous efforts and inducements, it remains very difficult to persuade those responsible for creating geographic data sets to provide adequate documentation. Even such a popular service as Google Earth has no way of informing its users of the quality of its various data layers, and it is virtually impossible to determine the date when any part of its image base was obtained.
Goodchild is struck by the question of motivation--why would people fill in the gaps on sites like Flickr and Wikimapia?  I have to admit, as someone who blogs intermittently, to being puzzled by my own motivations.

  1. Prof. Michael F. Goodchild, National Center for Geographic Information and Analysis and Department of Geography, University of California, Santa Barbara.
  2. See John McPhee, Annals of the Former World (Farrar, Straus and Giroux, 1st ed., 2000); see in particular the chapter "In Suspect Terrain." This book, alas, is exceptionally difficult to find anything in: McPhee includes a "narrative table of contents," which is basically a very large haystack to find a needle in.


"Volunteered Geographic Information (VGI)," Digital Urban blog (14 Jan 2008); based on Michael F. Goodchild, "Citizens as Sensors: the World of Volunteered Geography" (PDF), Workshop on Volunteered Geographic Information, UC Santa Barbara (December 2007).

Welfare Efficiency and Morality (2)

(Part 1)

In concluding my thoughts on Reckoning with Slavery, I turn to a passage from that book that I thought was especially excellent: a chapter by economists Paul A. David and Peter Temin entitled "Slavery: the Progressive Institution?"
What is one to make of this effort to separate "economics" from "morality"? A dichotomy works two ways: the authors' insistence that economic matters should not be permitted to becloud issues of "pure morality" also suggests that no prior ethical judgments have contaminated the "purely economic" findings upon which Time on the Cross is based. But do the methods of welfare economics enable one to carry through an ethically neutral re-examination of the comparative social efficiency of the system of slavery? Is it possible to conduct the sort of "value-free" inquiry which Fogel and Engerman appear to envisage as establishing the economic facts concerning the consequences of this particular institutional arrangement, the objective historical truths about which moral judgments subsequently may be made?

The brief answer is that modern welfare theory is quite incapable of supporting such an undertaking. Not only does the central analytical concept of the "welfare efficiency" of a specific pattern of resource allocation have a distinct ethical content, but the ethical premises upon which it rests make this a peculiarly inappropriate framework within which to comprehend systems based on varying degrees of personal involition.

The notion that questions concerning the allocative efficiency of alternative economic arrangements usefully can be separated from concerns with other aspects of those arrangements, such as the distribution of wealth, income, and ultimately of the human happiness that may be derived from them, is fundamental in modern economic welfare theory. But this notion rests on the idea that maximization is good--that a state of the world in which more of an inherently desired thing is available to be (potentially) shared by all is "better," in some widely shared sense of the word, than states in which there is less. For by moving to such a state at least some individual could be given more of what he desired (made "better off") without necessarily rendering anyone worse off. To such a change reasonable men freely would assent. Economists describe states where any individual's further gain must come at someone else's expense to be welfare-efficient, or Pareto-efficient; and a move toward such a position is said to be "Pareto-safe."

Pareto efficiency, then, is not an ethically neutral concept. It rests on the premise that each individual's desires (preferences among goods, and between goods and leisure, and goods today versus goods tomorrow) should be allowed to count. Thus Pareto-safe moves are ethically safe for the "scientific" economist to recommend only because maintaining the new position presumably would require no coercion. Indeed, it is because one presumes that all commodities "consumed" are voluntarily chosen, and all efforts and sacrifices made for the production of commodities are freely rendered, that the commodities ethically can be called "goods." But, once the presupposition of autonomous individual preferences is seriously questioned, it becomes unclear how truly voluntary "choice" is. The serious possibility that what individuals seem to want may be systematically shaped by what they have been allowed to have therefore undermines the ethical foundations of normative welfare analysis. If people who had been long enslaved eventually "chose" to continue in the security of their chains, should we unhesitatingly say that this test revealed bondage to be a "better" condition than freedom?

Welfare analysis based on the search for Pareto optimality not only subscribes to the complex ethical character of that criterion, but "counts" individual preferences only as these can be expressed through market behavior. Recommendations of Pareto-safe changes in the pattern of resource allocation therefore must implicitly accept the past and the existing distribution of income and wealth, the institutional working rules, and the larger social and political power structure. The criterion applies to consensual, "no injury" changes from whatever status quo has come to prevail as a result of the past economic and non-economic processes.

But because the prior specification of property rights can, and usually does, exercise a powerful role in determining whether a particular change is deemed Pareto-safe, the rule of unanimity itself carries a strong bias in favor of the status quo. A slave set free might not be able, given his prior lack of training, to earn sufficient income both to compensate his master for the loss of his services and to improve his own economic welfare. The two parties could not agree on manumission. Yet if a prospective master were obliged fully to compensate a free man for the welfare loss entailed in entering perpetual bondage, it is unlikely that the two could agree to that change either. So in determining which, between slavery and freedom, is the more welfare-efficient economic system, the thing that may well matter most is whether the new economic historian will start from an ethical presumption of the human right to freedom, or accept a factual status quo which finds a people already "stolen" and held in bondage.

Modern welfare economics is grounded on the supposition that all market and non-market transactions of interest between individual actors are voluntary. Involuntary transactions, in which goods are wrested from unwilling "sellers" or forced upon unwilling "buyers," amount to theft and extortion, respectively. Such a theory is not helpful for deriving precise statements about the welfare consequences of changes which entail the introduction or further extension of involuntary transactions of the sort essential to slavery. As the ethical premise that each individual's preferences must count underlies the notion that the only "Pareto-safe" (welfare-efficiency justified) changes are those to which there would be unanimous assent, it is difficult to use this apparatus to assess the comparative economic welfare efficiency of slave and free societies. For in imagining the change from one to the other you must acknowledge that the entailed redistribution of property rights violates the ethical premises for making formally justifiable statements about the resulting change in social welfare. When people are enslaved, welfare necessarily is transferred to their masters, and there is no ethically neutral way to compare the welfare-efficiency of the resulting institution with the set of outcomes characterizing an alternative institution, under which that particular interpersonal welfare transfer need not take place. Any such comparison would require weighing the slaves' losses against the masters' gains.

"Slavery: the Progressive Institution?" p.228
It is my view that economics has failed to assimilate this lesson, and has become a theodicy of coercive capital.


19 January 2008

Welfare Efficiency and Morality (1)

Recently I finished reading a book by Paul A. David, et al., entitled Reckoning with Slavery. This book is primarily a scrutiny of Time on the Cross: The Economics of American Negro Slavery, which was published in 1974. Robert Fogel won the 1993 Nobel Memorial Prize in Economics largely as a result of Time on the Cross (which he co-authored with Stanley Engerman). Fogel was honored because he putatively established the superior power of economics to perform historical research; in effect, his book alleged that non-economist historians of slavery, such as Kenneth Stampp, had created a totally false impression of the past because of their reliance on subjective and politically charged sources.

Time on the Cross addressed the period of slavery in the USA, especially after 1830. Fogel & Engerman argued that, while slavery was morally wrong prima facie, it was otherwise a benign institution. More specifically, they argued that:
  1. Capital invested in slaves/slavery had high rates of economic return; i.e., apart from the personally lucrative returns of individual owners and traders, slavery was claimed by F&E to have produced tremendous material benefit relative to the economic inputs used.
  2. Slavery in the antebellum period was a flourishing institution; there was little risk of a severe recession in the slave economy--to say nothing of an "imminent collapse" (as, for example, per J.R. Hummel).
  3. The slave economy was economically efficient, i.e., it was very successful at producing what people wanted, given their scarce incomes.
  4. Slavery contributed to a progressive Southern economy, which was organized along rational and meritocratic lines.
  5. Slavery provided relatively favorable conditions for slaves.
These arguments are advanced in very strong forms. Point (3) may sound like a reiteration of (1); in fact, the distinction is largely between comparisons of technical efficiency, which measures output relative to inputs, and economic efficiency, which compares achieved results to potential results, in the light of participants' tastes (more detail here).

Item 5 is deeply disturbing to me and to the authors of Reckoning. First, it's a bit as though F&E had proposed to write a detailed account of the Holocaust without so much as considering the input of Jewish European inmates/victims of the event, on the grounds that survivors' testimony is partial, biased, and acrimonious. Imagine if they proposed instead to rely exclusively on the testimony of SS officers running the camps, using (perhaps) data submitted by legal defense teams at the Nuremberg Tribunals. Second, in the event, F&E insist on sweeping dismissals of the prior impression on the basis of extremely flimsy evidence (namely, a single plantation owner's diary covering a single plantation over the course of three years, 1839-1842). The diary of Bennett Barrow is the sole foundation for several astonishingly sweeping and determined contentions made by Time, including the (arithmetically incorrect) claim that "the average slave" was whipped (flogged) only 0.7 times per year.1
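The arithmetic at stake is simple enough to state abstractly (illustrative algebra of mine; the actual counts are in Gutman & Sutch's chapter, cited in note 1):

\[ \text{whippings per worker per year} = \frac{N}{S \times T}, \]

where $N$ is the number of whippings recorded in the diary, $S$ the number of workers, and $T$ the diary's span in years. Since $N$ and $T$ come from the same document, the gap between F&E's 0.7 and Gutman & Sutch's 1.19 implies that F&E's divisor $S$ was inflated by a factor of roughly $1.19/0.7 \approx 1.7$. And a plantation-wide rate of one flogging every four days works out to about $365/4 \approx 91$ floggings a year.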

A startling feature of F&E is their own copious flogging of the racism allegation; a Google Books search reveals that they accuse others of being racist 33 times, or an average of once every 8.18 pages. In some cases the accusation is directed at Ulrich B. Phillips, an early 20th-century historian.2 But the allegation is also directed at abolitionists, whom F&E regard with undiluted and undisguised loathing (rather out of place in a book supposedly promising a cool, dispassionate look at the numbers). F&E argue that negative descriptions of slave conditions were degrading to the victims, and that attacks on slavery's technical efficiency were nothing less than attacks on Black labor.3 This internecine conflict over the alleged racism of earlier interpretations opened up an ideological space for a spectacularly virulent apologia for the most monstrous racist crime in all of recorded history.

However, there is another realm in which F&E's revision of slave history runs off the rails, and it is one of much more urgent concern to contemporary readers: their attempt to push the morality of slavery back into the woodwork and examine it "objectively" as a path of economic development. I'll be addressing that in part 2.
  1. Herbert Gutman and Richard Sutch, in "Were Slaves Imbued with the Protestant Work Ethic?", Chapter II of Reckoning with Slavery (pp. 57-61), establish that, as always, F&E overestimated the number of slaves Barrow owned, may have underestimated the number of reported floggings (and definitely failed to mention that Barrow used other punishments as well), and so arrived at an erroneous figure. In reality, Barrow's plantation not only flogged slaves an average of 1.19 times per worker per year; it flogged at least one slave every four days. As a result, slaves lived under conditions of constant intimidation.
  2. Ulrich B. Phillips (1877-1934) was a fairly prominent historian who performed comparative research on slavery in Jamaica and the antebellum South (American Negro Slavery; Life and Labor in the Old South). He argued that slavery was not brutal at all, but represented a paternalistic and waning system of economic organization. Phillips' account agrees with the later F&E in saying that slavery was benign (if morally "problematic"), but disagrees that it was technically efficient.
  3. This is really twisted logic: again, analogous to alleging that critics of the Nazis' "ghetto economy" were antisemitic because they attacked the productivity of Jews forced to labor at piecework in the new ghettos. However, there exists a bizarre quandary in the debate over slavery. Kenneth Stampp was and is the most influential historian of American slavery; his book The Peculiar Institution (1956) shocked the nation with its account of a savagely oppressive system. Stanley Elkins' Slavery: A Problem in American Institutional Life (1959) extensively likened slavery to the concentration camps.

    This created something of an irrepressible conflict among historians. If Elkins was right, then African Americans were descended from a group of people whose personalities had been shaped by a totality of barbaric cruelty (inflicted by White masters). Some reasoned that this was too close to Ulrich Phillips's image of the "Sambo," who lacked agency or volition. Elkins himself sought to explain why African Americans [mostly] accepted their lot. For a combination of historical reasons and unlucky timing, Elkins became something of a villain to some Black militants, who were uncomfortable with his claim of residual social pathologies.


08 January 2008

Multicore Processors

Several years ago IBM released the POWER4 microprocessor, a dual-core design: two cores, essentially PowerPC chips, sharing a memory controller on one die. Since then (2001), it has become commonplace for microprocessors to have multiple cores. Most introductory articles I've read mention that multicore processors allow multiprocessing on a single die (the advantage of this is explained below), but single-core versions of most RISC chips already provide a degree of parallelism within a single processor. Also, it has become commonplace for chips to incorporate mixtures of CISC and RISC architecture, though most of the architectural innovations are definitely RISC-inspired. As a fairly random example, the single-core PowerPC 604 had six parallel execution units and could issue four instructions concurrently (Wikipedia).
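Multiprocessing on a single die is easy to demonstrate from the software side. In this sketch (illustrative only; absolute timings depend entirely on the machine), the same CPU-bound job runs first serially on one core, then fanned out across all available cores as separate processes, each with its own memory space:

    import multiprocessing as mp
    import time

    def burn(n):
        """CPU-bound work: count primes below n by trial division."""
        return sum(all(i % d for d in range(2, int(i ** 0.5) + 1))
                   for i in range(2, n))

    if __name__ == "__main__":
        jobs = [200000] * 4
        t0 = time.perf_counter()
        serial = [burn(n) for n in jobs]   # one core does everything
        t1 = time.perf_counter()
        with mp.Pool() as pool:            # one worker process per core
            parallel = pool.map(burn, jobs)
        t2 = time.perf_counter()
        print("serial: %.2fs  parallel: %.2fs" % (t1 - t0, t2 - t1))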

Why is a Single Die better?

Microprocessors are fabricated on a slice of silicon wafer; during CMOS lithography, scores of chips are patterned on each wafer and subsequently cut apart. A reduction in the area of the chip has a substantial impact on the price of fabrication. Also, one of the most important gains in semiconductor technology over the last thirty years has been the reduction in transistor size. Not only is it possible to fit millions of transistors onto a chip the size of a thumbnail, the same chip is likely to store millions of bytes of cached data in addition to the registers. As a consequence, one approach for developers has been to place two or more processor cores on a single die; the interface between them (for parallel processing) can be much more efficient than between chips on separate dies, and there is a potential for plug compatibility: the new dual-core chip can be plugged directly into an earlier-model motherboard.
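The cost leverage of die area can be made concrete with the standard textbook approximation for how many dies fit on a wafer (a sketch only; real cost models also fold in defect density and yield):

    import math

    def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
        """Textbook approximation: wafer area divided by die area, minus
        a correction for partial dies wasted around the wafer's edge."""
        r = wafer_diameter_mm / 2
        return (math.pi * r ** 2 / die_area_mm2
                - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    # Halving the die area roughly doubles the dies per 300 mm wafer:
    for area in (200, 100):  # die areas in mm^2, purely illustrative
        print(area, "mm^2 ->", int(dies_per_wafer(300, area)), "dies")

Smaller dies also raise yield, since each die is less likely to contain a fabrication defect, so the price advantage compounds.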


On the other hand, once a developer has perfected a processor mask, it's a fairly minor enhancement to incorporate two, four, or even sixteen cores on a single die. Furthermore, multithreading in a single processor core can slow processing down in comparison to a single thread enjoying a monopoly of memory space, ports, virtual sockets, message queues, and distributed objects. Moreover, engineering has moved to exploit the potential of multiple cores in the design of future cores. For example, since the 1980s there has been a tendency to make microprocessors universal and unitary: universal, in the sense that the processor would incorporate nearly all, or potentially all, of the functions of the computer itself (short of input, output, and power supply); unitary, in the sense that the processor had a completely integrated system of program execution.

Now the evolutionary direction has reversed. A vestigial concept was the "bit-slice" architecture, in which the ALU was built from 2-4-bit slices residing on separate chips; i.e., a computer with a 32-bit word might use eight chips for the ALU, each implementing the instruction set, but only for four bits. It's not difficult to imagine chip designers opting for a revitalized bit-slice architecture rather than parallelism, since there are diminishing marginal returns to parallel processing.
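Those diminishing returns have a classical statement in Amdahl's law: if a fraction $p$ of a workload can be parallelized and the remaining $1-p$ is inherently serial, then $n$ cores yield a speedup of

\[ S(n) = \frac{1}{(1-p) + p/n}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1-p}. \]

With $p = 0.9$, for instance, sixteen cores deliver only about a 6.4x speedup, and no number of cores can ever push it past 10x.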

*In the Apple Macintosh world, the contemporary Motorola 68040 chip likewise ingested an entire floating point unit (FPU).

AMD website, "AMD and 90nm Manufacturing: Paving the Way for Tomorrow, Today" (2008)

Elmer Epistola, "Semiconductor Manufacturing," SiliconFarEast (via Wikipedia; date unknown)

Tom R. Halfhill, "The Future of Multicore Processors" (31 Dec 2007)


02 January 2008

Home Tech: Washing Machine Problems (2)

(Part 1)

I didn't want there to be more to this. But my research on the Kenmore revealed that I had made a big mistake; the HE 3.1 is a notorious mess, prone to bearing failure (more on that below). This became apparent after replacing the belt several times, staring, spinning the big wheel, and so on. Finally I was looking at it with the tub full of soggy clothes that the machine had failed to spin. With the tub full, the problem was obvious.

In the diagram on the right, the first picture illustrates how the tub assembly is supposed to look. The second is cut away to show the tub (the inner cylinder) tilted. In fact, the tilt is just enough to pull the belt off. After putting the belt back on about four times I realized what was wrong; by then, however, the belt had broken.

I learned later that this is a common problem with Kenmore machines. The bearing that connects the tub to the wheel is weak and prone to leakage, which of course accelerates its destruction. One thing I'm concerned about is the possibility that the bearings at the front are now ruined.