19 September 2005


Multitasking is the ability of a computer operating system (OS) to perform multiple tasks at the same time. Whether a computer can multitask is an attribute of its kernel. A multiuser OS is not the same thing as a multitasking OS: a true multiuser OS must also support multiple user privilege domains, as well as multiplexed input and output devices.

Cooperative multitasking requires that application programmers anticipate interruptions and allow for them; it was used on smaller processors, such as those running the early releases of Windows. Each running program must send "calls" (messages) to the kernel announcing, in effect, that it can now yield to another running program. There are obvious problems with this approach, such as the fact that a program may neither know nor "want" to share resources. Cooperative multitasking is now considered an obsolete technology.
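
The voluntary "yield" calls described above can be sketched in a few lines of Python (a language and names of my own choosing, purely for illustration): each "program" is a generator, `yield` is its announcement to the scheduler, and the scheduler never interrupts anyone. If a task refused to yield, every other task would starve, which is exactly the flaw noted above.

```python
from collections import deque

log = []  # records the order in which tasks get the "processor"

def task(name, steps):
    # each "program" is a generator; `yield` is its explicit
    # announcement that it can now give up the processor
    for i in range(steps):
        log.append(f"{name}{i}")
        yield

def run(tasks):
    # a round-robin scheduler that never interrupts a task;
    # it only regains control when the task itself yields
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run until the task yields
            ready.append(current)  # then send it to the back of the queue
        except StopIteration:
            pass                   # task finished; drop it

run([task("A", 2), task("B", 2)])
print(log)  # tasks interleave only because each one yields
```

The interleaved output ("A0", "B0", "A1", "B1") depends entirely on the tasks' cooperation; delete the `yield` and task A would run to completion before B ever started.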

In pre-emptive multitasking, the operating system allocates time slices among the programs running on the processor. All memory is addressed in the same way, so that no application can "know" whether a memory address refers to L1 or L2 cache, or even to some address on a hard drive; the kernel, with its monopoly on this sort of information, can easily thwart any effort by an individual application to preempt rival programs. (Memory addressing that uses the same address format for all memory types is called "flat memory.") Today, nearly all systems support pre-emptive multitasking.
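
The contrast with the previous sketch can be made concrete (again in Python, with invented names): neither of the two workers below ever yields, yet both make progress, because the scheduler preempts them on its own timer rather than waiting for a call from the program.

```python
import threading

counts = {"A": 0, "B": 0}

def worker(name, n):
    # note: no explicit yield anywhere in this loop; the scheduler
    # preempts it on its own and hands the processor to the other
    # thread (and to every other task on the machine)
    for _ in range(n):
        counts[name] += 1

a = threading.Thread(target=worker, args=("A", 100_000))
b = threading.Thread(target=worker, args=("B", 100_000))
a.start(); b.start()
a.join(); b.join()
print(counts)  # both finish, even though neither ever yielded
```

Each worker increments only its own counter, so the final counts are deterministic even though the interleaving of the two threads is not.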

Multithreading is another form of multitasking in which the tasks running concurrently share information by sharing a memory space (i.e., a range of memory addresses). Threads are a method by which a program splits its operation into multiple concurrent streams of execution. The processor may actually switch among threads (time-slicing), or it may delegate the different threads to different processor cores, so that they are literally simultaneous. In either case, the output of the threads is passed to the same memory space. Moreover, threads do not carry state information.
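
The shared memory space is easy to demonstrate in Python (an illustrative sketch; the names are mine): a list created by the main thread is the very same object inside a worker thread, so whatever the worker writes, the main thread can read.

```python
import threading

shared = []  # one memory space: every thread sees this same object

def producer():
    # the worker thread writes directly into the main
    # thread's list; no copying, no message passing
    shared.append("from worker thread")

t = threading.Thread(target=producer)
t.start()
t.join()  # wait for the worker to finish
print(shared)  # the main thread reads what the worker wrote
```

This convenience is also the hazard: because all threads address the same memory, two threads writing at once can corrupt shared data unless access is coordinated.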

Threads represent a minimal type of program fork (i.e., a splitting off); all of the threads forked off a single process share state information. Individual threads communicate with each other through shared memory, ports, virtual sockets, message queues, and distributed objects. Since the threads share memory addresses and state information, implementing these methods of communication requires techniques different from those used in multiprocessing.
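
Of the communication methods listed above, the message queue is the simplest to sketch in Python, whose standard library provides a thread-safe queue living in the shared memory space (the worker and sentinel scheme here is my own illustration):

```python
import queue
import threading

q = queue.Queue()  # a thread-safe message queue in shared memory

def worker():
    # the worker passes results back by putting messages on the queue
    for i in range(3):
        q.put(i * i)
    q.put(None)  # sentinel message: nothing more is coming

t = threading.Thread(target=worker)
t.start()

results = []
while True:
    item = q.get()  # blocks until a message arrives
    if item is None:
        break
    results.append(item)
t.join()
print(results)  # [0, 1, 4]
```

Because the queue itself handles the locking, neither thread needs to coordinate access to shared data directly.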

Multiprocessing is an attribute of CPUs that imposes special demands on the OS. It involves the use of multiple processors in a single computer; today, many microprocessors are actually designed to contain multiple processing units on the same chip. However, even when a CPU has many cores and can sustain many threads in a truly concurrent mode, there is still a distinction between processes and threads. While threads are essentially very similar subroutines that share a memory space and have no state information, processes are different from each other, address different memory spaces, and carry considerable state information. A process includes a loaded version of the executable file, its stack, and kernel data structures.
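
The "different memory spaces" point is the mirror image of the earlier thread sketch, and can be shown the same way (illustrative Python; assumes a POSIX-style system where child processes are forked): a child process receives its own copy of the parent's list, so its mutation never reaches the parent.

```python
import multiprocessing

def mutate(shared_list):
    # runs in the child process, which has its own address space:
    # it operates on its own copy of the list, so this append
    # is invisible to the parent
    shared_list.append("changed in child")

data = ["original"]

if __name__ == "__main__":
    p = multiprocessing.Process(target=mutate, args=(data,))
    p.start()
    p.join()
    print(data)  # still ["original"]: separate memory spaces
```

Compare this with the thread version above, where the same `append` was immediately visible, because threads share one memory space and processes do not.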

Processes represent a fork of the running program (i.e., a splitting off) that contains most of the essential features of the program itself. A process ends with an exit system call, and while it is running it is known to the operating system by a unique process identifier (PID). Processes communicate with each other using named pipes, sockets, shared memory, and message queues.

There is a logical progression here. In the first case, cooperative multitasking, the kernel and the host CPU abdicated responsibility for allocating resources among the programs running on them. This was because the hardware lacked a periodic clock interrupt, a memory management unit (MMU), or both. Software engineers adapted to customers' demands for software that could share a processor without monopolizing its resources. The kernel still had to create the call stack and assign memory addresses, but it did so on the assumption that the program it was serving at that particular instant would run in perpetuity; the program itself had to yield.

Newer processors were designed to multitask, which meant they now had complex MMUs. The kernel now had asymmetric information and took no cues from the programs it was running. The way in which the processor arranged the call stack or assigned flat memory was hidden from the programs themselves.

At a still higher level of sophistication, the CPU may be running a program that contains mutually related threads, such as a graphics program. These multiple threads are stacked so that their data inputs and outputs are arranged in the same memory space; the kernel's job is to know which belongs to which.

ADDITIONAL SOURCES: Wikipedia entries for "multitasking," "pre-emptive multitasking," "message queuing," and "thread"; eLook Computer Reference, "Pre-emptive multitasking" and "Cooperative multitasking"; IBM, "Parallel Environment for Linux—Glossary"; Liemur, "Threads: Inter-Thread Communication"; [Apple] Developer Connection, "Thread Communication."

