10 July 2007

Interpreters

Over a year ago I posted about integrated development environments (IDE's), and mentioned in passing what interpreters were:
An "interpreter" is a program that executes another program line by line, rather than translating it into assembly or machine code. When an interpreter runs a program, it responds to errors differently than would a computer running the compiled version of the same program. Since the compiled program is merely a translation, an error would potentially cause the computer running the program to crash. An interpreter can stop and send an error message.
That was, of course, profoundly inadequate. I'd like to post just a little bit more about what interpreters do.

Strictly speaking, nearly any programming language may be implemented with either an interpreter or a compiler, although some exceptions may apply. Disagreement exists over whether Java is an interpreted language or a compiled language, with some (Lowe, p. 11) saying it is compiled, and others (Cohn, et al.) saying it is interpreted. Perl may be implemented with a compiler; so may PHP, Lisp, and Python. I don't pretend to be an authority on the subject, but there's a basic concept in logic that the statement "All x is y" is false if even a single x that is not y can be found. I am vulnerable here on the grounds that disagreement may exist over whether a given thing is a compiler or an interpreter.

There are several reasons why a language may be implemented with an interpreter rather than a compiler. The obvious reason is that you may want a development environment that helps you debug the program. With a compiled program, you may simply get a message like "Runtime error"; it might tell you more, but a really sophisticated interpreter can help you find the actual spot where the error occurred, and even help correct your spelling. Since the compiler's function is to translate the entire program into code the machine can read "in one shot," as it were, debugging with a true compiler is a little like finding a needle in a haystack.

Another reason is that an interpreter may be easier to update and experiment with. Ruby was developed with certain syntax innovations (the "principle of least surprise," or POLS), and it was of course a lot easier to create an interpreter that could run a growing number of commands than a compiler with a fully revised complement of libraries, ported to each specific model of microprocessor. Also, a compiler generates machine code, i.e., data in the ones and zeros that the microprocessor actually understands. In contrast, an interpreter can be written entirely in a high-level programming language like C, without any knowledge of machine code.
________________________________________________
How do Interpreters/Compilers Work?

There are several similarities between compilers and interpreters at the operational level. The code that is sent to the compiler/interpreter for execution is called the source file; sometimes, programs written explicitly for use with an interpreter are called scripts. Both interpreters and compilers include a scanner and a lexer. The scanner module reads the source file one character at a time. The lexer module divides the source file into tiny chunks of one or more characters, called tokens, and specifies the token type to which each belongs; the effect is rather like diagramming a sentence. Suppose the source file is as follows:
cx = cy + 324;
print "value of cx is ", cx;
The lexer would produce this:
cx  --> Identifier (variable)
= --> Symbol (assignment operator)
cy --> Identifier (variable)
+ --> Symbol (addition operator)
324 --> Numeric constant (integer)
; --> Symbol (end of statement)
print --> Identifier (keyword)
"value of cx is " --> String constant
, --> Symbol (string concatenation operator)
cx --> Identifier (variable)
; --> Symbol (end of statement)
The ability of the lexer to do this depends on the ability of the scanner to document exactly where each token occurs in the source file, and on its ability to scan backwards and forwards. Sometimes the precise meaning of a token depends on its position with respect to neighboring characters. For example, operators may contain more than a single character (e.g., < as opposed to <=). The lexer may have to ask the scanner to back up and check the identity of neighboring characters.
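The lexing step above can be sketched in a few lines of Python. The token names, and the regular-expression approach, are my own illustration rather than the workings of any particular interpreter; note how the two-character operators are listed before the single-character ones, which stands in for the "back up and check the neighboring character" problem.

```python
import re

# Token categories for the two-line sample source (illustrative names).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),           # numeric constant (integer)
    ("STRING", r'"[^"]*"'),       # string constant
    ("IDENT",  r"[A-Za-z_]\w*"),  # identifier (variable or keyword)
    ("SYMBOL", r"<=|>=|==|[=+,;<>]"),  # two-char operators tried first
    ("SKIP",   r"\s+"),           # whitespace, discarded
]

def lex(source):
    # Build one alternation of named groups and scan the source with it.
    pattern = "|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC)
    for m in re.finditer(pattern, source):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

source = 'cx = cy + 324;\nprint "value of cx is ", cx;'
for kind, text in lex(source):
    print(kind, text)   # IDENT cx, SYMBOL =, IDENT cy, ...
```

A real lexer would also record the line and column of each token, which is what lets the interpreter point at the exact spot of an error later.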

The parser receives the tokens and token types from the lexer and applies the syntax of the language. The parser actually requests the tokens, assesses their appropriateness with respect to the syntax of the language, and sometimes demands additional information from the lexer module.
Parser: Give me the next token
Lexer: Next token is "cx" which is a variable.
Parser: Ok, I have "cx" as a declared integer variable. Give me next token
Lexer: Next token is "=", the assignment operator.
Parser: Ok, the program wants me to assign something to "cx". Next token
Lexer: The next token is "cy" which is a variable.
Parser: Ok, I know "cy" is an integer variable. Next token please
Lexer: The next token is '+', which is an addition operator.
Parser: Ok, so I need to add something to the value in "cy". Next token please.
Lexer: The next token is "324", which is an integer.
Parser: Ok, both "cy" and "324" are integers, so I can add them. Next token please:
Lexer: The next token is ";" which is end of statement.
Parser: Ok, I will evaluate "cy + 324" and get the answer
Parser: I'll take the answer from "cy + 324" and assign it to "cx"
This dialogue illustrates what the interpreter/compiler must do in order to add cy and 324. If the parser gets a token that violates the syntax, it stops processing and sends an error message.
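The give-and-take above can be mimicked in Python. This is a toy sketch of parsing a single assignment statement; the token tuples and the variable environment are assumed conventions, and a real parser would build a syntax tree and report errors rather than use bare assertions.

```python
# A toy parser/evaluator: it pulls tokens one at a time, checks each
# against the expected syntax, and evaluates the statement as it goes.
def operand(token, env):
    kind, text = token
    return env[text] if kind == "IDENT" else int(text)

def parse_assignment(token_list, env):
    tokens = iter(token_list)
    kind, name = next(tokens)               # "Give me the next token"
    assert kind == "IDENT"                  # must be a variable
    assert next(tokens) == ("SYMBOL", "=")  # the assignment operator
    value = operand(next(tokens), env)      # first operand
    tok = next(tokens)
    if tok == ("SYMBOL", "+"):              # the addition operator
        value += operand(next(tokens), env)
        tok = next(tokens)
    assert tok == ("SYMBOL", ";")           # end of statement
    env[name] = value                       # assign the answer to "cx"
    return env

env = parse_assignment(
    [("IDENT", "cx"), ("SYMBOL", "="), ("IDENT", "cy"),
     ("SYMBOL", "+"), ("NUMBER", "324"), ("SYMBOL", ";")],
    {"cy": 10})
print(env["cx"])  # → 334
```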

The next module is the interpreter or, with compilers, the code generator, which actually executes or emits the code. With interpreters (as opposed to compilers), this is sometimes part of the parser: the parser interprets the statements and converts them into bytecode, an intermediate language that is then executed. In the case of the compiler itself, the code generator produces machine code that can be executed directly by the microprocessor.
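Python itself is a handy illustration of this intermediate stage: its parser/compiler turns a statement into bytecode, which the interpreter then executes. The exact instruction names vary between Python versions, so the listing below is indicative only.

```python
import dis

# Compile the sample statement to Python bytecode and list the opcodes.
code = compile("cx = cy + 324", "<source>", "exec")
ops = [instr.opname for instr in dis.get_instructions(code)]
print(ops)  # e.g. includes 'LOAD_NAME', 'LOAD_CONST', and 'STORE_NAME'
```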

(Special thanks to Scorpions4ever)
SOURCES & ADDITIONAL READING: Wikipedia, Interpreter (computing), Interpreted Language;

BOOKS: Doug Lowe, Java for Dummies, Wiley Publishing (2005); Cohn, et al., Java Developer's Reference (1996)


22 October 2006

Unified Modeling Language (UML)

In order to program with objects, a standard is required to ensure that the objects will not interfere with each other. A modeling language can be loosely described as a "meta-language," or abstract representation of language for software design purposes. Initially, modeling language was considered part of the programming technique; programming teams might follow basic guidelines so they could communicate with each other. Gradually, the various modeling languages converged into a standard that is now widely taught. One of the benefits of this has been the creation of an open-source library of software objects and tools that can be readily adapted to a core program.

In 1997, the Object Management Group (OMG) released the first version of the Unified Modeling Language (UML), which probably contributed to the subsequent popularity of OOP. Initially, the paucity of open-source objects posed a problem; in order to create large numbers of mutually compatible applications based on objects, one needs an immense number of program objects for all the detailed subroutines that a full-fledged application comprises. Object-oriented programming is especially unsuited to the conventional variety of intellectual property rights, since proprietary objects can only be used by the original developer, or else require complex licensing agreements. The UML seems to have had its greatest impact in the rapid and impressive development of online content management systems (CMS) since '04, most notably Drupal.

The UML is a set of freely available standards that are roughly analogous to any number of industrial methodologies (TQM, etc.). From these standards has evolved a large number of UML development tools, most notably the UML diagrams.


Screen capture of Pacestar UML Diagrammer

The illustration above is a screen capture of a program called Pacestar UML Diagrammer, whose purpose--shockingly enough--is to generate UML diagrams. There are thirteen diagram types; the one shown in the active window above is a class diagram, with a use case diagram in the background. All thirteen are listed below, with links to an excellent site explaining their purpose. As one takes in the complexity of the symbolic language that was developed for UML, one begins to understand the sweeping importance UML has had on modern (post-2004) software design. Firstly, UML 2.0 (released that year) became the industry standard for symbolic analysis of the operation, architecture, and functionality of object-oriented software; secondly, new software applications tend to be object-oriented; and thirdly, OOP became much more popular than it had been in the past precisely because UML was increasing the ease of OOP relative to software for which something like UML did not--or could not--exist.
__________________________________________
Object Constraint Language:

OCL is a formal system of semantics which is used for establishing the correctness of a "statement" in a programming language. As the name implies, it may constrain an object, by excluding certain types of statements.
Pollice: For example, it could help you indicate that, to be assigned a room, a specific course must have at least six students enrolled. With OCL you could annotate the association between the Course and Classroom classes to represent the constraint, as shown in Figure 1. As an alternative to the note shown in this figure, you could use a constraint connector between the Course and Classroom classes.
Pollice's article is very enthusiastic about OCL, which is often necessary for an instructor (I usually find I can learn a concept faster if I believe, or convince myself, that the concept is brilliant). He illustrates the difference between a set of semantic rules, which is what OCL is, and an actual language (which would have an explicit syntax and vocabulary). Human languages have surprisingly universal semantics, something that is entirely untrue of either mathematics or programming languages; there, semantic rules vary depending on the logical relationships being manipulated. According to Pollice, OCL has very mathematically oriented semantics, which makes it especially powerful, since mathematics has evolved a profound, comprehensive semantic structure, whereas programming languages tend to have rudimentary rules of syntax that are peculiar to each one.
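As a concrete illustration, Pollice's Course/Classroom constraint can be imitated as a runtime check. The class and attribute names below are my own assumptions, and the OCL line in the comment is only an approximation of how the invariant might be annotated on the model.

```python
# Sketch of the constraint: a course may only be assigned a classroom
# if at least six students are enrolled. In OCL this might read roughly:
#   context Course inv: self.classroom <> null implies
#                       self.students->size() >= 6
class Course:
    def __init__(self, name):
        self.name = name
        self.students = []
        self.classroom = None

    def assign_room(self, room):
        # Enforce the OCL-style invariant before mutating the object.
        if len(self.students) < 6:
            raise ValueError(f"{self.name}: needs >= 6 students for a room")
        self.classroom = room

logic101 = Course("Logic 101")
logic101.students.extend(f"student{i}" for i in range(6))
logic101.assign_room("Room 12")
print(logic101.classroom)  # → Room 12
```

The point of OCL proper, of course, is that the constraint lives in the model rather than being hand-coded into every class like this.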

In contrast, many programming languages have semantic rules that are comparatively closer to formal English (e.g., COBOL); this is actually something of a waste for programming objects, since an object usually performs a very specific mathematical or logical operation that has no use for the arbitrary and alien constraints of human semantics, which are designed to describe tangible reality. OCL rules for what constitutes acceptable language for objects, under each object's peculiar conditions and constraints, are things that ought to be applied as understood by the programmer as a tool; the programmer ought not to try to master the entire codex of OCL rules. The benefit of expanding one's knowledge of OCL is that one can learn to think formally, thereby expanding one's power to discern appropriate design approaches.

(OGL SOURCE: Gary Pollice, "Formally speaking: How to apply OCL")
__________________________________________
CRITICISM OF UML:

Needless to say, for anything as influential as UML has been, there are criticisms; what's surprising is how mild they are. For the most part, these consist of inadequacies and omissions in the language. Scott W. Ambler laments that UML is a long way off from true computer-aided software engineering (CASE), and developers are still obligated to develop proprietary extensions to it in order to generate executable code or derive UML models from existing code.

Another criticism is the proliferation of diagrams; while several new ones have been added for UML 2.0, it seems that the large number reflects the sort of committee-induced compromise between incompatible design approaches: include the tools to do both.

This complaint also arises with the logic: UML authorizes the use of OCL semantics, English (detailed semantics), and its own peculiar set, and there's an argument that the varied semantic structures defeat the purpose of having any. It's unclear if this is necessarily a flaw, though, since different objects may require different semantic structures.
__________________________________________
NOTES:
Complex licensing agreements: frequently an application published for any particular market has many features that any one user is unlikely to use. Plug-ins may well be an option, but in cases where they are not, there is a problem of pricing licenses for proprietary software objects when the developer of the main program expects only 10% or so of users to ever use the feature.

Diagram types: these are (1) Class, (2) Component, (3) Composite structure, (4) Deployment, (5) Object, (6) Package, (7) Activity, (8) State Machine, (9) Use case, (10) Communication, (11) Interaction overview (UML 2.0), (12) Sequence, & (13) UML Timing (UML 2.0);


UML Timing diagram

__________________________________________
ADDITIONAL READING & SOURCES: Wikipedia entries for Object-oriented programming: Object modeling language, Unified Modeling Language (UML); Executable UML;

Unified Modeling Language (UML) page; OMG; Mandar Chitnis, Pravin Tiwari, & Lakshmi Ananthamurthy, "UML Overview"; Bruce Powel Douglass, "UML 2.0 Incrementally Improves Scalability And Architecture"; Scott W. Ambler, "Be Realistic About the UML: It's Simply Not Sufficient";


10 October 2006

Hacking

The term "hacking" is often used to refer to the act of editing or fixing flawed computer code. Typically a hack is interpreted as a patch or clever work-around. I have also seen the term used to refer to the creation of forks of computer programs; this latter sense means, the programmer (legally or not) edits the source code of a program so its functionality is different. The programmer then circulates this new, edited version under a new name.

In many cases, a computer programmer requires a specialized program to do this; for example, there are a lot of programs that are used to automatically generate HTML, JavaScript applets, and other quasi-programs. Often they have unsatisfactory quirks, and programmers create programs to follow them around and clean up (or hack) the script. So programs and bots can be surrogate hackers too.

However, it is also the case that "a hack" was often slang at MIT for a prank. Usually hacks (in this sense) were very elaborate pranks that required an immense amount of work.

Another sense of the term "hack" derives from the older jargon, crack. The term "safe cracker" is perhaps well-known to avid readers of pulp fiction; it refers to the trick of opening a safe by manipulating the locking mechanism, rather than blowing it up. Likewise, a "code cracker" was someone who specialized in finding patterns in encrypted data, and thereby decoding it. Applied to computer terminology, it naturally referred to the ability of specialists to defeat or cripple a computer system. One obvious motive for doing this would be crime: a cracker could, for example, crack the security of a bank and change his account balance to whatever he thought he could get away with. Or he could vandalize the system of an organization he loathed.

This has unfortunately created a certain confusion of terms. One of the things any hacker could naturally do quite well is create malware, such as spam bots. Spam bots could conceivably be useful; it's just that they aren't. So the term "hacker" came to be associated with negative, destructive use of skills that are intrinsically valuable.

Richard Stallman introduces a third, closely related sense of the term "hack": the introduction of a novel, potentially useful or entertaining idea. One of his examples is the trick of eating with more than two chopsticks in one hand. While this is not very useful, he mentions that a friend was able to eat with four in his right hand, using them as two pairs. It appeals to a sense of playfulness and appreciation for originality.


15 September 2006

Object-Oriented Programming

Object-oriented programming (OOP) is an enormous topic, in part because it embraces many collateral topics, and because it is a nexus among all those topics. Object-oriented programming incorporates objects, which are basic program "building blocks." In conventional programming, the program consists of a single, very large sequence of instructions to the computer. But in OOP, the program is a population of objects with discrete methods and communication nodes. Each object acquires data, acts on it through its own methods, and passes the results along to other objects.

A class of objects is united by basic standardized attributes or functions. Each system of OOP requires a system of classifications for objects, rather like biological taxonomy. The structure of classes will of course include subclasses for each class (and so on); a subclass is said to inherit the attributes of its class, and add others that differentiate it from other subclasses of the same class.

An example might be the database of a university, which has instructors, students, support staff, departments, and courses. There are three classes of objects: persons, organizations, and courses. The class of persons has subclasses of staff, students, and instructors.


The subclass student inherits various attributes, such as possessing an address and phone number, but differs from instructor in that the instructor teaches courses and the student is enrolled in them. We might conceivably have all the violet boxes (classes) united under a class school, with attributes all of the classes inherit--e.g., all are members of an organization. I should mention that the semantics of computer programming is very different from that of spoken languages, and only occasionally coincides with that of math. So, for example, my example of a class system has unfortunately assumed human-language semantics. Arguably a department might be defined as a type of person that inherits all person characteristics, but is distinguished from an instructor solely insofar as the instructor can only "have" one course at a time.
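A minimal sketch of the university example in Python (the names and attributes are my own illustration): Student and Instructor inherit the shared attributes of Person and each adds its own.

```python
# Person is the base class; its attributes are inherited by subclasses.
class Person:
    def __init__(self, name, address, phone):
        self.name, self.address, self.phone = name, address, phone

class Student(Person):
    def __init__(self, name, address, phone):
        super().__init__(name, address, phone)
        self.enrolled_in = []   # a student is enrolled in courses

class Instructor(Person):
    def __init__(self, name, address, phone):
        super().__init__(name, address, phone)
        self.teaches = []       # an instructor teaches courses

s = Student("Ada", "12 Elm St", "555-0100")
s.enrolled_in.append("Logic 101")
# inherited attribute (name) alongside the subclass's own (enrolled_in):
print(s.name, s.enrolled_in)
```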

An obvious example of programming semantics violating either math or human-language semantics is the statement x = x + 1; it merely tells the computer to replace the value x with (x + 1) in the register. In OOP, it is not unusual for seemingly incompatible geometric objects to be treated as related: e.g., the class circle may inherit from the class point, with the additional characteristic that it has a radius. Presumably the subclass rectangle inherits from the class point as well, but has height and length.

A method is one of the things that the object does. As an object, we expect that the way in which an object executes its mission is encapsulated, i.e., hidden from the "view" of other computer programs. This means that computer programs are not expected to hack the behavior of objects by altering their methods. A crucial aspect of methods is how they are bound to their variables. If a variable is defined in the code as (say) an integer, or perhaps a floating-point value, then it is statically bound to the object. If variables are declared in a way that allows them to vary, they are dynamically bound.[*]

When objects communicate with each other, while they are encapsulated, there are also different levels of access; members of the same class will have more access than those that are not, unless they are friends, i.e., objects assigned special access by the programmer. Access allows the use of pointers among objects, or links to specific variables (specifically, pointers point to a section of memory space where a variable is stored; objects with pointers to a particular variable can monitor that memory space and "know" what to do with the contents).

Another aspect of objects is polymorphism, which is the ability of objects to take on the characteristics and responses of other objects. Typically, short introductions like this one use highly improbable analogies, such as referring to objects as dogs or cats; so I am especially grateful to Prof. Müller for giving us a break and actually explaining the concept seriously. It turns out there are two conceptions of polymorphism in OOP. One refers to the ability of objects to use multiple definitions of variables, so that a function can check multiple conditions for a value in a single step (e.g., a number being within a desired range of another, which may require a test for it being in range and a test for it being the number desired). This is something conventional programming languages can do too, of course, but usually it is a bit more complicated.

The other form of polymorphism is the inheritance of pointers and methods. By this, we mean that all the objects of a subclass will inherit certain standard methods from their superclass, although the way these objects respond to the same stimulus will vary depending on the object. A subclass of objects will respond to the same pointers as those of the class above it (the base class). The purpose of this is to allow a formal array of permissions to variables created and stored by objects.

Benefits & Criticisms

It must be noted that while object-oriented programming is extremely common, it is not without controversy. What really pushed OOP into the mainstream was the use of graphical user interfaces (GUI's); modern applications running in a GUI are typically miniature operating systems, with each of those drop-down menu options or icons standing for a program. Click on the menu, and a new program is launched. Developers appreciate the ability to drag and drop many of these programs into the visual space of the integrated development environment (IDE). Virtually all tutorials or guides on the subject of OOP refer to the benefit of code reusability.

Another advantage, evidently, is that OOP semantics heavily favor multithreaded computing. OOP concepts are usually explained by analogy with zoology [*]: the typical class is dog; dogs are a subclass of animals, and collies are a subclass of dogs. Instances of the class dog have a method known as barking, and collies inherit this method from dogs. Another tutorial used the scholastic example I have used above [*], but did so literally; I tried to de-abstract this using various articles on C++ linked below, and no doubt there are serious flaws in my exposition. However, it seems fairly clear that the most compelling advantage to OOP is that it provides a ready-made formal system for running program processes as a group of separate threads. One of the more economical ways that threads can pass messages to each other is by saving them to a common memory sector, something that requires a carefully designed (or standardized) syntax.

Criticisms of OOP are that it's a needlessly exotic, arbitrary, and costly constraint on programming procedure. Applications written in OOP languages are too large for what they do; OOP requires programmers to effectively adopt a novel syntax that is much too remote from ordinary programming, yet the method provides benefits that are inconsequential. The experience with Unix and GNU suggests that code reusability is not a real problem; programmers have managed to create immensely complex programs using bits and pieces of code from all over. Sealing objects off so that they cannot be hacked by other objects is regarded by some programmers as bizarre, and certainly unnecessary. Some have gone so far as to claim that OOP is doomed to go the way of other fads, that its implementation is far too difficult, and that it has failed to increase productivity.

In a way, a fair test is impossible because (on the side of the skeptics) there's no way of knowing what other, more apt methods of coding huge applications might have been made to work better if the energy had been spent on them instead; and (on the side of the defenders) I'm skeptical of the argument that this was a burn-the-ships fad. Clearly, there had been a problem developing robust code for ever-larger applications, and the industry broadly embraced OOP. This was across languages, and included the open-source movement as well as a lot of firms (C++ was developed at Bell Labs in '85; C# is from Microsoft; Common Lisp was an international convention developed and supported by ANSI to unify many variants; ANSI also developed Fortran 2003; PHP 5 is actively developed by Zend Technologies in Israel; Smalltalk was developed at Xerox PARC and inspired Stepstone's Objective-C, later adopted by NeXT; Java, as everyone knows, was developed at Sun Microsystems).

Political fads are the stuff of game theory, because they're about acting in groups. So history is full of dire political fads. But it's hard to come up with an example of a totally invalid scientific or engineering fad that had many different points of origin.

See also Unified Modeling Language (UML)


ADDITIONAL READING & SOURCES: Wikipedia entries for Object-oriented programming: OOP languages, Unified Modeling Language (UML), Executable UML; Aspect-oriented programming;

Peter Müller, Introduction to Object-Oriented Programming Using C++ ; "IBM Smalltalk Tutorial - Table of Contents"; Richard Mansfield "OOP Is Much Better in Theory Than in Practice"(2005); B. Jacobs, "Object Oriented Programming Oversold!";

BOOKS: An Introduction to Database Systems-7th Edition, C.J. Date, Addison-Wesley (2000), Chapter 21; Harold Abelson & Gerald Jay Sussman, Structure and Interpretation of Computer Programs, MIT Press (1996; complete text online); Chapter 3;


08 September 2006

Java and CMS

I mentioned rather briefly my interest in Java-powered CMS's (here). There are not many wiki engines written in Java, possibly because it's more demanding. A Java applet is a program run by the web browser, which is then responsible for converting the available data into a readable webpage. My impression, which could be wrong, is that such interactive webpages are more robust and less prone to unintended results when loading, since they are designed to actually interface with the web browser's virtual machine. In contrast, programs written in PHP or JavaScript are designed to create another layer of interface by prompting the website's host to generate a page.
(I tried to discuss this in the prior post on CMS applications linked above. Basically, most CMS applications either generate static pages, which are created as stand-alone HTML files, or else they follow the database format, in which case every distinguishing trait of each page in the website is saved in computer memory as a field in a database record. The latter design is usually more efficient in terms of memory and searching, and is essential for very large sites like Wikipedia. In either case, however, the CMS application that powers the website must generate a file--temporary or permanent--that is read as HTML.)

Another reason why Java-based CMS's might be better is that they do not need to launch a new server process whenever the user interacts with the application. Suppose it is a WikiEngine, for example, which is accessed by a large number of users. Each time a user wants to preview her new post, a CGI application is required to launch a new process; the Java app need only launch a new thread.
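The difference is easy to feel in a quick measurement. This Python sketch is only illustrative (the per-request work is a stand-in, and absolute timings will vary by machine): it runs the same trivial job twenty times as fresh interpreter processes, CGI-style, and twenty times as threads.

```python
import subprocess
import sys
import threading
import time

def handle_request():
    pass  # stand-in for the work of rendering a wiki preview

N = 20

# CGI-style: a brand-new interpreter process for every request.
start = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"])
process_time = time.perf_counter() - start

# Java-app-style: a new thread within one long-running process.
start = time.perf_counter()
threads = [threading.Thread(target=handle_request) for _ in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
thread_time = time.perf_counter() - start

print(f"processes: {process_time:.3f}s, threads: {thread_time:.3f}s")
```

On any ordinary machine the thread loop finishes orders of magnitude faster, which is the whole argument for a long-running servlet over per-request CGI processes.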

CGI versus Java: not a valid comparison!

It has to be pointed out that the dichotomy between CGI and Java is not valid. CGI is, after all, an open application programming interface (API); Java is a programming language. One can create a CGI application that is powered by Java, although this is not common. Generally speaking, Perl or PHP is used for programming CGI applications; Java applets are used for programs that run off the visitor's web browser.

However, in researching this essay, it became apparent that Java (unlike Perl or PHP) can replace many of the functions of a CGI application, while executing those functions in a way that is, in some ways, preferable to (and logically exclusive of) the CGI API. Conversely, most CMS's that are in common use were created in Perl or PHP (not Java!) because they are easily understood by people with a casual familiarity with HTML. Also, it is often unnecessary to have a costly Java application when mere HTML with a little JavaScript will do fine.
__________________________________________________________
There are quite a few Java-powered WikiEngines, mostly of the database orientation. Courtesy of WikiMatrix, I am aware of Clearspace, Corendal, Ikewiki, JAMWiki, JSPWiki, SnipSnap, and XWiki. In addition to those named, there are some systems developed for large organizations, such as SamePage, which I have ignored. XWiki (samples) seems to be oriented to professional developers, and I don't think it's really feasible for my purposes.

Ikewiki is a semantic wiki developed in Salzburg, Austria. Semantic wikis (SW's) differ from the usual type in that they impose a peculiar logical structure on the data. So far I have found no implementations.

Examination of wikis created from these engines has been extremely time-consuming, but let's make some quick notes. Clearspace is a commercial product ($29/user) from Jive Software. It's evidently used in the BBC's website, TechRepublic forums, CNET forums, and Amazon.

JAMWiki is an interesting concept: it's a WikiEngine with feature parity with MediaWiki (the most commonly implemented of all, and used with Wikipedia). So far, the selection of implementations is very slim indeed. Janne Jalkanen created JSPWiki to develop and advertise coding tricks, but it's spartan and specific to the general purpose of JSP.


10 July 2006

What is an Integrated Development Environment?

At my job there are a number of proprietary applications written in Visual Studio. Visual Studio is a Microsoft package of compilers/interpreters and debugging aids, collectively known as an "integrated development environment" (IDE).

Compiler
Computer hardware responds to a type of computer language called machine code, which is expressed in strings of ones and zeros (binary code). Assembly languages use symbols like conventional numerals and letters; in programming parlance, such symbols are called mnemonics, because they can be memorized. A program called an "assembler" translates assembly code into machine code for the computer processor. A compiler renders high-level programming languages such as C, Pascal, BASIC, or COBOL into machine code.

Assemblers tend to be specific to one particular processor; the assembler converts symbols into machine code in a general one-to-one correspondence, and an analogy can be made to converting Imperial units of weights and measures into their metric equivalents. Compilers tend to interface with the OS kernel, so they are not so processor-specific, but their output must be compatible with that particular kernel, and the compiler can only recognize code from a particular programming language (of which there are many). I say this because assembly languages and their assemblers are often referred to by the processor they served, while compilers are typically known by the language (and operating system) they were written for. The appropriate analogy here is that of translating from, say, English to mathematical notation. There is hence a correspondingly greater range of flexibility and functional specialization.

Very few programs today are written entirely, or even predominantly, in assembly code. Instead, programmers normally use a "high-level" language, such as C, Pascal, BASIC, or FORTRAN, to write applications. "High-level" means that the programming language uses more familiar words as commands. Today, compilers and assemblers are typically so fast that there is relatively little reason to use a lower-level language to program.

Interpreter
(main article)
An "interpreter" is a program that executes another program line by line, rather than translating it into assembly or machine code. When an interpreter runs a program, it responds to errors differently than would a computer running the compiled version of the same program. Since the compiled program is merely a translation, an error would potentially cause the computer running the program to crash. An interpreter can stop and send an error message.

Debuggers
Debuggers are programs that can identify problems in a program they are "running." They can also supply the programmer with clues as to the error in the program, such as indicating where the error occurred and offering a general diagnosis of the problem. Debuggers can also be used to defeat a program, such as one providing copy protection.


Screen capture of Macro Debugger, MS Word

VARIETIES OF IDE's
Visual Studio is a common form of IDE, obviously; it supports several different computer languages: C#, J#, and Visual Basic. Other IDE's that support multiple programming languages are Sun Microsystems' NetBeans IDE (J2SE, web, EJB, and mobile applications) and the open-source Eclipse (which can potentially support C/C++, CFML, Fortran, Lua, PHP, Perl, Ruby, Python, telnet, and database development). Sybase PowerBuilder is both a type of computer language and an IDE for building PowerBuilder applications.

In the past, it was more common for IDE's to support a single programming language (e.g., Borland's TurboPascal). Multi-language IDE's typically include additional tools that help port an application to a different language, such as database mapping tools.
