Every parallel programming language must address certain issues, either explicitly or implicitly. There must be a way to create parallel processes, and there must be a way to coordinate the activities of these processes. Sometimes the processes work on their own data and do not interact. But when processes exchange results, they must communicate and synchronize with each other. Communication and synchronization can be accomplished by sharing variables or by message passing.

There are two methods of synchronization: synchronization for precedence and synchronization for mutual exclusion. Precedence synchronization guarantees that one event does not begin until another event has finished. Mutual exclusion synchronization guarantees that only one process at a time enters a critical section of code where a shared data structure is manipulated.




Linda

Linda is a MIMD model of parallel computation. The Linda programmer envisions an asynchronously executing group of processes that interact by means of an associative shared memory called tuple space. Tuple space consists of a collection of logical tuples. Parallelism is achieved by creating process tuples, which are evaluated by processors needing work. Parallel processes interact by sharing data tuples. After a process tuple has finished execution, it returns to tuple space as a data tuple.
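Linda adds a handful of tuple-space operations (out, in, rd, eval) to a host language such as C. The following C-Linda sketch is illustrative only (it requires a Linda system to compile; the tuple names and counts are assumptions for the example):

```c
/* C-Linda sketch: out() writes a tuple into tuple space, in() removes
   a matching tuple (blocking until one exists), and eval() creates a
   live "process tuple" that turns into a data tuple when the function
   returns. In C-Linda the program entry point is real_main(). */
int worker(int id)
{
    int t;
    in("task", ? t);              /* withdraw any ("task", t) tuple   */
    return t * t;                 /* result re-enters tuple space as
                                     the data tuple ("worker", t*t)   */
}

real_main()
{
    int i;
    for (i = 0; i < 4; i++)
        eval("worker", worker(i)); /* spawn four process tuples        */
    for (i = 0; i < 4; i++)
        out("task", i);            /* publish four data tuples of work */
}
```

Because in() matches associatively on tuple contents rather than on an address, the workers need not know which process produced the work they consume.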


Message Passing Interface

Message passing is a paradigm used widely on certain classes of parallel machines. Indeed, many significant applications have been cast in this paradigm. Unfortunately, many vendors have implemented their own hardware-specific variants. The resultant lack of a standard has severely restricted the effective use of the message-passing model.

To address this problem, researchers at Argonne National Laboratory have collaborated with a broad group of parallel computer vendors, researchers, and users on the definition of MPI (Message Passing Interface).

MPI, introduced by the Message-Passing Interface Forum in April 1994, is the standard for multicomputer and cluster message passing. Its goal is to provide a widely used, portable standard for writing message-passing programs.
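A minimal point-to-point MPI program in C might look as follows; this is a sketch that assumes an installed MPI implementation (compiled with mpicc and launched on two processes with mpirun):

```c
#include <mpi.h>
#include <stdio.h>

/* Rank 0 sends one integer to rank 1, which receives and prints it. */
int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

The same source runs unchanged on any conforming MPI implementation, which is precisely the portability the standard was created to provide.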


Parallel Virtual Machine (PVM)

PVM is a software package that permits a heterogeneous collection of Unix computers hooked together by a network to be used as a single large parallel computer. Thus large computational problems can be solved more cost effectively by using the aggregate power and memory of many computers. The software is very portable. The source, which is available free through netlib, has been compiled on everything from laptops to CRAYs.

PVM enables users to exploit their existing computer hardware to solve much larger problems at minimal additional cost. Hundreds of sites around the world are using PVM to solve important scientific, industrial, and medical problems in addition to PVM's use as an educational tool to teach parallel programming. With tens of thousands of users, PVM has become the de facto standard for distributed computing world-wide.
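In PVM, a master task typically spawns workers across the virtual machine and exchanges packed messages with them. The following C sketch shows the master side; it requires the PVM daemon and libpvm3 to build and run, and the task name "worker" is an assumption for the example:

```c
#include <pvm3.h>
#include <stdio.h>

/* PVM master sketch: spawn one worker task anywhere in the virtual
   machine, then wait for it to send back a single integer. The
   worker (not shown) would pvm_initsend(), pvm_pkint(), and
   pvm_send() its result to pvm_parent() with message tag 0. */
int main(void)
{
    int tid, n;

    pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &tid);
    pvm_recv(tid, 0);        /* block until a tag-0 message arrives */
    pvm_upkint(&n, 1, 1);    /* unpack one integer from the buffer  */
    printf("got %d from worker\n", n);
    pvm_exit();              /* leave the virtual machine cleanly   */
    return 0;
}
```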


Occam

"The occam programming language is designed to express concurrent algorithms and their implementation on a network of processing components.

Occam enables an application to be described as a collection of processes, where the processes execute concurrently, and communicate with each other through channels. Each process in such an application describes the behaviour of a particular aspect of the implementation, and each channel describes a connection between two processes. This approach has two important consequences. Firstly, it gives the program a clearly defined and simple structure. Secondly, it allows the application to exploit the performance of a system which consists of many parts.

Occam is developed at INMOS Limited, now SGS-THOMSON Microelectronics Limited in the United Kingdom. The development of the INMOS transputer, a family of devices which place one or more microcomputers on a single chip, has been closely related to occam, its design and implementation. The transputer reflects the occam architectural model, and may be considered an occam machine." (from Occam 2.1 Reference Manual, May 12, 1995)
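The process-and-channel model described above is visible even in a tiny occam fragment; this sketch shows two concurrent processes joined by a single channel:

```occam
-- Two processes run concurrently under PAR: the first outputs 42
-- on channel c, the second inputs it into x. The communication is
-- synchronized: neither process proceeds until both are ready.
CHAN OF INT c :
INT x :
PAR
  c ! 42
  c ? x
```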

Though occam was designed for multi-transputer systems, compilers now exist for other platforms as well.

KROC (Kent Retargetable Occam Compiler)

KROC is a portable OCCAM compiler system that enables OCCAM to run on non-transputer platforms (Sun SPARC running SunOS 4.1.3U1 or Solaris 2.5; DEC Alpha running OSF1/3.0). KROC works by translating code produced by an INMOS OCCAM Toolset compiler into the native assembler for the target architecture and linking in a small (< 2K bytes) kernel that provides the process scheduling and message-passing functionality of the transputer micro-code.

A release of KROC is available.


Sisal

Mission: The objectives of the Sisal Language Project are to develop high-performance functional compilers and runtime systems that simplify the process of writing scientific programs on parallel supercomputers, and to help programmers develop functional scientific applications.

Impact: Functional languages such as Sisal provide a low-cost approach to developing parallel computing applications that still offer high performance and portability. This is of major significance, as parallel software costs are projected to top $1 trillion. Despite the commercial availability of multiprocessor computer systems, the number of parallel scientific and commercial applications in production use today remains small. It is estimated that the worldwide cost of developing software for sequential machines will reach $450 billion in 1995. Even if only a quarter of the software is parallelized, it would cost at least an additional $550 billion. Functional programming can reduce this increased cost while still providing high performance and portability.

Functional languages, such as Sisal, promote the construction of correct parallel programs by isolating the programmer from the complexities of parallel processing. Based on the principles of mathematics, Sisal exposes implicit parallelism through data independence and guarantees determinate results.
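A small Sisal loop expression illustrates how implicit parallelism arises; the sketch below follows Sisal 1.2 syntax as best recalled, so treat the exact form as an assumption:

```sisal
% Sum of squares as a Sisal loop expression. The iterations carry
% no data dependences, so the compiler is free to run them all in
% parallel, and single assignment guarantees a determinate result.
function SumOfSquares(n : integer returns integer)
  for i in 1, n
    returns value of sum i * i
  end for
end function
```

The programmer never names processes, locks, or messages; the parallelism is discovered by the compiler from the data independence of the loop body.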



The page last updated on Mon, 15 Jun 1998 14:53:30 GMT