Laboratory of Parallel Algorithms
and Architectures

Head of the Laboratory: Sergey V. PISKUNOV

SUMMARY.

Main topics include the investigation of different formal models of parallel processes, the simulation of parallel algorithms and structures, and the elaboration of methods for high-performance parallel architecture synthesis.

IN MORE DETAIL.

Modern parallel computing relies on two types of parallelism: coarse-grained and fine-grained.
This distinction is rather conventional; nevertheless, the following is worth noting.
Coarse-grained parallelism is inherent in computer systems composed of a number (tens or hundreds) of powerful interconnected computers forming a network.

Fine-grained parallelism is inherent in computing systems comprising a huge number (tens or hundreds of thousands) of relatively simple processing elements. The connections between them have a regular structure and are frequently (but not always!) organized according to the principle of local interaction. As a rule, such systems are highly specialized. The term "fine-grained parallelism" itself reflects the simplicity and brevity of each computing action. A characteristic feature of fine-grained parallelism is the approximate equality of computation intensity and data exchange intensity.

Fine-grained parallelism has a long history: it is the most "ancient" variety of parallelism. Its theory was developed concurrently with that of sequential computation and is associated with the name of John von Neumann. His theoretical model of a fine-grained parallel computer, referred to as the "cellular automaton", is widely known.
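
To give a minimal illustration of the cellular automaton model (a toy Python sketch, not von Neumann's original 29-state construction), the fragment below updates a one-dimensional ring of two-state cells: every cell computes its next state simultaneously from its own state and the states of its two nearest neighbours, which is exactly the kind of simple, local, synchronous action that fine-grained parallelism presumes.

    def ca_step(cells, rule):
        # rule maps a (left, self, right) neighbourhood to the new cell state
        n = len(cells)
        return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]              # all cells update at once

    # the elementary "rule 110" written out as a neighbourhood table
    rule110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
               (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    cells = [0] * 20 + [1]                      # a single "live" cell on a ring of 21
    for _ in range(10):
        print(''.join('#' if c else '.' for c in cells))
        cells = ca_step(cells, rule110)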

In the laboratory, fine-grained parallelism is investigated in many directions.
We consider fine-grained parallelism attractive because within its framework it is possible to find the best (for example, with respect to temporal characteristics) parallel algorithms for solving many practically important problems, both numerical and nonnumerical. Moreover, some problems can be solved only within a certain fine-grained parallel model of computation (for example, hard-to-formalize problems solved by neural networks through learning). The practical importance of fine-grained parallelism rests on two facts: first, it serves as a source of methods for solving complex problems on modern multiprocessor computer systems; second, many real and hypothetical special-purpose processors have a fine-grained parallel architecture.

INVESTIGATIONS OF FINE-GRAINED PARALLELISM
CARRIED OUT IN THE LABORATORY

The elaboration and investigation of fine-grained parallel algorithms (as of any others) is performed using a certain model of computation. Nowadays, the following classes of models are used for fine-grained computations: "cellular automaton", "systolic array", "associative processing system", "neural network", and "cellular neural network". The class of model chosen for constructing and exploring a parallel algorithm predetermines the architecture of the processing unit (possibly an abstract one) that realizes the given algorithm. The problem is to find the algorithm that is best, according to a certain criterion (for example, the number of steps), when realized in a given architecture. The role of such investigations can hardly be overestimated: they are of great practical and theoretical importance. Researchers dealing with models of one class can easily compare their results and join efforts in the search for the best parallel algorithms. The architecture of many real and hypothetical processors is in fact a hardware realization of a computation model from one of the above classes.
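
As a small example of how the chosen model class shapes the algorithm, the Python sketch below simulates the textbook linear systolic array for matrix-vector multiplication (an illustrative assumption, not a design developed in the laboratory): each cell keeps one component of the vector, and partial sums move through the array one cell per step, so the running time is determined by the array length rather than by a sequential instruction count.

    def systolic_matvec(A, x):
        # Simulate a linear systolic array computing y = A*x.
        # Cell j permanently stores x[j]; a token (row index i, partial sum)
        # enters cell 0 at step i and moves one cell to the right per step,
        # each cell adding its local product A[i][j] * x[j] to the passing sum.
        n = len(x)
        held = [None] * n            # token processed by cell j at the previous step
        y = [None] * n
        for t in range(2 * n - 1):   # enough steps for the last row to leave cell n-1
            for j in reversed(range(n)):     # right-to-left sweep = synchronous update
                token = held[j - 1] if j > 0 else ((t, 0) if t < n else None)
                if token is not None:
                    i, s = token
                    s += A[i][j] * x[j]
                    token = (i, s)
                    if j == n - 1:   # the row leaves the last cell with y[i] complete
                        y[i] = s
                held[j] = token
        return y

    A = [[1, 0, 2],
         [0, 1, 0],
         [3, 0, 1]]
    x = [1, 2, 3]
    print(systolic_matvec(A, x))     # [7, 2, 6]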

In line with the above, the following research themes are under investigation in the laboratory.
  1. Development of a cellular technology for parallel algorithm and structure synthesis, based on a model of distributed computation called the Parallel Substitution Algorithm.
  2. Analysis and synthesis of multiplanar cellular architectures (universal [2] and specialized [3]) oriented to 3D pipelined computations.
  3. Elaboration and investigation of associative parallel algorithms for the solution of nonnumerical (especially graph-theoretical) problems [4]; a toy sketch in this style is given after the list.
  4. Design and investigation of distributed functional structures [5] and the construction of high-performance special-purpose processors on this basis.
  5. Development and investigation of cellular neural algorithms for image processing [6], recognition of distorted patterns [7], and discrete optimization [8].
  6. Recently, investigations associated with the simulation of physical phenomena in discrete space (cellular automata, cellular neural networks) have been evolving intensively. We do not stand aside from this scientific direction: several problems related to mathematical physics have been stated using the cellular neural approach. In particular, cellular neural networks simulating autowaves and dissipative structures are under investigation [9].
  7. Research in fine-grained parallelism is performed with the help of program tools elaborated in the laboratory. Systems based on the languages STAR [10] and VEPRAN [11] are applied to the investigation of associative algorithms; these systems are permanently modified and improved. Cellular algorithms and structures (including neural and cellular neural networks) are studied with the help of the simulating system ALT [12]. A new, more powerful simulating system, WinALT [13], is under development.
  8. Design of combined architectures [14] of computer systems comprising special-purpose fine-grained parallel processors.
  9. Elaboration and investigation of associative algorithms for numerical processing. This research has grown into work on a program package for high-precision computations on computers of different types [15]; this work now proceeds in the Laboratory of Parallel Program Synthesis [16].
  10. Something new is often a well-forgotten past, and in order to move forward, the past should be learned well. Therefore, the history of computer science in Russia is also studied in the laboratory [17].
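
As promised in item 3, here is a toy Python sketch of the associative (bit-parallel) style. It is only an assumption about the flavour of such algorithms, not the STAR or VEPRAN formulation used in the laboratory: the graph is kept as bit rows, and the set of vertices reachable from a source grows by OR-ing whole rows at once, imitating the row-parallel operations of an associative processing system.

    def reachable(adj_rows, source):
        # adj_rows[v] is an integer whose bit u is set iff there is an edge v -> u
        reached = 1 << source                        # bit mask of reached vertices
        frontier = reached
        while frontier:
            next_mask = 0
            bits = frontier
            while bits:
                v = (bits & -bits).bit_length() - 1  # extract the next frontier vertex
                next_mask |= adj_rows[v]             # one row-parallel "associative" step
                bits &= bits - 1
            frontier = next_mask & ~reached          # keep only newly reached vertices
            reached |= frontier
        return reached

    # example: edges 0 -> 1 and 1 -> 2; vertex 3 is isolated
    adj = [0b0010, 0b0100, 0b0000, 0b0000]
    print(bin(reachable(adj, 0)))                    # 0b111: vertices 0, 1 and 2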

PERSPECTIVES.

Future investigations are seen in the following:


Last update October 2, 2000
Maintained by Elvire Kuksheva.