Research in Parallel Computing
Parallel Algorithms, Parallel Architectures.
The main research topic is the design of scalable parallel
algorithms for distributed memory computers.
Applications on parallel machines are still largely restricted to
trivially parallelizable problems, where the communication
requirements are modest.
On the other hand, there is a large body of
literature on the design of parallel algorithms for
many non-trivial problems.
The results, however, leave much to be desired due
to the absence of a convenient model of parallelism
that is sufficiently close to existing machines
to allow a reasonable prediction of the performance of the algorithms.
The problem is obvious for parallel algorithms based on
the PRAM model, but even algorithms based on network
models are frequently problematic, and the speedups obtained
by such algorithms on commercial multiprocessors are often disappointing.
It is therefore imperative to design models and algorithms
such that the theoretical complexities are close to the
times observed in real implementations.
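One well-known family of models designed with exactly this goal is BSP (Bulk Synchronous Parallel), mentioned here only as an illustration; the source does not name a specific model. A BSP superstep's predicted cost combines local work, communication volume, and synchronization, so it can be checked directly against measured times:

```python
# Illustrative BSP cost formula (not a model proposed in this text):
# the predicted time of one superstep on a machine with per-word
# communication cost g and barrier-synchronization cost l.

def bsp_superstep_cost(w, h, g, l):
    """w: maximum local operations on any processor,
    h: maximum words sent or received by any processor,
    g: time per word communicated, l: barrier cost.
    Returns the predicted superstep time w + h*g + l."""
    return w + h * g + l
```

Because g and l are measured on the target machine, predictions made with such a model can be compared directly with observed running times.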
Under this research goal, we are designing scalable
parallel algorithms for several problems:
the list ranking problem, the problem of determining
the connected components in a graph, resolution of
tridiagonal systems, etc.
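As one example from this list, the classic pointer-jumping algorithm solves list ranking in O(log n) parallel rounds. The sketch below is a sequential simulation of those rounds (the group's actual distributed-memory algorithms are not shown in this text and are more involved):

```python
# Pointer-jumping list ranking: each node i stores a successor
# succ[i]; the tail points to itself. rank[i] is the distance
# from i to the tail. Each round doubles every pointer's reach,
# so O(log n) synchronous rounds suffice.

def list_rank(succ):
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    succ = list(succ)
    while any(succ[i] != succ[succ[i]] for i in range(n)):
        # Compute the next round into fresh arrays, mimicking
        # the synchronous parallel update of all nodes at once.
        new_rank = [rank[i] + rank[succ[i]] for i in range(n)]
        new_succ = [succ[succ[i]] for i in range(n)]
        rank, succ = new_rank, new_succ
    return rank
```

For the list 0 → 1 → 2 → 3 (tail 3), the computed ranks are [3, 2, 1, 0].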
Another research topic is the parallelization of uniform loops.
In particular, we have been working on the problem of
cycle shrinking to parallelize loops with flow dependences.
The Parallel Computing Group has already obtained several
important results, including two new methods:
one combining generalized selective cycle shrinking
with the index shift method, and another proposing a
new technique called dependence reduction (a doctoral thesis).
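The basic idea behind cycle shrinking can be sketched on a simple case (this is the textbook transformation, not the group's generalized methods): when a loop carries a flow dependence of constant distance d, every group of d consecutive iterations is independent and can run in parallel, with only the outer loop over groups remaining sequential.

```python
# Cycle-shrinking sketch for the loop
#     for i in range(d, n): a[i] = a[i - d] + 1
# which has a flow dependence of constant distance d.

def shrunk_loop(a, d):
    n = len(a)
    for block in range(d, n, d):                 # sequential over groups
        for i in range(block, min(block + d, n)):  # the d iterations in a
            a[i] = a[i - d] + 1                    # group are independent
    return a
```

Within each group the reads a[i - d] refer only to values written in earlier groups, so the inner loop could be executed by d processors without synchronization.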
We have also done research on a special type of parallel
algorithm, systolic algorithms, which are well suited to
implementation in VLSI integrated circuits.
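A classic example of a systolic algorithm, given here only as an illustration (the group's VLSI designs are not detailed in this text), is the linear array for matrix-vector multiplication: n cells each hold one accumulator, data flows through the array, and entry A[i][j] is consumed by cell i at clock tick i + j.

```python
# Step-by-step simulation of a linear systolic array computing
# y = A @ x. Cell i accumulates y[i]; at tick t it multiplies
# A[i][t - i] by x[t - i] if that input has arrived.

def systolic_matvec(A, x):
    n = len(x)
    y = [0] * n
    for t in range(2 * n - 1):        # global clock ticks
        for i in range(n):            # all cells fire in parallel
            j = t - i
            if 0 <= j < n:
                y[i] += A[i][j] * x[j]
    return y
```

The regular, local data movement and the fixed schedule of 2n - 1 ticks are exactly the properties that make such algorithms attractive for VLSI implementation.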
The Parallel Computing Group is part of
LCPD - Laboratory of Parallel and Distributed Computing.