Description
When facing challenging statistical problems, one has to consider taking advantage of parallel computing. With the availability of multicore architectures even in commodity computers, there is an increased demand for practical strategies for utilizing these architectures. Generally, there are two types of architectures: shared memory systems and distributed memory systems. Each has advantages and disadvantages that must be considered when creating parallel applications.
In this talk we present strategies for parallelizing programs using different packages available in R. On the basis of an example from numerical algebra, we illustrate how both hardware architectures can be used to achieve higher performance: for distributed memory systems such as clusters of workstations, we show how MPI can be used to parallelize a program explicitly. For shared memory systems, OpenMP can improve the performance of a sequential program through implicit (compiler-driven) parallelization. Finally, we present results of a benchmark experiment comparing the parallel routines with their sequential counterparts.
Period | 19 June 2008 → 21 June 2008 |
---|---|
Event title | ERCIM Workshop |
Event type | Not specified |
Degree of recognition | International |