Parallel Programming with Message Passing Library and Its Precision of Calculation

Katsuhiro TAMURAa, Yuichi INADOMIb and Umpei NAGASHIMAb*

aShizuoka Industrial Research Institute of Shizuoka Prefecture
2078 Makigaya, Shizuoka city, Shizuoka 421-1298, Japan
bNational Institute of Advanced Industrial Science and Technology, Tsukuba Advanced Computing Center
1-1-1 Higashi, Tsukuba, Ibaraki 305-8561, Japan

(Received: December 31, 2001; Accepted for publication: February 12, 2002; Published on Web: March 22, 2002)

Using two programs that sum the value 0.1 — a repeating (circulating) fraction in binary — 10^9 times, we demonstrated the effect of parallel processing on both performance and accuracy of calculation. One program sums the values sequentially (program 1); the other uses a partial-sum technique, forming 10^4 partial sums of 10^5 terms each (program 2). Both programs were parallelized with the Message Passing Interface (MPI) and executed on four parallel computers — Alta Technology AltaCluster, Hitachi SR8000, IBM RS/6000 SP, and SGI Origin2000 — using up to 8 processors. Performance improved in proportion to the number of processors, because the communication cost is small relative to the computation. The computed precision was quite similar across the four machines. For program 1, the precision improved drastically as the number of processors increased, while little improvement was observed for program 2. This clearly shows that the accumulation of numerical error, namely the loss of significant digits, is avoided by parallel processing.
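The effect described in the abstract can be reproduced without MPI. The following is a minimal Python sketch — not the authors' code, which is not included on this page — with the problem sizes scaled down from 10^9 = 10^4 × 10^5 additions to 10^7 = 10^3 × 10^4 so it runs in seconds. The loss of digits occurs because once the accumulator is large, each added 0.1 is rounded to the accumulator's coarser precision; keeping accumulators small, as the partial-sum technique (and, equivalently, per-processor partial sums in the parallel runs) does, avoids most of that rounding error.

```python
# Sketch (assumption: not the authors' program) comparing sequential
# summation of 0.1 with the partial-sum technique from the abstract.
# Sizes are scaled down from the paper's 10^9 = 10^4 x 10^5 additions.

N_PARTIAL = 1_000   # number of partial sums (paper: 10^4)
N_TERMS = 10_000    # additions per partial sum (paper: 10^5)
N_TOTAL = N_PARTIAL * N_TERMS


def sequential_sum(n):
    """Program 1: add 0.1 to a single accumulator n times.
    As the accumulator grows, each added 0.1 is rounded to the
    accumulator's (coarser) unit in the last place."""
    s = 0.0
    for _ in range(n):
        s += 0.1
    return s


def partial_sum(n_partial, n_terms):
    """Program 2: form n_partial independent partial sums of n_terms
    additions each, then add the partial sums together.  Each partial
    accumulator stays small, so rounding error accumulates far less.
    In the parallel version, each processor computes a share of these
    partial sums independently."""
    total = 0.0
    for _ in range(n_partial):
        block = 0.0
        for _ in range(n_terms):
            block += 0.1
        total += block
    return total


exact = N_TOTAL / 10  # 1,000,000.0 exactly
seq = sequential_sum(N_TOTAL)
par = partial_sum(N_PARTIAL, N_TERMS)
print(f"sequential : {seq!r}  (error {abs(seq - exact):.3e})")
print(f"partial sum: {par!r}  (error {abs(par - exact):.3e})")
```

Running this shows the partial-sum result noticeably closer to the exact value of 10^6 than the sequential result, mirroring the paper's finding that program 2 (and program 1 run on many processors, where each processor effectively holds a small partial sum) avoids the accumulation of rounding error.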

Keywords: Parallel Computing, Single Program Multiple Data (SPMD) model, Message Passing Interface (MPI)
