User Manual of the ParJava Environment.

I. INTRODUCTION.

The ParJava environment allows the user to edit, debug, and execute parallel programs on homogeneous and heterogeneous shared-memory computing resources (a supercomputer, a cluster, or a LAN). The environment also makes it possible to model homogeneous computing resources on heterogeneous ones. The main advantage of the ParJava environment is the ability to execute a parallel program on a scalable computing system without any modification or transformation.

The user interface of the ParJava environment can be launched on one of the nodes of a parallel computing system.

II. INSTALLATION.

Although the ParJava environment is implemented in Java, there is a platform-dependent part of the libraries that deals with MPI. Implementing the message-passing layer in pure Java is an open problem for us; solving it would make the ParJava libraries fully portable.

Currently there are two types of ParJava distributions:

  • The first one is an archive file that includes the source code of the native libraries;
  • The second one is an archive file that includes precompiled native libraries for Solaris and Linux.

In the first case the native libraries must be compiled. In the second case it is enough to unpack the archive file and set the environment variables.

Installation of the ParJava distribution that includes the source code of the native libraries:

1. First, please make sure that the following software is installed:

  • ND LAM – an implementation of the Message Passing Interface (MPI). The ParJava environment was developed using LAM version 6.3.1.
  • JDK 1.2 (or later) – the Java Development Kit: the Java runtime system, the Java compiler, the Java API, and other tools.
  • GCC 2.95 – the GNU C/C++ compiler, which is needed to build the native libraries.

2. To compile the native libraries, make sure that the JAVA_HOME and LAMHOME environment variables are set and point to the directories containing the JDK and LAM. If these variables are not set, please set them manually in your shell configuration files.

NOTE:

  • Moreover, to avoid conflicts with the LAM tools, we recommend putting the Java VM location first in the PATH environment variable (for example, PATH=$JAVA_HOME/bin:<…>).
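As a sketch, the lines added to a shell configuration file might look as follows (the install paths are hypothetical; substitute the actual locations of your JDK and LAM installations):

```shell
# Hypothetical install locations — adjust to your system.
export JAVA_HOME=/usr/local/jdk1.2
export LAMHOME=/usr/local/lam-6.3.1
# Put the Java VM first in PATH to avoid conflicts with the LAM tools.
export PATH=$JAVA_HOME/bin:$PATH
```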

3. Extract the ParJava.tar.gz archive file

#gunzip ParJava.tar.gz

#tar xvf ParJava.tar

4. The ParJava directory will be created. This directory contains the libraries of the ParJava environment. To install (compile the native libraries, prepare the .PJ configuration directory, etc.) you have to execute setup inside it.

#./setup

5. Please add the ParJava/classes path to CLASSPATH and the ParJava/lib path to LIBRARY_PATH.
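For example, assuming the archive was unpacked in your home directory (the prefix is an assumption; adjust it to the actual location), the corresponding configuration lines could be:

```shell
# Hypothetical install prefix — adjust to where ParJava was unpacked.
PARJAVA_HOME=$HOME/ParJava
export CLASSPATH=$PARJAVA_HOME/classes:$CLASSPATH
export LIBRARY_PATH=$PARJAVA_HOME/lib:$LIBRARY_PATH
```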

6. To launch the ParJava environment, execute the dev command inside the ParJava directory: #dev &

Installation of the ParJava distribution that includes precompiled native libraries for Solaris and Linux:

1. First, please make sure that the following software is installed:

  • ND LAM – an implementation of the Message Passing Interface (MPI). The ParJava environment was developed using LAM version 6.3.1.
  • JDK 1.2 (or later) – the Java Development Kit: the Java runtime system, the Java compiler, the Java API, and other tools.

2. Extract the ParJava-Linux-i386 archive file if you run Linux, or the ParJava-Solaris-sparc archive file in the Solaris case.

#gunzip <the archive file name>.tar.gz

#tar xvf <the archive file name>.tar

3. The ParJava directory will be created. This directory contains the libraries of the ParJava environment. To install (prepare the .PJ configuration directory, etc.) you have to execute setup inside it.

#./setup

4. Please add the ParJava/classes path to CLASSPATH and the ParJava/lib path to LIBRARY_PATH.

NOTE:

  • Moreover, to avoid conflicts with the LAM tools, we recommend putting the Java VM location first in the PATH environment variable (for example, PATH=$JAVA_HOME/bin:<…>).

5. To launch the ParJava environment, execute the dev command inside the ParJava directory: #dev &

NOTE: If you have any problems during installation, please send your questions to

III. EXECUTION OF PARALLEL PROGRAMS.

TOOLS: Access to the facilities for executing parallel programs on a parallel computing system is provided by the “Tools” menu of the main window of the environment GUI.

New Net – Using the «New Net» menu item of the ParJava GUI, a user can choose the nodes that form a computing network. The parallel program will be executed on this network. When this item is clicked, a dialog box appears. The dialog box contains two lists. The first one is the list of accessible nodes of the parallel computing system. The second one is the list of nodes included in the computing network. With the help of the «add», «add all», and «remove» buttons, a user can build the computing network by adding nodes to or removing them from the appropriate list.

NOTE:

  • A user MUST form a computing network before the first launch of a parallel program.
  • The ParJava environment is launched on one node of the parallel system. This node is added to the computing network automatically; the “root” identifier, in brackets, is appended to the name of this node.
  • The list of accessible nodes of the parallel system is formed from the .rhosts configuration file. If this file is missing, you have to create it manually; consult the UNIX documentation.
  • When ParJava exits, the current state of the environment is saved, including the list of nodes of the computing network.
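As a sketch, a .rhosts file simply lists the accessible hosts, one per line, optionally followed by a user name (the host and user names below are hypothetical):

```
node1.cluster.local  user
node2.cluster.local  user
node3.cluster.local  user
```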

Hosts Performance – Determines the relative performance of the nodes of the computing network. When this item is chosen, benchmarks are run. These benchmarks determine the relative performance of the JavaVM running on each node of the computing network. The most powerful node is assigned 100 percent, and the performance of all other nodes of the computing network is calculated relative to this node.
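The relative figures can be derived from per-node benchmark times, for example, as follows (a sketch only; the class, its method, and the sample times are hypothetical, and the actual benchmark ParJava runs is not specified here):

```java
// Sketch: derive relative node performance from per-node benchmark
// times (a shorter time means a faster node).
public class RelativePerformance {
    // Returns one percentage per node; the fastest node gets 100.
    public static double[] relative(double[] benchmarkTimes) {
        double best = Double.MAX_VALUE;
        for (double t : benchmarkTimes) {
            if (t < best) best = t;
        }
        double[] percent = new double[benchmarkTimes.length];
        for (int i = 0; i < benchmarkTimes.length; i++) {
            percent[i] = 100.0 * best / benchmarkTimes[i];
        }
        return percent;
    }

    public static void main(String[] args) {
        // Hypothetical benchmark times in seconds for three nodes.
        double[] p = relative(new double[] {2.0, 4.0, 5.0});
        for (double v : p) System.out.println(v); // 100.0, 50.0, 40.0
    }
}
```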

Compile – Using the «Compile» item of the main window of the ParJava environment, a user can compile a Java program. When this item is invoked, a dialog box is shown where you can edit the compilation command. The default is

javac <full path/class name>

Messages from the compilation process (warnings, errors, etc.) are shown in a separate window.

NOTE:

  • The JDK compiler is used by default.
  • The JDK must be installed beforehand. For more information, consult the JDK documentation.

Run – Using the «Run» item of the main window, a user can execute a sequential Java program. When this item is invoked, a dialog box is shown where you can edit the execution command. The default is

java <class name>

Run on Homogeneous Net – Using this item of the main window, a user can execute an SPMD Java program on a HOMOGENEOUS network of Java virtual machines. The homogeneous network is modeled on the current computing network with an optimal number of nodes. When this item is invoked, a dialog box is shown where you can set the execution mode. For more details, see the documentation and articles describing ParJava on the

Run on Heterogeneous Net – Using this item of the main window, a user can execute an SPMD Java program on a HETEROGENEOUS network of Java virtual machines. For more details, see the documentation and articles describing ParJava on the

NOTE:

  • It is necessary to note that a node is excluded from the computing network if its relative performance is less than 10 percent.
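A minimal SPMD program for these run modes might look as follows. This is a sketch assuming an mpiJava-style binding: the MPI class and its Init, Rank, Size, and Finalize methods mirror the function names listed in Appendix «А», but the exact package and signatures are assumptions, not the documented ParJava API.

```java
// Sketch of an SPMD "hello" program, assuming an mpiJava-style binding.
// The names MPI.Init, MPI.COMM_WORLD.Rank(), MPI.COMM_WORLD.Size(), and
// MPI.Finalize() are assumptions based on the function list in Appendix «А».
public class HelloSpmd {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);                     // enter the parallel section
        int rank = MPI.COMM_WORLD.Rank();   // number of this parallel branch
        int size = MPI.COMM_WORLD.Size();   // total number of branches
        System.out.println("Branch " + rank + " of " + size);
        MPI.Finalize();                     // leave the parallel section
    }
}
```

Running such a program through «Run on Homogeneous Net» would start one branch per Java virtual machine in the modeled network.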

ANALYZERS: When a sequential program is tested and debugged, its correctness, efficiency, and stability must be ensured. In the case of a parallel program, scalability must be ensured as well. For this purpose it is useful for a developer to know certain properties of the program, in particular its profiles. An effective distribution of an SPMD program over the nodes of a heterogeneous network requires knowledge of the program parameters that define the actual speed of program execution on each node of the network. The ParJava environment has tools to determine these parameters, collected under the “Analyzers” menu item. The “Instrument” and “Profile” tools allow the user to estimate the execution time of the sequential parts of a program. This makes it possible to take the weight of the sequential part into account when the performance of the nodes is calculated.
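The effect of instrumentation can be pictured as follows (a hand-written illustration of what inserted timer calls around a sequential part might look like; the class and method names are hypothetical, and ParJava performs the insertion automatically):

```java
// Sketch of a sequential part after timer calls have been inserted
// around it (shown by hand here; the instrumenter does this itself).
public class InstrumentedLoop {
    // Sums the integers 0..n-1 and reports the elapsed wall-clock time.
    static long timedSum(int n) {
        long start = System.currentTimeMillis();            // inserted: region start
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        long elapsed = System.currentTimeMillis() - start;  // inserted: region end
        System.out.println("sequential part took " + elapsed + " ms");
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("sum = " + timedSum(1000000));
    }
}
```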

Trace Visualization – While a parallel program is executing, a user can collect trace information (see the Test_mode item). To visualize the trace, invoke the «Trace Visualization» item of the main menu.

The “Trace Visualization” tool displays each parallel branch of the SPMD program as a horizontal line. Calls to the communication library are marked on this line by small squares. A time bar is displayed at the top of the image; the time of each communication function call can be read from this bar in milliseconds. For compactness, the communication functions are represented on the image by numbers; for example, the number 0 represents the Init() function and the number 1 represents the Finalize() function. The full legend is given in Appendix “А”. When the cursor hovers over a numbered square, a hint appears containing the name of the communication function and the relative time of its execution. A green line connects some squares to show that a branch of the parallel program has invoked a communication function and is waiting for an external event to complete it. The length of the green line is proportional to the time spent waiting for the external event (waiting for data, for example).

If the button on the left side of the image is clicked, the idle time of that processor is calculated and shown on a diagram in milliseconds. To get the diagram for all processors, click the “All” button. The result of program profiling is stored automatically in system files, which are accessible to users through the “Trace Visualization” tool. The names of these files consist of two parts: the first part is the number of the executed program, and the second part is the number of the process. When visualization starts, the scale of the viewed time segment is chosen automatically from the size of the image; the user can then change the scale. A user can select a part of the trace with two red borders and click the “Zoom In” or “Zoom Out” buttons to change the scale of the viewed time segment.

Instrument – The program is instrumented: the tool inserts debugging calls to a timer. This tool is still being implemented.

Profile – The tool builds a dynamic profile of a parallel program. This tool is still being implemented.

Normal_mode – The default mode.

Test_mode – The mode that translates a parallel program into a test one. A debugging library is used in this mode. If “Test_mode” is set, each branch of the parallel program collects a history of its parallel execution in a special file. All events connected with parallel execution are written to this file; the absolute time of the whole branch execution and the time between parallel events are recorded. For example, all calls to communication functions are such events. The special file also contains additional information about the events (the size of the event, the source and target of the event, the number of the Java source line, etc.). The file is created automatically by the debugging library. A user can visualize the stored history of the parallel program execution via the “Trace Visualization” item of the menu.

APPENDIX «А». Correspondence of code numbers to MPI function names.

0 Init
1 Finalize
2 Wtime
3 Wtick
4 Get_processor_name
5 Initialized
6 Buffer_attach
7 Buffer_detach
8 clone
9 Size
10 Rank
11 Compare
12 Free
13 Is_null
14 Group
15 Test_inter
16 Create_intercomm
17 GetIntercomm
18 Object_Serialize
19 Object_Deserialize
20 Send
21 Recv
22 Sendrecv
23 Sendrecv_replace
24 Bsend
25 Ssend
26 Rsend
27 Isend
28 Ibsend
29 Issend
30 Irsend
31 Irecv
32 Send_init
33 Bsend_init
34 Ssend_init
35 Rsend_init
36 Recv_init
37 Pack
38 Unpack
39 Pack_size
40 Iprobe
41 Probe
42 Attr_get
43 Topo_test
44 Abort
45 Errhandler_set
46 Errorhandler_get
47 Disconnect
48 Get_parent
49 Join
50 clone
51 Remote_size
52 Remote_group
53 Merge
54 clone
55 Split
56 Creat
57 Barrier
58 copyBuffer
59 newBuffer
60 Bcast
61 Gather
62 Gatherv
63 Scatter
64 Scatterv
65 Allgather
66 Allgatherv
67 Alltoall
68 Alltoallv
69 Reduce
70 Allreduce
71 Reduce_scatter
72 Scan
73 Create_cart
74 Create_graph
75 Spawn
76 Spawn( ... , int[] array_of_errcodes)
77 Spawn_multiple
78 Spawn_multiple( ... , int[] array_of_errcodes)