How to run HPL/HPCG/IO500 in WSL (4) | 青训营笔记

2. Change versions

You can install other versions by compiling from source; a version built from source is usually newer than the one shipped by the package manager.

 # For OpenMPI
 cd ~ && wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.5.tar.gz
 tar -xzvf openmpi-4.1.5.tar.gz && cd openmpi-4.1.5
 ./configure --prefix=/usr/local/openmpi
 make -j8 all
 sudo make install
 echo 'export PATH=/usr/local/openmpi/bin:$PATH' >> ~/.bashrc
 echo 'export LD_LIBRARY_PATH=/usr/local/openmpi/lib:$LD_LIBRARY_PATH' >> ~/.bashrc
 source ~/.bashrc
 
 # For OpenBLAS
 cd ~ && wget https://github.com/xianyi/OpenBLAS/releases/download/v0.3.23/OpenBLAS-0.3.23.tar.gz # If there is a network problem, please switch to https://ghproxy.com/https://github.com/xianyi/OpenBLAS/releases/download/v0.3.23/OpenBLAS-0.3.23.tar.gz
 tar -xzvf OpenBLAS-0.3.23.tar.gz && cd OpenBLAS-0.3.23
 make FC=gfortran -j8
 sudo make PREFIX=/usr/local/openblas install
 
 # For HPL
 
 cd ~ && wget https://www.netlib.org/benchmark/hpl/hpl-2.0.tar.gz
 tar -xzvf hpl-2.0.tar.gz && cd hpl-2.0
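One caveat about the two echo lines in the OpenMPI step: with double quotes the shell expands $PATH and $LD_LIBRARY_PATH at the moment the line is written, baking the current values into ~/.bashrc; single quotes keep the literal $PATH so it is re-expanded at every login. A throwaway demonstration of the difference:

```shell
# Write both variants into a temporary file instead of the real ~/.bashrc.
rc=$(mktemp)
echo "export PATH=/usr/local/openmpi/bin:$PATH" >> "$rc"   # $PATH expanded at write time
echo 'export PATH=/usr/local/openmpi/bin:$PATH' >> "$rc"   # literal $PATH, expanded at login
refs=$(grep -c '\$PATH' "$rc")   # only the single-quoted line keeps the $PATH reference
echo "$refs"
rm -f "$rc"
```

With the double-quoted form, directories added to PATH later would silently be dropped every time the frozen line is sourced.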

You then need to edit the architecture Makefile (here Make.Linux_PII_CBLAS) and point MPdir and LAdir at the corresponding installation paths, like this:

 TOPdir       = $(HOME)/hpl-2.0
 
 MPdir        = /usr/local/openmpi
 MPlib        = $(MPdir)/lib/libmpi.so
 
 LAdir        = /usr/local/openblas
 LAlib        = $(LAdir)/lib/libopenblas.a
 
 CC           = /usr/local/openmpi/bin/mpicc
 LINKER       = /usr/local/openmpi/bin/mpif77

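If you prefer not to edit the Makefile by hand, the same three changes can be scripted with sed. The sketch below runs against a tiny stand-in file so it is safe to try anywhere; for the real build, apply the same sed expressions to Make.Linux_PII_CBLAS in the hpl-2.0 top directory:

```shell
# Stand-in for Make.Linux_PII_CBLAS so the edits can be demonstrated end to end.
mk=$(mktemp)
printf '%s\n' 'TOPdir       = $(HOME)/hpl' \
              'MPdir        = /usr' \
              'LAdir        = /usr' > "$mk"
# Point the three paths at the source-built installs:
sed -i -e 's|^TOPdir.*|TOPdir       = $(HOME)/hpl-2.0|' \
       -e 's|^MPdir.*|MPdir        = /usr/local/openmpi|' \
       -e 's|^LAdir.*|LAdir        = /usr/local/openblas|' "$mk"
cat "$mk"
hits=$(grep -c '/usr/local' "$mk")   # MPdir and LAdir now point at the new installs
rm -f "$mk"
```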
Then recompile HPL and test again.

 cp ~/hpl-2.3/bin/Linux_PII_CBLAS/HPL-Tuning.dat ~/hpl-2.0/bin/Linux_PII_CBLAS/HPL.dat
 cd ~/hpl-2.0/bin/Linux_PII_CBLAS
 mpirun -np 8 xhpl

Output:

 ================================================================================
 T/V                N    NB     P     Q               Time                 Gflops
 --------------------------------------------------------------------------------
 WR13L2L2       16384   192     1     1              15.90              1.844e+02
 --------------------------------------------------------------------------------
 ||Ax-b||_oo/(eps*(||A||_oo*||x||_oo+||b||_oo)*N)=        0.0022339 ...... PASSED
 ================================================================================
 T/V                N    NB     P     Q               Time                 Gflops
 --------------------------------------------------------------------------------
 WR13L2L2       20352   192     1     1              35.05              1.604e+02
 --------------------------------------------------------------------------------
 ||Ax-b||_oo/(eps*(||A||_oo*||x||_oo+||b||_oo)*N)=        0.0019320 ...... PASSED
 ================================================================================

It seems the results are even worse.
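For context, a score like this can be compared against the machine's theoretical peak: cores × clock (GHz) × FLOPs per cycle (16 for a core doing double-precision AVX2 FMA). The core count and clock below are illustrative placeholders, not measured from the machine above; substitute your own:

```shell
# Theoretical peak = cores * GHz * FLOPs-per-cycle; 184.4 Gflops is the first
# measured score above. cores and ghz are example values -- substitute yours.
eff=$(awk -v gflops=184.4 -v cores=8 -v ghz=3.2 -v fpc=16 'BEGIN {
  peak = cores * ghz * fpc
  printf "%.1f%% of %.1f Gflops theoretical peak", 100 * gflops / peak, peak
}')
echo "$eff"
```

As a rough heuristic, efficiency well below half of peak usually points at the BLAS build or at the N/NB/P×Q settings rather than at MPI itself.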

5. In Windows

If you are on Windows, you can simply install the Intel® oneAPI Base Toolkit to get the benchmarks and the Intel® oneAPI HPC Toolkit to get mpiexec.

After installation, run the terminal in administrator mode and go to the directory <oneAPI Install path>\mkl\2023.1.0\benchmarks\mp_linpack.

Then run runme_intel64_dynamic.bat, or run xhpl_intel64_dynamic.exe directly.

 This is a SAMPLE run script. Change it to reflect the correct
 number of CPUs/threads, number of nodes, MPI processes per node, etc..
 This run started on:
 05/20/2023 Sat
 01:02
 Capturing output into: xhpl_intel64_dynamic_outputs.txt
 
 Done:
 05/20/2023 Sat
 01:02

And you will get an HPL report, xhpl_intel64_dynamic_outputs.txt, in the current directory.

Then you can refer to the performance tuning guide above to improve your score.
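One concrete first tuning step on either platform is sizing N in HPL.dat so the matrix fills most of RAM. A common rule of thumb is N ≈ sqrt(0.80 × memory_in_bytes / 8), rounded down to a multiple of NB; the 16 GiB figure below is just an example:

```shell
# Pick N for HPL.dat: ~80% of RAM, 8 bytes per double, rounded to a multiple of NB.
mem_kb=16777216   # example: 16 GiB; on Linux, read MemTotal from /proc/meminfo instead
nb=192
n=$(awk -v mem="$mem_kb" -v nb="$nb" 'BEGIN {
  n = sqrt(0.8 * mem * 1024 / 8)
  print int(n / nb) * nb
}')
echo "suggested N = $n"
```

Going much above 80% risks swapping, which hurts the score far more than a slightly smaller N.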