====== LavaMD2 ======
The code calculates particle potential and relocation due to mutual forces between particles within a large 3D space. This space is divided into large boxes that are allocated to individual cluster nodes. The large box at each node is further divided into smaller cubes, called boxes. Each box (the home box) is surrounded by 26 neighbor boxes; home boxes at the boundaries of the particle space have fewer. Particles interact only with other particles within a cutoff radius, since those at larger distances exert negligible forces. The box size is therefore chosen so that, for any particle in a home box, the cutoff radius does not reach beyond the neighbor boxes, limiting the reference space to a finite number of boxes.

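As a minimal illustration of this decomposition (not the benchmark's actual data layout), the C sketch below enumerates the up-to-26 neighbors of a home box in a hypothetical ''GRID_DIM'' x ''GRID_DIM'' x ''GRID_DIM'' grid of boxes; the names here are illustrative only.

<code c>
/* Minimal sketch (hypothetical layout, not the benchmark's data structures):
 * enumerate the up-to-26 neighbor boxes of a home box in a cubic grid.
 * Boundary boxes simply get fewer neighbors, as described above. */
#include <stdio.h>

#define GRID_DIM 4  /* hypothetical number of boxes per dimension */

static int box_id(int x, int y, int z) {
    return (z * GRID_DIM + y) * GRID_DIM + x;
}

/* Fills nbrs[] with the ids of valid neighbor boxes; returns their count. */
static int neighbors(int bx, int by, int bz, int nbrs[26]) {
    int count = 0;
    for (int dz = -1; dz <= 1; dz++)
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0 && dz == 0) continue; /* skip home box */
                int x = bx + dx, y = by + dy, z = bz + dz;
                if (x < 0 || x >= GRID_DIM ||
                    y < 0 || y >= GRID_DIM ||
                    z < 0 || z >= GRID_DIM) continue;        /* off the grid */
                nbrs[count++] = box_id(x, y, z);
            }
    return count;
}

int main(void) {
    int nbrs[26];
    /* A corner box has only 7 neighbors; an interior box has all 26. */
    printf("corner box:   %d neighbors\n", neighbors(0, 0, 0, nbrs));
    printf("interior box: %d neighbors\n", neighbors(1, 1, 1, nbrs));
    return 0;
}
</code>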
This code [1] was derived from the ddcMD application [2] by rewriting the front end and structuring it for parallelization. It represents the MPI task that runs on a single cluster node. While the details differ somewhat from the original, the code retains the structure of the MPI task in the original application. Since the rest of the MPI code is not included here, the application first emulates the MPI partitioning of the particle space into boxes. Then, for every particle in the home box, a nested loop processes interactions first with the other particles in the home box and then with the particles in all neighbor boxes. The processing of each particle consists of a single stage of calculation enclosed in the innermost loop. The nested loops were parallelized so that at any point in time a GPU warp/wavefront accesses adjacent memory locations. The speedup depends on the number of boxes, the number of particles per box (fixed), and the actual calculation for each particle (fixed). The application is memory bound, and the GPU speedup appears to saturate at about 16x compared to a single-core CPU.

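The loop nest can be sketched in plain C as below. The data structures and the pairwise kernel (a cutoff-truncated inverse-square term) are placeholders, not ddcMD's actual potential or the benchmark's layout; in the GPU versions, consecutive threads of a block would process consecutive particles ''i'' of a home box, which is roughly what makes the warp's memory accesses adjacent.

<code c>
/* Serial sketch of the loop nest described above. All names are
 * illustrative; the force term is a placeholder, not ddcMD's potential. */
#include <stdio.h>
#include <math.h>

#define MAX_PARTICLES 64   /* hypothetical particles per box */

typedef struct { double x, y, z; } Vec3;

typedef struct {
    Vec3 pos[MAX_PARTICLES];
    Vec3 force[MAX_PARTICLES];
    int  n;                 /* particles in this box */
} Box;

/* Accumulates forces on the home box's particles from one (home or
 * neighbor) box; the innermost loop is the single calculation stage. */
static void interact(Box *home, const Box *other, double cutoff)
{
    double cut2 = cutoff * cutoff;
    for (int i = 0; i < home->n; i++) {            /* particle in home box */
        for (int j = 0; j < other->n; j++) {       /* partner particle */
            double dx = other->pos[j].x - home->pos[i].x;
            double dy = other->pos[j].y - home->pos[i].y;
            double dz = other->pos[j].z - home->pos[i].z;
            double r2 = dx*dx + dy*dy + dz*dz;
            if (r2 == 0.0 || r2 > cut2) continue;  /* cutoff radius check */
            double s = 1.0 / (r2 * sqrt(r2));      /* placeholder kernel */
            home->force[i].x += s * dx;
            home->force[i].y += s * dy;
            home->force[i].z += s * dz;
        }
    }
}

int main(void)
{
    Box home = { .n = 2 }, nbr = { .n = 1 };
    home.pos[0] = (Vec3){0.0, 0.0, 0.0};
    home.pos[1] = (Vec3){0.5, 0.0, 0.0};
    nbr.pos[0]  = (Vec3){1.0, 1.0, 1.0};
    interact(&home, &home, 2.0);   /* home-box interactions first */
    interact(&home, &nbr, 2.0);    /* then one call per neighbor box */
    printf("force on particle 0: (%f, %f, %f)\n",
           home.force[0].x, home.force[0].y, home.force[0].z);
    return 0;
}
</code>

A driver calls ''interact()'' once for the home box against itself and then once per neighbor box, matching the processing order described above.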
==== Papers ====
[1] L. G. Szafaryn, T. Gamblin, B. R. de Supinski and K. Skadron. "Experiences with Achieving Portability across Heterogeneous Architectures." Submitted to the WOLFHPC workshop at the 25th International Conference on Supercomputing (ICS). Tucson, AZ. 2010. ([[http://www.cs.virginia.edu/~lgs9a/publications/11_ics_wolfhpc_paper.pdf|pdf]])

[2] F. H. Streitz, J. N. Glosli, M. V. Patel, B. Chan, R. K. Yates, B. R. de Supinski, J. Sexton and J. A. Gunnels. "100+ TFlop Solidification Simulations on BlueGene/L." In Proceedings of the 2005 Supercomputing Conference (SC 05). Seattle, WA. 2005. ([[http://www.cs.virginia.edu/~lgs9a/rodinia/lavamd/paper_2.pdf|pdf]])

==== Presentation Slides ====
[1] L. G. Szafaryn, T. Gamblin, B. R. de Supinski and K. Skadron. "Experiences with Achieving Portability across Heterogeneous Architectures." Submitted to the WOLFHPC workshop at the 25th International Conference on Supercomputing (ICS). Tucson, AZ. 2010. ([[http://www.cs.virginia.edu/~lgs9a/rodinia/lavamd/lavamd.ppt|ppt]])
==== Code ====

OpenMP Version: ([[http://www.cs.virginia.edu/~lgs9a/rodinia/lavamd/lavamd_openmp_code.tar.gz|tar.gz]])

CUDA Version: ([[http://www.cs.virginia.edu/~lgs9a/rodinia/lavamd/lavamd_cuda_code.tar.gz|tar.gz]])

OpenCL Version: ([[http://www.cs.virginia.edu/~lgs9a/rodinia/lavamd/lavamd_opencl_code.tar.gz|tar.gz]])