8th VI-HPS Tuning Workshop (GRS, Germany)
Date
September 5-9, 2011
Location
German Research School for Simulation Sciences, Aachen, Germany
Goals
This workshop will:
- give an overview of the VI-HPS programming tools suite
- explain the functionality of individual tools, and how to use them effectively
- offer hands-on experience and expert assistance using the tools
The Paraver toolset from Barcelona Supercomputing Center will also be featured.
The workshop will be held in English and run from 09:00 until no later than 18:00 each day, with breaks for lunch and refreshments. There is no fee for participation; however, participants are responsible for their own travel and accommodation.
Schedule
Day 1 | Monday 5 Sep
09:00 | (registration & set-up of course accounts on workshop computers) [Optional] Individual preparation of participants' own codes
12:00 | (lunch)
13:00 | (registration)
13:30 | Welcome & Introduction to VI-HPS
15:00 | (break)
15:30 | Overview of VI-HPS tools [Wylie, JSC]
16:30 | Lab setup
17:30 | (adjourn)
19:00 | Social dinner sponsored by Bull, Im Alten Zollhaus

Day 2 | Tuesday 6 Sep
09:00 | Scalasca performance analysis toolset [Wylie, JSC]
10:30 | (break)
11:00 | Periscope automatic performance analysis tool [Oleynik, TUM]
12:30 | (lunch)
13:30 | Hands-on coaching to apply tools to analyze participants' own code(s)
17:00 | Review of day and schedule for remainder of workshop
17:30 | (adjourn)

Day 3 | Wednesday 7 Sep
09:00 | TAU performance system [Shende, UOregon]
10:30 | (break)
11:00 | KCachegrind toolset [Weidendorfer, TUM]
12:30 | (lunch)
13:30 | Hands-on coaching to apply tools to analyze participants' own code(s)
17:00 | Review of day and schedule for remainder of workshop
17:30 | (adjourn)

Day 4 | Thursday 8 Sep
09:00 | Vampir trace analysis toolset [Hilbrich, TUD-ZIH]
10:30 | (break)
11:00 | Paraver trace analysis toolset [Labarta/Gimenez, BSC]
12:30 | (lunch)
13:30 | Hands-on coaching to apply tools to analyze participants' own code(s)
17:00 | Review of day and schedule for remainder of workshop
17:30 | (adjourn)

Day 5 | Friday 9 Sep
09:00 | MUST/Marmot correctness checking tools [Hilbrich/Protze, TUD-ZIH]
10:30 | (break)
11:00 | VI-HPS libraries
12:30 | (lunch)
13:30 | Hands-on coaching to apply tools to analyze participants' own code(s)
15:00 | (break)
15:30 | (adjourn, or continue with work until 16:30)
Classroom capacity is limited; priority will therefore be given to applicants with parallel codes already running on the workshop computer systems, and to those bringing codes from similar systems to work on. Participants are encouraged to prepare their own MPI, OpenMP and hybrid OpenMP/MPI parallel application codes for analysis (a minimal hybrid example is sketched below).
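For orientation only, a minimal hybrid MPI/OpenMP program of the kind suited to the hands-on sessions might look like the following C sketch. This is a generic illustration, not workshop-provided material; it assumes an MPI implementation supporting the MPI_THREAD_FUNNELED level.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank;

        /* Request threaded MPI, since each rank runs an OpenMP region. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each OpenMP thread reports its identity within its MPI rank. */
        #pragma omp parallel
        printf("Rank %d: thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }

Such a code would typically be built with an MPI compiler wrapper plus the compiler's OpenMP flag (e.g. mpicc -fopenmp with GCC) and launched via the target system's usual batch procedure.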
VI-HPS Tools
- KCachegrind is a free cache-utilization visualization tool developed by TUM.
- MUST & Marmot are free correctness checking tools for MPI programs developed by TUD-ZIH and partners.
- PAPI is a free library, developed by UTK-ICL, that provides access to hardware performance counters and is used by many of the other tools (see the sketch after this list).
- Periscope is an automatic performance analysis tool, developed by TUM, that uses a distributed online search for performance bottlenecks.
- Scalasca is an open-source toolset developed by JSC & GRS that can be used to analyze the performance behaviour of MPI & OpenMP parallel applications and automatically identify inefficiencies.
- Vampir is a commercial framework and graphical analysis tool developed by TUD-ZIH to display and analyze trace files, such as those produced by the open-source VampirTrace library.
- TAU is a performance system, developed by the University of Oregon, for the measurement and analysis of parallel programs written in Fortran, C, C++, Java & Python.
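As a rough illustration of what PAPI provides, the following C sketch counts total CPU cycles around a code region using the PAPI low-level API. The event choice is arbitrary and error handling is kept minimal; consult the PAPI documentation for the authoritative interface.

    #include <stdio.h>
    #include <stdlib.h>
    #include <papi.h>

    int main(void)
    {
        int eventset = PAPI_NULL;
        long long count;

        /* Initialise the library and build an event set counting
           total CPU cycles (PAPI_TOT_CYC). */
        if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
            exit(EXIT_FAILURE);
        PAPI_create_eventset(&eventset);
        PAPI_add_event(eventset, PAPI_TOT_CYC);

        PAPI_start(eventset);
        /* ... region of interest ... */
        PAPI_stop(eventset, &count);

        printf("Total cycles: %lld\n", count);
        return 0;
    }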
Hardware and Software Platforms
The local systems are expected to be the primary platforms for the workshop, since they offer improved job turnaround and local system support. Course accounts will be provided for those who need them.
- Sun/Bull Nehalem cluster (JSC Juropa / HPC-FF): Intel Xeon X5570 quad-core processors, SLES Linux, ParaStation MPI, Intel compilers
- Sun Nehalem cluster (RWTH): Nehalem 8-core & Westmere 6-core processors, Scientific Linux, Intel MPI, Intel compilers
- IBM BlueGene/P (JSC Jugene): PowerPC 450 quad-core processors, BG-Linux compute kernel, IBM BG-MPI library, IBM BG-XL compilers
The VI-HPS tools support, and are also installed on, a range of other HPC platforms, including:
- Cray XE6 (HLRS Hermit): Opteron 8-core processors, Cray Linux & MPI, GCC/PGI/CCE compilers
- NEC Nehalem cluster (HLRS): Xeon quad-core processors, Scientific Linux, OpenMPI, Intel & GCC compilers
- SGI Altix 4700 (LRZ HLRB-II): Itanium2 dual-core processors, SGI Linux, SGI MPT, Intel compilers
- SGI Altix ICE (LRZ ICE1): Xeon quad-core processors, SGI Linux, SGI MPT (and MVAPICH2 & Intel MPI), Intel/GNU/PGI compilers
- SGI Altix ICE (HLRN): Xeon quad-core processors, SGI Linux, SGI MPT (and MVAPICH2 & Intel MPI), Intel/GNU/PGI compilers
- SGI Altix 4700 (ZIH): Itanium2 dual-core processors, SGI Linux, SGI MPT, Intel compilers
- IBM p5-575 cluster (SARA Huygens): Power6 dual-core processors, SuSE Linux 11, IBM POE MPI, IBM XL compilers
- Dell Xeon cluster (SARA Lisa): Xeon quad-core processors, Debian Linux, OpenMPI, Intel & GCC compilers
- IBM BlueGene/P (RZG Genius): PowerPC 450 quad-core processors, BG-Linux compute kernel, IBM BG-MPI library, IBM BG-XL compilers
- IBM p5-575 cluster (RZG VIP): Power6 dual-core processors, AIX OS, IBM POE MPI, IBM XL compilers
Other systems where up-to-date versions of the tools are installed can also be used when preferred, though support may be limited. Participants are expected to already possess user accounts on non-local systems they intend to use, and should be familiar with the procedures for compiling and running parallel applications on the systems.
Contact
Brian Wylie (JSC), phone +49 2461 61-6589
Marc-André Hermanns (GRS), phone +49 241 80-99753