Profiling Tool - Jar Usage#

If you aren’t using the CLI tool, the Profiling tool can be run directly as a Java command.

There are 3 modes of operation for the Profiling tool: collection (the default), combined (--combined), and compare (--compare).

For sample execution commands, refer to the examples section.

Setting Up Environment#

Prerequisites#

  • Java 8+

  • Spark event log(s) from Spark 2.0 or above. Supports both rolled and compressed event logs with .lz4, .lzf, .snappy, and .zstd suffixes, as well as Databricks-specific rolled and compressed (.gz) event logs.

  • The tool requires Spark 3.x jars (3.x or later, but below 4.0) to run, but it doesn’t need an Apache Spark runtime. If you don’t already have Spark 3.x installed, you can download the Apache Spark distribution to any machine and include its jars in the classpath.

  • This tool parses the Spark CPU event log(s) and creates an output report. Acceptable inputs are individual event log files, multiple event log files, or directories containing Spark event logs, located on the local filesystem, HDFS, S3, ABFS, GCS, or a mix of these. If you want to point to the local filesystem, be sure to include the file: prefix in the path. If any input is a remote file path or directory path, then the connector dependencies need to be on the classpath:

    HDFS: Include $HADOOP_CONF_DIR in the classpath.

    Sample showing Java’s classpath#
    -cp ~/rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/*:$HADOOP_CONF_DIR/
    

    GCS: Download the gcs-connector-hadoop3-<version>-shaded.jar and follow the instructions to configure Hadoop/Spark.
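
    As with the HDFS classpath sample above, one way to make the connector available is to append the downloaded jar to the Java classpath; the jar location below is a placeholder:

    -cp ~/rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/*:/path/to/gcs-connector-hadoop3-<version>-shaded.jar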

    S3: Download the jars that match your Hadoop version:

    • hadoop-aws-<version>.jar

    • aws-java-sdk-<version>.jar

    In $SPARK_HOME/conf, create an hdfs-site.xml file with the AWS S3 keys below:

     <?xml version="1.0"?>
     <configuration>
        <property>
           <name>fs.s3a.access.key</name>
           <value>xxx</value>
        </property>
        <property>
           <name>fs.s3a.secret.key</name>
           <value>xxx</value>
        </property>
     </configuration>
    

    You can test your configuration by including the above jars in the --jars option to spark-shell or spark-submit.

    Refer to the Hadoop-AWS documentation for more options on integrating the Hadoop-AWS module with S3.
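
    For reference, an illustrative way to run the Profiling tool against event logs in S3 is to add the connector jars (and $SPARK_HOME/conf, so the hdfs-site.xml above is found) to the classpath; the jar paths below are placeholders:

    java -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/*:$SPARK_HOME/conf:/path/to/hadoop-aws-<version>.jar:/path/to/aws-java-sdk-<version>.jar \
         com.nvidia.spark.rapids.tool.profiling.ProfileMain \
         s3a://<BUCKET>/eventlog1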

    ABFS: Download the jar that matches your Hadoop version, hadoop-azure-<version>.jar.

    The simplest authentication mechanism is to use account-name and account-key. Refer to the Hadoop-ABFS support doc for more options on integrating the Hadoop-ABFS module with ABFS.
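
    As an illustration, following the S3 pattern above, account-key authentication can be supplied through a Hadoop configuration file in $SPARK_HOME/conf using the standard Hadoop-ABFS account-key property; the storage account name and key below are placeholders:

     <?xml version="1.0"?>
     <configuration>
        <property>
           <name>fs.azure.account.key.<STORAGE_ACCOUNT>.dfs.core.windows.net</name>
           <value>xxx</value>
        </property>
     </configuration>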

Getting the Tools Jar#

  • Check out the code repository:

    git clone git@github.com:NVIDIA/spark-rapids-tools.git
    cd spark-rapids-tools/core
    
  • Build using Maven. After a successful build, the rapids-4-spark-tools_2.12-<version>-SNAPSHOT.jar will be in the target/ directory. Refer to the build doc for more information on build options (for example, the Spark version).

    mvn clean package
    
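    After the build completes, you can verify the jar is in place; this check is illustrative:

    ls target/rapids-4-spark-tools_2.12-*-SNAPSHOT.jar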

Running Tools Jar#

Profiling Tool Options#

Profiling tool for the RAPIDS Accelerator and Apache Spark

Usage: java -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/*
       com.nvidia.spark.rapids.tool.profiling.ProfileMain [options]
       <eventlogs | eventlog directories ...>

 -a, --auto-tuner                 Toggle AutoTuner module.
     --target-cluster-info <arg>  File path to YAML containing target cluster
                                  information including worker instance type
                                  and system properties. Provides platform-aware
                                  cluster configuration. Requires AutoTuner to
                                  be enabled.
     --tuning-configs <arg>       File path to YAML containing custom tuning
                                  configuration parameters. Allows overriding
                                  default AutoTuner constants. Requires
                                  AutoTuner to be enabled.
     --combined                   Collect mode but combine all applications into
                                  the same tables.
 -c, --compare                    Compare Applications (Note this may require
                                  more memory if comparing a large number of
                                  applications). Default is false.
     --csv                        Output each table to a CSV file as well as
                                  creating the summary text file.
 -d, --driverlog <arg>            Specifies the name of a driver log file that
                                  the profiling tool is to process. The tool
                                  identifies any invalid operations in the log
                                  and writes them to a .csv file. When
                                  --driverlog is specified, the eventlog
                                  parameter is optional.
 -f, --filter-criteria <arg>      Filter newest or oldest N eventlogs based on
                                  application start timestamp for processing.
                                  Filesystem-based filtering happens before
                                  application-based filtering (see
                                  start-app-time). For example,
                                  100-newest-filesystem (process the newest 100
                                  event logs) or 100-oldest-filesystem (process
                                  the oldest 100 event logs).
 -g, --generate-dot               Generate query visualizations in DOT format.
                                  Default is false.
     --generate-timeline          Write an SVG graph out for the full
                                  application timeline.
 -m, --match-event-logs <arg>     Filter event logs whose filenames contain the
                                  input string.
 -n, --num-output-rows <arg>      Number of output rows for each Application.
                                  Default is 1000.
     --num-threads <arg>          Number of threads to use for parallel
                                  processing. The default is the number of cores
                                  on the host divided by 4.
 -o, --output-directory <arg>     Base output directory. Default is the current
                                  directory for the default filesystem. The
                                  final output will go into a subdirectory
                                  called rapids_4_spark_profile. It will
                                  overwrite any existing files with the same
                                  name.
 -p, --print-plans                Print the SQL plans to a file named
                                  'planDescriptions.log'. Default is false.
 -s, --start-app-time <arg>       Filter event logs whose application start
                                  occurred within the past specified time
                                  period. Valid time periods are
                                  min(minute),h(hours),d(days),w(weeks),m(months).
                                  If a period isn't specified it defaults to
                                  days.
 -t, --timeout <arg>              Maximum time in seconds to wait for the event
                                  logs to be processed. Default is 24 hours
                                  (86400 seconds) and must be greater than 3
                                  seconds. If it times out, it will report what
                                  it was able to process up until the timeout.
 -h, --help                       Show help message

 trailing arguments:
  eventlog (optional)   Event log filenames (space separated) or directories
                        containing event logs. For example, s3a://<BUCKET>/eventlog1
                        /path/to/eventlog2. At least one eventlog or a driver
                        log must be specified; thus an eventlog parameter is
                        required if the --driverlog option isn't specified.
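
For example, the following illustrative invocation writes each table to a CSV file in a custom output directory (the output path is a placeholder):

java -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
     com.nvidia.spark.rapids.tool.profiling.ProfileMain \
     --csv -o /path/to/output_dir \
     <eventlogs | eventlog directories ...>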

Tuning Spark Properties For GPU Clusters#

Currently, the Auto-Tuner calculates a set of configurations that impact the performance of Apache Spark apps executing on GPUs. Those calculations can leverage cluster information (for example, memory, cores, and Spark default configurations) as well as information processed from the application event logs. The tool will also recommend settings for the application, assuming that the job will be able to use all the cluster resources (CPU and GPU) when it’s running. The values loaded from the app logs have higher precedence than the default configs.

Note

Auto-Tuner limitations:

  • It’s assumed that all the worker nodes on the cluster are homogeneous.

To run the Auto-Tuner, enable the --auto-tuner flag. Optionally, provide target cluster information using --target-cluster-info <FILE_PATH> to specify the GPU worker node configuration used to generate optimized recommendations. The file path can be local or remote (for example, HDFS).

If the --target-cluster-info argument isn’t supplied, the Auto-Tuner will use platform-specific default worker instance types for tuning recommendations. See AutoTuner Configuration for details on default instance types, supported platforms, and how to customize AutoTuner behavior.
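
An illustrative invocation with the AutoTuner enabled and a target cluster information file (the YAML path is a placeholder):

java -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
     com.nvidia.spark.rapids.tool.profiling.ProfileMain \
     --auto-tuner --target-cluster-info /path/to/target_cluster_info.yaml \
     <eventlogs | eventlog directories ...>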

Processing Spark Event Logs#

  1. The tool reads the log files and processes them in memory, so heap memory should be increased when processing a large volume of events. It’s recommended to pass the VM option -Xmx10g and adjust it according to the number of applications and the size of the logs being processed.

    export JVM_HEAP=-Xmx10g
    
  2. Examples of running the tool in different environments:

    • Extract the Spark distribution into a local directory if necessary.

    • Either set SPARK_HOME to point to that directory or put the path directly in the classpath, java -cp toolsJar:$SPARK_HOME/jars/*:..., when you run the Profiling tool.

    java ${JVM_HEAP} \
         -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
         com.nvidia.spark.rapids.tool.profiling.ProfileMain [options] \
         <eventlogs | eventlog directories ...>
    
    java ${JVM_HEAP} \
         -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
         com.nvidia.spark.rapids.tool.profiling.ProfileMain \
         /usr/logs/app-name1
    

    Example running on files in HDFS (include $HADOOP_CONF_DIR in the classpath). Note that on an HDFS cluster the default filesystem is likely HDFS for both input and output, so if you want to point to the local filesystem, be sure to include the file: prefix in the path.

    java ${JVM_HEAP} \
         -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/*:$HADOOP_CONF_DIR/ \
         com.nvidia.spark.rapids.tool.profiling.ProfileMain  /eventlogDir
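
    If the event logs are on the local filesystem of such a cluster, prefix the path with file: (the path below is illustrative):

    java ${JVM_HEAP} \
         -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/*:$HADOOP_CONF_DIR/ \
         com.nvidia.spark.rapids.tool.profiling.ProfileMain  file:/path/to/local/eventlogDir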
    

Processing Driver Logs#

The Profiling tool can process a GPU driver log as well as CPU and GPU event logs. When the Profiling tool processes a driver log, it generates a .csv file that lists unsupported operators.

You inform the Profiling tool of a GPU driver log with the command-line option --driverlog. The option takes one required argument: the pathname of a driver log file. You may specify only one driver log file per run.

A single run of the Profiling tool may process CPU/GPU event logs, a GPU driver log, or both.

Refer to the Processing Spark Event Logs section for instructions on accessing driver logs that reside on remote or local filesystems.

Example running the tool on a driver log#
java ${JVM_HEAP} \
      -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
      com.nvidia.spark.rapids.tool.profiling.ProfileMain  \
      --driverlog /path_to_driverlog \
      /eventlog

Java CMD Samples#

Collection Modes#

Examples of running the Profiling tool with different collection modes:

Collection mode (default):

java -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
     com.nvidia.spark.rapids.tool.profiling.ProfileMain [options] \
     <eventlogs | eventlog directories ...>

Combined mode:

java -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
     com.nvidia.spark.rapids.tool.profiling.ProfileMain --combined \
     <eventlogs | eventlog directories ...>

Compare mode:

java -cp rapids-4-spark-tools_2.12-<version>.jar:$SPARK_HOME/jars/* \
     com.nvidia.spark.rapids.tool.profiling.ProfileMain --compare \
     <eventlogs | eventlog directories ...>