User Guide (Latest Version)

Output Details

The qualification tool generates a number of detailed reports in addition to the high-level recommendations. The summary report goes to STDOUT, and by default the tool writes log/CSV files covering the processed applications under ./rapids_4_spark_qualification_output/. The output goes to your default filesystem; both the local filesystem and HDFS are supported. If you are on an HDFS cluster, the default filesystem is likely HDFS for both the input and output. If you want to point to the local filesystem, be sure to include the file: prefix in the path.

The qualification tool prints a brief summary on STDOUT, which also gets saved as a text file. The detailed report of the processed apps is saved as a set of CSV files that can be used for post-processing. The CSV reports include the estimated performance if the app is run on the GPU at each of the following levels: app execution, stages, and execs.
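For example, the per-application CSV can be post-processed with standard tooling. A minimal sketch, assuming pandas is installed and the script runs from the output folder (the file name and column names follow the reports described below):

import pandas as pd

# Load the per-application qualification report (path relative to the output folder).
apps = pd.read_csv(
    "rapids_4_spark_qualification_output/rapids_4_spark_qualification_output.csv"
)
# Rank applications by the estimated speed-up, highest first.
top = apps.sort_values("Estimated GPU Speedup", ascending=False)
print(top[["App Name", "App ID", "Estimated GPU Speedup"]].head())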

The tree structure of the output directory is as follows:


qual_20240814145334_d2CaFA34
├── app_metadata.json
├── qualification_statistics.csv
├── qualification_summary.csv
└── rapids_4_spark_qualification_output
    ├── rapids_4_spark_qualification_output.csv
    ├── rapids_4_spark_qualification_output.log
    ├── rapids_4_spark_qualification_output_persql.log
    ├── rapids_4_spark_qualification_output_persql.csv
    ├── rapids_4_spark_qualification_output_execs.csv
    ├── rapids_4_spark_qualification_output_stages.csv
    ├── rapids_4_spark_qualification_output_status.csv
    ├── rapids_4_spark_qualification_output_cluster_information.csv
    ├── rapids_4_spark_qualification_output_cluster_information.json
    ├── rapids_4_spark_qualification_output_mlfunctions.csv
    ├── rapids_4_spark_qualification_output_mlfunctions_totalduration.csv
    ├── rapids_4_spark_qualification_output_unsupportedOperators.csv
    ├── runtime.properties
    ├── raw_metrics
    │   └── app-001-0001
    │       ├── stage_level_all_metrics.csv
    │       ├── sql_to_stage_information.csv
    │       ├── profile.log
    │       ├── removed_blockmanagers.csv
    │       ├── spark_properties.csv
    │       ├── job_level_aggregated_task_metrics.csv
    │       ├── io_metrics.csv
    │       ├── application_information.csv
    │       ├── sql_duration_and_executor_cpu_time_percent.csv
    │       ├── wholestagecodegen_mapping.csv
    │       ├── stage_level_aggregated_task_metrics.csv
    │       ├── data_source_information.csv
    │       ├── executor_information.csv
    │       ├── sql_plan_metrics_for_application.csv
    │       ├── removed_executors.csv
    │       ├── spark_rapids_parameters_set_explicitly.csv
    │       ├── job_information.csv
    │       ├── sql_level_aggregated_task_metrics.csv
    │       ├── application_log_path_mapping.csv
    │       └── system_properties.csv
    └── tuning
        ├── app-001-0001.conf
        └── app-001-0001.log

For information on the files' content, and on processing the Qualification report and the recommendation, refer to the Understanding the Qualification Tool Output and Output Formats sections below.

When the --auto-tuner argument is enabled, one notable addition is a new subdirectory within the output folder named tuning. Within this directory, each application has two files:

  • <app-id>.log: This file contains the recommendations and accompanying comments.

  • <app-id>.conf: Here, you’ll find the combined Spark properties.

It’s worth noting that certain properties with sensitive information will be redacted in the combined results.

The Auto-Tuner output has 2 main sections:

  1. Spark Properties: A list of Apache Spark configurations to tune the performance of the app. The list is the result of a diff between the existing app configurations and the recommended ones. Therefore, if a recommendation matches the existing app configuration, it won't show up in the list.

  2. Comments: A list of messages to highlight properties that were missing in the app configurations, or the cause of failure to generate the recommendations.

Examples

Example of a successful run with missing softwareProperties


Spark Properties:
--conf spark.executor.cores=16
--conf spark.executor.instances=8
--conf spark.executor.memory=32768m
--conf spark.executor.memoryOverhead=7372m
--conf spark.rapids.memory.pinnedPool.size=4096m
--conf spark.rapids.sql.concurrentGpuTasks=2
--conf spark.sql.files.maxPartitionBytes=512m
--conf spark.sql.shuffle.partitions=200
--conf spark.task.resource.gpu.amount=0.0625

Comments:
- 'spark.executor.instances' wasn't set.
- 'spark.executor.cores' wasn't set.
- 'spark.task.resource.gpu.amount' wasn't set.
- 'spark.rapids.sql.concurrentGpuTasks' wasn't set.
- 'spark.executor.memory' wasn't set.
- 'spark.rapids.memory.pinnedPool.size' wasn't set.
- 'spark.executor.memoryOverhead' wasn't set.
- 'spark.sql.files.maxPartitionBytes' wasn't set.
- 'spark.sql.shuffle.partitions' wasn't set.
- 'spark.sql.adaptive.enabled' should be enabled for better performance.
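The recommended properties in this output follow a --conf key=value shape. A minimal sketch for collecting them into a dictionary from a per-app tuning log (the file name is taken from the sample tree above; the exact log layout may vary between versions):

props = {}
# Parse lines like "--conf spark.executor.cores=16" from the Auto-Tuner log.
with open("tuning/app-001-0001.log") as f:
    for line in f:
        line = line.strip()
        if line.startswith("--conf "):
            key, _, value = line[len("--conf "):].partition("=")
            props[key] = value

print(props.get("spark.executor.cores"))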


Example of a successful run with missing softwareProperties. Only two recommendations didn’t match the existing app configurations.


Spark Properties:
--conf spark.executor.instances=8
--conf spark.sql.shuffle.partitions=200

Comments:
- 'spark.sql.shuffle.partitions' wasn't set.


Example showing the output when loading the worker info has failed.


Cannot recommend properties. Refer to Comments.

Comments:
- java.io.FileNotFoundException: File worker-info.yaml doesn't exist
- 'spark.executor.memory' should be set to at least 2GB/core.
- 'spark.executor.instances' should be set to (gpuCount * numWorkers).
- 'spark.task.resource.gpu.amount' should be set to Max(1, (numCores / gpuCount)).
- 'spark.rapids.sql.concurrentGpuTasks' should be set to Min(4, (gpuMemory / 7.5G)).
- 'spark.rapids.memory.pinnedPool.size' should be set to 2048m.
- 'spark.rapids.sql.enabled' should be true to enable SQL operations on the GPU.
- 'spark.sql.adaptive.enabled' should be enabled for better performance.


For each processed Spark application, the Qualification tool generates three main fields to help quantify the expected acceleration of migrating a Spark application or query to GPU.

  1. Estimated GPU Speedup Category: summary recommendation for the application for migration to GPU. Can be Large, Medium, Small, or Not Recommended. The size (Large, Medium, or Small) represents the likelihood of acceleration on GPU for an application that is qualified. A result of Not Recommended indicates the application is not a candidate for migration to GPU.

  2. Estimated Speedup: the estimated speed-up is simply the original CPU duration of the app divided by the estimated GPU duration, which estimates how much faster the application would run on GPU (see the worked sketch below).

  3. Estimated GPU Duration: predicted runtime of the app if it were run on GPU. It is the sum of the accelerated operator durations and the ML functions duration (if applicable), along with the durations that could not run on GPU because of unsupported operators or non-SQL/Dataframe operations.

The lower the estimated GPU duration, the higher the “Estimated Speed-up”. The processed applications or queries are ranked by the “Estimated Speed-up”.
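A worked sketch of this arithmetic, using the sample summary row for app-ID-01-01 shown later in this guide (values in milliseconds; the small differences from the reported values are rounding):

# Worked example of the speed-up arithmetic described above.
app_duration = 898429               # original CPU wall-clock duration (ms)
estimated_gpu_duration = 273911.92  # predicted duration on GPU (ms)

estimated_speedup = app_duration / estimated_gpu_duration      # ~3.28
estimated_time_saved = app_duration - estimated_gpu_duration   # ~624517 ms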

As mentioned before, the tool doesn't guarantee that the applications or queries with the highest recommendation will actually be accelerated the most. Refer to the Supported Operators guide.

In addition to the recommendation, the Qualification tool reports a set of metrics on tasks of SQL Dataframe operations at three scopes: "Entire App", "Stages", and "Execs". The report is divided into three main levels, and the fields of each level are described in detail in the following sections: Entire App Report, Stages Report, and Execs Report. The output formats and their file locations are then described in the Output Formats section.

There’s an option --per-sql to print a report at the SQL query level in addition to the application level.

Entire App Report

The report represents the entire app execution, including unsupported operators and non-SQL operations. This file is saved as rapids_4_spark_qualification_output/rapids_4_spark_qualification_output.csv.

  1. App Name

  2. App ID

  3. Recommendation: recommendation based on Estimated Speed-up Factor, where an app can be “Strongly Recommended”, “Recommended”, “Not Recommended”, or “Not Applicable.” The latter indicates that the app has job or stage failures.

  4. Estimated GPU Speedup

  5. Estimated GPU Duration

  6. Estimated GPU Time Saved: estimated wall-clock time saved if the app were run on the GPU.

  7. SQL DF Duration: wall-clock duration that includes only SQL/Dataframe queries.

  8. SQL Dataframe Task Duration: amount of time spent in tasks of SQL Dataframe operations.

  9. App Duration

  10. GPU Opportunity: wall-clock time showing how much of the SQL duration and ML functions duration (if applicable) can be accelerated on the GPU.

  11. Executor CPU Time Percent: an estimate of how much time the tasks spent doing processing on the CPU versus waiting on IO. This isn't always a good indicator because sometimes the IO is encrypted, and the CPU has to do work to decrypt it, so the environment you are running on needs to be taken into account.

  12. SQL Ids with Failures: SQL Ids of queries with failed jobs.

  13. Unsupported Read File Formats and Types: looks at the Read Schema and reports the file formats along with types that may not be fully supported. Example: JDBC[*]. This is based on the current version of the plugin and future versions may add support for more file formats and types.

  14. Unsupported Write Data Format: reports the data formats that are currently unsupported for writes, for example, when the result is written in JSON or CSV format.

  15. Complex Types: looks at the Read Schema and reports if there are any complex types (array, struct, or map) in the schema.

  16. Nested Complex Types: nested complex types are complex types that contain other complex types (example: array<struct<string,string>>). The tool can read all the schemas for DataSource V1. DataSource V2 truncates the schema, so if you see "...", the full schema isn't available; for such schemas the tool reads up to the "..." and reports whether there are any complex types and nested complex types in that portion.

  17. Potential Problems: some UDFs and nested complex types. Keep in mind that the tool is only able to detect certain issues.

  18. Longest SQL Duration: the maximum amount of time spent in a single task of SQL Dataframe operations.

  19. NONSQL Task Duration Plus Overhead: Time duration that doesn’t span any running SQL task.

  20. Unsupported Task Duration: sum of task durations for any unsupported operators.

  21. Supported SQL DF Task Duration: sum of task durations that are supported by RAPIDS GPU acceleration.

  22. App Duration Estimated: True or False, indicating whether the application duration had to be estimated. If the event log was missing the application-finished event, the value is True, and the last job or SQL execution time found is used as the end time for calculating the duration.

  23. Unsupported Execs: reports all the Execs that are not supported by the GPU in this application. Note that an Exec name may be printed in this column if any of the expressions within that Exec is not supported by the GPU. If the resulting string exceeds the maximum length (25 characters), "…" is appended in the STDOUT output, and the full value can be found in the CSV file.

  24. Unsupported Expressions: reports all expressions not supported by GPU in this application.

  25. Estimated Job Frequency (monthly): application executions per month assuming a uniform distribution; the default frequency is daily (30 times per month) and the minimum frequency is monthly (once per month). For a given log set, the tool determines a logging window using the earliest start time and the last end time of all logged applications, counts the number of executions of a specific App Name over that window, and converts the frequency to per month (30 days). Applications that are run only once are assigned the default frequency. A minimal sketch of this heuristic follows the list below.

  26. Cluster Tags: stores cluster information (cluster Id, job Id, run name) of Databricks jobs.

  27. Read Schema: shows the datatypes and read formats. This field is only listed when the argument --report-read-schema is passed to the CLI.
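A minimal sketch of the job-frequency heuristic described in item 25, assuming per-application (app_name, start_time_ms, end_time_ms) tuples extracted from the event logs (names hypothetical; the tool's actual implementation details may differ):

# Sketch of the "Estimated Job Frequency (monthly)" heuristic.
def estimated_monthly_frequency(apps, app_name):
    # Logging window: earliest start to last end across all logged applications.
    window_ms = max(end for _, _, end in apps) - min(start for _, start, _ in apps)
    runs = sum(1 for name, _, _ in apps if name == app_name)
    if runs <= 1 or window_ms <= 0:
        return 30  # default frequency: daily, i.e. 30 executions per month
    month_ms = 30 * 24 * 60 * 60 * 1000
    return max(1, round(runs * month_ms / window_ms))  # minimum: once per month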

Note

The Qualification tool won't catch all UDFs, and some of the UDFs can be handled with additional steps. Refer to the Supported Operators guide for more details on UDFs.

By default, the applications and queries are sorted in descending order by the following fields: "Recommendation"; "Estimated GPU Speed-up"; "Estimated GPU Time Saved"; and "End Time".

Stages Report

For each stage used in SQL operations, the Qualification tool generates the following information. This file is saved as rapids_4_spark_qualification_output/rapids_4_spark_qualification_output_stages.csv.

  1. App ID

  2. Stage ID

  3. Stage Task Duration: amount of time spent in tasks of SQL Dataframe operations for the given stage.

  4. Unsupported Task Duration: sum of task durations for the unsupported operators. For more details, see the Supported operators guide.

  5. Stage Estimated: True or False indicates if we had to estimate the stage duration.

  6. Number of transitions from or to GPU: total number of RowToColumnar and ColumnarToRow transitions caused by the operators that are unsupported on GPU.

Execs Report

The Qualification tool generates a report of the "Execs" in the "SparkPlan" or "Executor Nodes", along with the estimated acceleration on the GPU. Refer to the Supported Operators guide for more details on limitations on UDFs and unsupported operators. This file is saved as rapids_4_spark_qualification_output/rapids_4_spark_qualification_output_execs.csv.

  1. App ID

  2. SQL ID

  3. Exec Name: for example, Filter or HashAggregate.

  4. Expression Name

  5. Exec Duration: wall-clock time measured from when the operator starts until it completes.

  6. SQL Node Id

  7. Exec Is Supported: whether the Exec is supported by RAPIDS or not. Refer to the Supported operators guide.

  8. Exec Stages: an array of stage IDs

  9. Exec Children

  10. Exec Children Node Ids

  11. Exec Should Remove: whether the Op is removed from the migrated plan.

  12. Exec Should Ignore: whether the Op is ignored in the migrated plan.

  13. Action: action based on the Exec, can be NONE if the Exec is supported, IgnoreNoPerf if the Exec is removed, IgnorePerf if the Exec is ignored, or Triage otherwise.

Statistics Report

The statistics report is generated per App ID, per SQL ID, and per operator. It is produced by reading the Execs, Stages, and unsupportedOperators output files. This file is saved as qualification_statistics.csv.

  1. App ID

  2. SQL ID

  3. Operator: Operator name, for example, Filter or Scan parquet.

  4. Count: Number of times the operator was executed in the given SQLID.

  5. Stage Task Exec Duration(s): Total task time of all the stages that contain the operator.

  6. Total SQL Task Duration(s): Total task time of the SQL query that the operator is part of.

  7. % of Total SQL Task Duration: Percentage of the Stage Task Exec Duration relative to the Total SQL Task Duration (see the sketch after this list).

  8. Supported: Whether the operator is supported by the RAPIDS Accelerator for Apache Spark plugin.
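A one-line illustration of the percentage column, using hypothetical values:

# Hypothetical values illustrating the "% of Total SQL Task Duration" column.
stage_task_exec_duration_s = 12.0  # Stage Task Exec Duration(s) for the operator
total_sql_task_duration_s = 48.0   # Total SQL Task Duration(s) of the enclosing SQL
pct_of_total = 100.0 * stage_task_exec_duration_s / total_sql_task_duration_s  # 25.0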

App Meta Data Report

The qualification tool generates metadata for each processed app. This file is saved as app_metadata.json.

  1. App ID

  2. App Name

  3. Eventlog: The event log file which contains the application.

  4. Platform: The platform where the application was run.

  5. Source Cluster Info: Cluster information of the application, including driver and worker node types, and the number of workers. This is either inferred by the Qualification tool from the event log or provided by the user through the tool command.

  6. Recommended Cluster Info: Cluster information of the recommended GPU cluster by the Qualification tool.

  7. Estimated GPU Speed-up Category: Large, Medium, or Small indicates that the job is recommended for migration to GPU; Not Recommended or Not Applicable indicates that it is not.

  8. Full Cluster Config Recommendations: This points to the file path of the cluster configs.

Sample output in text:


[{
  "appId": "application_000000000000_0001",
  "appName": "sanity_test",
  "eventLog": "file:/application_000000000000_0001",
  "clusterInfo": {
    "platform": "emr",
    "sourceCluster": {
      "driverNodeType": "i3.2xlarge",
      "workerNodeType": "m5d.16xlarge",
      "numWorkerNodes": 2
    },
    "recommendedCluster": {
      "driverNodeType": "i3.2xlarge",
      "workerNodeType": "g5.8xlarge",
      "numWorkerNodes": 4
    }
  },
  "estimatedGpuSpeedupCategory": "Not Recommended",
  "fullClusterConfigRecommendations": "/qual_20240814145334_d2CaFA34/rapids_4_spark_qualification_output/tuning/application_000000000000_0001.conf",
  "gpuConfigRecommendationBreakdown": "/qual_20240814145334_d2CaFA34/rapids_4_spark_qualification_output/tuning/application_000000000000_0001.log"
}]
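A minimal sketch for consuming this metadata, assuming the key names shown in the sample above:

import json

# Read the per-app metadata produced by the Qualification tool.
with open("app_metadata.json") as f:
    for app in json.load(f):
        print(app["appId"], app["estimatedGpuSpeedupCategory"])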

Parsing Expressions within each Exec

The Qualification tool looks at the expressions in each Exec to provide a fine-grained assessment of RAPIDS’ support.

It isn't possible to extract the expressions for each available Exec:

  • some Execs don't take any expressions, and

  • some Execs may not show the expressions in the eventlog.

The Qualification tool tracks the status of parsing expressions for the Execs listed below, where:

  • “Expressions Unavailable” marks the Execs that don't show expressions in the eventlog

  • “Fully Parsed” marks the Execs that have their expressions fully parsed by the Qualification tool

  • “In Progress” marks the Execs that are still being investigated; therefore, a set of the marked Execs may be fully parsed in future releases.

The tracked Execs are:

  • AggregateInPandasExec
  • AQEShuffleReadExec
  • ArrowEvalPythonExec
  • BatchScanExec
  • BroadcastExchangeExec
  • BroadcastHashJoinExec
  • BroadcastNestedLoopJoinExec
  • CartesianProductExec
  • CoalesceExec
  • CollectLimitExec
  • CreateDataSourceTableAsSelectCommand
  • CustomShuffleReaderExec
  • DataWritingCommandExec
  • ExpandExec
  • FileSourceScanExec
  • FilterExec
  • FlatMapGroupsInPandasExec
  • GenerateExec
  • GlobalLimitExec
  • HashAggregateExec
  • InMemoryTableScanExec
  • InsertIntoHadoopFsRelationCommand
  • LocalLimitExec
  • MapInPandasExec
  • ObjectHashAggregateExec
  • ProjectExec
  • PythonMapInArrowExec
  • RangeExec
  • SampleExec
  • ShuffledHashJoinExec
  • ShuffleExchangeExec
  • SortAggregateExec
  • SortExec
  • SortMergeJoinExec
  • SubqueryBroadcastExec
  • SubqueryExec
  • TakeOrderedAndProjectExec
  • UnionExec
  • WholeStageCodegenExec
  • WindowExec
  • WindowGroupLimitExec
  • WindowInPandasExec
  • WriteFilesExec

MLFunctions Report

The Qualification tool generates a report if there are SparkML or Spark XGBoost functions used in the eventlog. The functions in “spark.ml.” or “spark.XGBoost.” packages are displayed in the report.

  1. App ID

  2. Stage ID

  3. ML Functions: List of ML functions used in the corresponding stage.

  4. Stage Task Duration: amount of time spent in tasks containing ML functions for the given stage.

MLFunctions Total Duration Report

The Qualification tool generates a report of total duration across all ML functions in the eventlog.

  1. App ID

  2. Stage IDs: stage IDs corresponding to the given ML function.

  3. ML Function Name: ML function name supported on GPU.

  4. Total Duration: total duration across all stages for the corresponding ML function.

Output Formats

The Qualification tool generates the output as CSV/log files.

Text and CSV files

The Qualification tool generates a set of log/CSV files in the output folder ${OUTPUT_FOLDER}/rapids_4_spark_qualification_output. The content of each file is summarized in the following sections.

Application Report Summary

The Qualification tool generates a brief summary that includes the projected application's performance if the application is run on the GPU. Besides sending the summary to STDOUT, the Qualification tool saves it as a text file named rapids_4_spark_qualification_output.log.

The summary report outputs the following information: “App Name,” “App ID,” “App Duration,” “SQL DF duration,” “GPU Opportunity,” “Estimated GPU Duration,” “Estimated GPU Speedup,” “Estimated GPU Time Saved,” “Recommendation,” “Unsupported Execs,” “Unsupported Expressions,” and “Estimated Job Frequency (monthly).”

Sample application output in text format. The durations reported are in milliseconds.


+------------+--------------+----------+----------+-------------+-----------+-----------+-----------+----------------------+---------------------------------+-------------+-----------+
| App Name   | App ID       | App      | SQL DF   | GPU         | Estimated | Estimated | Estimated | Recommendation       | Unsupported Execs               | Unsupported | Estimated |
|            |              | Duration | Duration | Opportunity | GPU       | GPU       | GPU Time  |                      |                                 | Expressions | Job       |
|            |              |          |          |             | Duration  | Speedup   | Saved     |                      |                                 |             | Frequency |
|            |              |          |          |             |           |           |           |                      |                                 |             | (monthly) |
+============+==============+==========+==========+=============+===========+===========+===========+======================+=================================+=============+===========+
| appName-01 | app-ID-01-01 |   898429 |   879422 |      879422 | 273911.92 |      3.27 | 624517.06 | Strongly Recommended |                                 |             |         1 |
+------------+--------------+----------+----------+-------------+-----------+-----------+-----------+----------------------+---------------------------------+-------------+-----------+
| appName-02 | app-ID-02-01 |     9684 |     1353 |        1353 |   8890.09 |      1.08 |     793.9 | Not Recommended      | Filter;SerializeFromObject;S... | hex         |        30 |
+------------+--------------+----------+----------+-------------+-----------+-----------+-----------+----------------------+---------------------------------+-------------+-----------+


In the above example, two application event logs were analyzed. “app-ID-01-01” is “Strongly Recommended” because Estimated GPU Speedup is ~3.27. On the other hand, the estimated acceleration running “app-ID-02-01” on the GPU isn’t high enough; hence the app isn’t recommended.

Per SQL Query Report Summary

The Qualification tool has an option to generate a report at the per-SQL-query level. It generates a brief summary that includes the projected query performance if the query is run on the GPU. Besides sending the summary to STDOUT, the Qualification tool saves it as a text file named rapids_4_spark_qualification_output_persql.log.

The summary report outputs the following information: “App Name”, “App ID”, “Root SQL ID”, “SQL ID”, “SQL Description”, “SQL DF Duration”, “GPU Opportunity”, “Estimated GPU Duration”, “Estimated GPU Speedup”, “Estimated GPU Time Saved”, and “Recommendation”.

Sample per-SQL output in text. The durations reported are in milliseconds.


+------------+--------------+----------+----------+-------------+----------+-------------+-----------+-----------+-----------+----------------------+
| App Name   | App ID       | Root     | SQL ID   | SQL         | SQL DF   | GPU         | Estimated | Estimated | Estimated | Recommendation       |
|            |              | SQL ID   |          | Description | Duration | Opportunity | GPU       | GPU       | GPU Time  |                      |
|            |              |          |          |             |          |             | Duration  | Speedup   | Saved     |                      |
+============+==============+==========+==========+=============+==========+=============+===========+===========+===========+======================+
| appName-01 | app-ID-01-01 |          |        1 | query41     |      571 |         571 |    187.21 |      3.05 |    383.78 | Strongly Recommended |
+------------+--------------+----------+----------+-------------+----------+-------------+-----------+-----------+-----------+----------------------+
| appName-02 | app-ID-02-01 |        3 |        3 | query44     |     1116 |           0 |   1115.98 |       1.0 |      0.01 | Not Recommended      |
+------------+--------------+----------+----------+-------------+----------+-------------+-----------+-----------+-----------+----------------------+


CSV Reports

Entire App Report

The file is saved as rapids_4_spark_qualification_output.csv. The apps are processed and ranked by the Estimated GPU Speed-up. In addition to the fields listed in the "Report Summary", it shows all the app fields. The durations reported are in milliseconds.

Sample output in text can be referenced under: Application Report Summary

Per SQL Report

The file is saved as rapids_4_spark_qualification_output_persql.csv. This contains the per SQL query report in CSV format.

Sample output in text can be referenced under: Per SQL Query Report Summary

Stages Report

The file is saved as rapids_4_spark_qualification_output_stages.csv.

Sample output in text:


+--------------+----------+------------+---------------+-----------+-----------------------+
| App ID       | Stage ID | Stage Task | Unsupported   | Stage     | Number of transitions |
|              |          | Duration   | Task Duration | Estimated | from or to GPU        |
+==============+==========+============+===============+===========+=======================+
| app-ID-01-01 | 25       | 23         | 0             | false     | 1                     |
+--------------+----------+------------+---------------+-----------+-----------------------+
| app-ID-02-01 | 29       | 0          | 0             | true      | 1                     |
+--------------+----------+------------+---------------+-----------+-----------------------+

Execs Report

The file is saved as rapids_4_spark_qualification_output_execs.csv. Similar to the app and stage information, the table shows estimated GPU performance of the SQL Dataframe operations.

Sample output in text:


+--------------+--------+---------------------------+-----------------------+----------+----------+-----------+--------+----------------------------+---------------+-------------+-------------+---------------+
| App ID       | SQL ID | Exec Name                 | Expression Name       | Exec     | SQL Node | Exec Is   | Exec   | Exec Children              | Exec Children | Exec Should | Exec Should | Action        |
|              |        |                           |                       | Duration | Id       | Supported | Stages |                            | Node Ids      | Remove      | Ignore      |               |
+==============+========+===========================+=======================+==========+==========+===========+========+============================+===============+=============+=============+===============+
| app-ID-02-01 | 7      | Execute CreateViewCommand |                       | 0        | 0        | false     |        |                            |               | false       | true        | IgnorePerf    |
+--------------+--------+---------------------------+-----------------------+----------+----------+-----------+--------+----------------------------+---------------+-------------+-------------+---------------+
| app-ID-02-01 | 24     | Project                   |                       | 0        | 21       | true      |        |                            |               | false       | true        | IgnorePerf    |
+--------------+--------+---------------------------+-----------------------+----------+----------+-----------+--------+----------------------------+---------------+-------------+-------------+---------------+
| app-ID-02-01 | 24     | Scan parquet              |                       | 260      | 36       | true      | 24     |                            |               | false       | true        | IgnorePerf    |
+--------------+--------+---------------------------+-----------------------+----------+----------+-----------+--------+----------------------------+---------------+-------------+-------------+---------------+
| app-ID-02-01 | 15     | Execute CreateViewCommand |                       | 0        | 0        | false     |        |                            |               | false       | false       | Triage        |
+--------------+--------+---------------------------+-----------------------+----------+----------+-----------+--------+----------------------------+---------------+-------------+-------------+---------------+
| app-ID-02-01 | 24     | Project                   |                       | 0        | 14       | true      |        |                            |               | false       | false       | Triage        |
+--------------+--------+---------------------------+-----------------------+----------+----------+-----------+--------+----------------------------+---------------+-------------+-------------+---------------+
| app-ID-02-01 | 24     | WholeStageCodegen (6)     | WholeStageCodegen (6) | 272      | 2        | true      | 30     | Project:BroadcastHashJoin: | 3:4:5         | false       | true        | IgnorePerf    |
|              |        |                           |                       |          |          |           |        | HashAggregate              |               |             |             |               |
+--------------+--------+---------------------------+-----------------------+----------+----------+-----------+--------+----------------------------+---------------+-------------+-------------+---------------+

Status Report

The file is saved as rapids_4_spark_qualification_output_status.csv. This contains file processing information for each eventlog.


+-------------------------+-----------+----------------------------------------------------------------------------------+
| Event Log               | Status    | Description                                                                      |
+=========================+===========+==================================================================================+
| file:/home/app-ID-01-01 | "SUCCESS" | "app-ID-01-01, Took 500 ms to process"                                           |
+-------------------------+-----------+----------------------------------------------------------------------------------+
| file:/home/app-ID-01-02 | "SKIPPED" | "GpuEventLogException: Cannot parse event logs from GPU run: skipping this file" |
+-------------------------+-----------+----------------------------------------------------------------------------------+

Cluster Information Report

The file is saved as rapids_4_spark_qualification_output_cluster_information.csv. This contains cluster information for each eventlog.


+--------------+--------------+----------+---------------+-------------+-----------+------------+-----------+-----------+-----------+
| App ID       | App Name     | Vendor   | Driver        | Cluster     | Cluster   | Executor   | Driver    | Num       | Cores     |
|              |              |          | Host          | ID          | Name      | Instance   | Instance  | Executor  | Per       |
|              |              |          |               |             |           |            |           | Nodes     | Executor  |
+==============+==============+==========+===============+=============+===========+============+===========+===========+===========+
| app-ID-01-01 | appName-01   | "onprem" |"10.100.10.100"|             |           |            |           | 1         | 8         |
+--------------+--------------+----------+---------------+-------------+-----------+------------+-----------+-----------+-----------+
| app-ID-02-01 | appName-02   | "onprem" |"10.100.10.100"|             |           |            |           | 1         | 1         |
+--------------+--------------+----------+---------------+-------------+-----------+------------+-----------+-----------+-----------+


This information is also captured in JSON format, saved as rapids_4_spark_qualification_output_cluster_information.json.

ML Functions Report

The file is saved as rapids_4_spark_qualification_output_mlfunctions.csv. This contains information about ML functions, if they exist in the eventlogs.

ML Functions Duration Report

The file is saved as rapids_4_spark_qualification_output_mlfunctions_totalduration.csv. This contains the total durations of all ML functions in the eventlogs.

Unsupported Operators Report

The file is saved as rapids_4_spark_qualification_output_unsupportedOperators.csv. This contains information about the operators in the eventlogs that are not supported on the GPU.


+--------------+--------+----------+--------+-------------+-----------------+-------------------------------+----------+----------+------------+
| App ID       | SQL ID | Stage ID | ExecId | Unsupported | Unsupported     | Details                       | Stage    | App      | Action     |
|              |        |          |        | Type        | Operator        |                               | Duration | Duration |            |
+==============+========+==========+========+=============+=================+===============================+==========+==========+============+
| app-ID-02-01 | 7      | 0        | 0      | Exec        | Project         | "Contains unsupported expr"   | 175      | 34780    | IgnorePerf |
+--------------+--------+----------+--------+-------------+-----------------+-------------------------------+----------+----------+------------+
| app-ID-02-01 | 24     | 2        | 1      | Exec        | LocalTableScan  | "Unsupported"                 | 26       | 34780    | Triage     |
+--------------+--------+----------+--------+-------------+-----------------+-------------------------------+----------+----------+------------+
