Apache Spark Performance Tuning – Straggler Tasks

Overview

This is the last article of a four-part series about Apache Spark on YARN. Apache Spark carefully distinguishes transformations into two types: "narrow" and "wide". This distinction is important because it has strong implications for how transformations are evaluated and how their performance can be improved. Spark relies heavily on the key/value pair paradigm for defining and parallelizing operations, especially wide transformations that require data to be redistributed between machines.

A few performance bottlenecks were identified in the SFO Fire Department call service dataset use case with the YARN cluster manager. To understand the use case and the performance bottlenecks identified, refer to our previous blog on Apache Spark on YARN – Performance and Bottlenecks.

The resource planning bottleneck was addressed, and the notable performance improvements achieved in the use case Spark application are discussed in our previous blog on Apache Spark on YARN – Resource Planning.

To know about partition tuning in the use case Spark application, refer to our previous blog on Apache Spark Performance Tuning – Degree of Parallelism.

In this blog, let us discuss the shuffle and straggler task problems and how to resolve them to improve the performance of the use case application.

Our other articles of the four-part series are:

  • Apache Spark on YARN – Performance and Bottlenecks
  • Apache Spark on YARN – Resource Planning
  • Apache Spark Performance Tuning – Degree of Parallelism

Spark Shuffle Principles

Two primary techniques, "shuffle less" and "shuffle better", help avoid the performance problems associated with shuffles:

  • Shuffle Less Often – To minimize the number of shuffles in a computation requiring several transformations, preserve partitioning across narrow transformations to avoid reshuffling data.
  • Shuffle Better – Sometimes a computation cannot be completed without a shuffle. However, not all wide transformations and not all shuffles are equally expensive or prone to failure.
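As an illustration of "shuffle less" (a generic sketch, not from the use case code), pre-partitioning a pair RDD once and then using `mapValues`, which preserves the partitioner, lets the later `reduceByKey` run without a second shuffle:

```scala
import org.apache.spark.HashPartitioner
import org.apache.spark.sql.SparkSession

object ShuffleLessSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ShuffleLessSketch").getOrCreate()
    val sc = spark.sparkContext

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))

    // Shuffle once to establish a known partitioning.
    val partitioned = pairs.partitionBy(new HashPartitioner(8))

    // mapValues preserves the partitioner, so reduceByKey finds the
    // data already partitioned by key and avoids another shuffle.
    val scaled = partitioned.mapValues(_ * 10)
    val totals = scaled.reduceByKey(_ + _)

    totals.collect().foreach(println)
    spark.stop()
  }
}
```

Had we used `map` instead of `mapValues`, Spark would have discarded the partitioner and reshuffled the data for `reduceByKey`.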

Operations on key/value pairs can cause:

  • Out-of-memory errors in the driver
  • Out-of-memory errors on the executor nodes
  • Shuffle failures
  • Straggler tasks or partitions that are especially slow to compute

Memory errors in the driver are mainly caused by actions. The last three performance issues (out of memory on the executors, shuffle failures, and straggler tasks) are almost always caused by the shuffles associated with wide transformations.
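For instance, a driver out-of-memory error typically comes from an action such as `collect` that pulls the whole dataset back to the driver; bounded actions are safer (a generic sketch, with `largeRdd` standing in for any large RDD):

```scala
// collect() materializes every element in the driver's memory and can
// crash the driver on a large dataset:
// val everything = largeRdd.collect()   // risky on big data

// Bounded actions keep only a small sample or a single aggregate
// in the driver:
val sample = largeRdd.take(10)   // at most 10 elements
val total  = largeRdd.count()    // a single Long
```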

Understanding Use Case Application Shuffle

The tuning of the number of partitions based on the input dataset size is explained in our previous blog on Apache Spark Performance Tuning – Degree of Parallelism. The DataFrame API implementation of the application, submitted with the following configuration, is shown in the below screenshot:

[Screenshot: stages view – shuffle understanding, 23/23 partitions]

Looking at the Shuffle Read and Shuffle Write columns, the shuffled data is only in bytes and kilobytes (KB) across all the stages, in line with the "shuffle less" principle, in our use case application.

The input of ~849 MB is carried over in all the shuffle stages.

The "Executors" tab in the Spark UI provides a summary of input, shuffle read, and shuffle write, as shown in the below diagram:

[Screenshot: Executors tab summary, 23/23 partitions]

The overall input size is 5.9 GB, including the original input of 1.5 GB and the entire shuffle input of ~849 MB.

Detecting Straggler Tasks in the Use Case

“Stragglers” are tasks within a stage that take much longer to execute than other tasks.

The total time taken for DataFrame API implementation is 1.3 minutes.

Looking at the stage-wise durations, Stages 0 and 2 consumed 10 s and 46 s respectively, for a total of 56 seconds (~1 minute).

[Screenshot: straggler detection, 23/23 partitions]

Internally, Spark does the following:

  • Spark optimizers such as Catalyst and Tungsten optimize the code at run time
  • The encoders of Spark's high-level DataFrame and Dataset APIs reduce the input size by encoding the data
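The effect of Catalyst can be inspected directly: calling `explain(true)` on a DataFrame prints the parsed, analyzed, optimized, and physical plans, showing, for example, column pruning being pushed down into the scan (the path and column names below are illustrative assumptions, not from the use case code):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder.appName("PlanInspection").getOrCreate()

// Illustrative path and column names.
val df = spark.read.option("header", "true").csv("fire_calls.csv")

// explain(true) prints all four plans; the optimized plan shows
// Catalyst pruning the scan down to the selected columns and pushing
// the filter toward the data source.
df.select("CallType", "NeighborhoodDistrict")
  .filter(col("CallType").isNotNull)
  .explain(true)
```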

By reducing the input size and filtering the data from the input datasets in both the low-level and high-level API implementations, the performance can be improved.

Low-Level and High-Level API Implementation

Our input dataset has 34 columns, of which only 3 are used for computation to answer the use case scenario questions.

The RDD and DataFrame API implementations were updated to improve performance by selecting only the data needed for this use case scenario.

A projection is added at the beginning of the RDD API implementation to keep the 3 required columns and drop the other 31, reducing the input size in all the shuffle stages.
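A minimal sketch of that projection, assuming the dataset is read as CSV text and using illustrative column indices (the real indices depend on the 34-column layout of the dataset):

```scala
// Split each CSV line and keep only the 3 needed columns, dropping
// the other 31 before any shuffle stage runs.
// Indices 3, 10, and 31 are illustrative placeholders.
val prunedRdd = sc.textFile("Fire_Department_Calls_for_Service.csv")
  .map(_.split(","))
  .map(cols => (cols(3), cols(10), cols(31)))
```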

The DataFrame API implementation applies the same column pruning with a select.
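An equivalent sketch for the DataFrame API (the path and column names are illustrative assumptions):

```scala
// Select only the columns needed for the use case questions; the
// remaining 31 columns never reach the shuffle stages.
val prunedDf = spark.read
  .option("header", "true")
  .csv("Fire_Department_Calls_for_Service.csv")
  .select("CallType", "CallDate", "NeighborhoodDistrict")
```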


Submitting Spark Application in YARN

The Spark submit command with partition tuning, used to execute the RDD and DataFrame API implementations in YARN, reuses the tuned resource and partition settings from the previous blogs in this series.
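A representative shape of that command, with every resource and partition value an illustrative placeholder for the tuned settings (the class name and jar are hypothetical):

```shell
# All resource and partition values below are illustrative
# placeholders for the tuned values from the earlier blogs.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-cores 3 \
  --executor-memory 3g \
  --conf spark.default.parallelism=23 \
  --conf spark.sql.shuffle.partitions=23 \
  --class com.example.FireCallsApp \
  fire-calls-app.jar
```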

The input, shuffle read, and shuffle write of the DataFrame API implementation are monitored in the stages view. The below diagram shows that the input size of the shuffle stages is now ~17 MB, down from ~849 MB previously. The shuffle read and write do not change much.

[Screenshot: stages view after the straggler fix – DataFrame implementation]

The "Executors" tab in the Spark UI provides a summary of input, shuffle read, and shuffle write, as shown in the below diagram:

[Screenshot: Executors tab stats after the straggler fix – DataFrame implementation]

The summary shows that the input size is now 1.5 GB, compared to 5.9 GB previously.

The durations after reducing the input size in the RDD and DataFrame API implementations are shown in the below diagram:

[Screenshot: output after the straggler fix]

Understanding Use Case Performance

The performance durations (without any performance tuning) of the different API implementations of the use case Spark application running on YARN are shown in the below diagram:

[Screenshot: performance with the default configuration]

For more details, refer to our previous blog on Apache Spark on YARN – Performance and Bottlenecks.

We tuned the number of executors, cores, and memory for the RDD and DataFrame implementations of the use case Spark application. The below diagram shows the performance improvements after tuning the resources:

[Screenshot: performance after resource tuning]

For more details, refer to our previous blog on Apache Spark on YARN – Resource Planning.

We tuned the default parallelism and shuffle partitions of both the RDD and DataFrame implementations in our previous blog on Apache Spark Performance Tuning – Degree of Parallelism. That did not yield a performance improvement, but it did reduce the scheduler overhead.

Finally, after identifying the straggler tasks and reducing the input size, we achieved a 2x performance improvement in the DataFrame implementation and a 4x improvement in the RDD implementation.

[Screenshot: straggler fix performance benchmark]

Conclusion

In this blog, we discussed shuffle principles, examined the use case application's shuffle behavior, detected straggler tasks in the application, and reduced the input size to improve the performance of the different API implementations of the Spark application.

We achieved a 2x performance improvement in the DataFrame implementation and a 4x improvement in the RDD implementation, on top of the earlier resource and partition tuning.
