Introduction
In this guide, I'm going to introduce you to some techniques for tuning your Apache Spark jobs for optimal efficiency. Processing massive datasets with Spark can become nontrivial, especially once you are dealing with a terabyte or more of data. The first instinct might be to throw a larger cluster at the problem, with hundreds of machines, hundreds of cores, and terabytes of RAM, but a super-sized cluster comes with a cost that can grow very quickly. That's why I wrote this guide: to help you achieve better performance while saving costs.