Apache Spark vs. Amazon Redshift: Which is better for big data?

Every day, we talk to companies that are in the early phases of building out their data infrastructure. A lot of the time, these conversations circle around which technology to pick for which job. For example, we often get the question “What’s better – Spark or Amazon Redshift?” or “Which one should we be using?”. Spark and Redshift are two very different technologies. It’s not an either/or; it’s more a question of “When do I use what?”. In this post, I’ll lay out some of the differences and when to use which technology.

At the time of this post, if you look under the hood of the most advanced tech start-ups in Silicon Valley, you will likely find both Spark and Redshift. Spark is getting a bit more attention these days because it’s the shiny new toy, but the two cover different use cases (“dishwasher vs. fridge”, per Ricardo Vladimiro).

Let’s give you a decision-making framework that can guide you through your thinking:

  1. What it is
  2. Data architecture
  3. Spark and Redshift: Data engineering

What it is

Spark

Apache Spark is a data processing engine. With Spark you can:

  • write applications that process large amounts of data
  • distribute that processing across a cluster and run it in memory
  • use pre-built libraries, e.g. for machine learning, streaming or data ingestion

There is a general execution engine (Spark Core), and all other functionality is built on top of it.

(Image: the Apache Spark ecosystem. Source: Databricks)

People are excited about Spark for three reasons:

Spark is fast because it distributes data across a cluster and processes that data in parallel. It tries to process data in memory, instead of shuffling things in and out of disk (as e.g. MapReduce does).

Spark is easy because it offers a high level of abstraction, allowing you to write applications with fewer lines of code. Plus, Scala and R are attractive languages for data manipulation.

Spark is extensible via the pre-built libraries, e.g. for machine learning, streaming apps or data ingestion. These libraries are either part of Spark or 3rd party projects.

In short, the promise of Spark is to speed up development, make applications more portable and extensible, and make the actual application run faster.
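To make the “distribute data and process it in parallel, in memory” idea concrete, here is a toy sketch in plain Python – not Spark itself, just an illustration of the model. The data and partitioning are invented; real Spark spreads partitions across machines in a cluster.

```python
from concurrent.futures import ThreadPoolExecutor

def word_count(partition):
    # process one partition entirely in memory
    counts = {}
    for line in partition:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

lines = ["spark is fast", "spark is easy", "redshift is a database"]
# split the data into partitions, the way Spark splits it across a cluster
partitions = [lines[0:2], lines[2:3]]

# process all partitions in parallel
with ThreadPoolExecutor() as pool:
    partial = list(pool.map(word_count, partitions))

# merge ("reduce") the per-partition results into one final answer
total = {}
for counts in partial:
    for word, n in counts.items():
        total[word] = total.get(word, 0) + n

print(total["is"])  # 3
```

Spark’s APIs express exactly this split/process/merge pattern, but hide the partitioning and merging behind high-level operations.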

A few more noteworthy points on Spark:

  • Spark is open source, so you can download it and start running it yourself, e.g. on Amazon’s EC2. Companies like Databricks (founded by the people who created Spark) offer support to make your life easier.
  • I think Kiyoto Tamura mentions a key distinction that isn’t always clear. Spark “is NOT a database”. You will need some sort of persistent data storage that Spark can pull data from (i.e. a data source, such as Amazon S3 or – hint, hint – Redshift).
  • Spark then “reads data into memory” to process it. Once that’s done, Spark needs a place to store or pass on the results (because, again, Spark is not a database). That could be back into S3, and from S3 into e.g. Redshift (see where this answer is going?).
(Image: Spark Streaming. Source: Spark Streaming – Spark 2.1.1 Documentation)

You need to know how to write code to use Spark (the “write applications” part). So the people who use Spark are typically developers.


Redshift

Amazon Redshift is an analytical database. With Redshift you can:

  • build a central data warehouse unifying data from many sources
  • run big, complex analytic queries against that data with SQL
  • report and pass on the results to dashboards or other apps

Redshift is a managed service provided by Amazon. Raw data flows into Redshift (a process called “ETL”, for extract-transform-load), where it’s processed and transformed at a regular cadence (“transformations” or “aggregations”) or on an ad-hoc basis (“ad-hoc queries”). Loading and transforming data is also referred to as building “data pipelines”.
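The workhorse of such loading is Redshift’s COPY command, which ingests files from S3 in parallel. A hypothetical example of what that statement looks like – the table name, bucket and IAM role below are placeholders, not real resources:

```python
# Hypothetical Redshift COPY statement; table, bucket and IAM role
# are made-up placeholders for illustration.
copy_sql = """
COPY events
FROM 's3://my-bucket/events/2017/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
JSON 'auto'
GZIP;
"""

# In practice you would send this to the cluster through a PostgreSQL
# driver such as psycopg2, e.g.:
#   cursor.execute(copy_sql)
```

Because COPY splits the load across all nodes of the cluster, ingesting many small compressed files is much faster than a single large one.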

(Image: Amazon Redshift architecture. Source: Amazon Web Services)

People are excited about Redshift for three reasons:

Redshift is fast because its massively parallel processing (MPP) architecture distributes and parallelizes queries. Redshift allows a high query concurrency and processes queries in memory.

Redshift is easy because it can ingest structured, semi-structured and unstructured datasets (via S3 or DynamoDB) up to a petabyte or more, to then slice ‘n dice that data any way you can imagine with SQL.

Redshift is cheap because you can store data for an annual fee of $935/TB (with 3-year reserved instance pricing). That price point is unheard of in the world of data warehousing.
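A quick back-of-the-envelope calculation on that number (using 1 TB = 1,000 GB for simplicity):

```python
# What $935/TB/year works out to per GB per month
# (the 3-year reserved instance price point cited above).
annual_per_tb = 935.0
per_gb_month = annual_per_tb / 1000 / 12

print(round(per_gb_month, 3))  # 0.078
```

In other words, roughly 8 cents per GB per month for a fully queryable data warehouse.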

In short, the promise of Redshift is to make data warehousing cheaper, faster and easier. You can analyze much bigger and more complex datasets than ever before, and there’s a rich ecosystem of tools that work with Redshift.
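Redshift speaks PostgreSQL-flavored SQL, so the “slice ‘n dice” part looks like any analytic aggregation query. The sketch below runs such a query against an in-memory SQLite database purely so the example is self-contained – the table and data are invented, and on Redshift the same query shape would scan billions of rows:

```python
import sqlite3

# Stand-in for a Redshift connection; the page_views table is invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE page_views (user_id INT, country TEXT, duration_sec INT);
INSERT INTO page_views VALUES
  (1, 'US', 120), (2, 'US', 30), (3, 'DE', 45), (4, 'DE', 90), (5, 'FR', 60);
""")

# a typical analytic aggregation: traffic and engagement by country
rows = conn.execute("""
    SELECT country,
           COUNT(*)          AS views,
           AVG(duration_sec) AS avg_duration
    FROM page_views
    GROUP BY country
    ORDER BY country
""").fetchall()

print(rows[0])  # ('DE', 2, 67.5)
```

The point of an MPP database like Redshift is that this exact query stays the same as the table grows from five rows to five billion; only the cluster size changes.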

A few more noteworthy points about Redshift:

  • Redshift is a “fully managed service”. I’d say the “managed service” part of that is true: Amazon handles everything from the hardware up, and you swipe a credit card and you’re off to the races. The “fully” can be a bit misleading, though – there are lots of knobs to turn to get good performance.
  • Your cluster comes empty. But there are plenty of data integration / ETL tools that allow you to quickly populate your cluster to start analyzing and reporting data for business intelligence (“BI Analytics”) purposes.
  • Redshift is a database, so you can store a history of your raw data AND the results of your transformations. In April 2017, Amazon also introduced “Redshift Spectrum”, which enables you to run queries against data in S3 (which is a much cheaper way of storing your data).

With intermix.io we make it very easy to figure out what knobs to turn when using Amazon Redshift. For example, below is a screenshot from our “Cluster Health” dashboard.

The Cluster Health Dashboard helps data teams measure and improve SLAs. It does this by surfacing:

  1. SLA measures for any connected app, down to the individual user
  2. proactive recommendations to improve performance and optimize costs
  3. the top 5 slowest queries, with quick links to query optimization recommendations


(Image source: Automating Analytic Workflows on AWS)

You need to know how to write SQL queries to use Redshift (the “run big, complex queries” part). So the people who use Redshift are typically analysts or data scientists.

In summary, one way to think about Spark and Redshift is to distinguish them by what they are, what you do with them, how you interact with them, and who the typical user is.

(Image: the differences between Spark and Redshift. Source: created for this blog post by intermix)

I’ve hinted at how you see both Spark and Redshift deployed. That gets us to data architecture.

Data Architecture

In very simple terms, you can build an application with Spark, and then use Redshift both as a source and a destination for data.

Why would you do that? A key reason is the difference in the way Spark and Redshift process data, and how much time each takes to produce a result.

  • With Spark, you can do real-time stream processing, i.e. you get a real-time response to events in your data streams.
  • With Redshift, you can do near-real time batch operations, i.e. you ingest small batches of events from data streams, to then run your analysis to get a response to events.

A highly simplified example: fraud detection. You could build an app with Spark that detects fraud in real time from e.g. a stream of Bitcoin transactions. Given that Redshift operates in near-real time at best, it would not be a great fit in this case.

But let’s say you wanted more signals for your fraud detection, for better predictions. You could load data from Spark into Redshift and join it there with historical data on fraud patterns. You can’t do that in real time – the result would come too late for you to block the transaction. So you use Spark to e.g. block a transaction in real time, and then wait for the result from Redshift to decide whether to keep blocking it, send it to a human for verification, or approve it.
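That two-stage setup can be sketched in a few lines of plain Python. The thresholds and rules below are invented for illustration; in reality the first verdict would come from a rule inside a Spark streaming job, and the second from a query joining the transaction against historical fraud patterns in Redshift.

```python
def realtime_check(tx):
    """Stage 1 (Spark's job): an instant, conservative rule that
    blocks anything suspicious right away."""
    return "block" if tx["amount"] > 10_000 else "allow"

def enriched_check(tx, historic_fraud_rate):
    """Stage 2 (Redshift's job): a slower, richer verdict based on
    historical fraud patterns for similar transactions."""
    if historic_fraud_rate > 0.5:
        return "keep_blocking"
    if historic_fraud_rate > 0.1:
        return "human_review"
    return "approve"

tx = {"id": 42, "amount": 25_000}
first = realtime_check(tx)                             # immediate decision
second = enriched_check(tx, historic_fraud_rate=0.2)   # arrives minutes later

print(first, second)  # block human_review
```

The key design point: the fast path never waits on the slow path – it makes a safe decision immediately, and the batch result only refines it.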

In December 2017, the Amazon Big Data Blog had another example of using both Spark and Redshift: “Powering Amazon Redshift Analytics with Apache Spark and Amazon Machine Learning”. The post covers how to build a predictive app that tells you how likely a flight is to be delayed. The prediction is based on features such as the time of day or the airline carrier, using multiple data sources processed across Spark and Redshift.

(Image: the separation of apps and data warehousing. Source: intermix)

You can see how the separation of “apps” and “data warehousing” we created at the start of this post is, in reality, an area that’s shifting or even merging.

To help data engineers stay on top of both the apps and the data warehouse we’ve built a feature in intermix.io called “App Tracing”.

What we do is correlate information about applications (dashboards, orchestration tools) with cluster performance data.

App Tracing can answer questions like:

  • which dashboard and user is contributing to this spike in queries?
  • what is the average latency of a dashboard? Of all dashboards executed by a particular user?
  • my Airflow or Pinball tasks experienced a spike in latency – what caused it? Which query slowed down and why?
  • my Airflow or Pinball jobs are failing, how can I quickly find those queries in Intermix?

This is how easy it is to identify apps that are slowing down your cluster, down to the individual user.


Spark and Redshift: Data Engineering

The border between developers and business intelligence analysts / data scientists is fading. That has given rise to a new occupation: data engineering. I’ll use a definition of data engineering by Maxime Beauchemin:

“In relation to previously existing roles, the data engineering field [is] a superset of business intelligence and data warehousing that brings more elements from software engineering, [and it] integrates the operation of ‘big data’ distributed systems”.

Spark is such a “big data” distributed system, and Redshift is the data warehousing part. Data engineering is the discipline that brings the two together, because “code” is making its way into data warehousing. Code allows you to author, schedule and monitor data pipelines that feed into Redshift, including the transformations on the data once it sits inside your cluster. And you’ll very likely have to ingest data from Spark. The trend toward “code” in warehousing implies that knowing SQL is no longer sufficient – you need to know how to write code. Hence the “data engineer”.
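To make “code in data warehousing” a little more concrete: orchestration tools like Airflow let you define a pipeline’s steps and their dependencies as code, and the scheduler derives the execution order. A toy version of that idea, using only the Python standard library (the step names are made up):

```python
from graphlib import TopologicalSorter

# each pipeline step maps to the set of steps it depends on
pipeline = {
    "extract_from_s3": set(),
    "copy_into_redshift": {"extract_from_s3"},
    "transform_aggregates": {"copy_into_redshift"},
    "refresh_dashboard": {"transform_aggregates"},
}

# derive a valid execution order from the dependency graph
order = list(TopologicalSorter(pipeline).static_order())
print(order)
# ['extract_from_s3', 'copy_into_redshift',
#  'transform_aggregates', 'refresh_dashboard']
```

Real orchestrators add scheduling, retries and monitoring on top, but the core idea is the same: the pipeline is a dependency graph expressed in code, not a set of hand-run SQL scripts.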

This post is already way too long, but I hope it provides a useful summary of how to think about your data stack. For your big data architecture, you will likely end up using both Spark and Redshift, each fulfilling the use case it is best suited for.

Lars Kamp

