
This is a guest blog post by Pete DeJoy. Pete is a Product Specialist at Astronomer, where he helps companies adopt Airflow. 

Apache Airflow has come a long way since it started as an internal project at Airbnb back in 2014, thanks to the core contributors’ fantastic work in building a highly engaged community while doing some superhero lifting of their own. In this post, we’ll walk through some of the exciting things coming down the pipe as the project gears up for its hotly anticipated 2.0 release.

What is Apache Airflow?

Apache Airflow is an open-source workflow management system that allows you to programmatically author, schedule, and monitor data pipelines in Python. It is the most popular and effective open-source tool on the market for managing workflows, with over 8,500 stars and nearly 500 contributors on GitHub.

Since being open-sourced in 2015, Airflow has proven to be the dominant tool in its class (beating out alternatives like Spotify’s Luigi, Pinterest’s Pinball, and even the ever-present Hadoop-centric Oozie) because of its core principles of configurability, extensibility, and scalability. As we’ll see, the new features being developed and the direction of the project still adhere to these principles.
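To ground the discussion, here is a minimal sketch of what an Airflow DAG definition looks like. It uses the 1.x-era `BashOperator` import path; the DAG name, schedule, and task commands are placeholders, not anything from the post.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # 1.x import path

# Default arguments applied to every task in the DAG.
default_args = {
    "owner": "data-eng",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

# The schedule, retries, and dependencies are all declared in plain Python.
dag = DAG(
    dag_id="example_daily_pipeline",
    default_args=default_args,
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
)

extract = BashOperator(task_id="extract", bash_command="echo 'extracting'", dag=dag)
load = BashOperator(task_id="load", bash_command="echo 'loading'", dag=dag)

# Run extract before load.
extract >> load
```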

Short Term (v1.10)

These improvements are currently being worked on in active PRs, or have recently been merged, and will be included in the upcoming Airflow 1.10 release.


RBAC and the UI

Joy Gao (https://twitter.com/joygao) took on the herculean task of converting the front-end framework from Flask Admin to Flask AppBuilder (https://issues.apache.org/jira/browse/AIRFLOW-1433), an incredible feat for one person to accomplish largely on her own. One of the primary benefits of this update is ground-level support for role-based access control (RBAC), which will open the door to various auth backends and allow admin users to dictate who has access to specific elements of their Airflow cluster.

Along with this RBAC capability will come an improved UI that will allow for better security around user access. Our team at Astronomer is hoping to fix the UI so that refreshing the dashboard is not needed to check on the status of DAGs – we’d like to be able to view the status of our DAGs in real time without driving ourselves crazy pressing that refresh button.

The Kubernetes Executor

One of the most exciting developments the Astronomer team is anticipating is the release of the Kubernetes Executor, which Daniel Imberman (https://github.com/dimberman) of Bloomberg has been leading development on. This has been long awaited by the community and will allow users to auto-scale workers via Kubernetes, ensuring that resources are not wasted. It is especially important for expanding the viable use cases for Airflow: right now, many users are forced to either run Airflow on a low-powered EC2 instance and use it to schedule external jobs, or run it on expensive hardware that sits massively underutilized when tasks aren’t actively running. While the executor is being included in the 1.10 release, it is part of a longer-term but active effort to make Airflow completely cloud-native, which we’ll discuss in the following section.


Long Term (v2.0+)

In addition to the short-term fixes outlined above, there are a few longer-term efforts being worked on that will have a huge bearing on the stability and usability of the project. Most of these items have been identified by the Airflow core maintainers as necessary for the v2.x era and subsequent graduation from “incubation” status within the Apache Foundation.

First Class API Support

A frequently requested feature (at least among the Airflow users we work with) is first-class support for an API that controls everything from connection creation to DAG pausing to requesting usage metrics. Right now, the API is strictly experimental with limited functionality, so in practice, if you want this behavior, you end up writing a plugin that directly manipulates the underlying MySQL or PostgreSQL database. Given that much of the current functionality in the UI is based on direct modification of the database rather than on any API, a first-class API that handles all functionality would mean that everything done in the UI could also be done via the CLI, further expanding the use cases Airflow could facilitate.

Making Airflow Cloud Native

As mentioned above in relation to the Kubernetes Executor, perhaps the most significant long-term push in the project is to make Airflow cloud native. Today it is still up to the user to figure out how to operationalize Airflow for Kubernetes, although at Astronomer we have done this and provide it in a dockerized package for our customers.

We feel this is an important step for the project to keep up with the changing deployment landscape, and we plan to open-source what we can as we go, but knocking this out is easier said than done. One of the most fundamental problems blocking this initiative is the need for a high-availability, massively distributed, auto-rebalancing datastore – something that is hard to achieve with a simple PostgreSQL or MySQL setup.

A promising lead towards addressing this is added support for CockroachDB, a database modeled on the Google Spanner whitepaper (and built by former Google File System engineers) that is designed precisely for the requirements listed above.

Improved Test Suite

A common complaint among contributors to Airflow is how long Travis, the CI of choice for the Airflow project, can take to run all the tests when cutting a new release. This has been brought up in the past, but given that Airflow’s code base has hit a scale where a Travis run can take up to an hour, we see an improved test suite finally making it over the line (and are looking forward to helping!). One factor that would help in this process is the proposed break-out of plugins, which have been growing into a large code base in and of themselves. Which brings us to…

Plugin Manager

Before we talk about this, it’s important to note that this is NOT on the official Airflow roadmap (at least not yet) but is rather something the Astronomer team has been mulling over as we see the continued proliferation of plugins.

The brilliance of Airflow plugins (and why they have contributed in no small part to the success of the entire project) is how wide-ranging they can be, enabling your workflows to connect with the GCP, AWS, and Hadoop ecosystems, as well as any number of other APIs and databases, rather trivially.

Ironically, this is also their weakness. Importing Google Cloud plugins, and opening the door to the extra processing and dependency conflicts that come with them, makes zero sense if your stack is on AWS. And the inherent brittleness of plugins that have to interact with constantly changing APIs means they require a different release schedule from the core project, whose release process is slower because any error could affect core functionality.

All plugins should be on their own release schedule, with an independent test suite, to make sure that updates take advantage of the latest changes in external projects. Getting this to be as easy as a pip install would be huge for making Airflow more accessible to other systems.

As we look toward the next year of our roadmap, we’re doubling down on our community contributions to help Airflow retain its status as the most flexible, extensible, and reliable scheduler available, regardless of how it’s being run. In Astronomer speak, we’d recommend hanging onto your helmets – the ship is about to kick into hyperdrive.

Introduction

At intermix.io, we use Amazon Redshift as part of our stack. Amazon Redshift is an OLAP database, and a valuable tool for data teams due to its low cost and speed for analytical queries. We have a particular use case though. We’re using Amazon Redshift in an OLTP scenario, i.e. we’ve built an analytical application on top of Redshift.

The challenge of using Redshift as an OLTP database is that queries can lack the low latency that a traditional RDBMS, built for transactional workloads, would provide. This is a result of Amazon Redshift’s column-oriented storage design, which trades low-latency lookups for better performance on big-data analytical workloads by eliminating the need to traverse full records (all columns) for each query. Also, a traditional RDBMS offers features that would often be useful when querying Redshift (e.g. indexes).

In this post, we detail a method of using Amazon Redshift in an OLTP scenario. We’ll tell the story of how we off-load OLTP workloads from Amazon Redshift to Amazon RDS – without writing any complex ETL or changing our data processing strategy.

How We Use Amazon Redshift

intermix.io is an analytics platform that provides a single monitoring dashboard for data engineers to keep an eye on their critical data flows. For data engineers using Amazon Redshift, we surface information such as actionable recommendations on how to optimize warehouse resources.

For example, our “Cluster Health” dashboard gives you a very quick overview of SLA measures for any connected app, down to individual users, as well as actionable recommendations on how to improve performance and optimize costs.

Cluster Health Dashboard in intermix.io

If you’d like to instantly see personalized recommendations for your cluster, you can start a 14-day free trial on this link.

Back to the blog post now.

There are three steps for acquiring the data we need to provide our analytics to our customers.

1. We extract raw event data from the customer databases via the intermix.io Collector. The Collector runs as a Docker container in the customer’s environment.
2. We move the data into the intermix.io S3 environment.
3. From S3, we copy data into Redshift.

Image 1: Simplified architecture of Amazon Redshift in an OLTP scenario

We use Amazon Redshift for:


Issues

In the early stage of intermix.io, our dashboard pages would pull data from Redshift directly. But as we added more and more customers, we realized the dashboard pages were not performing well. This was due to two factors:

We did follow best practices in managing our Redshift cluster, including properly setting up our workload management (WLM) and applying all 14 performance tuning techniques. But we also knew that using Redshift for this OLTP use case – serving data to the dashboard – was not going to work long term. So we started looking for solutions.

Our ideal solution would:

What is Amazon Redshift?

Amazon Redshift is a column-oriented, petabyte-scale data warehouse. It is an OLAP database suited to analytical queries and applications. It uses a modified version of PostgreSQL 8.

Our Approach

The solution was to use Amazon RDS as a ‘cache’ by leveraging the Dblink feature of RDS. This allows us to make any Amazon Redshift table available as a relation in RDS, without the need to explicitly copy that data over. Dblink handles moving the data at the block level.

This solution had a bunch of benefits:

The Magic – 6 Steps

Before we go over the installation instructions, we’ll cover some of the features that using Dblink with Redshift provides.

The main features presented in this post require a Postgres instance with Postgres version >= 9.4 (some features listed in this section require more recent versions of Postgres). We use an RDS-hosted version of Postgres but other types of instances will work as long as they meet the minimum requirements.

Step 1: Setup on Redshift cluster

Now onto the installation. To set things up on the Redshift side, run the following commands:
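The original commands aren’t reproduced here, but a minimal sketch of the idea – creating a dedicated, read-only Redshift user for Dblink to connect as – could look like the following. It uses psycopg2, and the host, credentials, schema, and the user name `dblink_user` are all placeholders.

```python
import psycopg2

# Connect to the Redshift cluster as an admin user
# (host, database, and credentials are placeholders).
redshift = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin",
    password="...",
)
redshift.autocommit = True

with redshift.cursor() as cur:
    # Create a dedicated user for RDS to connect as, and grant it
    # read-only access to the schema we want to expose over Dblink.
    cur.execute("CREATE USER dblink_user PASSWORD 'StrongPassword1'")
    cur.execute("GRANT USAGE ON SCHEMA public TO dblink_user")
    cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA public TO dblink_user")
```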

Step 2: Setup on Postgres RDS instance

Enable dblink and postgres_fdw in the Postgres RDS instance (only needs to be done once per database).

Create the server link and user mapping.
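Taken together, these two steps might look like the sketch below. The server name `redshift_server`, hosts, and user names are placeholders, and the user mapping points at the hypothetical Redshift user from Step 1.

```python
import psycopg2

# Connect to the Postgres RDS instance (placeholders again).
rds = psycopg2.connect(
    host="my-rds.abc123.us-east-1.rds.amazonaws.com",
    port=5432,
    dbname="cache",
    user="rds_admin",
    password="...",
)
rds.autocommit = True

with rds.cursor() as cur:
    # Enable the extensions (once per database).
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
    cur.execute("CREATE EXTENSION IF NOT EXISTS dblink")

    # Register the Redshift cluster as a foreign server...
    cur.execute("""
        CREATE SERVER redshift_server
          FOREIGN DATA WRAPPER postgres_fdw
          OPTIONS (host 'my-cluster.abc123.us-east-1.redshift.amazonaws.com',
                   port '5439', dbname 'analytics')
    """)

    # ...and map the local RDS user to the Redshift user from Step 1.
    cur.execute("""
        CREATE USER MAPPING FOR rds_admin
          SERVER redshift_server
          OPTIONS (user 'dblink_user', password 'StrongPassword1')
    """)
```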

That’s it – now you’re ready to query your Redshift data from RDS!

Step 3: Run some queries

This approach allows you to query Redshift from inside a Postgres instance. It bridges the data but doesn’t confer any performance benefits. For example, here is how to get a list of users that were updated this year.
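A sketch of such a live query, reusing the `rds` connection from Step 2 (the `users` table and its columns are illustrative):

```python
# Query Redshift from inside RDS via the foreign server created in Step 2.
sql = """
    SELECT *
    FROM dblink('redshift_server', $REDSHIFT$
        SELECT id, username, updated_at
        FROM users
        WHERE updated_at >= '2018-01-01'
    $REDSHIFT$) AS t(id int, username varchar, updated_at timestamp)
"""

with rds.cursor() as cur:
    cur.execute(sql)
    recently_updated_users = cur.fetchall()
```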

Step 4: Create a cached view into the data

To create a queryable cache of the Redshift data, we create a materialized view. You can also select a subset of rows and a subset of columns if you don’t require the full table.
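A sketch of such a materialized view, again with illustrative table and column names and the `rds` connection from Step 2:

```python
# Materialize the Redshift data as a local cache in RDS.
create_view = """
    CREATE MATERIALIZED VIEW users_cache AS
    SELECT *
    FROM dblink('redshift_server', $REDSHIFT$
        SELECT id, username, updated_at
        FROM users
    $REDSHIFT$) AS t(id int, username varchar, updated_at timestamp)
"""

with rds.cursor() as cur:
    cur.execute(create_view)
```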

Step 5: Keep it up to date

A materialized view will not self-update. So we need an approach to refresh the data. If you only wish to execute live queries (no materialized views) then you can ignore this section.

Refresh approach 1 (non-concurrent)
Easier and faster but with concurrent reads blocked.

This will block concurrent reads, so if you need to be able to read from the view at all times, follow the next approach.
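For the hypothetical `users_cache` view from Step 4, the blocking refresh is a single statement:

```python
# Full refresh; readers of users_cache are blocked while it runs.
with rds.cursor() as cur:
    cur.execute("REFRESH MATERIALIZED VIEW users_cache")
```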

Refresh approach 2 (concurrent)
Non-blocking refreshes (this requires a unique index on the materialized view).

This requires a unique index to work. For example, after creating the view:
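For example, continuing with the hypothetical `users_cache` view and its `id` column:

```python
with rds.cursor() as cur:
    # The concurrent refresh requires a unique index on the view.
    cur.execute("CREATE UNIQUE INDEX users_cache_id_idx ON users_cache (id)")
    # Readers can keep querying users_cache while the refresh runs.
    cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY users_cache")
```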

Periodic refresh
If the data you retrieve takes more than 5 minutes to transfer or you don’t mind refreshing synchronously, you can issue the REFRESH commands above inside your code or in a periodic task as needed.

Refresh Lambdas
If retrieving the data from Redshift takes less than 5 minutes, AWS Lambda is a good fit here. Be mindful of concurrency; for example, it’s bad practice to issue a refresh on a table while another refresh of the same table is still running. We’ll have more on how we approach this in a subsequent blog post.
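A sketch of such a refresh Lambda; the environment variable names and the scheduled trigger (e.g. a CloudWatch Events rule) are assumptions for the example, not necessarily the setup described above:

```python
import os
import psycopg2


def handler(event, context):
    """Refresh the cached view; intended to be invoked on a schedule."""
    conn = psycopg2.connect(
        host=os.environ["RDS_HOST"],
        dbname=os.environ["RDS_DB"],
        user=os.environ["RDS_USER"],
        password=os.environ["RDS_PASSWORD"],
    )
    conn.autocommit = True
    try:
        with conn.cursor() as cur:
            cur.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY users_cache")
    finally:
        conn.close()
    return {"status": "refreshed"}
```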

Incremental refreshes
The above refresh methods query the full set of data each time, even if nothing has changed. If your data is append-only (no updates to existing rows), you may want to use the following approach instead:
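One way to sketch this is to track a high-water mark locally and push the filter into the remote query. It assumes append-only data and a plain table (here called `users_cache_incr`) created with the same columns as the remote table; all names are illustrative.

```python
with rds.cursor() as cur:
    # Find the newest row we already have locally.
    cur.execute(
        "SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM users_cache_incr"
    )
    high_water_mark = cur.fetchone()[0]

    # Push the filter into the remote query so only new rows
    # travel over the Dblink connection.
    remote_query = (
        "SELECT id, username, updated_at FROM users "
        "WHERE updated_at > '{}'".format(high_water_mark)
    )
    cur.execute(
        "INSERT INTO users_cache_incr "
        "SELECT * FROM dblink('redshift_server', $q${}$q$) "
        "AS t(id int, username varchar, updated_at timestamp)".format(remote_query)
    )
```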

Step 6: Operations and monitoring

Dblink ingestion incurs regular data transfer costs. If you don’t want to be charged for the network traffic of synchronizing the data, place your Postgres RDS instance in the same availability zone (AZ) as your Redshift leader node.

You also need to ensure that the permissions of users querying data via Dblink match those of the same users querying Redshift directly, so that querying via Dblink doesn’t grant additional access.

Amazon Redshift supports identity-based policies (IAM policies). We recommend using IAM identities (e.g. a user, a group, or a role) to manage cluster access rather than creating direct logins in your cluster – in practice, nobody ever keeps track of those logins. For example, at intermix.io we use IAM to generate temporary passwords for our clusters.
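As a rough illustration of that pattern, temporary cluster credentials can be requested through the Redshift API with boto3. The identifiers below are placeholders, and this is a sketch rather than the exact setup described above.

```python
import boto3

client = boto3.client("redshift", region_name="us-east-1")

# Issue a short-lived password tied to an IAM identity instead of
# maintaining a permanent database login.
creds = client.get_cluster_credentials(
    DbUser="analyst_jane",
    DbName="analytics",
    ClusterIdentifier="my-cluster",
    DurationSeconds=900,  # credentials expire after 15 minutes
    AutoCreate=False,
)

# creds["DbUser"] and creds["DbPassword"] can then be passed to a client
# such as psycopg2 to open the connection.
```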

In the AWS console, keep an eye on the following Postgres RDS metrics to ensure that the instance isn’t overwhelmed:

And then, of course, we use our own intermix.io dashboard to monitor the performance of our Amazon Redshift clusters and our data pipelines. A few select key metrics we pay attention to are:

Image 2: intermix.io dashboard – query memory footprint

For example, in image 2 you can see a distribution of all queries for a certain time interval. We can see how much memory each query consumes, if it’s disk-based or not, and what the total query return time (execution time + queue time) is.

Not only that, but if you click on a single query, we surface every detail about it, along with actionable, cluster-specific insights on how to improve performance instantly.



Caveats and limits

The total on-disk size limit for Postgres on RDS is 16TB, so the data you link cannot be greater than that (and such a transfer would take at least 3.6 hours on a 10Gbit connection). The total table size on Redshift can be greater than 16TB, but the rows and columns your materialized view selects, combined with index overhead, cannot exceed 16TB.

A current limitation of DNS resolution on RDS is that lookups performed inside of Dblink don’t resolve to private IPs. So if you’re running in a VPC without a publicly routable Redshift cluster, you’ll need to set up your own DNS to resolve to the private IP, or hardcode the Redshift leader node’s private IP when provisioning Dblink.

Wrapping up

In this post, we showed how it’s possible to use Amazon Redshift in an OLTP scenario: by using Amazon RDS as a ‘cache’ and leveraging its Dblink feature, we can make any Amazon Redshift table available as a relation in RDS, without the need to explicitly copy that data over.

So if you’re using Amazon Redshift in an OLTP scenario for building data apps, and you want to monitor the performance of those data apps – schedule a call with us. We’d love to look at your set-up and help you with a free trial of intermix.io to tune your Amazon Redshift cluster.

App Tracing surfaces important information about how apps & users interact with your data. It can help answer questions like:

What is a “Data App”?

There are three categories of data apps:

  1. Data integration services. Vendors who ETL data from external systems or applications into your data environment.
  2. Workflow orchestration. Tools for workflow orchestration – typically batch processing on your data pipeline.
  3. Visualization & Analysis. Reporting, modeling and visualization apps used by analysts and data scientists.

How it Works

App Tracing requires the data app to annotate the executed SQL with a comment. The comment encodes metadata about the application which submitted this query.

Intermix.io will automatically index all data contained in the annotation and make it accessible as first-class labels in our system – that is, for Discover searches, Saved Searches, and aggregations on the Throughput Analysis page.
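As a rough illustration of what an annotated query might look like: the comment style and key names below are made up for the example, and the real tags come from the Tag Generator mentioned in the next section.

```python
import json

# Metadata describing the app and user submitting the query
# (keys and values are illustrative only).
metadata = {"app": "looker", "user": "248", "dashboard": "cluster_health"}
annotation = "/* INTERMIX: {} */".format(json.dumps(metadata))

# The data app prepends the comment, so it travels with the query
# and shows up alongside it in the Redshift query logs.
query = annotation + "\nSELECT count(*) FROM events WHERE event_date = current_date"
```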

Supported Apps

Out of the box, we support:

Don’t see your data app? No problem. Any queries tagged with our format will be automatically detected. See here for instructions on using the Tag Generator to create tags to embed into your SQL.

Example

Which Looker user is causing a concurrency spike?

In the example below, a query spike in WLM 3 causes a bottleneck in query latency. The result is that queries that would otherwise take 13–14 seconds to execute are stuck in the queue for more than 3 minutes.

App Tracing detects that the majority of these queries are from Looker. How do you know which user is causing this?

Click on the chart, and a widget will pinpoint the specific Looker user(s) who ran those queries. In this example, we see that user 248 is responsible.

App Tracing in Looker

Armed with this information, you can now:

Monitoring & Setting an Alarm

See all the activity for this user by heading to Discover and using the new ‘App’ filter to search for Looker user 248.

To set up an alarm to get email notifications, save that search and stream the following metrics to CloudWatch:

CloudWatch App Tracing with intermix.io

See What Customers are Saying

We soft-launched App Tracing on the morning of this blog post, and it didn’t take our customers long to notice. See this screenshot of a Slack conversation we had today (each one of our customers has a direct line to our team).


Using Apache Airflow?

If you’re using Amazon Redshift in combination with Apache Airflow, and you’re trying to monitor your DAGs – we’d love to talk! We’re running a private beta for a new Airflow plug-in with a few select customers. Go ahead and click on the chat widget on the bottom right of this window. Answer three simple questions, schedule a call, and mention “Airflow” at the end, and we’ll get you set up! As a bonus, we’ll throw in an extended trial of 4 weeks instead of 2!


Photo by Denise Johnson