Should I Enable Amazon Redshift’s Automatic WLM?

Background

Update 09/10/2019: AWS released Priority Queuing this week. We’re in the process of testing this new feature and will update this post with our results soon.

Today’s post is a bit long, but for good reason: the Amazon Redshift team recently introduced a new feature related to one of the most important Redshift management tools, the WLM. So bear with us; there are some interesting WLM insights ahead!

One of the key things to get right when optimizing your Redshift Cluster is its WLM (Workload Management) configuration. We’ve talked a lot about different aspects of WLM (e.g. in our WLM tuning post or our SQA post), since getting your WLM configuration right can mean the difference between your users having their queries run immediately and having them wait minutes or even hours before their queries even start executing.

As a reminder, Redshift’s Workload Manager allows you to define one or more queues for your clusters’ SQL queries, and to define the resources (e.g. memory) and rules (e.g. timeouts) that should apply to queries that run in those queues.  The primary goals of the WLM are to allow you to maximize your query throughput and prioritize different types of workloads.

Manual WLM Tuning

Because cluster resources are finite, configuring your WLM always results in a tradeoff between cluster resources and query concurrency: the more concurrent queries you let run in a queue (slots), the fewer resources (like memory and CPU) each query can be given. Memory is by far the most precious resource to consider when tuning WLM. Queries that need more memory than they are allocated spill over to disk, causing huge slowdowns in performance not only for the query that went disk-based, but for the cluster as a whole (since long-running queries take up memory and a concurrency slot, and disk-based queries consume disk IO). AWS recommends keeping your % of disk-based queries under 10%, but in practice most Redshift administrators can (and should) keep it much lower.
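If you want to check where your cluster stands on that metric, the share of disk-based queries can be estimated directly from Redshift’s system views. Here’s a rough sketch in Python (not what intermix.io runs internally); the connection details are placeholders, and since svl_query_summary reports per step, a query is counted as disk-based if any of its steps spilled:

```python
# Rough sketch: estimate the share of disk-based queries over the last
# 24 hours from Redshift system views. svl_query_summary is step-level,
# so a query counts as disk-based if any of its steps spilled to disk.
# Connection details below are placeholders.
import psycopg2

SQL = """
WITH recent AS (
    SELECT q.query,
           MAX(CASE WHEN s.is_diskbased = 't' THEN 1 ELSE 0 END) AS went_to_disk
    FROM stl_query q
    JOIN svl_query_summary s ON s.query = q.query
    WHERE q.starttime > GETDATE() - INTERVAL '24 hours'
    GROUP BY q.query
)
SELECT COUNT(*)                                         AS total_queries,
       SUM(went_to_disk)                                AS disk_based_queries,
       100.0 * SUM(went_to_disk) / NULLIF(COUNT(*), 0)  AS pct_disk_based
FROM recent;
"""

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439, dbname="analytics", user="admin", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    total, disk_based, pct = cur.fetchone()
    print(f"{disk_based} of {total} queries went disk-based ({pct:.1f}%)")
```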

Fortunately, finding the optimal tuning for your WLM is pretty straightforward – if you’re using intermix.io you can use our Throughput Analysis and Memory Analysis tools to quickly view your clusters’ concurrency and memory usage in each WLM queue, and see at a glance which users and applications are experiencing unacceptable queuing:

You can then adjust concurrency and/or memory in the AWS console for your cluster to give more memory to queues that have a large number of disk-based queries, or increase the number of slots in queues that have significant queuing.
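If you prefer to script those changes rather than click through the console, here’s a hedged sketch using boto3’s modify_cluster_parameter_group call to push a manual WLM configuration. The parameter group name, user group, slot counts, and memory percentages are hypothetical examples, not recommendations; depending on what you change, the cluster may need a reboot before the new settings take effect:

```python
# Hedged sketch: apply a manual WLM configuration with boto3.
import json
import boto3

wlm = [
    # ETL queue: fewer slots, so each slot gets more memory.
    {"user_group": ["etl"], "query_concurrency": 3, "memory_percent_to_use": 60},
    # Default queue (always last): more slots, less memory per slot.
    {"query_concurrency": 7, "memory_percent_to_use": 40},
]

redshift = boto3.client("redshift")
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-wlm-parameter-group",  # hypothetical
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps(wlm),
    }],
)
```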

One more trick

One of the limitations of Redshift’s WLM is that the total memory assigned to a queue is divided equally between all query slots (not queries) in the queue. That means that if you, say, allocate 1 GB of memory to a queue with 10 slots, each query that runs in the queue will get 1 GB / 10 = 100 MB of memory, even if it’s the only query running in that queue. So small queries that need less than 100 MB waste the extra memory in their slot, and large queries that need more than 100 MB spill to disk, even if 9 of the 10 slots (900 MB) are sitting idle waiting for a query. Clearly this isn’t optimal.

One workaround is to use the Redshift session parameter wlm_query_slot_count to temporarily increase the number of slots that should be given to a query. If you set this parameter to, say, 2 in your database session before executing your query, then your query will consume 2 WLM concurrency slots (reducing the number of concurrent queries that can run in that queue) and get twice the memory. In the example above, a query that needed 150 MB of memory would spill to disk when running in a single 100 MB slot but run fully in memory when run with 2 slots.
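For example, here’s a minimal sketch of that workaround from Python with psycopg2. The cluster endpoint, credentials, and table names are placeholders; the important detail is that SET wlm_query_slot_count applies only to the current session, so it’s worth resetting it (or closing the session) once the big query finishes:

```python
# Minimal sketch of the wlm_query_slot_count workaround.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # hypothetical
    port=5439, dbname="analytics", user="etl_user", password="...",
)
with conn, conn.cursor() as cur:
    # Take 2 slots for this session: the next query gets 2x the slot memory.
    cur.execute("SET wlm_query_slot_count TO 2;")
    # A stand-in for the memory-hungry query (hypothetical tables).
    cur.execute(
        "INSERT INTO sales_summary "
        "SELECT region, SUM(amount) FROM sales GROUP BY region;"
    )
    # Give the extra slot back for the rest of the session.
    cur.execute("SET wlm_query_slot_count TO 1;")
```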

This is a great way to allocate more memory to a big query when the following are true:

  1. The query is a repeated (not one-off) query, so you can look at past statistics to predict how much memory (i.e. how many slots) it will need to avoid going disk-based (see the sketch after this list).
  2. The query runs in a queue with other queries that can afford an increase in queue wait time.
  3. The query uses much more memory than other queries in its queue, so increasing the memory allocated to every slot in the queue would be too wasteful.
  4. You can’t (or don’t want to) spend time optimizing the query or your table definitions to reduce the amount of memory it needs.
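For point 1, those past statistics are available in Redshift’s system views. A rough sketch (with a hypothetical query-text pattern and placeholder connection details) that looks at recent runs of a repeated query, the working memory its steps were assigned, and whether any of them went disk-based:

```python
# Rough sketch: check recent runs of a repeated query (matched by a
# hypothetical text pattern) to see how much working memory its steps were
# assigned and whether any of them spilled to disk. workmem is in bytes.
import psycopg2

SQL = """
SELECT q.query,
       MAX(s.workmem) / (1024.0 * 1024.0)                    AS max_step_workmem_mb,
       MAX(CASE WHEN s.is_diskbased = 't' THEN 1 ELSE 0 END) AS went_to_disk
FROM stl_query q
JOIN svl_query_summary s ON s.query = q.query
WHERE q.querytxt ILIKE '%insert into sales_summary%'   -- hypothetical pattern
  AND q.starttime > GETDATE() - INTERVAL '7 days'
GROUP BY q.query
ORDER BY q.query DESC;
"""

conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",  # hypothetical
                        port=5439, dbname="analytics", user="admin", password="...")
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for query_id, workmem_mb, went_to_disk in cur.fetchall():
        status = "went disk-based" if went_to_disk else "stayed in memory"
        print(f"query {query_id}: ~{workmem_mb:.0f} MB assigned per step, {status}")
```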

While wlm_query_slot_count can be a good solution for targeting individual memory-hungry queries on an ad-hoc basis, it is difficult to use it to reduce disk-based queries cluster-wide in a general, ongoing way: each query requires a different setting, and knowing in real time how many slots to assign to a particular query is difficult. Further, it is hard to know in a general way what impact assigning more slots to a query will have on queue wait times.

Auto WLM

So where does this leave us?  Detailed cluster monitoring lets you tune your concurrency and memory WLM settings to minimize both queue wait time and the % of disk-based queries you have. But since every slot in a queue is given the same fixed fraction of queue memory, inevitably some memory-hungry queries will end up spilling to disk causing query and cluster slowdowns. Using wlm_query_slot_count lets you target some of those individual disk-based queries to try to prevent them from spilling to disk, but makes it difficult to optimize per-query memory allocation in a more general way cluster-wide.

This is exactly the problem that Automatic WLM was introduced to solve. The key innovation of Auto WLM is that it assigns memory to each query dynamically, based on its determination of how much memory the query will need.  It’s a little bit like having wlm_query_slot_count tuned for you automatically for each query that runs on your cluster.  As a result, memory-hungry queries can be given up to the total amount of memory available to avoid them going disk-based.  But nothing is free: giving more memory to memory-hungry queries means that the cluster can run fewer queries concurrently, resulting in more queuing overall.
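Mechanically, switching to Auto WLM is just a different WLM configuration: instead of defining queues yourself, you set the wlm_json_configuration parameter to [{"auto_wlm": true}], either in the console or programmatically. A minimal sketch with boto3 follows; the parameter group name is hypothetical, and switching WLM modes generally requires a cluster reboot before it takes effect:

```python
# Minimal sketch: enable Automatic WLM by setting wlm_json_configuration
# to [{"auto_wlm": true}]. The parameter group name is hypothetical.
import json
import boto3

redshift = boto3.client("redshift")
redshift.modify_cluster_parameter_group(
    ParameterGroupName="my-wlm-parameter-group",  # hypothetical
    Parameters=[{
        "ParameterName": "wlm_json_configuration",
        "ParameterValue": json.dumps([{"auto_wlm": True}]),
    }],
)
```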

So if you take away one thing from this post, it’s this: enabling Auto WLM will speed up slow, memory-intensive queries by preventing them from going to disk, but slow down smaller queries by introducing more queue wait time.  The degree to which this will impact your cluster performance will depend on your specific workloads and your priorities.

A Real-world example

intermix.io not only helps our customers keep their Redshift clusters operating at peak efficiency and their costs down; it also helps us do the same for our own internal Redshift clusters. So to see the impact of Automatic WLM, we first enabled Auto WLM on one of our non-production internal Redshift clusters and then used intermix.io to see how our cluster efficiency was impacted.

Manual WLM

The first cluster we enabled it on was one of our development Redshift clusters.  This cluster runs a batch ETL pipeline, and prior to enabling Auto WLM had a well-tuned WLM with minimal queue time but some large, slow, disk-based queries.  Here is a chart of average execution time (light blue), average queue wait time (dark blue), and query count (green line) for a few days before we made the change:

So our average execution time is 5.57 seconds, and average queue time is 0.88 seconds.  Looking at the same chart with Maximum selected, we see the queries that take the longest to run:

So while the average queue wait time and execution time are well below the data SLAs we need for this cluster, we have some queries running longer than 60 minutes; there is clearly room for improvement!
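For reference, you can compute similar execution and queue time averages yourself from Redshift’s system tables; stl_wlm_query reports both in microseconds. A rough sketch (connection details are placeholders, and service classes 6 and above correspond to user queries):

```python
# Rough sketch: average execution and queue times over the last 24 hours
# from stl_wlm_query (times are in microseconds).
import psycopg2

SQL = """
SELECT COUNT(*)                            AS query_count,
       AVG(total_exec_time)  / 1000000.0   AS avg_exec_seconds,
       AVG(total_queue_time) / 1000000.0   AS avg_queue_seconds
FROM stl_wlm_query
WHERE service_class >= 6
  AND exec_start_time > GETDATE() - INTERVAL '24 hours';
"""

conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",  # hypothetical
                        port=5439, dbname="analytics", user="admin", password="...")
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    count, avg_exec, avg_queue = cur.fetchone()
    print(f"{count} queries: avg execution {avg_exec:.2f}s, avg queue {avg_queue:.2f}s")
```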

Auto WLM

After enabling Automatic WLM on August 2nd, we saw average execution time drop by about half but average queue wait time spike from under 1 second to over 10 seconds. The net result was a significant increase in average query latency, even though average execution time dropped:

The drop in average execution time is due to the big reduction in execution times for slow, disk-based queries, as shown in this chart of latencies for disk-based queries:

So Automatic WLM reduced our max query runtime from around 50 minutes to around 10 minutes, a 5x improvement!

For this cluster, which runs a consistent set of batch-processing ETL jobs (or “ELT”) and few ad-hoc queries, this net increase in average latency is a good tradeoff to get a big improvement in query runtimes for our slowest disk-based queries.

So far so good.  Or so we thought…

Another Real-world example

Emboldened by our initial test, we enabled Auto WLM on five additional Redshift clusters. As with our first cluster, these five clusters had manually tuned WLMs and were operating well within our data SLAs. These clusters were significantly larger than our first test cluster (in terms of node count, query volume, and data stored). All of them ran batch ETL jobs similar to the first cluster, plus a small percentage of ad-hoc queries.

Four of the five clusters showed a similar trend to our initial test, though we observed more modest improvements (since their maximum query runtimes were smaller: 10 minutes or less compared to 50 minutes in our initial test). However, the fifth cluster immediately started setting off alarms because it exceeded one of our data SLAs. The root cause was that one particular set of pipeline queries (a combination of four COPYs) was now exceeding its data SLA requirement of 5 minutes summed max runtime due to excessive queueing. With our manually tuned WLM, each of the four queries took a max of 30 seconds to execute, whereas with Auto WLM each was now taking as much as 4 minutes due to excessive queueing:

Since there are no parameters to tune with Auto WLM, we had no choice but to revert the WLM mode back to Manual, which rapidly got the queries back under their SLA requirement and our pipeline running smoothly.

Automatic WLM Advantages and Disadvantages

In summary, Auto WLM has the following advantages over Manual WLM:

  1. Long-running disk-based queries can be given more memory dynamically, preventing them from going to disk and improving both their performance and overall cluster performance.
  2. You can Set It and Forget It (though since cluster workloads tend to evolve gradually over time, a Manual WLM also doesn’t need to be changed very often once tuned).

Auto WLM has the following disadvantages over Manual WLM:

  1. You cannot prioritize workloads to ensure your data SLAs are met.
  2. Queries will experience longer latencies on average; in particular, the performance of short ad-hoc queries will likely be impacted.
  3. In times of increased load, or as your workloads evolve, the only way you’ll be able to improve your cluster performance will be to add nodes to your cluster (via scaling or concurrency scaling clusters).
  4. It’s a bit of a black box: Redshift will decide in an opaque way which of your users’ queries and workloads to prioritize.

So should I enable Auto WLM?

We’re still in the early days of Automatic WLM, and it’s likely that the AWS Redshift team will continuously make improvements to their tuning algorithms. But for the moment, we can make the following broad recommendations around enabling Auto WLM:

  1. Establish a baseline: First and foremost, before you enable Auto WLM you should optimally tune your WLM manually so you can compare apples-to-apples and know whether Automatic WLM is the best option to keep your users happy and cluster costs down (if you want help doing this, sign up for a free trial of Intermix). Without this step, you won’t know if you could potentially be running with fewer nodes or if your cluster performance problems are due to just one or two problematic queries.
  2. Evaluate your workloads and priorities: Enable Auto WLM if your workloads are such that you can tolerate additional queuing for all queries. For example, if your cluster is primarily processing long-running batch ETL jobs, then an additional minute of queuing per query may not matter. But if your cluster is primarily handling Looker dashboards or ad-hoc queries, your users may not appreciate a 30-60 second increase in query latency for queries that otherwise would execute in a few seconds. Since most clusters run a mix of ETL and BI queries, you’ll need to measure your query runtimes for each workload before and after enabling Auto WLM to know how your users are affected by the change.
  3. Monitor your data SLAs. With no knobs to tune other than the number of nodes in your cluster, your data SLAs will be up to Redshift. Be sure you monitor your data pipelines and ensure that Auto WLM is giving you the performance you need (see the sketch after this list). If not (and you don’t want to scale your cluster), prioritizing your workloads using Manual WLM may be a better option.
  4. Try it out. As long as you can measure your cluster performance, you can try it out to see if it meets your users’ needs. However, if optimizing for cost is important, we also recommend that you periodically compare your cluster’s performance with Auto WLM against an optimized Manual WLM to see which lets you run with fewer nodes.
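To make recommendation 3 concrete, here’s a hedged sketch of a simple SLA check: it flags pipeline statements (matched by a hypothetical COPY pattern) whose runtime exceeded a 5-minute threshold over the last day. In practice you would point this at your own pipeline’s queries and your own SLA:

```python
# Hedged sketch of a basic SLA check against stl_query. The query-text
# pattern, threshold, and connection details are placeholders.
import psycopg2

SLA_SECONDS = 300  # hypothetical 5-minute per-statement SLA

SQL = f"""
SELECT query,
       DATEDIFF(second, starttime, endtime) AS runtime_seconds,
       TRIM(querytxt)                       AS query_text
FROM stl_query
WHERE querytxt ILIKE 'copy %'              -- hypothetical: the pipeline's COPY steps
  AND starttime > GETDATE() - INTERVAL '24 hours'
  AND DATEDIFF(second, starttime, endtime) > {SLA_SECONDS}
ORDER BY runtime_seconds DESC;
"""

conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",  # hypothetical
                        port=5439, dbname="analytics", user="admin", password="...")
with conn, conn.cursor() as cur:
    cur.execute(SQL)
    for query_id, runtime, text in cur.fetchall():
        print(f"SLA breach: query {query_id} ran {runtime}s: {text[:80]}")
```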

As always, the most important thing to do is to measure your Redshift cluster performance quantitatively. It’s the only way to know if Automatic WLM is helping or hurting, and whether just optimizing the most problematic queries or adjusting your Manual WLM is a better option.

In our case, we are disabling it for our initial test cluster, since that cluster is used by our developers for ad-hoc queries. Their feedback was that they could tolerate the long execution times of a small percentage of ETL jobs in exchange for faster interactive ad-hoc queries. We are, however, keeping it enabled for four of the five clusters discussed above for the time being. But since our workloads continuously evolve as more data is added and, most importantly, as we optimize and modify our SQL queries, we will periodically revert to Manual WLM whenever we review our cluster costs (and before adding nodes) to see if optimal manual tuning will let us save money by running our clusters with fewer nodes.

Stefan Gromoll
