For data engineers who work with Amazon Redshift.

Make your Redshift queries 100x faster with our performance analytics.

Request a free demo
[Dashboard sample metrics: queue time, execution time, rows scanned]

Our platform empowers data engineers to make Redshift the core of their data platform. Our performance analytics help remove bottlenecks from your cluster.
Track your data growth, increase cluster throughput and run faster queries. Spot opportunities to cut expensive queries, save on disk usage and even reduce the number of nodes. We’ll support you with training, expert advice and real-time access to experienced engineers who’ve managed hundreds of clusters.
Focus on what matters most and avoid costly mistakes.

Track data growth

As a data engineer, you're responsible for getting all company data into a single place so analysts and applications can slice and dice data in any possible way. With streaming pipelines and ETL-as-a-service, more raw data is available than ever before.
Data volume can grow so fast that it becomes hard to keep track of. With our platform, you can keep an eye on your data growth and make sure you never run out of disk space again.
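As a rough illustration of the idea (not our product's actual model — real growth is rarely linear, and all figures here are hypothetical), projecting how long until a disk fills up can be as simple as:

```python
def days_until_full(used_gb, capacity_gb, daily_growth_gb):
    """Linearly project the number of days until disk capacity is reached.

    A deliberately simple model for illustration; a real forecast would
    fit per-table growth rates rather than assume one linear trend.
    """
    if daily_growth_gb <= 0:
        return float("inf")  # not growing, so it never fills up
    return (capacity_gb - used_gb) / daily_growth_gb

# A cluster at 6 TB of 8 TB, growing 25 GB/day, fills up in 80 days:
print(days_until_full(6000, 8000, 25))  # 80.0
```

Even this crude projection shows why growth rates matter more than current usage: the headroom you have today says little without the slope.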

Run faster queries

Dashboard tools and collaborative analytics have made it easy to connect to data. Users can join data sets and create queries without any SQL knowledge. That becomes a problem when all your users hit the refresh button on Monday morning. With more and more requests coming in, queries start to queue up, fall back to disk or time out, including the queries that feed analytics applications. Manual scripts to find expensive queries put more load on the cluster, and often they don't produce any actionable insights.
Help your analysts run more queries faster to support better decision making. Our platform helps you cut the bottlenecks that make your queries slow.

Optimize costs

Each Redshift node comes with a fixed amount of memory. Memory is one of the most critical resources in Redshift, and every query uses some memory to run. As an administrator, it’s up to you to ensure that your queries have sufficient memory; otherwise they go “disk-based”, which increases their execution time as well as the I/O utilization of the whole cluster.
Sometimes you also waste memory by assigning it to a queue that has no queries running in it, memory that could easily be assigned to another queue. Finally, if you don’t do regular vacuum maintenance on your cluster, the system can think your queries need more memory than they actually do, so queries end up reserving memory they never use.
Our platform helps you save money by optimizing the memory allocated to your queries and ensuring you’re not wasting memory in queues that don’t need it.
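The memory accounting behind this is worth making concrete. In Redshift's WLM, each queue gets a percentage of memory, and that share is divided evenly among the queue's concurrency slots. A back-of-the-envelope sketch (the 100 GB total and the queue names below are hypothetical):

```python
def per_slot_memory_mb(total_wlm_memory_mb, queues):
    """Compute the memory available to each slot of each WLM queue.

    Redshift splits WLM memory across queues by percentage, then divides
    each queue's share evenly among its concurrency slots. Memory sitting
    in an idle queue is unavailable to queries in busy queues.
    `queues` is a list of (name, memory_percent, slot_count) tuples.
    """
    result = {}
    for name, memory_percent, slots in queues:
        queue_mb = total_wlm_memory_mb * memory_percent / 100
        result[name] = queue_mb / slots
    return result

# Hypothetical 100 GB of WLM memory split across two queues:
slots = per_slot_memory_mb(100_000, [("etl", 60, 4), ("dashboards", 40, 10)])
print(slots)  # {'etl': 15000.0, 'dashboards': 4000.0}
```

Note how raising a queue's concurrency shrinks each slot: ten dashboard slots sharing 40 GB leaves only 4 GB per query, which is exactly how queries end up going disk-based.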
Everything you need to make Redshift the core of your data platform.
Our SaaS product gathers metadata from your clusters. Our dashboard gives you direct access to all performance metrics in near real-time. With our platform, you can align your data team around a common set of performance metrics.

Predict future storage needs

With business growth comes data growth. More data leads to more analysis. When joining data sets in Redshift, analysts and algorithms create new, derived data, which can be an order of magnitude larger than the original source data. Predict future storage needs with our Storage Analytics. Understand data growth rates, and identify opportunities to save on storage.

DISK UTILIZATION Analyze disk use by node, database, schema and table. Predict storage needs with data growth rates for each individual table.

SCHEMA & TABLE SIZES Identify the top tables by size per schema and database. See all tables which need vacuuming to reclaim and reuse space.

TABLE SORTING Sort all tables by disk utilization and data growth rates. Spot tables with accelerating growth that could fill up your cluster.

VACUUM SCRIPTS Save time by downloading and running custom vacuum scripts. Reduce the storage footprint of your data.
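As a toy sketch of what such a script generator does (the table names and sort percentages below are hypothetical; a real script would read sort statistics from Redshift's SVV_TABLE_INFO system view):

```python
def vacuum_statements(tables, threshold_pct=95):
    """Generate VACUUM statements for tables below a sort threshold.

    `tables` maps "schema.table" to the percent of its rows already
    sorted; tables already sorted past the threshold are skipped, since
    vacuuming them would reclaim little space.
    """
    return [
        f"VACUUM FULL {name} TO {threshold_pct} PERCENT;"
        for name, sorted_pct in sorted(tables.items())
        if sorted_pct < threshold_pct
    ]

stmts = vacuum_statements({"public.events": 40, "public.users": 99})
print(stmts)  # ['VACUUM FULL public.events TO 95 PERCENT;']
```

Skipping already-sorted tables is the point: vacuum is expensive, so you want to run it only where it reclaims space.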

Make every query fast & efficient

Even with the power of a large Redshift cluster, queries can still get slow. Concurrent queries compete for the same memory resources, and when queries are starved for memory, they fall back to disk. Getting concurrency settings right in the WLM is only half the battle; the other half is determining the right memory allocation for your workloads. Our Memory Analytics help you find it.
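A toy illustration of the disk-based mechanism (the query ids and memory figures are hypothetical; in practice Redshift reports disk-based steps in its SVL_QUERY_SUMMARY system view):

```python
def likely_disk_based(query_stats, slot_memory_mb):
    """Return ids of queries whose working memory exceeds one WLM slot.

    Such queries spill intermediate results to disk ("disk-based"),
    slowing themselves down and adding I/O load for everyone sharing
    the cluster. `query_stats` maps query id -> peak memory in MB.
    """
    return sorted(
        qid for qid, need_mb in query_stats.items()
        if need_mb > slot_memory_mb
    )

# With 4 GB slots, queries 12 and 47 would spill to disk:
print(likely_disk_based({12: 9500, 31: 1200, 47: 6100}, 4096))  # [12, 47]
```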

MEMORY STATS Understand which users are running expensive queries that consume too much memory.

DISK-BASED QUERIES See which queries are starved for memory and falling back to disk. Debug the full query text, including the precise amount of memory the query uses and the number of rows it scanned.

MEMORY DETAILS Drill down into the single, full-text query and its concurrency. See what other queries were running at the same time, competing for memory.

Cut queue wait times for your loads and queries

Throughput Analytics help you understand your Workload Management (WLM) configuration in Redshift. Seen in the context of cluster usage, they show you where data is not flowing. Maximize cluster throughput by finding the right WLM configuration.

WLM DETAILS Intuitive auto-discovery of WLM queues, user groups, memory and concurrency settings. See all your cluster settings in a single view.

CONCURRENCY ANALYSIS Identify concurrency bottlenecks that slow down your queries. See with time-series data when queries get stuck waiting in the queue.

QUERY GROUP SUMMARY Isolate key queries with automatic grouping. Track their performance over time and identify opportunities to increase query speeds.

Uncover actionable insights

Asking simple questions about your queries and loads shouldn’t be difficult. Which users write the most expensive queries? What’s the average query latency? How much time do queries wait in the queue? Which transformations are too slow or failing? Making sense of thousands of varying data points takes a lot of time and resources. That’s why our platform lets you group queries and transfers by common traits, so you can measure performance and find insights you can act on.

LOAD & QUERY ANALYTICS Get time-series reports for your data transfers and queries, with details on transfer rates, counts, queue and execution times.

ADVANCED SEARCH Powerful full-text search capabilities across all queries, data transfers, and user activity. Narrow down your search with filters and complex rules.

TOP LOADS & QUERIES Identify your most expensive loads and queries by queue time, execution time and rows scanned. Spot the workloads that consume the most cluster resources.

QUERY DETAILS Drill down into individual loads and queries. See the full query text along with queue and execution times.


Start collaborating and sharing performance insights with your team members.

Sometimes you want to track your key queries: see if there’s a spike in memory usage, catch them slowing down, or simply confirm that your queries run faster because you’ve tuned your cluster. With Saved Searches, you can do all of that.

CUSTOM SEARCHES Save your searches to create personal dashboards relevant to achieving your performance goals.

TIME PICKER Use different time-based views of your performance data to create baselines and benchmarks.

COLLABORATION Share saved searches with your team via unique URLs, e.g. in Slack. Add custom descriptions and tags for better collaboration.

Get immediate help on your most complex Redshift problems

The use cases for Redshift differ from customer to customer. Classic reporting, log analysis, fraud detection, predictive apps – the list is endless. With different use cases come different questions. Problems are sometimes so unique and pressing, they need an immediate answer. Get instant help from our Redshift experts. We’ve seen so many Redshift clusters that we’re confident we can solve your problem. Plus, we drink our own champagne: our back-end runs on Redshift, across three continents, with dozens of different clusters and thousands of nodes, and we use our own platform to manage it all.

SLACK CHANNEL Get help via a dedicated Slack channel that feels like an extension of your data team. Iterate faster, and get answers on your Redshift configurations.

TRAINING & WORKSHOPS Train your data team on Redshift with our dedicated workshops, with custom drill-downs into your data model and workloads.

COMMUNITY Network and exchange with other Redshift users during our frequent on-site and virtual customer events, including once a year at AWS re:Invent.


Stress-free security and scalability

Redshift holds your “data crown jewels”, and we help you put that data to work. Our cloud-based platform doesn’t require planned downtime. We have limitless capacity to store your performance data and make it accessible in real-time. We follow industry-standard protocols for storage, backup, and redundancy. And we safeguard all information with government-level encryption. Like the Queen’s Guards.

WORLD CLASS SECURITY We protect all information with government-level encryption. Our security program—people, process, and privacy—is designed to protect your cluster.

DATA PRIVACY With our multi-region infrastructure, you can run in a region of your choice, aligned with where your data resides.

DESIGNED FOR DEVELOPERS Our REST APIs make it easy to connect the dots between all the solutions in your ecosystem. We know that because we built our own dashboard on top of our API.


Your journey with us

“Redshift is a black box” is a phrase we hear a lot from our customers when we start working with them. Some customers want to enable simple self-service analytics for their company. Others want to build predictive apps on top of Redshift. What they all have in common is that they'd like to invest more into Redshift, but are afraid to do so because of performance issues.
By the time they come to us, they've pretty much tried everything else. Switching cluster types. Adding more nodes. Hiring a Redshift consultant. Writing performance scripts. Yet the performance issues persist. Here's how we help you.


It's time to start reviewing your cluster performance. During the free trial, our Redshift experts become an extension of your team. Patterns can be similar, but no two clusters are alike. Our dashboard helps you spot the performance issues specific to your business. Together, we define a set of baseline metrics that measure success for your Redshift operations. The baseline metrics help track how changes to cluster settings affect performance.
Learn how concurrency and memory settings affect data throughput. For the first time, you'll be able to understand and measure your cluster performance. Why queries are non-concurrent. How much memory they use. How fast your disk is filling up. Where better vacuuming is necessary.


Track your baseline metrics and watch data throughput go up. Take more aggressive actions like adding more load, and see throughput rise further. Remove nodes or merge clusters to save on spend. Be more confident in building on Redshift, because you can see in real-time how performance changes with changing workloads. Set up alerts and saved searches to stay ahead of issues before they become avalanches.


During onboarding, you will learn how to navigate the dashboard and how to analyze the huge amount of cluster information available. Instrumenting your Redshift cluster takes a matter of minutes, and performance data shows up within minutes as well. You’ll understand how your workloads are impacting your resources. See everything in one place: data loads, batch jobs, ad-hoc queries, queues, users. Everything.


In this phase, we'll start changing your Redshift settings. The baseline metrics help track how changes affect performance. We work with you to isolate and fix the most pressing issues for your cluster. And move the baseline metrics to a point where you see lasting improvements.
Actions can include redefining user groups, concurrency settings and memory allocation. Free up disk space by vacuuming stale tables. Identify tables that nobody uses yet still take up space. Drive down per-query memory usage.


Data throughput is up. So is confidence in working with Redshift. Now give your entire company self-service access to data residing in Redshift. Fast dashboards, no matter how many users and queries. Enable your teams to run their own transformations and build products on top of data. Open up programmatic access to data sitting in Redshift. Be confident in adding more load to your cluster.
About Us
We want to solve the biggest problem when it comes to Redshift performance: everyone is guessing. Data scientists don't know why their queries are slow or why dashboards don't refresh. Data engineers aren't sure which workload settings to pick, why throughput is low or when a disk will fill up. CTOs struggle to predict how much to budget for Redshift.
Our mission is to make Redshift performance more transparent.

Today we provide the most actionable performance insights in the industry. We want to make this data available to as many Redshift users as possible.


Paul Lappas
Cloud Computing Pioneer


Dave Steinhoff
Database Inventor


Lars Kamp
Client Service Fanatic

Paul immigrated to the US at the age of two. Growing up, he worked the kitchen in his parents’ Greek deli in New Jersey. Paul’s still a stellar cook today.

Paul studied Electrical Engineering at the University of Virginia. His first data engineering role involved mining raw satellite telemetry data for the Iridium satellite phone constellation, where he discovered signal interference from pirate radio stations around the world, which eventually led to Motorola selling off the business.

Paul co-founded GoGrid, one of the early cloud computing companies, and grew the business from scratch to over $50M ARR. GoGrid was the first service provider with a dashboard to make configuring cloud servers understandable for humans. Paul holds multiple patents for cloud computing and performance analytics. He’s also a member of the AWS Customer Advisory Board.

When he’s not working, Paul is a husband and father. You can spot him on Saturdays in North Beach in San Francisco, scouring the local delis and farmers markets for fresh ingredients.

Dave likes playing pool and inventing databases. In that order. Dave is one of the original creators of the technology behind Redshift. He was a founder and chief architect of ParAccel (later bought by Actian). In 2012, Amazon acquired the ParAccel technology to use it as the foundation for Redshift. When you’re setting your sort and dist keys in Redshift (you are, right?), you can thank Dave for that because he’s the guy who invented them. Dave is the author of multiple patents on enhancing data throughput for data warehouses.

When Paul and Lars called on Dave to join as an advisor, it didn’t take long for him to say “yes”. Dave calls it “unfinished business” to give customers a better way to be in control of their Redshift workloads.

When not helping customers with their Redshift clusters, there’s a high chance you’ll find Dave in a pool hall.

Lars found his way from his native Germany to the US as an exchange student in mile-high Bend, Oregon. For college, he returned to Germany and Italy. His work experience while in college includes selling steel tubes to German shipyards and specialty chemicals for the die-casting industry in Lombardy. Turning his back on heavy industries, Lars joined Accenture’s High Tech Practice. 

From his long time at Accenture, Lars is probably best known for leading the effort to form Accenture Digital. Accenture Digital launched to help companies address the big shift to cloud and mobile computing. That included investments into a nascent cloud offering which today is the foundation for the Accenture AWS Business Group.

In his free time, you’ll find Lars skiing the Sierra Nevada with his family.

Companies who trust us

Make your queries 100x faster
Request a free demo