At intermix.io, we work with companies that build data pipelines. Some start cloud-native on platforms like Amazon Redshift, while others migrate from on-premise or hybrid solutions. What they all have in common is the one question they ask us at the very beginning:
“How do other companies build their data pipelines?”
That’s why we decided to compile and publish a list of publicly available blog posts about how companies build their data pipelines. In those posts, the companies talk in detail about how they’re using data in their business and how they’ve become data-centric.
The 15 companies we’ve looked at are:
Table of Contents
If we missed your post, we’re happy to include it. Just fill out this form, which will take you less than a minute. And with that – please meet the 15 examples of data pipelines from the world’s most data-centric companies.
Becoming data-driven is the main goal for Simple. It’s important for the entire company to have access to data internally. Instead of having the analytics and engineering teams jump from one problem to the next, a unified data architecture spanning all departments in the company allows a consistent way of doing analytics.
The main problem then is how to ingest data from multiple sources, process it, store it in a central data warehouse, and present it to staff across the company. Similar to many solutions nowadays, data is ingested from multiple sources into Kafka before passing it to compute and storage systems.
The warehouse of choice is Redshift, selected because of its SQL interfaces and the ease with which it processes petabytes of data. Reports, analytics, and visualizations are powered using Periscope Data. In such a way, the data is easily spread across different teams, allowing them to make decisions based on data.
Clearbit was a rapidly growing, early-stage startup when it started thinking of expanding its data infrastructure and analytics. They tried out a few out-of-the-box analytics tools, each of which failed to satisfy the company’s demands.
After that, Clearbit took building the infrastructure in their own hands. Their efforts converged into a trio of providers: Segment, Redshift, and Mode. Segment is responsible for ingesting all kinds of data, combining it, and syncing it daily into a Redshift instance. The main data storage is obviously left to Redshift, with backups into AWS S3.
Finally, since Redshift supports SQL, Mode is perfectly suited for running queries (while using Redshift’s powerful data processing abilities) and creating data insights.
Mode makes it easy to explore, visualize, and share that data across your organization.
But as data volume grows, data warehouse performance goes down. With ever-increasing calls to your data from analysts, your cloud warehouse becomes the bottleneck.
That’s why we’ve built intermix.io to provide Mode users with all the tools they need to optimize their queries running on Amazon Redshift. Here is one of our dashboards, which shows how you can track queries from Mode down to the individual user:
The whole data architecture at 500px is mainly based on two tools: Redshift for data storage; and Periscope for analytics, reporting, and visualization. From a customer-facing side, the company’s web and mobile apps run on top of a few API servers, backed by several databases – mostly MySQL. Data from these DBs passes through a Luigi ETL, before moving to storage on S3 and Redshift.
Splunk here does a great job of querying and summarizing text-based logs. Periscope Data is responsible for building data insights and sharing them across different teams in the company. All in all, this infrastructure supports around 60 people distributed across a couple of teams within the company, prior to their acquisition by Visual China Group.
The data infrastructure at Netflix is one of the most sophisticated in the world. The video streaming company serves over 550 billion events per day, equal to roughly 1.3 petabytes of data. In general, Netflix’s architecture is broken down into smaller systems, such as systems for data ingestion, analytics, and predictive modeling.

The data stack at the core of Netflix is mainly based on Apache Kafka for real-time (sub-minute) processing of events and data. Data needed in the long term is sent from Kafka to AWS’s S3 and EMR for persistent storage, but also to Redshift, Hive, Snowflake, RDS, and other services used by different sub-systems. Metacat is built to make sure the data platform can interoperate across these data sets as one “single” data warehouse. Its task is to connect different data sources (RDS, Redshift, Hive, Snowflake, Druid) with different compute engines (Spark, Hive, Presto, Pig). Other Kafka outputs lead to a secondary Kafka sub-system, predictive modeling with Apache Spark, and Elasticsearch. Operational metrics don’t flow through the data pipeline but through a separate telemetry system named Atlas.
The tech world has seen dramatic changes since Yelp was launched back in 2004. By 2012, Yelp found itself playing catch-up. It transformed from running a huge monolithic application on-premises to one built on microservices running in the AWS cloud. By the end of 2014, there were more than 150 production services running, with over 100 of them owning data. The main part of its cloud stack is PaaSTA, a platform based on Mesos and Docker, offloading data to a Redshift data warehouse, Salesforce CRM, and Marketo marketing automation. Data enters the pipeline through Kafka, which in turn receives it from multiple different “producer” sources.
Gusto, founded in 2011, is a company that provides a cloud-based payroll, benefits, and workers’ compensation solution for businesses. Their business has grown steadily over the years, currently topping around 60,000 customers. By early 2015, there was a growing demand within the company for access to data. Up until then, the engineering team and product managers were running their own ad-hoc SQL scripts on production databases. There was obviously a need to build a data-informed culture, both internally and for their customers. Faced with the choice of building either a data science or a data engineering team first, Gusto seems to have made the right call: first build a data infrastructure that can support analysts in generating insights and drawing prediction models.
The first step for Gusto was to replicate and pipe all of their major data sources into a single warehouse. The warehouse choice landed on an AWS Redshift cluster, with S3 as the underlying data lake. Moving data from production app databases into Redshift was then facilitated with Amazon’s Database Migration Service. On the other side of the pipeline, Looker is used as a BI front-end that teams throughout the company can use to explore data and build core dashboards. Aleph is a shared web-based tool for writing ad-hoc SQL queries. Finally, monitoring (in the form of event tracking) is done by Snowplow, which can easily integrate with Redshift. And, as usual, Airflow orchestrates the work through the pipeline.
Building this pipeline helped to simplify data access and manipulation across departments. For instance, analysts can simply build their own datasets as part of an Airflow task and expose them to Looker for use in dashboards and further analyses.
Teads is a video advertising marketplace, often ranked as the number one video platform in the world. Working with data-heavy videos must be supported by a powerful data infrastructure, but that’s not the end of the story. Teads’ business needs to log user interactions with their videos through the browser – functions like play, pause, resume, complete – which count up to 10 million events per day. Another source of data is video auctions with a real-time bidding process. These generate another 60 million events per day. To build their complex data infrastructure, Teads has turned to both Google and Amazon for help.
Originally the data stack at Teads was based on a lambda architecture, using Storm, Spark and Cassandra. This architecture couldn’t scale well, so the company turned toward Google’s BigQuery in 2016. They already had their Kafka clusters on AWS, which was also running some of their ad delivery components, so the company chose a multi-cloud infrastructure. Transferring data between different cloud providers can get expensive and slow. To address the second part of this issue, Teads placed their AWS and GCP clouds as close as possible and connected them with managed VPNs.
So how does their complex multi-cloud data stack look? Well, first of all, data coming from users’ browsers and data coming from ad auctions is enqueued in Kafka topics in AWS. Then using an inter-cloud link, data is passed over to GCP’s Dataflow, which is then well paired with BigQuery in the next step. Having all data in a single warehouse means half of the work is done. The next step would be to deliver data to consumers, and Analytics is one of them. The Analytics service at Teads is a Scala-based app that queries data from the warehouse and stores it to tailored data marts. Interestingly, the data marts are actually AWS Redshift servers. In the final step, data is presented into intra-company dashboards and on the user’s web apps.
Remind’s data engineering team provides the whole company with access to the data they need, as many as 10 million daily events, and empowers them to make decisions directly. They started with Redshift as their source of truth for data, and AWS S3 to optimize for cost.
While S3 is used for long-term storage of historical data in JSON format, Redshift only stores the most valuable data, not older than three months. The company uses Interana to run custom queries on their JSON files on S3, but they’ve also recently started using AWS Athena as a fully managed Presto system to query both S3 and Redshift databases.
The move to Athena also triggered a change in the data format from JSON to Parquet, which they say was the hardest step in building up their data platform. An EMR/Hive system is responsible for doing the needed data transformations between S3 and Athena. On the data ingestion side, Remind gathers data through their APIs from both mobile devices and personal computers, as the company’s business targets schools, parents, and students. This data is then passed to a streaming Kinesis Firehose system before streaming out to S3 and Redshift.
Remind’s future plans are probably focused on facilitating data format conversions using AWS Glue. This step would allow them to replace EMR/Hive from their architecture and use Spark SQL instead of Athena for diverse ETL tasks.
Robinhood is a stock brokerage application that democratizes access to the financial markets, enabling customers to buy and sell stocks and ETFs with zero commission. The company debuted with a waiting list of nearly 1 million people, which means they had to pay attention to scale from the very beginning.
Robinhood’s data stack is hosted on AWS, and the core technology they use is ELK (Elasticsearch, Logstash, and Kibana), a tool for powering search and analytics. Logstash is responsible for collecting, parsing, and transforming logs before passing them on to Elasticsearch, while data is visualized through Kibana.
They grew from a single ELK cluster with a few GBs of data to three clusters with over 15 TBs. Before data goes to ELK clusters, it is buffered in Kafka, as the various data sources generate documents at differing rates.
Kafka also shields the system from failures and communicates its state with data producers and consumers. As with many other companies, Robinhood uses Airflow to schedule various jobs across the stack, having chosen it over competitors such as Pinball, Azkaban, and Luigi. Robinhood’s data science team uses Amazon Redshift to help identify possible instances of fraud and money laundering.
Dollar Shave Club (DSC) is a lifestyle brand and e-commerce company that’s revolutionizing the bathroom by inventing smart, affordable products. Don’t be fooled by their name. They have a pretty cool data architecture for a company in the shaving business. Their business model works with online sales through a subscription service. Currently, they serve around 3 million subscribed customers.
DSC’s web applications, internal services, and data infrastructure are 100% hosted on AWS. A Redshift cluster serves as the central data warehouse, receiving data from various systems. Data movement is facilitated with Apache Kafka and can move in different directions – from production DBs into the warehouse, between different apps, and between internal pipeline components.
There’s also Snowplow, which collects data from the web and mobile clients. Once data reaches Redshift, it is accessed through various analytics platforms for monitoring, visualization, and insights. The main tool for the job is, of course, Apache Spark, which is mainly used to build predictive models, such as recommender systems for future sales.
Coursera is an education company that partners with the top universities and organizations in the world to offer online courses. They started building their data architecture somewhere around 2013, as both numbers of users and available courses increased. As of late 2017, Coursera provides courses to 27 million worldwide users.
Coursera collects data from its users through API calls coming from mobile and web apps, their production DBs, and logs gathered from monitoring. A backend service called “eventing” periodically uploads all received events to S3 and continuously publishes events to Kafka. The engineering team has selected Redshift as its central warehouse, offering much lower operational cost when compared with Spark or Hadoop at the time.
On the analytics end, the engineering team created an internal web-based query page where people across the company can write SQL queries to the warehouse and get the information they need. Of course, there are company-wide analytics dashboards that are refreshed on a daily basis. Finally, many decisions made in Coursera are based on machine learning algorithms, such as A/B testing, course recommendations, and understanding student dropouts.
Wish is a mobile commerce platform. It provides online services that include media sharing and communication tools, personalized and other content, as well as e-commerce. Over the last few years, it has grown to 500 million users, outgrowing its original data architecture.
Before they scaled up, Wish’s data architecture had two different production databases: a MongoDB NoSQL database storing user data; and a Hive/Presto cluster for logging data. Data engineers had to manually query both to respond to ad-hoc data requests, and this took weeks at some points. Another small pipeline, orchestrated by Python Cron jobs, also queried both DBs and generated email reports.
After rethinking their data architecture, Wish decided to build a single warehouse using Redshift. Data from both production DBs flowed through the data pipeline into Redshift. BigQuery is also used for some types of data. It feeds data into secondary tables needed for analytics. Finally, analytics and dashboards are created with Looker.
Blinkist transforms the big ideas from the world’s best nonfiction books into powerful little packs users can read or listen to in 15 minutes. At first, they started selling their services through a pretty basic website, and they monitored statistics through Google Analytics. Unfortunately, visitor statistics gathered from Google Analytics didn’t match the figures the engineers were computing. This is one of the reasons why Blinkist decided to move to the AWS cloud.
They chose a central Redshift warehouse where data flows in from user apps, the backend, and the web front-end (for visitor tracking). To get data to Redshift, they stream it with Kinesis Firehose, also using Amazon CloudFront, Lambda, and Pinpoint. The engineering team at Blinkist is working on a newer pipeline where ingested data comes to Alchemist before passing to a central Kinesis system and onward to the warehouse.
Healthcare platform Halodoc found themselves with a common startup problem: scalability. Their existing data pipeline worked on a batch processing model, with regularly scheduled extractions for each source. They performed extractions with various standard tools, including Pentaho, AWS Database Migration Service, and AWS Glue.
They would load each export to S3 as a CSV or JSON, and then replicate it on Redshift. At this point, they used a regular Pentaho job to transform and integrate data, which they would then load back into Redshift.
As Halodoc’s business grew, they found that they were handling massive volumes of sensitive patient data that had to get securely and quickly to healthcare providers. The Pentaho transformation job, installed on a single EC2 instance, was a worrying single point of failure.
Halodoc looked at a number of solutions and eventually settled on Apache Airflow as a single tool for every stage of their data migration process. They chose Airflow because it’s highly responsive and customizable, with excellent error control. It also supports machine learning use cases, which Halodoc requires for future phases.
The new data pipeline is much more streamlined. Halodoc uses Airflow to deliver both ELT and ETL. In their ETL model, Airflow extracts data from sources. It then passes through a transformation layer that converts everything into pandas data frames. The data frames are loaded to S3 and then copied to Redshift. Airflow can then move data back to S3 as required.
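As a rough illustration of that transformation layer, here is a minimal sketch in Python using pandas. The field names, cleaning rules, and in-memory buffer (standing in for the S3 load) are all invented for the example; this is not Halodoc’s actual code.

```python
import io

import pandas as pd


def transform(records):
    """Convert raw event records into a cleaned pandas DataFrame."""
    df = pd.DataFrame(records)
    df = df.dropna(subset=["user_id"])         # drop malformed events
    df["amount"] = df["amount"].astype(float)  # normalize numeric types
    return df


def load_to_buffer(df):
    """Stand-in for the S3 load step: serialize the frame as CSV."""
    buf = io.StringIO()
    df.to_csv(buf, index=False)
    return buf.getvalue()


records = [
    {"user_id": 1, "amount": "9.99"},
    {"user_id": None, "amount": "0.00"},  # no user; dropped by transform
    {"user_id": 2, "amount": "4.50"},
]
csv_body = load_to_buffer(transform(records))
```

From here, the CSV body would be written to S3 and copied into Redshift, as described above.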
For ELT, the Airflow job loads data directly to S3. Halodoc then uses Redshift’s processing power to perform transformations as required.
iHeartRadio is a global streaming platform for music and podcasts. It runs on a sophisticated data structure, with over 130 data flows, all managed by Apache Airflow. These data pipelines were all running on a traditional ETL model: extracted from the source, transformed by Hive or Spark, and then loaded to multiple destinations, including Redshift and RDBMSs.
On reviewing this approach, the engineering team decided that ETL wasn’t the right approach for all data pipelines. Where possible, they moved some data flows to an ELT model. Data flows directly from source to destination – in this instance, Redshift – and the team applies any necessary transformations afterward. Redshift Spectrum is an invaluable tool here, as it allows you to use Redshift to query data directly on S3 via an external meta store, such as Hive.
However, this model still didn’t suit all use cases. The iHeartRadio team began experimenting with the ETLT (Extract, Transform, Load, Transform) model, which combines aspects of ETL and ELT. In this approach, the team extracts data as normal, then uses Hive for munging and processing. They then load the data to the destination, where Redshift can aggregate the new data.
Now, the team uses a dynamic structure for each data pipeline, so data flows might pass through ETL, ELT, or ETLT, depending on requirements. This new approach has improved performance by up to 300% in some cases, while also simplifying and streamlining the entire data structure.
We hope the 15 examples in this post offer you the inspiration to build your own data pipelines in the cloud.
If you don’t have any data pipelines yet, it’s time to start building them. Begin with baby steps: spin up an Amazon Redshift cluster, ingest your first data set, and run your first SQL queries.
After that, you can look at expanding by acquiring an ETL tool, adding a dashboard for data visualization, and scheduling a workflow, resulting in your first true data pipeline. And once data is flowing, it’s time to understand what’s happening in your data pipelines.
That’s why we built intermix.io. We give you a single dashboard to understand when & why data is slow, stuck, or unavailable.
With intermix.io you can:
Our customers have the confidence to handle all the raw data their companies need to be successful. What you get is a real-time analytics platform that collects metrics from your data infrastructure and transforms them into actionable insights about your data pipelines, apps, and users who touch your data.
Setting up intermix.io takes less than 10 minutes, and because you can leverage our intermix.io experts, you can say goodbye to paying for a team of experts with expensive and time-consuming consulting projects. We can help you plan your architecture, build your data lake and cloud warehouse, and verify that you’re doing the right things.
Monitoring your data apps with app tracing gives you better control and can help train new hires to use your tools. Amazon Redshift training, for example, can become much easier when you use intermix.io App Tracing. If you don’t know how monitoring data apps benefits your organization and tools, the following article will give you a few ideas.
App Tracing surfaces important information about how apps & users interact with your data. It can help answer questions like:
Data apps typically fall into one or more of three categories:
Intermix can fulfill all three of these functions. It integrates with your favorite tools to show you which end-users connect to your data warehouse. It comes ready to integrate with popular tools like:
Whether you want to connect your data to more BI tools or you want insights that lead to better Amazon Redshift training, Intermix can help.
Intermix App Tracing thrives on visualization and analysis. If you want to know how much data an app uses, just click on the icon to get a graph that shows a timeline of the app’s data usage.
You can also get visualization analysis to learn:
Visualization makes information easier for everyone to understand. Whether you have years of experience in IT or you just want an overview of your data use, Intermix’s App Tracing gives you a straightforward approach to view information.
App Tracing requires the data app to annotate the executed SQL with a comment. The comment encodes metadata about the application which submitted this query.
Intermix.io will automatically index all data contained in the annotation and make it accessible as first-class labels in our system, e.g. for Discover searches, Saved Searches, and aggregations on the Throughput Analysis page.
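For illustration, a data app might build such an annotation along these lines. The comment marker and metadata keys below are hypothetical, invented for this sketch rather than taken from intermix.io’s actual annotation format.

```python
import json


def annotate_query(sql, app, user, dashboard):
    """Prepend a SQL comment carrying app metadata to a query.
    The marker and key names here are made up for this example."""
    meta = json.dumps({"app": app, "user": user, "dashboard": dashboard})
    return f"/* app-metadata: {meta} */\n{sql}"


query = annotate_query("SELECT count(*) FROM events;", "looker", "248", "sales-kpis")
```

The warehouse ignores the comment, while the tracing system can parse the metadata back out of the query text it sees.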
Out of the box, we support:
In the below example, a query spike in WLM 3 causes a bottleneck in query latency. The result is that queries which would otherwise take 13-14 seconds to execute are stuck in the queue for more than 3 minutes.
App Tracing detects that the majority of these queries are from Looker. How do you know which user is causing this?
Click on the chart, and a widget will pinpoint the specific Looker user(s) who ran those queries. In this example, we see that user 248 is responsible.
Armed with this information, you can now:
See all the activity for this user by heading to Discover and using the new ‘App’ filter to search for Looker user 248.
To set up an alarm to get email notifications, save that search and stream the following metrics to CloudWatch:
The following Slack conversation took place the morning we soft-launched app tracing in June 2018:
Intermix.io makes Amazon Redshift more powerful. We’ve seen clients use Intermix.io to:
If you’re using Amazon Redshift in combination with Apache Airflow, and you’re trying to monitor your DAGs – we’d love to talk! We’re running a private beta for a new Airflow plug-in with a few select customers. Go ahead and click on the chat widget on the bottom right of this window. Answer three simple questions, schedule a call, and then mention “Airflow” at the end and we’ll get you set up! As a bonus, we’ll throw in an extended trial of 4 weeks instead of 2!
Amazon Redshift is an excellent choice for cloud data warehousing—but how do you move your data into Redshift in the first place, so that it can be used for queries and analysis? Redshift users have two main options:
In this post, we’ll discuss an optimization you can make when choosing the first option: improving performance when copying data into Amazon Redshift.
The Amazon Redshift COPY command loads data into a table. The files can be located in an Amazon S3 bucket, an Amazon EMR cluster, a remote host that is accessed using SSH, or an Amazon DynamoDB table.
There are a few things to note about using the Redshift COPY command:
Per this last note, the recommended way of deduplicating records in Amazon Redshift is to use an “upsert” operation. In the next section, we’ll take a closer look at upserts.
UPSERT is a method of deduplicating data when copying into Amazon Redshift or other databases. An “upsert” operation merges new records with existing records using primary keys.
While some relational database management systems support a single UPSERT command, Amazon Redshift does not. Instead, Redshift recommends the use of a staging table for merging records by joining the staging table with the target table.
Below is an example of an upsert operation for Amazon Redshift:
-- Start transaction
BEGIN;

-- Create a temp table to load new customer data
CREATE TEMPORARY TABLE temp_customer (LIKE customer);

-- Load new customer data into the staging table
-- (replace the bucket and IAM role with your own)
COPY temp_customer (custid, name, email)
FROM 's3://mybucket/new_customers/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole';

-- Update customer email and name for existing rows
UPDATE customer c
SET name = t.name, email = t.email
FROM temp_customer t
WHERE c.custid = t.custid;

-- Insert records that don't yet exist in the target table
INSERT INTO customer
SELECT t.* FROM temp_customer t
LEFT JOIN customer c ON t.custid = c.custid
WHERE c.custid IS NULL;

-- End transaction. Note that the temp table will automatically
-- be dropped at the end of the session
END;
By default, the Redshift COPY command automatically runs two commands as part of the COPY transaction: “COPY ANALYZE” and “ANALYZE COMPRESSION”.
Redshift runs these commands to determine the correct encoding for the data being copied, which may be useful when a table is empty. In the following cases, however, the extra queries are useless and should be eliminated:
In the below example, a single COPY command generates 18 “analyze compression” commands and a single “copy analyze” command:
Extra queries can create performance issues for other queries running on Amazon Redshift. For example, they may saturate the number of slots in a WLM queue, thus causing all other queries to have wait times.
The solution is to adjust the COPY command parameters to add “COMPUPDATE OFF” and “STATUPDATE OFF”, which will disable these features during upsert operations. Below is an example of a COPY command with these options set:
-- Load data into the staging table
-- (replace the bucket and IAM role with your own)
COPY users_staging (id, name, city)
FROM 's3://mybucket/users_staging/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
COMPUPDATE OFF STATUPDATE OFF;
Improving Redshift COPY performance is just one way to perform Redshift performance tuning. That’s why we’ve built intermix.io, a powerful Redshift analytics platform that provides a single user-friendly dashboard to easily monitor what’s going on in your AWS environment. Want to try it out for yourself? Sign up today for a free trial.
From Intermix’s Amazon Redshift Training series, learn how to build a stable data pipeline using these three simple steps.
When you’re building or improving your data pipeline, batch data processing seems simple. Pull data from a source, apply some business logic to it, and load it for later use. When done well, automating these jobs is a huge win. It saves time and empowers decision-makers with fresh and accurate data.
But this kind of ETL (Extract, Transform, and Load) process gets very tricky, very quickly. A failure or bug in a single step of an ETL process has cascading effects that can require hours of manual intervention and cleanup. These kinds of issues can be a huge time sink for the analysts or engineers that have to clean up the mess. They negatively impact data quality. They erode end-user confidence. And they can fuel anxiety when pushing out even the most innocuous changes to a data pipeline.
To avoid falling into this kind of ETL black hole, analysts and engineers need to make data pipelines that:
In short, they need to make data pipelines that are idempotent.
Idempotent is a mathematical term describing an operation that produces the same result no matter how many times you apply it. Data pipelines built this way are fault-tolerant, reliable, and easy to troubleshoot. And they’re much simpler to build than you might expect.
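A toy example makes the property concrete. The operation below can be applied to its own output any number of times without changing the result:

```python
def normalize(path):
    """Collapse duplicate slashes. Applying normalize to its own
    output returns the same value, so the operation is idempotent."""
    while "//" in path:
        path = path.replace("//", "/")
    return path


once = normalize("data//2018//07/")
twice = normalize(once)  # same result as applying it once
```

An idempotent pipeline has the same shape: rerunning it over the same input leaves the warehouse in the same state as running it once.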
Here are three simple steps you can take to make your data pipeline idempotent.
Amazon recommends several best practices for loading data. The easiest way to be sure that a buggy or failed ETL run doesn’t dirty up your data is to isolate the data that you deal with in a given run.
If you’re working with time-series data, this is reasonably straightforward. When pulling data from your initial source, filter it by time. Operate on an hour, a day, or a month, depending on the volume of your data. By operating on data in time-windowed batches like this, you can be confident that any bugs or failures aren’t impacting your data quality beyond a single window.
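Sketched in Python (the event shape is invented for illustration), a run that owns a one-day window might select its batch like this:

```python
from datetime import datetime, timedelta


def window_for_run(run_date):
    """Return the half-open [start, end) day window a run owns."""
    start = datetime(run_date.year, run_date.month, run_date.day)
    return start, start + timedelta(days=1)


def select_batch(events, run_date):
    """Keep only events inside this run's window, so a rerun touches
    exactly the same slice of data and nothing else."""
    start, end = window_for_run(run_date)
    return [e for e in events if start <= e["ts"] < end]


events = [
    {"id": 1, "ts": datetime(2018, 7, 20, 9, 30)},
    {"id": 2, "ts": datetime(2018, 7, 21, 0, 0)},  # next day's run owns this
]
batch = select_batch(events, datetime(2018, 7, 20))
```

The half-open window guarantees that adjacent runs never overlap, so no event is ever processed twice.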
If you’re working with cross-sectional data, try to deal with some explicitly-defined subset of that data in a given run. Knowing exactly what data has been impacted by a bug or failure makes clean up and remediation of data pipeline issues relatively straightforward. Just delete the affected data from your final destination and then run again.
If you isolate your data correctly, you never have to worry about duplicated or double-counted data points.
It’s important to build your pipeline so that it either loads everything or nothing at all. This will help to avoid time-consuming cleanup of failed or buggy pipeline runs. That is to say, your runs should be atomic.
If you’re loading into a SQL database, achieving atomicity for your data pipeline is as simple as wrapping all of your loading statements (inserts, copies, creates, or updates) in a transaction, with a BEGIN statement at the start and an END or COMMIT statement at the end.
If the final destination for your data is not a SQL database (say, an Elasticsearch cluster or an Amazon S3 bucket), building atomicity into your pipeline is a little trickier. But it’s not impossible. You’ll simply need to implement your own “rollback” logic within the load step.
For instance, if you’re writing CSV files to an Amazon S3 bucket from a python script, maintain a list of the resource identifiers for the files you’ve written. Wrap your loading logic in a try-except-finally statement which deletes all of the objects you’ve written to if the script errors out.
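Here is a minimal sketch of that rollback pattern. An in-memory object stands in for the S3 client (e.g. boto3), since the exact client API isn’t specified here:

```python
def load_files(client, bucket, files):
    """Upload files, deleting everything written so far if any
    upload fails, so a failed run leaves no partial data behind."""
    written = []
    try:
        for key, body in files:
            client.put(bucket, key, body)
            written.append(key)
    except Exception:
        for key in written:  # roll back this run's writes
            client.delete(bucket, key)
        raise


class FakeClient:
    """In-memory stand-in for an object store client."""
    def __init__(self, fail_on=None):
        self.objects = {}
        self.fail_on = fail_on

    def put(self, bucket, key, body):
        if key == self.fail_on:
            raise RuntimeError("upload failed")
        self.objects[(bucket, key)] = body

    def delete(self, bucket, key):
        del self.objects[(bucket, key)]


client = FakeClient(fail_on="b.csv")
try:
    load_files(client, "exports", [("a.csv", "1,2"), ("b.csv", "3,4")])
except RuntimeError:
    pass  # the failed run rolled back its own writes
```

After the failed run, the bucket is empty again: the first file was uploaded, the second failed, and the rollback removed the first.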
By making your data pipeline atomic, you prevent the need for time-consuming cleanup after failed runs.
Related Reading: Troubleshooting Data Loading Errors in Amazon Redshift
Now that you’ve isolated the data your pipeline deals with in a given run and implemented transactions, having your pipeline clean up after itself is a breeze. You just need to delete before you insert. This is the most efficient way to avoid loading duplicate data.
Say your data pipeline is loading all of your company’s sales data from the previous month into a SQL database. Before you load it, delete any data from last month that’s already in the database. Be sure to do this within the same transaction as your load statement. This may seem scary. But if you’ve implemented your transaction correctly, your pipeline will never delete data without also replacing it.
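Here is a runnable sketch of that delete-then-insert pattern, using SQLite as a stand-in for the warehouse (the table and column names are invented). Running the load twice leaves exactly the same rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (contract_number INTEGER, revenue REAL, month TEXT)")


def load_month(conn, month, rows):
    """Idempotent load: delete the month's rows and reinsert them
    inside one transaction, so reruns never duplicate data."""
    with conn:  # commits on success, rolls back on error
        conn.execute("DELETE FROM sales WHERE month = ?", (month,))
        conn.executemany(
            "INSERT INTO sales VALUES (?, ?, ?)",
            [(c, r, month) for c, r in rows],
        )


rows = [(5439, 50000.0), (5440, 12000.0)]
load_month(conn, "2018-07", rows)
load_month(conn, "2018-07", rows)  # rerun of the same window
count = conn.execute("SELECT count(*) FROM sales").fetchone()[0]
```

Because the delete and the insert share one transaction, a failure mid-run rolls both back, and a rerun always converges to the same two rows.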
Many SQL dialects have native support for this type of operation, typically referred to as an upsert. For instance, in PostgreSQL (the dialect Amazon Redshift is based on; note that Redshift itself doesn’t support ON CONFLICT, as discussed above) you could prevent loading duplicate data with the following:
INSERT INTO sales (employee, contract_number, annual_revenue, date)
VALUES ('Janet', 5439, 50000, '2018-07-20')
ON CONFLICT (contract_number)
DO UPDATE SET employee = excluded.employee;
This SQL statement will attempt to insert a new row into the database. If a row already exists with contract number 5439, the existing row will be updated with the new employee name. In this way, duplicate rows for the same contract will never load into the database.
By having your pipeline clean up after itself and prevent the loading of duplicate data, backfilling data becomes as simple as rerunning. If you discover a bug in your business logic or you want to add a new dimension to your data, you can push out a change and backfill your data without any manual intervention whatsoever.
ETL pipelines can be fragile, difficult to troubleshoot, and labor-intensive to maintain. They’re going to have bugs. They’re going to fail once in a while.
The key to building a stable, reliable, fault-tolerant data pipeline is not to build it so that it always runs perfectly. The key is to build it in such a way that recovering from bugs and failures is quick and easy.
The best data pipelines can recover from bugs and failures without ever requiring manual intervention. By applying the three simple steps outlined above, you’re well on your way to achieving a more fault-tolerant, idempotent data pipeline.
We’ve built an ETL performance monitoring solution with end-to-end visibility that’s used by top analytics teams. Discover more at intermix.io.
Our Amazon Redshift training classes cover strategies and best practices for designing a data platform using Amazon Redshift. The concepts we cover focus on cluster stability, engineering productivity, and query/dashboard speeds. The sessions have hands-on demos and individual coaching, arming you with the right tools and knowledge critical to be more productive with your data. You will also connect with other data engineers who are facing the same challenges as you are.
When it comes to Redshift, there’s an elephant in the room. Everyone is struggling. Redshift is incredibly powerful, but it can also be painful to work with. Success is more guesswork than evidence-based, and you can spend a lot of time fighting fires. This Amazon Redshift training program fixes that. The concepts of this training will help you establish better control of the platform, so you can spend more time being creative with your data.
The class includes hands-on activities, for your own cluster, focused on teaching you how to process a massive volume of queries in a short time. You will also learn how to discover and prevent problems with your cluster and run blazing fast dashboards for your data consumers.
There is ample time during the class for questions and discussion. The training also includes:
The Intermix team has people with years of experience working with Amazon Redshift. We have professionals who have worked on the technology that drives Redshift from the ground up. Our team members hold patents for performance analytics and cloud computing, in addition to patents for enhancing data throughput for data warehouses.
Hungry for more knowledge? Head over to the Intermix blog for in-depth articles on best practices for ETL and data engineering. Learn fine-tuning tips for short-query acceleration, things to avoid when setting up your Redshift cluster, and a whole lot more.
We will see you at the training!