This page provides instructions for extracting data from Amazon Aurora and loading it into Snowflake. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Amazon Aurora?
Amazon Aurora is a MySQL-compatible relational database engine. It's used by those who want better performance than a traditional MySQL database at a cost-effective price point. As a result, Aurora is largely used as a transactional or operational database, and it is by no means optimized for analytics.
Snowflake is a data warehouse solution that is entirely cloud-based. It's a managed service: if you don't want to deal with hardware, software, or upkeep for a data warehouse, you're going to love Snowflake. It runs on the wicked fast Amazon Web Services architecture, using EC2 for compute and S3 for storage. Snowflake is designed to be flexible and easy to work with where other relational databases are not. One example of this is query execution: Snowflake processes queries in virtual warehouses, each of which runs on its own compute cluster, so querying one virtual warehouse doesn't slow down the others. If you've ever had to wait for a query to complete, you know the value of that kind of speed and efficiency.
Getting data out of Amazon Aurora
There are several methods for extracting data from Amazon Aurora, and the one you use will probably depend on your needs (and skill set).
The most common way is simply writing queries. SELECT queries allow you to pull exactly the data you want by specifying filters, ordering, and limiting results. If you have a specific subset of data in mind or are looking to continuously monitor a subset of a specific table, SELECT queries may be a good fit.
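Here's a minimal sketch of that approach in Python, assuming the PyMySQL driver (pip install pymysql); the endpoint, credentials, table, and column names are all hypothetical placeholders:

```python
import pymysql

# Connect to the Aurora cluster endpoint (placeholder values).
conn = pymysql.connect(
    host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    user="analytics_ro",
    password="...",
    database="appdb",
)

try:
    with conn.cursor() as cur:
        # Filter, order, and limit so you pull only the rows you need.
        cur.execute(
            "SELECT id, email, created_at FROM customers "
            "WHERE created_at >= %s ORDER BY created_at LIMIT 1000",
            ("2024-01-01",),
        )
        for row in cur.fetchall():
            print(row)
finally:
    conn.close()
```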
If you're just looking to export data in bulk, however, there may be an easier way. A handy command-line tool called mysqldump lets you export entire tables and databases in a format you specify (e.g. delimited text, CSV, or SQL statements that would restore the database if run).
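If you'd rather drive the export from a script, here's a minimal sketch that shells out to mysqldump from Python. It assumes the mysqldump client is installed locally; the host, database, and table names are placeholders:

```python
import subprocess

# Dump one table as SQL statements that would recreate and repopulate it.
subprocess.run(
    [
        "mysqldump",
        "--host=my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
        "--user=analytics_ro",
        "--password=...",  # better: keep credentials in an option file
        "--single-transaction",  # consistent snapshot without locking InnoDB tables
        "--result-file=customers.sql",
        "appdb",
        "customers",
    ],
    check=True,
)
```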
Preparing Amazon Aurora data
Depending on the structure of your data, you may need to prepare it for loading. Take a look at the supported data types for Snowflake and make sure that the data you've got will map neatly to them. If you have a lot of data, you should compress it; gzip, bzip2, Brotli, Zstandard (v0.8), and deflate/raw deflate compression types are all supported.
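As a sketch of that last step, here's how you might gzip a CSV export with nothing but the Python standard library (the filename is a placeholder):

```python
import gzip
import shutil

# Compress the extracted file before staging it in Snowflake.
with open("customers.csv", "rb") as src, gzip.open("customers.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```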
One important thing to note here is that you don't need to define a schema in advance when loading JSON data into Snowflake. Onward to loading!
Loading data into Snowflake
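Snowflake's bulk loading generally happens in two steps: first you stage your files, either in an internal Snowflake stage via the PUT command or in an external location such as an S3 bucket, and then you run the COPY INTO command to move the staged data into the target table (which you can create ahead of time with an ordinary CREATE TABLE statement). Here's a minimal sketch of that flow using the snowflake-connector-python package (pip install snowflake-connector-python); the account, credentials, and object names are all placeholders:

```python
import snowflake.connector

# Connect to Snowflake (placeholder values).
conn = snowflake.connector.connect(
    account="xy12345",
    user="LOADER",
    password="...",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Create the target table if it doesn't exist yet.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS customers "
        "(id INTEGER, email STRING, created_at TIMESTAMP_NTZ)"
    )
    # Stage the compressed file in the table's internal stage...
    cur.execute("PUT file:///tmp/customers.csv.gz @%customers")
    # ...then copy the staged data into the table.
    cur.execute("COPY INTO customers FILE_FORMAT = (TYPE = CSV)")
finally:
    conn.close()
```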
Keeping Amazon Aurora data up to date
So, now what? You’ve built a script that pulls data from Amazon Aurora and loads it into your warehouse, but what happens tomorrow when you have new and updated records in your Amazon Aurora database?
Depending on how you've built your script, you may be forced to load your entire database again. This can be slow and painful, and it can put unnecessary load on your Amazon Aurora instance.
The key is to build your script in such a way that it can also identify incremental updates to your data. If your Amazon Aurora tables have fields like modified_at or auto-incrementing primary keys, you can build a script that can quickly identify records that are new or changed since your last update (or since the newest record you’ve copied into the destination). You can set your script up as a cron job or continuous loop to keep pulling down new data as it appears.
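Here's a minimal sketch of that incremental approach, again assuming PyMySQL and a modified_at column; the high-water mark is kept in a local file for simplicity, and all names are placeholders:

```python
import pathlib
import pymysql

STATE_FILE = pathlib.Path("last_modified_at.txt")

def load_watermark():
    # On the first run there's no state yet, so pull everything.
    if STATE_FILE.exists():
        return STATE_FILE.read_text().strip()
    return "1970-01-01 00:00:00"

def save_watermark(value):
    STATE_FILE.write_text(str(value))

conn = pymysql.connect(host="...", user="...", password="...", database="appdb")
try:
    with conn.cursor() as cur:
        # Fetch only rows changed since the last successful run.
        cur.execute(
            "SELECT id, email, modified_at FROM customers "
            "WHERE modified_at > %s ORDER BY modified_at",
            (load_watermark(),),
        )
        rows = cur.fetchall()
        if rows:
            # ...load or upsert these rows into Snowflake here...
            save_watermark(rows[-1][2])
finally:
    conn.close()
```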
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Amazon Aurora data, structuring it in a way that is optimized for analysis, and loading that data into your Snowflake data warehouse.