This page provides you with instructions on how to extract data from Bronto and load it into Snowflake. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Bronto?
Oracle Bronto is an ecommerce email marketing platform. It integrates ecommerce and point-of-sale data with operational platforms, enabling brands to maximize the value of customer data and deliver relevant, personal messages.
What is Snowflake?
Snowflake is a cloud-based data warehouse implemented as a managed service. It runs on Amazon Web Services infrastructure, using EC2 instances for compute and S3 for storage. Snowflake is designed to be fast, flexible, and easy to work with. For instance, for query processing, Snowflake creates virtual warehouses that run on separate compute clusters, so querying one virtual warehouse doesn't slow down the others.
Getting data out of Bronto
You can use Bronto's API to get Bronto data into your data warehouse. The API was originally built on SOAP, but a newer REST API lets you access and work with product and order data.
Bronto's API offers numerous endpoints that can provide information on orders, products, and campaigns. Using methods outlined in the API documentation, you can retrieve the data you need. For example, to get a list of all transactions for a given order object, you could call GET /orders/{orderId}.
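If you're writing your own extraction script, the request itself is simple. Here's a minimal Python sketch using the requests library; the REST host, the OAuth access token, and the order ID are placeholders, so verify the exact host and auth flow against Bronto's API documentation before relying on them.

    import requests

    # Placeholder values -- the REST host and auth flow are assumptions;
    # check Bronto's API documentation for the specifics of your account.
    BASE_URL = "https://rest.bronto.com"
    ACCESS_TOKEN = "your-oauth-access-token"  # obtained via Bronto's OAuth flow
    order_id = "example-order-id"             # hypothetical order ID

    response = requests.get(
        f"{BASE_URL}/orders/{order_id}",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    response.raise_for_status()
    order = response.json()  # JSON payload like the sample shown below
    print(order)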
Sample Bronto data
The Bronto REST API returns JSON-formatted data. Here's an example of the kind of response you might see when querying an objects endpoint.
{
  emailAddress: validly formatted email address
  contactId: string
  orderDate: ISO-8601 datetime
  status: PENDING | PROCESSED
  hasTracking: boolean
  trackingCookieName: string
  trackingCookieValue: string
  deliveryId: string
  customerOrderId: string
  discountAmount: number
  grandTotal: number
  lineItems: [
    {
      name: string
      other: string
      sku: string
      category: string
      imageUrl: string
      productUrl: string
      quantity: number
      salePrice: number
      totalPrice: number
      unitPrice: number
      description: string
      position: number
    }
  ]
  originIp: IPv4 or IPv6 address
  messageId: string
  originUserAgent: string
  shippingAmount: number
  shippingDate: ISO-8601 datetime
  shippingDetails: string
  shippingTrackingUrl: string
  subtotal: number
  taxAmount: number
  cartId: UUID
  createdDate: ISO-8601 datetime
  updatedDate: ISO-8601 datetime
  currency: ISO-4217 currency code
  states: {
    processed: boolean
    shipped: boolean
  }
  orderId: UUID
}
Preparing data for Snowflake
Depending on the structure of your data, you may need to prepare it for loading. Look at the supported data types for Snowflake and make sure that the data you've got will map neatly to them.
Note that you don't need to define a schema in advance when loading JSON data into Snowflake.
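One common approach is to keep the records as raw JSON and let Snowflake parse them after loading. The Python sketch below writes the orders your extraction script returns to a newline-delimited JSON file that's ready for staging; the sample record is purely illustrative.

    import json

    # Stand-in for the records returned by your extraction script.
    orders = [
        {"orderId": "example-order-id", "status": "PROCESSED", "grandTotal": 42.50},
    ]

    # Snowflake's JSON file format reads one record per line, so write
    # newline-delimited JSON rather than a single array.
    with open("bronto_orders.json", "w") as f:
        for record in orders:
            f.write(json.dumps(record) + "\n")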
Loading data into Snowflake
The Snowflake documentation's Data Loading Overview section can help you with the task of loading your data. If you're not loading a lot of data, you might be able to use the data loading wizard in the Snowflake web UI, but chances are the limitations on that tool will make it a non-starter as a reliable ETL solution. Alternatively, there are two main steps for getting data into Snowflake:
- Use the PUT command to stage files.
- Use the COPY INTO table command to load prepared data into an awaiting table.
You’ll have the option of copying from your local drive or from Amazon S3. One of Snowflake's slick features lets you spin up a dedicated virtual warehouse to power the insertion process, so loading doesn't slow down your analytical queries.
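Here's a rough sketch of both steps using the snowflake-connector-python package. Every connection parameter and object name below is a placeholder, and the table stage (@%bronto_orders) is just one of several staging options; a named internal stage or an S3 external stage would work the same way.

    import snowflake.connector

    # All connection parameters and object names are placeholders.
    conn = snowflake.connector.connect(
        account="your_account",
        user="your_user",
        password="your_password",
        warehouse="load_wh",
        database="analytics",
        schema="bronto",
    )
    cur = conn.cursor()

    # A VARIANT column holds the raw JSON, so no schema is needed up front.
    cur.execute("CREATE TABLE IF NOT EXISTS bronto_orders (record VARIANT)")

    # Stage the local file, then copy the staged data into the table.
    cur.execute("PUT file:///tmp/bronto_orders.json @%bronto_orders")
    cur.execute(
        "COPY INTO bronto_orders FROM @%bronto_orders "
        "FILE_FORMAT = (TYPE = 'JSON')"
    )

    cur.close()
    conn.close()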
Keeping Bronto data up to date
Now what? You've built a script that pulls data from Bronto and loads it into your data warehouse, but what happens tomorrow when you have new transactions?
The key is to build your script in such a way that it can identify incremental updates to your data. Thankfully, Bronto's API results include fields like createdDate that allow you to identify records that are new since your last update (or since the newest record you've copied). Once you've taken new data into account, you can set your script up as a cron job or continuous loop to keep pulling down new data as it appears.
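A sketch of what that incremental logic might look like in Python is below. The high-water-mark file and the minCreatedDate query parameter are assumptions for illustration; check Bronto's API documentation for the exact filter it supports.

    from datetime import datetime, timezone
    import requests

    BASE_URL = "https://rest.bronto.com"      # assumed REST host
    ACCESS_TOKEN = "your-oauth-access-token"  # placeholder

    # Read the timestamp saved by the previous run; default to "load everything"
    # on the first run.
    try:
        with open("last_sync.txt") as f:
            last_sync = f.read().strip()
    except FileNotFoundError:
        last_sync = "1970-01-01T00:00:00Z"

    # minCreatedDate is a hypothetical parameter name -- consult the API docs
    # for the filter Bronto actually expects.
    response = requests.get(
        f"{BASE_URL}/orders",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"minCreatedDate": last_sync},
    )
    response.raise_for_status()
    new_orders = response.json()

    # ...load new_orders into Snowflake as shown above, then advance the marker.
    with open("last_sync.txt", "w") as f:
        f.write(datetime.now(timezone.utc).isoformat())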
Other data warehouse options
Snowflake is great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Amazon Redshift, Google BigQuery, PostgreSQL, or Microsoft Azure Synapse Analytics, which are RDBMSes that use similar SQL syntax, or Panoply, which works with Redshift instances. Others choose a data lake, like Amazon S3 or Delta Lake on Databricks. If you're interested in seeing the relevant steps for loading data into one of these platforms, check out To Redshift, To BigQuery, To Postgres, To Panoply, To Azure Synapse Analytics, To S3, and To Delta Lake.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to move data from Bronto to Snowflake automatically. With just a few clicks, Stitch starts extracting your Bronto data, structuring it in a way that's optimized for analysis, and inserting that data into your Snowflake data warehouse.