This page provides you with instructions on how to extract data from Heroku and load it into Google BigQuery. (If this manual process sounds onerous, check out Stitch, which can do all the heavy lifting for you in just a few clicks.)
What is Google BigQuery?
Google BigQuery is a data warehouse that delivers super-fast results from SQL queries, which it accomplishes using a powerful engine dubbed Dremel. With BigQuery, there's no spinning up (and down) clusters of machines as you work with your data. In that sense, BigQuery prioritizes querying over administration: it's fast, and that speed is the main reason most folks use it.
Getting data out of Heroku
Let's start off by extracting the data you want from Heroku’s servers. You can do this using the Heroku API. Full API documentation is available on the Heroku Dev Center.
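As a quick sketch, you can call the API with any HTTP client. The example below uses curl to list your apps; it assumes the HEROKU_API_KEY environment variable (a name of our own choosing) holds a token, such as one generated with heroku auth:token:

# List your apps via the Heroku Platform API (v3)
$ curl -s https://api.heroku.com/apps \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY"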
A common use case for extracting Heroku data is retrieving server logs or other event logs. The API includes endpoints related to logs, and command-line tools like the logs command offer another way to retrieve this data, as shown in the sketch below.
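For instance, one hedged sketch of pulling logs through the API (with example-app standing in for your app's name) is to create a log session, which responds with a logplex URL you can then fetch the actual log data from:

# Create a log session; the response includes a URL to read log data from
$ curl -s -X POST https://api.heroku.com/apps/example-app/log-sessions \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"lines": 1500, "tail": false}'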
Sample Heroku data
Here is an example set of commands and responses you might see when interacting with the logs command-line tool.
$ heroku logs --ps router
2012-02-07T09:43:06.123456+00:00 heroku[router]: at=info method=GET path="/stylesheets/dev-center/library.css" host=devcenter.heroku.com fwd="18.104.22.168" dyno=web.5 connect=1ms service=18ms status=200 bytes=13
2012-02-07T09:43:06.123456+00:00 heroku[router]: at=info method=GET path="/articles/bundler" host=devcenter.heroku.com fwd="22.214.171.124" dyno=web.6 connect=1ms service=18ms status=200 bytes=20375
$ heroku logs --source app
2012-02-07T09:45:47.123456+00:00 app[web.1]: Rendered shared/_search.html.erb (1.0ms)
2012-02-07T09:45:47.123456+00:00 app[web.1]: Completed 200 OK in 83ms (Views: 48.7ms | ActiveRecord: 32.2ms)
2012-02-07T09:45:47.123456+00:00 app[worker.1]: [Worker(host:465cf64e-61c8-46d3-b480-362bfd4ecff9 pid:1)] 1 jobs processed at 23.0330 j/s, 0 failed
...
2012-02-07T09:46:01.123456+00:00 app[web.6]: Started GET "/articles/buildpacks" for 126.96.36.199 at 2012-02-07 09:46:01 +0000
$ heroku logs --source app --ps worker
2012-02-07T09:47:59.123456+00:00 app[worker.1]: [Worker(host:260cf64e-61c8-46d3-b480-362bfd4ecff9 pid:1)] Article#record_view_without_delay completed after 0.0221
2012-02-07T09:47:59.123456+00:00 app[worker.1]: [Worker(host:260cf64e-61c8-46d3-b480-362bfd4ecff9 pid:1)] 5 jobs processed at 31.6842 j/s, 0 failed
...
Preparing Heroku data
This part could be the trickiest: you need to map the data that comes out of each Heroku API endpoint or log extraction into a schema that can be inserted into your destination database. This means that, for each value in the response, you need to identify a predefined datatype (e.g., INTEGER, DATETIME) and build a table that can receive them. Depending on your log files, you may also opt to break each entry into a raw log line plus more meaningful structured fields parsed out of it.
The Heroku API documentation can give you a good sense of what fields will be provided by each endpoint, along with their corresponding datatypes.
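To make that concrete, here's one possible BigQuery JSON schema for the router log lines shown above. The field names are our own choices, parsed out of the key=value pairs, not anything prescribed by Heroku:

[
  {"name": "log_time", "type": "TIMESTAMP"},
  {"name": "method", "type": "STRING"},
  {"name": "path", "type": "STRING"},
  {"name": "host", "type": "STRING"},
  {"name": "dyno", "type": "STRING"},
  {"name": "status", "type": "INTEGER"},
  {"name": "service_ms", "type": "INTEGER"},
  {"name": "bytes", "type": "INTEGER"}
]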
Loading data into Google BigQuery
Google Cloud Platform offers a helpful guide for loading data into BigQuery. You can use the bq command-line tool to upload the files to your awaiting datasets, adding the correct schema and data type information along the way. The bq load command is your friend here; you can find its syntax in the bq command-line tool quickstart guide. Iterate through this process as many times as it takes to load all of your tables into BigQuery.
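As a sketch, a single load might look like the following, pairing a CSV of parsed router log lines with a JSON schema file like the one above. The dataset, table, and file names here are hypothetical:

# Load a CSV of parsed logs into heroku_logs.router_requests
$ bq load \
  --source_format=CSV \
  heroku_logs.router_requests \
  ./router_requests.csv \
  ./router_schema.json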
Keeping Heroku data up to date
Hooray! You've written a script to move Heroku data into your data warehouse. Wouldn't it be great if that were all there was to it? Unfortunately, you also have to consider what happens when new data is created in Heroku and needs to make its way into your data warehouse.
One option, depending on how your script is designed, is to simply reload the entire dataset every time. This is almost guaranteed to be slow and painful, and delays can be costly if you've got deadlines to meet.
The best thing you can do is build your script to identify new and updated records and load them incrementally into the destination. You can accomplish this by keying your logic on fields that indicate recency, such as a modified_at or updated_at timestamp or an auto-incrementing ID. Once you've built in this functionality, you can set up your script as a cron job or continuous loop to grab new data as it appears.
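As an illustrative sketch (the script and state-file names are our own inventions), a crontab entry like this one runs an extraction script hourly, passing along the timestamp of the previous successful run and recording a new one afterward. Note that percent signs must be escaped in crontab entries:

# Run hourly; hand the script the last-run timestamp, then update it
0 * * * * /usr/local/bin/extract_heroku.sh "$(cat ~/.heroku_last_run)" && date -u +\%Y-\%m-\%dT\%H:\%M:\%SZ > ~/.heroku_last_run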
Other data warehouse options
BigQuery is really great, but sometimes you need to optimize for different things when you're choosing a data warehouse. Some folks choose to go with Postgres or Redshift, which are two RDBMSes that use similar SQL syntax. If you're interested in seeing the relevant steps for loading this data into Postgres or Redshift, check out To Redshift and To Postgres.
Easier and faster alternatives
If all this sounds a bit overwhelming, don’t be alarmed. If you have all the skills necessary to go through this process, chances are building and maintaining a script like this isn’t a very high-leverage use of your time.
Thankfully, products like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Heroku data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Google BigQuery data warehouse.