BigQuery: How to create a data stream?
Prerequisites to create a Data Stream:
Create at least one Source.
Create a Database.
What is a Data Stream?
A data stream is a link between a platform and a database. Therefore, you need at least one Source (the connection to the platform) and a database.
How do I create a new Data Stream?
Go to the Datastream menu on app.catchr.io and click Add Data Stream.
Name your data stream on the new page (for example: "Meta Ads Catchr to BQ Catchr" if you want to connect your Meta Ads account to your BigQuery project).
Select your platform (Meta Ads, following the example above).
Select your destination (if this drop-down is empty, you must create a Database first).
Then click Save.
Your Data Stream has been created. Next, you need to create a Job to configure which fields and dimensions to send to BigQuery.
How to create a new Job for your Data Stream?
From the Data Stream list, click on View on the row of your new Data Stream. Then click Add Job.
Name your Job (this is the Job's name in Catchr; for example, if you want to import your Meta Ads campaigns, you could call it "Catchr Meta Ads campaign"). Click Save.
Select your accounts. You can select one or multiple accounts (if you have many accounts, the list can take a few moments to load; if an account is missing here, you first need to add a new Source on Catchr). Click Save.
Select your fields and dimensions. Keep in mind that you cannot request every field at once: some fields can be combined, others cannot. If you pick fields that cannot be combined, you will not be able to pass this step; feel free to ask for our help in the chat. Then select a date field for the Partition by date setting (use the generic day or date fields for general usage; for posts on an organic connector, use the Post created date). Click Preview data and select one day to preview the data. If the preview works, click Save. If it does not, try another combination of fields and dimensions or contact us in the chat.
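The partition field you pick here matters once the data lands in BigQuery: queries that filter on the partition column only scan (and bill for) the days they need. Below is a minimal sketch of such a query using the google-cloud-bigquery Python client; the project, dataset, table, and column names (my-project, catchr, meta_ads_campaigns, date, campaign_name, impressions, clicks) are placeholders for illustration, not names Catchr creates for you.

```python
# Minimal sketch: querying a date-partitioned table produced by a Catchr Job.
# "my-project", "catchr", "meta_ads_campaigns" and all column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Filtering on the partition column lets BigQuery prune partitions,
# so only the days you actually need are scanned (and billed).
query = """
    SELECT date, campaign_name, impressions, clicks
    FROM `my-project.catchr.meta_ads_campaigns`
    WHERE date BETWEEN DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY) AND CURRENT_DATE()
    ORDER BY date DESC
"""

for row in client.query(query).result():
    print(row["date"], row["campaign_name"], row["impressions"], row["clicks"])
```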
Select an initial fetching if necessary, then schedule your data dump. For example, if you have a Meta Ads account with a seven-day attribution window and want data from 2023 onward, use a custom initial fetching from January 1, 2023, to yesterday, and schedule a daily dump of the last seven days of data. That way, the last seven days are refreshed every day, and you keep access to all the data since 2023. Name your table (this will be the table name in BigQuery) and click Save.
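Once the schedule has run for a few days, a quick row count per day is usually enough to confirm that the trailing seven days are actually being refreshed. Here is a hedged sketch against the same hypothetical table as above:

```python
# Minimal sketch: checking that the scheduled daily dump keeps the last seven
# days fresh. Same placeholder table and "date" partition column as above.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
    SELECT date, COUNT(*) AS row_count
    FROM `my-project.catchr.meta_ads_campaigns`
    WHERE date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY date
    ORDER BY date DESC
"""

# Missing recent dates or unexpectedly low counts usually mean the scheduled
# Job has not run yet, or the lookback window is shorter than it should be.
for row in client.query(query).result():
    print(row["date"], row["row_count"])
```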
The Job will launch automatically for the first time. Depending on the data volume of the initial fetching, the first run can take a few hours to complete.
You can follow progress on the Job list page, where the status of each Job is displayed.
If the status is RUNNING, it is fine to wait. If it has been running for more than 24 hours, please write to us and we will check that everything is OK.
A SUCCESSFUL status indicates that your Database has been successfully populated. You can check your BigQuery table on Google Cloud Platform.
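If you prefer to do that check from code rather than the console, the google-cloud-bigquery Python client can confirm the table exists and preview a few rows. As before, the project, dataset, and table names below are placeholders for whatever you named your table in the Job settings.

```python
# Minimal sketch: confirming the table Catchr populated exists and previewing a
# few rows. "my-project", "catchr" and "meta_ads_campaigns" are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

table = client.get_table("my-project.catchr.meta_ads_campaigns")
print(f"{table.table_id}: {table.num_rows} rows")

# list_rows reads directly from table storage, so it previews data without running a query.
for row in client.list_rows(table, max_results=5):
    print(dict(row))
```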
An IN_ERROR status (or any other message) indicates that something went wrong and the Job failed. Write to us so we can help you.
Updated on: 30/09/2024
Thank you!