The tap-bigquery Singer tap pulls data from BigQuery that can then be sent to a destination using a Singer target.

Alternative variants #

Multiple variants of tap-bigquery are available. This document describes the default anelendata variant, which is recommended for new users.

Prerequisites #

Before using this tap, follow the steps in the “Activate the Google BigQuery API” section of the repository’s README.

Standalone usage #

Install the package using pip:

pip install tap-bigquery

For additional instructions, refer to the README in the repository.

Usage with Meltano #

Meltano helps you manage your configuration, incremental replication, and scheduled pipelines.

View the Meltano-specific tap-bigquery instructions to learn more.

Capabilities #

The capabilities declared for this tap can be overridden by specifying the capabilities key in your meltano.yml file.

Settings #

tap-bigquery requires configuration before it can be used. The settings known to Meltano are documented below; to quickly find the setting you're looking for, use the Table of Contents at the top of the page.

You can override these settings or specify additional ones in your meltano.yml by adding the settings key. Please consider adding any settings you have defined locally to this definition on MeltanoHub by making a pull request to the YAML file that defines the settings for this tap.
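
As a rough sketch of what that can look like in meltano.yml (the custom setting name and the override value here are hypothetical), declaring an extra setting and overriding a documented one might be done like this:

plugins:
  extractors:
    - name: tap-bigquery
      # Declare an additional setting that is not part of the default definition
      settings:
        - name: my_custom_setting   # hypothetical setting name
          kind: string
      # Override values for the settings documented below
      config:
        limit: 500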

Streams (streams) #

Array of objects with name, table, columns, datetime_key, and filters keys:

  • name: The entity name, used by most loaders as the name of the table to be created.
  • table: Fully qualified table name in BigQuery, with format `<project>.<dataset>.<table>`. Since backticks have special meaning in YAML, values in meltano.yml should be wrapped in double quotes.
  • columns: Array of column names to select. Using ["*"] is not recommended as it can become very expensive for a table with a large number of columns.
  • datetime_key: Name of datetime column to use as replication key.
  • filters: Optional array of WHERE clauses to filter extracted data, e.g. "column='value'".
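
Nested under the tap-bigquery extractor entry in meltano.yml, a single stream might be configured roughly as follows; the project, dataset, table, column names, and filter are purely illustrative:

config:
  streams:
    - name: orders                                    # table name used by the loader
      table: "`my-project.my_dataset.orders`"         # backticks wrapped in double quotes
      columns: [id, customer_id, amount, created_at]  # explicit columns instead of ["*"]
      datetime_key: created_at                        # replication key
      filters: ["amount > 0"]                         # optional WHERE clauses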

Credentials Path (credentials_path) #

  • Default: $MELTANO_PROJECT_ROOT/client_secrets.json

Fully qualified path to client_secrets.json for your service account.

See the “Activate the Google BigQuery API” section of the repository’s README and https://cloud.google.com/docs/authentication/production.

By default, this file is expected to be at the root of your project directory.
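
To point the tap at a key file elsewhere on disk, the setting can be overridden in the extractor's config in meltano.yml; the path below is purely illustrative:

config:
  credentials_path: /etc/secrets/bigquery/client_secrets.json  # illustrative absolute path to the service account key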

Start Datetime (start_datetime) #

Determines how much historical data will be extracted. Please be aware that the larger the time period and amount of data, the longer the initial extraction can be expected to take.

End Datetime (end_datetime) #

Date up to when historical data will be extracted.

Limit (limit) #

Limits the number of records returned in each stream, applied as a LIMIT clause in the query.

Start Always Inclusive (start_always_inclusive) #

  • Default: true

When replicating incrementally, disable this setting to select only records whose datetime_key is strictly greater than the maximum value replicated in the last run, excluding records whose timestamps match exactly. This could cause records created after the last run finished, but within the same second and with the same timestamp, to be missed.
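
Taken together, the replication-window settings above might be sketched in the extractor's config in meltano.yml as follows. The dates and limit are illustrative only, and the exact datetime format the tap accepts is described in the repository's README:

config:
  start_datetime: "2024-01-01T00:00:00Z"   # illustrative start of the extraction window
  end_datetime: "2024-07-01T00:00:00Z"     # illustrative optional upper bound
  limit: 100000                            # illustrative cap on records per stream
  start_always_inclusive: true             # default; set to false to exclude exact-match timestamps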

Looking for help? #

If you're having trouble getting the tap-bigquery tap to work, look for an existing issue in its repository, file a new issue, or join the Meltano Slack community and ask for help in the #plugins-general channel.

Found an issue on this page? #

This page is generated from a YAML file that you can contribute changes to. Edit it on GitHub!