The target-snowflake Singer target sends data into Snowflake after it has been pulled from a source using a Singer tap.

Alternative variants #

Multiple variants of target-snowflake are available. This document describes the transferwise variant.


Prerequisites #

Dependencies #

A Snowflake FILE FORMAT object must exist prior to execution and is a required config input. You can use the sample SQL provided below:

CREATE FILE FORMAT {database}.{schema}.{file_format_name}
TYPE = 'CSV' ESCAPE = '\\' FIELD_OPTIONALLY_ENCLOSED_BY = '"';

See the documentation for more details on other optional objects and how to create them.

Standalone usage #

Install the package using pip:

pip install pipelinewise-target-snowflake

For additional instructions, refer to the README in the repository.
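For example, a minimal standalone run could look like the sketch below. All values are placeholders, the config keys correspond to the settings documented further down this page, and tap-something stands in for whichever Singer tap you are using.

# Create a minimal config.json (placeholder values; keys match the settings documented below)
cat > config.json <<'EOF'
{
  "account": "xy12345.eu-central-1",
  "dbname": "ANALYTICS",
  "user": "LOADER",
  "password": "********",
  "warehouse": "LOADING_WH",
  "file_format": "ANALYTICS.PUBLIC.CSV_FORMAT",
  "default_target_schema": "RAW"
}
EOF

# Pipe any Singer tap's output into the target
tap-something | target-snowflake --config config.json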

Usage with Meltano #

Meltano helps you manage your configuration, incremental replication, and scheduled pipelines.

View the Meltano-specific target-snowflake instructions to learn more.
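As a sketch, adding and configuring this variant with the Meltano CLI could look like the following; the extractor name and all setting values are placeholders.

# Add the transferwise variant as a loader
meltano add loader target-snowflake --variant transferwise

# Configure the core connection settings (placeholder values)
meltano config target-snowflake set account xy12345.eu-central-1
meltano config target-snowflake set dbname ANALYTICS
meltano config target-snowflake set user LOADER
meltano config target-snowflake set warehouse LOADING_WH
meltano config target-snowflake set file_format ANALYTICS.PUBLIC.CSV_FORMAT
meltano config target-snowflake set password '********'   # placeholder; consider supplying secrets via environment variables

# Run a pipeline with an extractor of your choice
meltano run tap-gitlab target-snowflake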

Capabilities #

Settings #

The settings for the target-snowflake target that are known to Meltano are documented below; several of them are required for the target to run. To quickly find the setting you're looking for, use the Table of Contents at the top of the page.

Account (account) #

Snowflake account name (i.e. your Snowflake account identifier).

Database Name (dbname) #

Snowflake Database name

User (user) #

Snowflake User

Password (password) #

Snowflake Password

Warehouse (warehouse) #

Snowflake virtual warehouse name

File Format (file_format) #

The Snowflake file format object name, which needs to be manually created as described in the prerequisites section above. It has to be the fully qualified name, including the schema. Refer to the Snowflake docs for more details.

Role (role) #

Snowflake role to use. If not defined then the user’s default role will be used.

AWS Access Key ID (aws_access_key_id) #

S3 Access Key ID. If not provided, the AWS_ACCESS_KEY_ID environment variable or the IAM role will be used.

AWS Secret Access Key (aws_secret_access_key) #

S3 Secret Access Key. If not provided, the AWS_SECRET_ACCESS_KEY environment variable or the IAM role will be used.

AWS Session Token (aws_session_token) #

AWS Session token. If not provided, the AWS_SESSION_TOKEN environment variable will be used.

AWS Profile (aws_profile) #

AWS profile name for profile-based authentication. If not provided, the AWS_PROFILE environment variable will be used.

Default Target Schema (default_target_schema) #


Name of the schema where the tables will be created, without database prefix. If schema_mapping is not defined, then every stream sent by the tap is loaded into this schema.

Note: $MELTANO_EXTRACT__LOAD_SCHEMA will expand to the value of the load_schema extra for the extractor used in the pipeline, which defaults to the extractor's namespace, e.g. tap_gitlab for tap-gitlab. Values are automatically converted to uppercase before they're passed on to the plugin, so tap_gitlab becomes TAP_GITLAB.
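For instance, in a pipeline that uses tap-gitlab, deferring to the extractor's load_schema could look like this sketch (the single quotes keep the reference literal so Meltano expands it at run time):

# Let the schema name follow the extractor's load_schema extra.
# With tap-gitlab this expands to tap_gitlab and is uppercased to TAP_GITLAB.
meltano config target-snowflake set default_target_schema '$MELTANO_EXTRACT__LOAD_SCHEMA'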

S3 Bucket (s3_bucket) #

S3 Bucket name

S3 Key Prefix (s3_key_prefix) #

A static prefix before the generated S3 key names. Using prefixes you can upload files into specific directories in the S3 bucket.

S3 Endpoint URL (s3_endpoint_url) #

The complete URL to use for the constructed client. This allows the use of non-native S3 accounts.

S3 Region Name (s3_region_name) #

Default region when creating new connections

S3 ACL (s3_acl) #

S3 ACL name to set on the uploaded files

Stage (stage) #

Name of the named external stage created as part of the prerequisites. It has to be a fully qualified name, including the schema name.

Batch Size Rows (batch_size_rows) #

  • Default: 100000

Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into Snowflake.

Batch Wait Limit Seconds (batch_wait_limit_seconds) #

Maximum time to wait for a batch to reach batch_size_rows.

Flush All Streams (flush_all_streams) #

  • Default: false

Flush and load every stream into Snowflake when one batch is full. Warning: This may trigger the COPY command to use files with a low number of records, which may cause performance problems.

Parallelism (parallelism) #

  • Default: 0

The number of threads used to flush tables. 0 will create a thread for each stream, up to parallelism_max. -1 will create a thread for each CPU core. Any other positive number will create that number of threads, up to parallelism_max.

Parallelism Max (parallelism_max) #

  • Default: 16

Max number of parallel threads to use when flushing tables.

Default Target Schema Select Permission (default_target_schema_select_permission) #

Grant USAGE privilege on newly created schemas and grant SELECT privilege on newly created tables to a specific role or a list of roles. If schema_mapping is not defined then every stream sent by the tap is granted accordingly.
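As a brief sketch, with a placeholder role name:

# Grant SELECT on newly created tables to a single role (placeholder name)
meltano config target-snowflake set default_target_schema_select_permission ANALYST_ROLE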

Schema Mapping (schema_mapping) #

Useful if you want to load multiple streams from one tap to multiple Snowflake schemas.

If the tap sends the stream_id in <schema_name>-<table_name> format, then this option overrides the default_target_schema value.

Note that using schema_mapping you can also override the default_target_schema_select_permission value to grant SELECT permissions to different groups per schema, or optionally create indices automatically for the replicated tables.
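As a sketch, and assuming your Meltano version accepts JSON values for object-type settings, a mapping that routes the public source schema to a dedicated Snowflake schema might look like this (the schema and role names are placeholders):

# Streams arriving as public-<table> are loaded into REPL_PUBLIC,
# and SELECT on the new tables is granted to ANALYST_ROLE
meltano config target-snowflake set schema_mapping '{
  "public": {
    "target_schema": "REPL_PUBLIC",
    "target_schema_select_permissions": ["ANALYST_ROLE"]
  }
}'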

Disable Table Cache (disable_table_cache) #

  • Default: false

By default the connector caches the available table structures in Snowflake at startup. This way it doesn't need to run additional queries while ingesting data to check whether altering the target tables is required. With the disable_table_cache option you can turn off this caching. You will always see the most recent table structures, but this adds extra query runtime.

Client Side Encryption Master Key (client_side_encryption_master_key) #

When this is defined, client-side encryption is enabled. The data in S3 will be encrypted so that no third parties, including Amazon AWS and any ISPs, can see the data in the clear. The Snowflake COPY command will decrypt the data once it's in Snowflake. The master key must be 256 bits long and encoded as a base64 string.

Client Side Encryption Stage Object (client_side_encryption_stage_object) #

Required when client_side_encryption_master_key is defined. The name of the encrypted stage object in Snowflake that was created separately, using the same encryption master key.

Add Metadata Columns (add_metadata_columns) #

  • Default: false

Metadata columns add extra row-level information about data ingestion (i.e. when the row was read from the source, when it was inserted or deleted in Snowflake, etc.). Metadata columns are created automatically by adding extra columns to the tables with the column prefix _SDC_. The column names follow the Stitch naming conventions. Enabling metadata columns will flag deleted rows by setting the _SDC_DELETED_AT metadata column. Without the add_metadata_columns option, rows deleted by Singer taps will not be recognizable in Snowflake.

Hard Delete (hard_delete) #

  • Default: false

When the hard_delete option is true, DELETE SQL commands will be performed in Snowflake to delete rows in tables. This is achieved by continuously checking the _SDC_DELETED_AT metadata column sent by the Singer tap. Because deleting rows requires metadata columns, the hard_delete option automatically enables the add_metadata_columns option as well.

Data Flattening Max Level (data_flattening_max_level) #

  • Default: 0

Object-type RECORD items from taps can be loaded into VARIANT columns as JSON (the default), or the schema can be flattened by creating columns automatically. When the value is 0 (the default), flattening is turned off.

Primary Key Required (primary_key_required) #

  • Default: true

Log-based and incremental replication on tables with no primary key causes duplicates when merging UPDATE events. When this is set to true, loading stops if no primary key is defined.

Validate Records (validate_records) #

  • Default: false

Validate every single RECORD message against the corresponding JSON schema. This option is disabled by default; invalid RECORD messages will then fail only at load time in Snowflake. Enabling this option detects invalid records earlier, but could cause performance degradation.

Temporary Directory (temp_dir) #

  • Default: platform-dependent

Directory of temporary CSV files with RECORD messages.

No Compression (no_compression) #

  • Default: false

Generate uncompressed CSV files when loading to Snowflake. By default, gzip-compressed files are generated.

Query Tag (query_tag) #

Optional string to tag executed queries in Snowflake. Replaces the tokens schema and table with the appropriate values. The tags are displayed in the output of the Snowflake QUERY_HISTORY and QUERY_HISTORY_BY_* functions.

Archive Load Files (archive_load_files) #

  • Default: false

When enabled, the files loaded to Snowflake will also be stored in archive_load_files_s3_bucket under the key /{archive_load_files_s3_prefix}/{schema_name}/{table_name}/.

All archived files will have tap, schema, table and archived-by as S3 metadata keys.

When incremental replication is used, the archived files will also have the following S3 metadata keys - incremental-key, incremental-key-min and incremental-key-max.

Archive Load Files S3 Prefix (archive_load_files_s3_prefix) #

When archive_load_files is enabled, the archived files will be placed in the archive S3 bucket under this prefix.

Archive Load Files S3 Bucket (archive_load_files_s3_bucket) #

When archive_load_files is enabled, the archived files will be placed in this bucket.
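Putting the three archive settings together, a sketch with placeholder bucket and prefix names could look like this:

# Keep a copy of every load file under
# s3://my-archive-bucket/singer-archive/<schema_name>/<table_name>/
meltano config target-snowflake set archive_load_files true
meltano config target-snowflake set archive_load_files_s3_bucket my-archive-bucket
meltano config target-snowflake set archive_load_files_s3_prefix singer-archive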

Looking for help? #

If you're having trouble getting the target-snowflake target to work, look for an existing issue in its repository, file a new issue, or join the Meltano Slack community and ask for help in the #plugins-general channel.

Found an issue on this page? #

This page is generated from a YAML file that you can contribute changes to. Edit it on GitHub!