Snowflake
Table of Contents
- Alternative variants
- Getting Started
- Capabilities
- Settings
  - Account (account)
  - Database Name (dbname)
  - User (user)
  - Password (password)
  - Warehouse (warehouse)
  - File Format (file_format)
  - Role (role)
  - AWS Access Key ID (aws_access_key_id)
  - AWS Secret Access Key (aws_secret_access_key)
  - AWS Session Token (aws_session_token)
  - AWS Profile (aws_profile)
  - Default Target Schema (default_target_schema)
  - S3 Bucket (s3_bucket)
  - S3 Key Prefix (s3_key_prefix)
  - S3 Endpoint URL (s3_endpoint_url)
  - S3 Region Name (s3_region_name)
  - S3 ACL (s3_acl)
  - Stage (stage)
  - Batch Size Rows (batch_size_rows)
  - Batch Wait Limit Seconds (batch_wait_limit_seconds)
  - Flush All Streams (flush_all_streams)
  - Parallelism (parallelism)
  - Parallelism Max (parallelism_max)
  - Default Target Schema Select Permission (default_target_schema_select_permission)
  - Schema Mapping (schema_mapping)
  - Disable Table Cache (disable_table_cache)
  - Client Side Encryption Master Key (client_side_encryption_master_key)
  - Client Side Encryption Stage Object (client_side_encryption_stage_object)
  - Add Metadata Columns (add_metadata_columns)
  - Hard Delete (hard_delete)
  - Data Flattening Max Level (data_flattening_max_level)
  - Primary Key Required (primary_key_required)
  - Validate Records (validate_records)
  - Temporary Directory (temp_dir)
  - No Compression (no_compression)
  - Query Tag (query_tag)
  - Archive Load Files (archive_load_files)
  - Archive Load Files S3 Prefix (archive_load_files_s3_prefix)
  - Archive Load Files S3 Bucket (archive_load_files_s3_bucket)
- Looking for help?
The target-snowflake Meltano loader sends data into Snowflake after it has been pulled from a source using an extractor.
- Repository: https://github.com/transferwise/pipelinewise-target-snowflake
- Maintainer: Wise
Alternative variants #
Multiple variants of target-snowflake are available. This document describes the default transferwise variant, which is recommended for new users.
Getting Started #
Prerequisites #
If you haven't already, follow the initial steps of the Getting Started guide.
Dependencies #
A Snowflake FILE FORMAT object must exist prior to execution and is a required config input. You can use the sample SQL provided below:
CREATE FILE FORMAT {database}.{schema}.{file_format_name}
TYPE = 'CSV' ESCAPE='\\' FIELD_OPTIONALLY_ENCLOSED_BY='"';
See the documentation for more details on other optional objects and how to create them.
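If you plan to load through an S3 external stage (see the stage setting below), a named external stage object is needed as well. A minimal sketch with placeholder names, using inline credentials for illustration only (a storage integration is generally preferable; adapt to your setup):
-- Hypothetical example: a named external stage pointing at an S3 bucket.
-- Every {placeholder} must be replaced with your own names.
CREATE STAGE {database}.{schema}.{stage_name}
URL = 's3://{bucket}/{prefix}'
CREDENTIALS = (AWS_KEY_ID = '{aws_access_key_id}' AWS_SECRET_KEY = '{aws_secret_access_key}')
FILE_FORMAT = {database}.{schema}.{file_format_name};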
Installation and configuration #
- Add the target-snowflake loader to your project using meltano add:
meltano add loader target-snowflake
- Configure the settings below using meltano config (see the example sketch after this list).
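For example, a minimal configuration might look like the following sketch. The database, user, warehouse, and file format names here are hypothetical, and the exact set of settings you need depends on your setup (e.g. whether you load via an external stage):
meltano config target-snowflake set account rtXXXXX.eu-central-1
meltano config target-snowflake set dbname ANALYTICS_DB
meltano config target-snowflake set user MELTANO_USER
meltano config target-snowflake set warehouse LOADING_WH
meltano config target-snowflake set file_format ANALYTICS_DB.PUBLIC.MELTANO_CSV_FORMAT
export TARGET_SNOWFLAKE_PASSWORD='<password>'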
Next steps #
Follow the remaining steps of the Getting Started guide.
If you run into any issues, learn how to get help.
Capabilities #
Settings #
target-snowflake requires the configuration of the following settings.
The settings for loader target-snowflake that are known to Meltano are documented below. To quickly find the setting you're looking for, use the Table of Contents at the top of the page.
Account (account) #
- Environment variable: TARGET_SNOWFLAKE_ACCOUNT
Snowflake account name (e.g. rtXXXXX.eu-central-1)
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set account <account>
export TARGET_SNOWFLAKE_ACCOUNT=<account>
Database Name (dbname) #
- Environment variable: TARGET_SNOWFLAKE_DBNAME
Snowflake database name
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set dbname <dbname>
export TARGET_SNOWFLAKE_DBNAME=<dbname>
User (user) #
- Environment variable: TARGET_SNOWFLAKE_USER
Snowflake user
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set user <user>
export TARGET_SNOWFLAKE_USER=<user>
Password (password) #
- Environment variable: TARGET_SNOWFLAKE_PASSWORD
Snowflake password
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set password <password>
export TARGET_SNOWFLAKE_PASSWORD=<password>
Warehouse (warehouse) #
- Environment variable: TARGET_SNOWFLAKE_WAREHOUSE
Snowflake virtual warehouse name
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set warehouse <warehouse>
export TARGET_SNOWFLAKE_WAREHOUSE=<warehouse>
File Format (file_format) #
- Environment variable: TARGET_SNOWFLAKE_FILE_FORMAT
The name of the Snowflake file format object, which must be created manually as described in the Dependencies section above. It has to be the fully qualified name, including the schema. Refer to the Snowflake docs for more details.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set file_format <file_format>
export TARGET_SNOWFLAKE_FILE_FORMAT=<file_format>
Role (role) #
- Environment variable: TARGET_SNOWFLAKE_ROLE
Snowflake role to use. If not defined, then the user's default role will be used.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set role <role>
export TARGET_SNOWFLAKE_ROLE=<role>
AWS Access Key ID (aws_access_key_id) #
- Environment variable: TARGET_SNOWFLAKE_AWS_ACCESS_KEY_ID
S3 Access Key ID. If not provided, the AWS_ACCESS_KEY_ID environment variable or an IAM role will be used.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set aws_access_key_id <aws_access_key_id>
export TARGET_SNOWFLAKE_AWS_ACCESS_KEY_ID=<aws_access_key_id>
AWS Secret Access Key (aws_secret_access_key) #
- Environment variable: TARGET_SNOWFLAKE_AWS_SECRET_ACCESS_KEY
S3 Secret Access Key. If not provided, the AWS_SECRET_ACCESS_KEY environment variable or an IAM role will be used.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set aws_secret_access_key <aws_secret_access_key>
export TARGET_SNOWFLAKE_AWS_SECRET_ACCESS_KEY=<aws_secret_access_key>
AWS Session Token (aws_session_token) #
- Environment variable: TARGET_SNOWFLAKE_AWS_SESSION_TOKEN
AWS session token. If not provided, the AWS_SESSION_TOKEN environment variable will be used.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set aws_session_token <aws_session_token>
export TARGET_SNOWFLAKE_AWS_SESSION_TOKEN=<aws_session_token>
AWS Profile (aws_profile) #
- Environment variable: TARGET_SNOWFLAKE_AWS_PROFILE
AWS profile name for profile-based authentication. If not provided, the AWS_PROFILE environment variable will be used.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set aws_profile <aws_profile>
export TARGET_SNOWFLAKE_AWS_PROFILE=<aws_profile>
Default Target Schema (default_target_schema) #
- Environment variable: TARGET_SNOWFLAKE_DEFAULT_TARGET_SCHEMA
- Default: $MELTANO_EXTRACT__LOAD_SCHEMA
Note: $MELTANO_EXTRACT__LOAD_SCHEMA will expand to the value of the load_schema extra for the extractor used in the pipeline, which defaults to the extractor's namespace, e.g. tap_gitlab for tap-gitlab. Values are automatically converted to uppercase before they're passed on to the plugin, so tap_gitlab becomes TAP_GITLAB.
Name of the schema where the tables will be created, without database prefix. If schema_mapping is not defined, then every stream sent by the tap is loaded into this schema.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set default_target_schema <default_target_schema>
export TARGET_SNOWFLAKE_DEFAULT_TARGET_SCHEMA=<default_target_schema>
S3 Bucket (s3_bucket) #
- Environment variable: TARGET_SNOWFLAKE_S3_BUCKET
S3 bucket name
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set s3_bucket <s3_bucket>
export TARGET_SNOWFLAKE_S3_BUCKET=<s3_bucket>
S3 Key Prefix (s3_key_prefix) #
- Environment variable: TARGET_SNOWFLAKE_S3_KEY_PREFIX
A static prefix before the generated S3 key names. Using prefixes, you can upload files into specific directories in the S3 bucket.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set s3_key_prefix <s3_key_prefix>
export TARGET_SNOWFLAKE_S3_KEY_PREFIX=<s3_key_prefix>
S3 Endpoint URL (s3_endpoint_url) #
- Environment variable: TARGET_SNOWFLAKE_S3_ENDPOINT_URL
The complete URL to use for the constructed client. This allows the use of non-AWS, S3-compatible storage services.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set s3_endpoint_url <s3_endpoint_url>
export TARGET_SNOWFLAKE_S3_ENDPOINT_URL=<s3_endpoint_url>
S3 Region Name (s3_region_name) #
- Environment variable: TARGET_SNOWFLAKE_S3_REGION_NAME
Default region when creating new connections
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set s3_region_name <s3_region_name>
export TARGET_SNOWFLAKE_S3_REGION_NAME=<s3_region_name>
S3 ACL (s3_acl) #
- Environment variable: TARGET_SNOWFLAKE_S3_ACL
S3 ACL name to set on the uploaded files
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set s3_acl <s3_acl>
export TARGET_SNOWFLAKE_S3_ACL=<s3_acl>
Stage (stage) #
- Environment variable: TARGET_SNOWFLAKE_STAGE
Name of the named external stage created as part of the Dependencies section above. It has to be a fully qualified name, including the schema name.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set stage <stage>
export TARGET_SNOWFLAKE_STAGE=<stage>
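For example, assuming the hypothetical stage sketched in the Dependencies section above:
meltano config target-snowflake set stage ANALYTICS_DB.PUBLIC.MELTANO_S3_STAGE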
Batch Size Rows (batch_size_rows) #
- Environment variable: TARGET_SNOWFLAKE_BATCH_SIZE_ROWS
- Default: 100000
Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into Snowflake.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set batch_size_rows 100000
export TARGET_SNOWFLAKE_BATCH_SIZE_ROWS=100000
Batch Wait Limit Seconds (batch_wait_limit_seconds) #
- Environment variable: TARGET_SNOWFLAKE_BATCH_WAIT_LIMIT_SECONDS
Maximum time to wait for a batch to reach batch_size_rows.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set batch_wait_limit_seconds 1234
export TARGET_SNOWFLAKE_BATCH_WAIT_LIMIT_SECONDS=1234
Flush All Streams (flush_all_streams) #
- Environment variable: TARGET_SNOWFLAKE_FLUSH_ALL_STREAMS
- Default: false
Flush and load every stream into Snowflake when one batch is full. Warning: This may trigger the COPY command to use files with a low number of records, and may cause performance problems.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set flush_all_streams true
export TARGET_SNOWFLAKE_FLUSH_ALL_STREAMS=true
Parallelism (parallelism) #
- Environment variable: TARGET_SNOWFLAKE_PARALLELISM
- Default: 0
The number of threads used to flush tables. 0 will create a thread for each stream, up to parallelism_max. -1 will create a thread for each CPU core. Any other positive number will create that number of threads, up to parallelism_max.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set parallelism 0
export TARGET_SNOWFLAKE_PARALLELISM=0
Parallelism Max (parallelism_max) #
- Environment variable: TARGET_SNOWFLAKE_PARALLELISM_MAX
- Default: 16
Max number of parallel threads to use when flushing tables.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set parallelism_max 16
export TARGET_SNOWFLAKE_PARALLELISM_MAX=16
Default Target Schema Select Permission (default_target_schema_select_permission) #
- Environment variable: TARGET_SNOWFLAKE_DEFAULT_TARGET_SCHEMA_SELECT_PERMISSION
Grant USAGE privilege on newly created schemas and grant SELECT privilege on newly created tables to a specific role or a list of roles. If schema_mapping is not defined, then every stream sent by the tap is granted accordingly.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set default_target_schema_select_permission <default_target_schema_select_permission>
export TARGET_SNOWFLAKE_DEFAULT_TARGET_SCHEMA_SELECT_PERMISSION=<default_target_schema_select_permission>
Schema Mapping (schema_mapping) #
- Environment variable: TARGET_SNOWFLAKE_SCHEMA_MAPPING
Useful if you want to load multiple streams from one tap into multiple Snowflake schemas.
If the tap sends the stream_id in <schema_name>-<table_name> format, then this option overwrites the default_target_schema value.
Note that using schema_mapping you can override the default_target_schema_select_permission value to grant SELECT permissions to different groups per schema, or optionally create indices automatically for the replicated tables.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set schema_mapping '{...}'
export TARGET_SNOWFLAKE_SCHEMA_MAPPING='{...}'
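A sketch of what the mapping might look like, based on the structure used by pipelinewise targets; the source schema, target schema, and role names below are hypothetical, so check the variant's README for the exact shape:
meltano config target-snowflake set schema_mapping '{
  "my_source_schema": {
    "target_schema": "MY_SNOWFLAKE_SCHEMA",
    "target_schema_select_permissions": ["GRP_ANALYSTS"]
  }
}'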
Disable Table Cache (disable_table_cache) #
- Environment variable: TARGET_SNOWFLAKE_DISABLE_TABLE_CACHE
- Default: false
By default, the connector caches the available table structures in Snowflake at startup. This way it doesn't need to run additional queries while ingesting data to check whether altering the target tables is required. With the disable_table_cache option you can turn off this caching. You will always see the most recent table structures, but at the cost of extra query runtime.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set disable_table_cache true
export TARGET_SNOWFLAKE_DISABLE_TABLE_CACHE=true
Client Side Encryption Master Key (client_side_encryption_master_key) #
- Environment variable: TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_MASTER_KEY
When this is defined, client-side encryption is enabled: the data in S3 will be encrypted, and no third parties, including Amazon AWS and any ISPs, can see the data in the clear. The Snowflake COPY command will decrypt the data once it's in Snowflake. The master key must be 256 bits long and encoded as a base64 string.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set client_side_encryption_master_key <client_side_encryption_master_key>
export TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_MASTER_KEY=<client_side_encryption_master_key>
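One way to generate a suitable key, as a sketch (openssl's rand -base64 32 emits 32 random bytes, i.e. 256 bits, base64-encoded):
# Generate a random 256-bit master key, base64-encoded
export TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_MASTER_KEY="$(openssl rand -base64 32)"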
Client Side Encryption Stage Object (client_side_encryption_stage_object) #
- Environment variable: TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_STAGE_OBJECT
Required when client_side_encryption_master_key is defined. The name of the encrypted stage object in Snowflake, which is created separately and uses the same encryption master key.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set client_side_encryption_stage_object <client_side_encryption_stage_object>
export TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_STAGE_OBJECT=<client_side_encryption_stage_object>
Add Metadata Columns (add_metadata_columns) #
- Environment variable: TARGET_SNOWFLAKE_ADD_METADATA_COLUMNS
- Default: false
Metadata columns add extra row-level information about data ingestion (e.g. when the row was read from the source, or when it was inserted or deleted in Snowflake). Metadata columns are created automatically by adding extra columns to the tables with the column prefix _SDC_. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the _SDC_DELETED_AT metadata column. Without the add_metadata_columns option, rows deleted in the source will not be recognizable in Snowflake.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set add_metadata_columns true
export TARGET_SNOWFLAKE_ADD_METADATA_COLUMNS=true
Hard Delete (hard_delete) #
- Environment variable: TARGET_SNOWFLAKE_HARD_DELETE
- Default: false
When the hard_delete option is true, DELETE SQL commands are performed in Snowflake to delete rows from tables. This is achieved by continuously checking the _SDC_DELETED_AT metadata column sent by the Singer tap. Because deleting rows requires metadata columns, the hard_delete option automatically enables the add_metadata_columns option as well.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set hard_delete true
export TARGET_SNOWFLAKE_HARD_DELETE=true
Data Flattening Max Level (data_flattening_max_level) #
- Environment variable: TARGET_SNOWFLAKE_DATA_FLATTENING_MAX_LEVEL
- Default: 0
Object-type RECORD items from taps can be loaded into VARIANT columns as JSON (default), or the schema can be flattened by creating columns automatically. When the value is 0 (the default), flattening is turned off.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set data_flattening_max_level 0
export TARGET_SNOWFLAKE_DATA_FLATTENING_MAX_LEVEL=0
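As an illustration (assuming pipelinewise-style flattening, which joins nested keys with double underscores), setting data_flattening_max_level to 1 would load a record like {"address": {"city": "Berlin"}} into a column ADDRESS__CITY instead of a single VARIANT column ADDRESS.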
Primary Key Required (primary_key_required) #
- Environment variable: TARGET_SNOWFLAKE_PRIMARY_KEY_REQUIRED
- Default: true
Log-based and incremental replication on tables with no primary key causes duplicates when merging UPDATE events. When set to true, data loading stops if no primary key is defined.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set primary_key_required false
export TARGET_SNOWFLAKE_PRIMARY_KEY_REQUIRED=false
Validate Records (validate_records) #
- Environment variable: TARGET_SNOWFLAKE_VALIDATE_RECORDS
- Default: false
Validate every single RECORD message against the corresponding JSON schema. This option is disabled by default, in which case invalid RECORD messages fail only at load time in Snowflake. Enabling this option detects invalid records earlier, but could cause performance degradation.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set validate_records true
export TARGET_SNOWFLAKE_VALIDATE_RECORDS=true
Temporary Directory (temp_dir) #
- Environment variable: TARGET_SNOWFLAKE_TEMP_DIR
- Default: platform-dependent
Directory of temporary CSV files with RECORD messages.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set temp_dir <temp_dir>
export TARGET_SNOWFLAKE_TEMP_DIR=<temp_dir>
No Compression (no_compression) #
- Environment variable: TARGET_SNOWFLAKE_NO_COMPRESSION
- Default: false
Generate uncompressed CSV files when loading to Snowflake. By default, GZIP-compressed files are generated.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set no_compression true
export TARGET_SNOWFLAKE_NO_COMPRESSION=true
Query Tag (query_tag) #
- Environment variable: TARGET_SNOWFLAKE_QUERY_TAG
Optional string to tag executed queries in Snowflake. Replaces the tokens schema and table with the appropriate values. The tags are displayed in the output of the Snowflake QUERY_HISTORY and QUERY_HISTORY_BY_* functions.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set query_tag <query_tag>
export TARGET_SNOWFLAKE_QUERY_TAG=<query_tag>
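As a sketch, assuming the tokens are written in the double-braced {{schema}} and {{table}} form used by pipelinewise targets (check the variant's README for the exact token syntax):
meltano config target-snowflake set query_tag 'meltano-load-{{schema}}-{{table}}'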
Archive Load Files (archive_load_files) #
- Environment variable: TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES
- Default: false
When enabled, the files loaded to Snowflake will also be stored in archive_load_files_s3_bucket under the key /{archive_load_files_s3_prefix}/{schema_name}/{table_name}/.
All archived files will have tap, schema, table, and archived-by as S3 metadata keys.
When incremental replication is used, the archived files will also have the following S3 metadata keys: incremental-key, incremental-key-min, and incremental-key-max.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set archive_load_files true
export TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES=true
Archive Load Files S3 Prefix (archive_load_files_s3_prefix) #
- Environment variable: TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES_S3_PREFIX
When archive_load_files is enabled, the archived files will be placed in the archive S3 bucket under this prefix.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set archive_load_files_s3_prefix <archive_load_files_s3_prefix>
export TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES_S3_PREFIX=<archive_load_files_s3_prefix>
Archive Load Files S3 Bucket (archive_load_files_s3_bucket) #
- Environment variable: TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES_S3_BUCKET
When archive_load_files is enabled, the archived files will be placed in this bucket.
How to use #
Manage this setting using meltano config or an environment variable:
meltano config target-snowflake set archive_load_files_s3_bucket <archive_load_files_s3_bucket>
export TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES_S3_BUCKET=<archive_load_files_s3_bucket>
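Putting the three archive settings together, a sketch with a hypothetical bucket name and prefix:
meltano config target-snowflake set archive_load_files true
meltano config target-snowflake set archive_load_files_s3_bucket my-archive-bucket
meltano config target-snowflake set archive_load_files_s3_prefix meltano-archive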
Looking for help? #
If you're having trouble getting the target-snowflake loader to work, look for an existing issue in its repository, file a new issue, or join the Meltano Slack community and ask for help in the #plugins-general channel.