The target-snowflake loader sends data into Snowflake after it has been pulled from a source by an extractor.
Getting Started
Prerequisites
If you haven't already, follow the initial steps of the Getting Started guide.
Dependencies
A Snowflake FILE FORMAT object must exist prior to execution and is a required config input. You can use the sample SQL provided below:
CREATE FILE FORMAT {database}.{schema}.{file_format_name}
TYPE = 'CSV' ESCAPE='\\' FIELD_OPTIONALLY_ENCLOSED_BY='"';
See the documentation for more details on other optional objects and how to create them.
Installation and configuration
- Add the target-snowflake loader to your project using meltano add:
- Configure the target-snowflake settings using meltano config:
meltano add loader target-snowflake --variant transferwise
meltano config target-snowflake set --interactive
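After running these commands, your meltano.yml will contain an entry for the loader along these lines; a minimal sketch, with hypothetical config values you'd replace with your own:

plugins:
  loaders:
  - name: target-snowflake
    variant: transferwise
    pip_url: pipelinewise-target-snowflake
    config:
      account: rtXXXXX.eu-central-1
      dbname: ANALYTICS
      user: MELTANO_USER
      warehouse: LOADER_WH
      file_format: ANALYTICS.PUBLIC.CSV_FORMAT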
Next steps
Follow the remaining steps of the Getting Started guide.
If you run into any issues, learn how to get help.
Capabilities
This plugin currently has no capabilities defined. If you know the capabilities required by this plugin, please contribute!
Settings
The target-snowflake settings that are known to Meltano are documented below. To quickly find the setting you're looking for, click on any setting name from the list:
account
add_metadata_columns
archive_load_files
archive_load_files_s3_bucket
archive_load_files_s3_prefix
aws_access_key_id
aws_profile
aws_secret_access_key
aws_session_token
batch_size_rows
batch_wait_limit_seconds
client_side_encryption_master_key
client_side_encryption_stage_object
data_flattening_max_level
dbname
default_target_schema
default_target_schema_select_permission
disable_table_cache
file_format
flush_all_streams
hard_delete
no_compression
parallelism
parallelism_max
password
primary_key_required
query_tag
role
s3_acl
s3_bucket
s3_endpoint_url
s3_key_prefix
s3_region_name
schema_mapping
stage
temp_dir
user
validate_records
warehouse
You can also list these settings using the meltano config list subcommand:
meltano config target-snowflake list
You can override these settings or specify additional ones in your meltano.yml by adding the settings key.
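For example, a sketch of what a local override plus a custom setting definition might look like in meltano.yml; the my_custom_setting entry is purely hypothetical:

plugins:
  loaders:
  - name: target-snowflake
    variant: transferwise
    config:
      default_target_schema: analytics
    settings:
    - name: my_custom_setting
      kind: string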
Please consider adding any settings you have defined locally to this definition on MeltanoHub by making a pull request to the YAML file that defines the settings for this plugin.
Account (account)
- Environment variable: TARGET_SNOWFLAKE_ACCOUNT
Snowflake account name (e.g. rtXXXXX.eu-central-1)
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set account [value]
Add Metadata Columns (add_metadata_columns)
- Environment variable: TARGET_SNOWFLAKE_ADD_METADATA_COLUMNS
- Default Value: false
Metadata columns add extra row-level information about data ingestion (e.g. when the row was read from the source, when it was inserted or deleted in Snowflake, etc.). Metadata columns are created automatically by adding extra columns, prefixed with _SDC_, to the tables. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the _SDC_DELETED_AT metadata column. Without the add_metadata_columns option, rows deleted by Singer taps will not be recognizable in Snowflake.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set add_metadata_columns [value]
Archive Load Files (archive_load_files)
- Environment variable: TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES
- Default Value: false
When enabled, the files loaded to Snowflake will also be stored in archive_load_files_s3_bucket under the key /{archive_load_files_s3_prefix}/{schema_name}/{table_name}/. All archived files will have tap, schema, table, and archived-by as S3 metadata keys. When incremental replication is used, the archived files will also have the following S3 metadata keys: incremental-key, incremental-key-min, and incremental-key-max.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set archive_load_files [value]
Archive Load Files S3 Bucket (archive_load_files_s3_bucket)
- Environment variable: TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES_S3_BUCKET
When archive_load_files is enabled, the archived files will be placed in this bucket.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set archive_load_files_s3_bucket [value]
Archive Load Files S3 Prefix (archive_load_files_s3_prefix)
- Environment variable: TARGET_SNOWFLAKE_ARCHIVE_LOAD_FILES_S3_PREFIX
When archive_load_files is enabled, the archived files will be placed in the archive S3 bucket under this prefix.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set archive_load_files_s3_prefix [value]
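Putting the three archive settings together, a minimal sketch under the loader's config key in meltano.yml; the bucket and prefix names are hypothetical:

config:
  archive_load_files: true
  archive_load_files_s3_bucket: my-archive-bucket
  archive_load_files_s3_prefix: singer-archive

With this configuration, load files for a stream with schema tap_gitlab and table commits would be archived under s3://my-archive-bucket/singer-archive/tap_gitlab/commits/.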
AWS Access Key ID (aws_access_key_id)
- Environment variable: TARGET_SNOWFLAKE_AWS_ACCESS_KEY_ID
S3 Access Key ID. If not provided, the AWS_ACCESS_KEY_ID environment variable or an IAM role will be used.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set aws_access_key_id [value]
AWS Profile (aws_profile)
- Environment variable: TARGET_SNOWFLAKE_AWS_PROFILE
AWS profile name for profile-based authentication. If not provided, the AWS_PROFILE environment variable will be used.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set aws_profile [value]
AWS Secret Access Key (aws_secret_access_key)
- Environment variable: TARGET_SNOWFLAKE_AWS_SECRET_ACCESS_KEY
S3 Secret Access Key. If not provided, the AWS_SECRET_ACCESS_KEY environment variable or an IAM role will be used.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set aws_secret_access_key [value]
AWS Session Token (aws_session_token)
- Environment variable: TARGET_SNOWFLAKE_AWS_SESSION_TOKEN
AWS session token. If not provided, the AWS_SESSION_TOKEN environment variable will be used.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set aws_session_token [value]
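The credential settings above are alternatives: either point at an AWS profile, or provide explicit keys (with the session token only needed for temporary credentials). A sketch with a hypothetical profile name, keeping secrets in environment variables rather than in meltano.yml:

config:
  # Option A: profile-based authentication
  aws_profile: my-aws-profile
  # Option B: explicit credentials; instead of writing them here, export
  # TARGET_SNOWFLAKE_AWS_ACCESS_KEY_ID and TARGET_SNOWFLAKE_AWS_SECRET_ACCESS_KEY
  # (and TARGET_SNOWFLAKE_AWS_SESSION_TOKEN if needed) in the environment.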
Batch Size Rows (batch_size_rows)
- Environment variable: TARGET_SNOWFLAKE_BATCH_SIZE_ROWS
- Default Value: 100000
Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into Snowflake.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set batch_size_rows [value]
Batch Wait Limit Seconds (batch_wait_limit_seconds)
- Environment variable: TARGET_SNOWFLAKE_BATCH_WAIT_LIMIT_SECONDS
Maximum time to wait for a batch to reach batch_size_rows.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set batch_wait_limit_seconds [value]
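The two batch settings interact: a batch is flushed when it reaches batch_size_rows, or earlier once batch_wait_limit_seconds has elapsed. A hypothetical tuning sketch under the loader's config key:

config:
  batch_size_rows: 50000
  batch_wait_limit_seconds: 300  # flush at least every 5 minutes, even if the batch isn't full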
Client Side Encryption Master Key (client_side_encryption_master_key)
- Environment variable: TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_MASTER_KEY
When this is defined, client-side encryption is enabled: the data in S3 will be encrypted, and no third parties, including Amazon AWS and any ISPs, can see the data in the clear. The Snowflake COPY command will decrypt the data once it's in Snowflake. The master key must be 256 bits long and encoded as a base64 string.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set client_side_encryption_master_key [value]
Client Side Encryption Stage Object (client_side_encryption_stage_object)
- Environment variable: TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_STAGE_OBJECT
Required when client_side_encryption_master_key is defined. The name of the encrypted stage object in Snowflake, which must be created separately using the same encryption master key.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set client_side_encryption_stage_object [value]
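Combining the two client-side encryption settings, a sketch with a hypothetical stage object name; the master key itself is best supplied via the TARGET_SNOWFLAKE_CLIENT_SIDE_ENCRYPTION_MASTER_KEY environment variable rather than committed to meltano.yml:

config:
  client_side_encryption_stage_object: ANALYTICS.PUBLIC.ENCRYPTED_STAGE
  # client_side_encryption_master_key is read from the environment variable above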
Data Flattening Max Level (data_flattening_max_level)
- Environment variable: TARGET_SNOWFLAKE_DATA_FLATTENING_MAX_LEVEL
- Default Value: 0
Object-type RECORD items from taps can be loaded into VARIANT columns as JSON (the default), or the schema can be flattened by creating columns automatically, up to this nesting level. When the value is 0 (the default), flattening is turned off.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set data_flattening_max_level [value]
Database Name (dbname)
- Environment variable: TARGET_SNOWFLAKE_DBNAME
Snowflake Database name
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set dbname [value]
Default Target Schema (default_target_schema)
- Environment variable: TARGET_SNOWFLAKE_DEFAULT_TARGET_SCHEMA
- Default Value: $MELTANO_EXTRACT__LOAD_SCHEMA
Note: $MELTANO_EXTRACT__LOAD_SCHEMA will expand to the value of the load_schema extra for the extractor used in the pipeline, which defaults to the extractor's namespace, e.g. tap_gitlab for tap-gitlab. Values are automatically converted to uppercase before they're passed on to the plugin, so tap_gitlab becomes TAP_GITLAB.
Name of the schema where the tables will be created, without the database prefix. If schema_mapping is not defined, then every stream sent by the tap is loaded into this schema.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set default_target_schema [value]
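For example, to load every stream into one fixed schema instead of the pipeline-dependent default, you might set (schema name hypothetical):

config:
  default_target_schema: analytics

Leaving the setting at its default keeps the $MELTANO_EXTRACT__LOAD_SCHEMA behavior described above, so each extractor loads into its own schema.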
Default Target Schema Select Permission (default_target_schema_select_permission)
- Environment variable: TARGET_SNOWFLAKE_DEFAULT_TARGET_SCHEMA_SELECT_PERMISSION
Grant USAGE privilege on newly created schemas and grant SELECT privilege on newly created tables to a specific role or a list of roles. If schema_mapping is not defined, then every stream sent by the tap is granted accordingly.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set default_target_schema_select_permission [value]
Disable Table Cache (disable_table_cache)
- Environment variable: TARGET_SNOWFLAKE_DISABLE_TABLE_CACHE
- Default Value: false
By default, the connector caches the available table structures in Snowflake at startup so that it doesn't need to run additional queries while ingesting data to check whether altering the target tables is required. With the disable_table_cache option you can turn off this caching. You will always see the most recent table structures, at the cost of extra query runtime.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set disable_table_cache [value]
File Format (file_format)
- Environment variable: TARGET_SNOWFLAKE_FILE_FORMAT
The name of the Snowflake file format object, which must be created manually as described in the Dependencies section above. Has to be the fully qualified name, including the schema. Refer to the Snowflake docs for more details.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set file_format [value]
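For instance, if you created the file format with the sample SQL from the Dependencies section using the hypothetical names ANALYTICS (database), PUBLIC (schema), and CSV_FORMAT, the setting would be:

config:
  file_format: ANALYTICS.PUBLIC.CSV_FORMAT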
Flush All Streams (flush_all_streams)
- Environment variable: TARGET_SNOWFLAKE_FLUSH_ALL_STREAMS
- Default Value: false
Flush and load every stream into Snowflake when one batch is full. Warning: this may trigger the COPY command to use files with a low number of records, which may cause performance problems.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set flush_all_streams [value]
Hard Delete (hard_delete)
- Environment variable: TARGET_SNOWFLAKE_HARD_DELETE
- Default Value: false
When the hard_delete option is true, DELETE SQL commands will be performed in Snowflake to delete rows in tables. This is achieved by continuously checking the _SDC_DELETED_AT metadata column sent by the Singer tap. Because deleting rows requires metadata columns, the hard_delete option automatically enables the add_metadata_columns option as well.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set hard_delete [value]
No Compression (no_compression)
- Environment variable: TARGET_SNOWFLAKE_NO_COMPRESSION
- Default Value: false
Generate uncompressed CSV files when loading to Snowflake. By default, GZIP-compressed files are generated.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set no_compression [value]
Parallelism (parallelism)
- Environment variable: TARGET_SNOWFLAKE_PARALLELISM
- Default Value: 0
The number of threads used to flush tables. 0 will create a thread for each stream, up to parallelism_max. -1 will create a thread for each CPU core. Any other positive number will create that number of threads, up to parallelism_max.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set parallelism [value]
Parallelism Max (parallelism_max)
- Environment variable: TARGET_SNOWFLAKE_PARALLELISM_MAX
- Default Value: 16
Max number of parallel threads to use when flushing tables.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set parallelism_max [value]
Password (password)
- Environment variable: TARGET_SNOWFLAKE_PASSWORD
Snowflake Password
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set password [value]
Primary Key Required (primary_key_required)
- Environment variable: TARGET_SNOWFLAKE_PRIMARY_KEY_REQUIRED
- Default Value: true
Log-based and incremental replication on tables with no primary key causes duplicates when merging UPDATE events. When set to true, the target stops loading data if no primary key is defined.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set primary_key_required [value]
Query Tag (query_tag)
- Environment variable: TARGET_SNOWFLAKE_QUERY_TAG
Optional string to tag executed queries in Snowflake. Replaces the tokens schema and table with the appropriate values. The tags are displayed in the output of the Snowflake QUERY_HISTORY and QUERY_HISTORY_BY_* functions.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set query_tag [value]
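A sketch of a tagged configuration; the double-curly token syntax shown is how the PipelineWise variant documents the schema and table tokens, but verify the exact form against the variant's docs for your version (the tag text itself is hypothetical):

config:
  query_tag: meltano-loaded-{{schema}}-{{table}}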
Role (role)
- Environment variable: TARGET_SNOWFLAKE_ROLE
Snowflake role to use. If not defined then the user's default role will be used.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set role [value]
S3 ACL (s3_acl)
- Environment variable: TARGET_SNOWFLAKE_S3_ACL
S3 ACL name to set on the uploaded files
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set s3_acl [value]
S3 Bucket (s3_bucket)
- Environment variable: TARGET_SNOWFLAKE_S3_BUCKET
S3 Bucket name
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set s3_bucket [value]
S3 Endpoint URL (s3_endpoint_url)
- Environment variable: TARGET_SNOWFLAKE_S3_ENDPOINT_URL
The complete URL to use for the constructed client. This allows the use of a non-native S3 account.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set s3_endpoint_url [value]
S3 Key Prefix (s3_key_prefix)
- Environment variable: TARGET_SNOWFLAKE_S3_KEY_PREFIX
A static prefix before the generated S3 key names. Using prefixes, you can upload files into specific directories in the S3 bucket.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set s3_key_prefix [value]
S3 Region Name (s3_region_name)
- Environment variable: TARGET_SNOWFLAKE_S3_REGION_NAME
Default region when creating new connections
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set s3_region_name [value]
Schema Mapping (schema_mapping)
- Environment variable: TARGET_SNOWFLAKE_SCHEMA_MAPPING
Useful if you want to load multiple streams from one tap into multiple Snowflake schemas.
If the tap sends the stream_id in <schema_name>-<table_name> format, then this option overwrites the default_target_schema value.
Note that using schema_mapping you can also overwrite the default_target_schema_select_permission value to grant SELECT permissions to different groups per schema, or optionally create indices automatically for the replicated tables.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set schema_mapping [value]
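A sketch of the mapping structure, following the shape documented for the PipelineWise variant; the source schema names, target schemas, and role name are all hypothetical:

config:
  schema_mapping:
    public:
      target_schema: analytics_public
      target_schema_select_permissions:
      - grp_reporting
    sales:
      target_schema: analytics_sales

Here streams arriving as public-<table_name> land in analytics_public (with SELECT granted to grp_reporting), while sales-<table_name> streams land in analytics_sales.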
Stage (stage)
- Environment variable: TARGET_SNOWFLAKE_STAGE
Named external stage name, created as described in the Dependencies section. Has to be a fully qualified name, including the schema name.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set stage [value]
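Taken together with the S3 settings above, a stage-based loading setup might look like this sketch; all names are hypothetical, and the stage and file format objects must already exist in Snowflake:

config:
  s3_bucket: my-staging-bucket
  s3_key_prefix: meltano/
  stage: ANALYTICS.PUBLIC.S3_STAGE
  file_format: ANALYTICS.PUBLIC.CSV_FORMAT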
Temporary Directory (temp_dir)
- Environment variable: TARGET_SNOWFLAKE_TEMP_DIR
(Default: platform-dependent) Directory of temporary CSV files with RECORD messages.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set temp_dir [value]
User (user)
- Environment variable: TARGET_SNOWFLAKE_USER
Snowflake User
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set user [value]
Validate Records (validate_records)
- Environment variable: TARGET_SNOWFLAKE_VALIDATE_RECORDS
- Default Value: false
Validate every single RECORD message against the corresponding JSON schema. This option is disabled by default, and invalid RECORD messages will fail only at load time in Snowflake. Enabling this option will detect invalid records earlier but could cause performance degradation.
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set validate_records [value]
Warehouse (warehouse)
- Environment variable: TARGET_SNOWFLAKE_WAREHOUSE
Snowflake virtual warehouse name
Configure this setting directly using the following Meltano command:
meltano config target-snowflake set warehouse [value]
Advanced Topics
How Schema Changes Are Handled
See the PipelineWise documentation for the full details. Here are some of the important points:
1. New Column Added
Target connectors add the new column to the destination table with the same name using a compatible data type.
2. Column Dropped
Target connectors DO NOT drop columns. The old column remains in the table in case you need to do historical analysis on it.
3. Column Data Type Changed
Target connectors version the columns. See the versioning columns docs for syntax.