Connecting your database
IP Restrictions
dbt Cloud will always connect to your warehouse from the following IP addresses. Be sure to allow traffic from these IPs in your firewall, and include them in any database grants.
- 52.45.144.63
- 54.81.134.249
- 52.22.161.231
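How you allow these addresses depends on your warehouse. On Snowflake, for example, this can be done with a network policy; the following is a sketch, assuming a dedicated dbt_cloud_user (a placeholder name):

```sql
-- Restrict the dbt Cloud user to the dbt Cloud IP addresses
-- (dbt_cloud_policy and dbt_cloud_user are placeholder names)
create network policy dbt_cloud_policy
  allowed_ip_list = ('52.45.144.63', '54.81.134.249', '52.22.161.231');

alter user dbt_cloud_user set network_policy = 'dbt_cloud_policy';
```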
Connecting to Redshift and Postgres
The following fields are required when creating a Redshift or Postgres connection:
Field | Description | Examples |
---|---|---|
Host Name | The hostname of the Postgres or Redshift database to connect to. This can either be a hostname or an IP address. | xxx.us-east-1.amazonaws.com |
Port | Usually 5432 (Postgres) or 5439 (Redshift) | 5439 |
Database | The logical database to connect to and run queries against. | analytics |
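On the warehouse side, the user dbt Cloud connects as needs permission to create objects and read source data. A minimal sketch for Redshift or Postgres, with placeholder names throughout:

```sql
-- Placeholder user, database, and schema names; adjust for your environment
create user dbt_cloud_user with password '<password>';

-- Allow dbt to create schemas in the target database
grant create on database analytics to dbt_cloud_user;

-- Allow dbt to read existing source data
grant usage on schema raw_data to dbt_cloud_user;
grant select on all tables in schema raw_data to dbt_cloud_user;
```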
Connecting via an SSH Tunnel
To connect to a Postgres or Redshift instance via an SSH tunnel, check the "Use SSH Tunnel" option when creating your connection. When configuring the tunnel, you'll need to supply the hostname, username, and port for the bastion server.
Once the connection is saved, a public key will be generated and displayed for the Connection. You can copy this public key to the bastion server to authorize dbt Cloud to connect to your database via the bastion server.
Connecting to Snowflake
The following fields are required when creating a Snowflake connection:
Field | Description | Examples |
---|---|---|
Account | The Snowflake account to connect to. See Snowflake's documentation on account identifiers to determine what the account field should look like based on your region. | db5261993, db5261993.east-us-2.azure |
Role | An optional field indicating what role should be assumed after connecting to Snowflake | transformer |
Database | The logical database to connect to and run queries against. | analytics |
Warehouse | The virtual warehouse to use for running queries. | transforming |
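Using the example values from this table, the warehouse-side objects might be provisioned as follows. This is a sketch rather than required setup; dbt_cloud_user is a placeholder:

```sql
-- Provision the objects referenced in the connection (names are the
-- table's example values; dbt_cloud_user is a placeholder)
create warehouse if not exists transforming;
create role if not exists transformer;
create database if not exists analytics;

-- Let the role run queries in the warehouse and create schemas in the database
grant usage on warehouse transforming to role transformer;
grant usage, create schema on database analytics to role transformer;

-- Grant the role to the user that dbt Cloud connects as
grant role transformer to user dbt_cloud_user;
```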
Username / Password
Available in: Development environments, Deployment environments
The Username / Password auth method is the simplest way to authenticate Development or Deployment credentials in a dbt project. Simply enter your Snowflake username (specifically, the login_name) and the corresponding user's Snowflake password to authenticate dbt Cloud to run queries against Snowflake on behalf of a Snowflake user.
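Note that the username dbt Cloud expects is the login_name, which can differ from the name of the Snowflake user object. A hypothetical user created for this auth method might look like:

```sql
-- All names and values here are placeholder examples
create user dbt_cloud_user
  login_name = 'DBT_CLOUD_LOGIN'   -- the value to enter as the dbt Cloud username
  password = '<password>'
  default_role = transformer
  default_warehouse = transforming;
```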
Key Pair
Available in: Development environments, Deployment environments
The Keypair auth method uses Snowflake's Key Pair Authentication to authenticate Development or Deployment credentials for a dbt Cloud project.
After generating an encrypted key pair, be sure to set the rsa_public_key for the Snowflake user to authenticate in dbt Cloud:

```sql
alter user jsmith set rsa_public_key='MIIBIjANBgkqh...';
```
Finally, set the "Private Key" and "Private Key Passphrase" fields in the "Edit Credentials" page to finish configuring dbt Cloud to authenticate with Snowflake using a key pair.
Note: At this time, only encrypted private keys are supported by dbt Cloud. You must supply the passphrase that will be used to decrypt the private key, and you will receive an error if the "Private Key Passphrase" field is empty.
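To verify that the key was attached, you can describe the user in Snowflake and check that the RSA_PUBLIC_KEY_FP property (the key's fingerprint) is populated:

```sql
desc user jsmith;
```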
Snowflake OAuth
Available in: Development environments, Enterprise plans only
The OAuth auth method permits dbt Cloud to run development queries on behalf of a Snowflake user without configuring a Snowflake password in dbt Cloud. For more information on configuring a Snowflake OAuth connection in dbt Cloud, please see the docs on setting up Snowflake OAuth.
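Setting this up involves creating an OAuth security integration on the Snowflake side; a minimal sketch is below. The exact parameter values, including the redirect URI shown here, are assumptions; use the values from the setup docs referenced above.

```sql
-- Sketch of the Snowflake-side integration; confirm all values against
-- the setup docs (the redirect URI below is an assumption)
create or replace security integration dbt_cloud
  type = oauth
  enabled = true
  oauth_client = custom
  oauth_client_type = 'confidential'
  oauth_redirect_uri = 'https://cloud.getdbt.com/complete/snowflake';
```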
Connecting to BigQuery
JSON keyfile
Uploading a service account JSON keyfile
While the fields in a BigQuery connection can be specified manually, we recommend uploading a service account JSON keyfile to quickly and accurately configure a connection to BigQuery.
Uploading a JSON keyfile should populate the following fields:
- Project id
- Private key id
- Private key
- Client email
- Client id
- Auth uri
- Token uri
- Auth provider x509 cert url
- Client x509 cert url
In addition to these fields, there are two other optional fields that can be configured in a BigQuery connection:
Field | Description | Examples |
---|---|---|
Timeout | Deprecated; exists for backwards compatibility with older versions of dbt and will be removed in the future. | 300 |
Location | The location where dbt should create datasets. | US, EU |
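The Location setting controls where dbt creates datasets; conceptually, it corresponds to BigQuery DDL like the following (the dataset name is a placeholder):

```sql
-- Roughly what dbt does when it creates a dataset in the configured
-- location (analytics is a placeholder dataset name)
create schema if not exists analytics
  options (location = 'US');
```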
BigQuery OAuth
Available in: Development environments, Enterprise plans only
The OAuth auth method permits dbt Cloud to run development queries on behalf of a BigQuery user without configuring a BigQuery service account keyfile in dbt Cloud. For more information on the initial configuration of a BigQuery OAuth connection in dbt Cloud, please see the docs on setting up BigQuery OAuth.
As an end user, if your organization has set up BigQuery OAuth, you can link a project with your personal BigQuery account in your personal Profile in dbt Cloud.
Connecting to Databricks
ODBC
dbt Cloud supports connecting to Databricks using a Cluster or a SQL Endpoint. Depending on how you connect to Databricks, either one of the Cluster or Endpoint configurations must be provided, but setting both values is not allowed.
The following fields are available when creating a Databricks connection:
Field | Description | Examples |
---|---|---|
Host Name | The hostname of the Databricks account to connect to | avc-def1234ghi-9999.cloud.databricks.com |
Port | The port to connect to Databricks for this connection | 443 |
Organization | The ID of the Databricks workspace organization, if applicable. Optional (default: 0) | 0123456789 |
Cluster | The ID of the cluster to connect to (required if using a cluster) | 1234-567890-abc12345 |
Endpoint | The ID of the endpoint to connect to (required if using Databricks SQL) | 0123456789 |
User | Optional | dbt_cloud_user |
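If table access control is enabled on the cluster or endpoint, the connecting user also needs privileges on the schemas dbt uses. A sketch, with placeholder schema and principal names:

```sql
-- Placeholder schema and principal names; adjust for your workspace
grant usage on schema analytics to `dbt_cloud_user@yourorg.com`;
grant create on schema analytics to `dbt_cloud_user@yourorg.com`;
grant select on schema raw_data to `dbt_cloud_user@yourorg.com`;
```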
Connecting to Apache Spark
HTTP and Thrift
dbt Cloud supports connecting to an Apache Spark cluster using either the HTTP method or the Thrift method. Note: While the HTTP method can be used to connect to an all-purpose Databricks cluster, the ODBC method is recommended for all Databricks connections. For further details on configuring these connection parameters, please see the dbt-spark documentation.
The following fields are available when creating an Apache Spark connection using the HTTP and Thrift connection methods:
Field | Description | Examples |
---|---|---|
Host Name | The hostname of the Spark cluster to connect to | yourorg.sparkhost.com |
Port | The port to connect to Spark on | 443 |
Organization | The ID of the Databricks workspace organization, if applicable. Optional (default: 0) | 0123456789 |
Cluster | The ID of the cluster to connect to | 1234-567890-abc12345 |
Connection Timeout | The number of seconds after which to time out a connection | 10 |
Connection Retries | The number of times to attempt connecting to the cluster before failing | 10 |
User | Optional | dbt_cloud_user |
Auth | Optional, supply if using Kerberos | KERBEROS |
Kerberos Service Name | Optional, supply if using Kerberos | hive |