Create Amazon Redshift Connector
Connection Properties
Using S3 Staging = No

Using S3 Staging = Yes
Name | Mandatory | Description |
---|---|---|
Connection Name | Yes | Name of the connection |
Description | No | Description of the connection |
JDBC Url | Yes | The JDBC URL of the Redshift cluster, e.g. `jdbc:redshift://endpoint:5439/database` |
Username | Yes | The Redshift username |
Password | Yes | The Redshift password |
Use S3 for Staging | Yes | Valid options: Yes, No. Indicates whether the connection uses S3 for staging when performing COPY or UNLOAD. Helpful for large data sets |
S3 Bucket name | No (Required if Use S3 for Staging is Yes) | A writable bucket in Amazon S3, used to unload data when reading and to stage Avro data to be loaded into Redshift when writing. If you use a Redshift data source with EazyDI as part of a regular ETL pipeline, it can be useful to set a lifecycle policy on the bucket and use it as a temporary location for this data |
Region Name | No (Required if Use S3 for Staging is Yes) | Region of the S3 bucket, e.g. `us-east-1` |
S3 Access Key ID | No (Required if Use S3 for Staging is Yes) | The access key ID used to read and write the S3 staging bucket |
S3 Secret Access Key | No (Required if Use S3 for Staging is Yes) | The secret access key paired with the S3 Access Key ID |
IAM Role ARN | No | The IAM Role ARN used to obtain temporary credentials when the supplied credentials have no direct access to the S3 bucket but are allowed to assume a role |
External Id | No | The external ID supplied when assuming the IAM Role ARN (if one is required) |
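When the IAM Role ARN and External Id properties are used, the role itself must trust the assuming account and require that external ID. The sketch below builds such a trust policy as JSON; the account ID and external ID are placeholder values, not anything mandated by the connector.

```python
import json

# Hypothetical values -- substitute the real assuming-account ID and the
# external ID you entered in the "External Id" connection property.
ASSUMING_ACCOUNT_ID = "111122223333"
EXTERNAL_ID = "my-external-id"

# Trust policy for the role named in "IAM Role ARN": it allows the assuming
# account to call sts:AssumeRole, but only when it presents the agreed
# external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ASSUMING_ACCOUNT_ID}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

Attach this document as the role's trust relationship; the role's permission policy still needs to grant read/write access to the staging bucket.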
Â
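The S3 Bucket name row suggests setting a lifecycle policy so staged COPY/UNLOAD files do not accumulate. One way to express that is the lifecycle configuration sketched below; the prefix name is an assumption for illustration, not a path the connector requires.

```python
import json

# Hypothetical prefix -- adjust to wherever the staging files are written.
STAGING_PREFIX = "redshift-staging/"

# Lifecycle configuration for the staging bucket: expire staged objects one
# day after they are written, so the bucket acts as a temporary location.
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "expire-redshift-staging",
            "Filter": {"Prefix": STAGING_PREFIX},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }
    ],
}

print(json.dumps(lifecycle_configuration, indent=2))
```

The resulting JSON can be applied to the bucket with the AWS CLI (`aws s3api put-bucket-lifecycle-configuration`) or the S3 console.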