Adds more job resource documentation (#9)
nickzelei authored Feb 7, 2024
1 parent 1980ad8 commit b76a30e
Showing 11 changed files with 301 additions and 106 deletions.
14 changes: 14 additions & 0 deletions docs/data-sources/job.md
@@ -10,7 +10,21 @@ description: |-

Neosync Job data source

## Example Usage

```terraform
data "neosync_job" "my_job" {
id = "3b83d1d3-5ffe-48c6-ac11-7a2e60802864"
}
output "job_name" {
value = data.neosync_job.my_job.name
}
output "job_id" {
value = data.neosync_job.my_job.id
}
```

<!-- schema generated by tfplugindocs -->
## Schema
10 changes: 10 additions & 0 deletions docs/data-sources/system_transformer.md
@@ -10,7 +10,17 @@ description: |-

Neosync System Transformer data source

## Example Usage

```terraform
data "neosync_system_transformer" "generate_cc" {
source = "generate_card_number"
}
output "cc_config" {
value = data.neosync_system_transformer.generate_cc.config
}
```

<!-- schema generated by tfplugindocs -->
## Schema
10 changes: 10 additions & 0 deletions docs/data-sources/user_defined_transformer.md
@@ -10,7 +10,17 @@ description: |-

Neosync User Defined Transformer data source

## Example Usage

```terraform
data "neosync_user_defined_transformer" "my_transformer" {
id = "3b83d1d3-5ffe-48c6-ac11-7a2e60802864"
}
output "my_id" {
value = data.neosync_user_defined_transformer.my_transformer.id
}
```

<!-- schema generated by tfplugindocs -->
## Schema
145 changes: 92 additions & 53 deletions docs/resources/job.md
@@ -10,24 +10,63 @@ description: |-

Job resource


## Example Usage

```terraform
resource "neosync_job" "prod_to_stage" {
name = "prod-to-stage"
source = {
postgres = {
halt_on_new_column_additon = false
connection_id = var.prod_connection_id
}
}
destinations = [
{
connection_id = var.stage_connection_id
postgres = {
init_table_schema = false
truncate_table = {
truncate_before_insert = true
cascade = true
}
}
}
]
mappings = [
{
schema = "public"
table = "users"
column = "id"
transformer = {
source = "passthrough"
config = {
passthrough = {}
}
}
}
]
}
```

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `destinations` (Attributes List) (see [below for nested schema](#nestedatt--destinations))
- `destinations` (Attributes List) A list of destination connections and any relevant configurations that are available to them dependent on type (see [below for nested schema](#nestedatt--destinations))
- `name` (String) The unique friendly name of the job
- `source` (Attributes) (see [below for nested schema](#nestedatt--source))
- `source` (Attributes) Configuration details about the source data connection (see [below for nested schema](#nestedatt--source))

### Optional

- `account_id` (String) The unique identifier of the account. Can be pulled from the API Key if present, or must be specified if using a user access token
- `cron_schedule` (String) A cron string that defines how often the job should be scheduled to run
- `mappings` (Attributes List) (see [below for nested schema](#nestedatt--mappings))
- `sync_options` (Attributes) (see [below for nested schema](#nestedatt--sync_options))
- `workflow_options` (Attributes) (see [below for nested schema](#nestedatt--workflow_options))
- `mappings` (Attributes List) Details each schema, table, and column along with the transformation that will be executed (see [below for nested schema](#nestedatt--mappings))
- `sync_options` (Attributes) Advanced settings and other options specific to a table sync (see [below for nested schema](#nestedatt--sync_options))
- `workflow_options` (Attributes) Advanced settings and other options specific to a job run (see [below for nested schema](#nestedatt--workflow_options))

### Read-Only

@@ -38,11 +77,11 @@ Job resource

Optional:

- `aws_s3` (Attributes) (see [below for nested schema](#nestedatt--destinations--aws_s3))
- `connection_id` (String)
- `id` (String)
- `mysql` (Attributes) (see [below for nested schema](#nestedatt--destinations--mysql))
- `postgres` (Attributes) (see [below for nested schema](#nestedatt--destinations--postgres))
- `aws_s3` (Attributes) AWS S3 connection specific options (see [below for nested schema](#nestedatt--destinations--aws_s3))
- `connection_id` (String) The unique identifier of the connection that will be used during the synchronization process
- `id` (String) The unique identifier of the destination resource. This is set after creation
- `mysql` (Attributes) Mysql connection specific options (see [below for nested schema](#nestedatt--destinations--mysql))
- `postgres` (Attributes) Postgres connection specific options (see [below for nested schema](#nestedatt--destinations--postgres))

<a id="nestedatt--destinations--aws_s3"></a>
### Nested Schema for `destinations.aws_s3`
@@ -53,18 +92,18 @@ Optional:

Required:

- `init_table_schema` (Boolean)
- `init_table_schema` (Boolean) Whether or not to have Neosync initialize the table schema and constraints pulled from the source connection

Optional:

- `truncate_table` (Attributes) (see [below for nested schema](#nestedatt--destinations--mysql--truncate_table))
- `truncate_table` (Attributes) Details about what truncation should occur (see [below for nested schema](#nestedatt--destinations--mysql--truncate_table))

<a id="nestedatt--destinations--mysql--truncate_table"></a>
### Nested Schema for `destinations.mysql.truncate_table`

Optional:

- `truncate_before_insert` (Boolean)
- `truncate_before_insert` (Boolean) Will truncate the table prior to insertion of any records



@@ -73,19 +112,19 @@ Optional:

Required:

- `init_table_schema` (Boolean)
- `init_table_schema` (Boolean) Whether or not to have Neosync initialize the table schema and constraints pulled from the source connection

Optional:

- `truncate_table` (Attributes) (see [below for nested schema](#nestedatt--destinations--postgres--truncate_table))
- `truncate_table` (Attributes) Details about what truncation should occur (see [below for nested schema](#nestedatt--destinations--postgres--truncate_table))

<a id="nestedatt--destinations--postgres--truncate_table"></a>
### Nested Schema for `destinations.postgres.truncate_table`

Optional:

- `cascade` (Boolean)
- `truncate_before_insert` (Boolean)
- `cascade` (Boolean) Will truncate with CASCADE. This is required if the table holds any foreign key constraints. If this is true, `truncate_before_insert` must also be true
- `truncate_before_insert` (Boolean) Will truncate the table prior to insertion of any records



@@ -95,45 +134,45 @@ Optional:

Optional:

- `aws_s3` (Attributes) (see [below for nested schema](#nestedatt--source--aws_s3))
- `generate` (Attributes) (see [below for nested schema](#nestedatt--source--generate))
- `mysql` (Attributes) (see [below for nested schema](#nestedatt--source--mysql))
- `postgres` (Attributes) (see [below for nested schema](#nestedatt--source--postgres))
- `aws_s3` (Attributes) AWS S3 specific connection configurations (see [below for nested schema](#nestedatt--source--aws_s3))
- `generate` (Attributes) Generate specific connection configurations. Currently only supports single table generation (see [below for nested schema](#nestedatt--source--generate))
- `mysql` (Attributes) Mysql specific connection configurations (see [below for nested schema](#nestedatt--source--mysql))
- `postgres` (Attributes) Postgres specific connection configurations (see [below for nested schema](#nestedatt--source--postgres))

<a id="nestedatt--source--aws_s3"></a>
### Nested Schema for `source.aws_s3`

Required:

- `connection_id` (String)
- `connection_id` (String) The unique identifier of the connection that is to be used as the source


<a id="nestedatt--source--generate"></a>
### Nested Schema for `source.generate`

Required:

- `fk_source_connection_id` (String)
- `schemas` (Attributes List) (see [below for nested schema](#nestedatt--source--generate--schemas))
- `fk_source_connection_id` (String) The unique connection identifier that is used to generate schema-specific details. This is usually set to the destination connection id if the schema has already been upserted there
- `schemas` (Attributes List) A list of schemas and table specific options (see [below for nested schema](#nestedatt--source--generate--schemas))

<a id="nestedatt--source--generate--schemas"></a>
### Nested Schema for `source.generate.schemas`

Required:

- `schema` (String)
- `tables` (Attributes List) (see [below for nested schema](#nestedatt--source--generate--schemas--tables))
- `schema` (String) The name of the schema
- `tables` (Attributes List) A list of tables and their specific options within the defined schema (see [below for nested schema](#nestedatt--source--generate--schemas--tables))

<a id="nestedatt--source--generate--schemas--tables"></a>
### Nested Schema for `source.generate.schemas.tables`

Required:

- `table` (String)
- `table` (String) The name of the table

Optional:

- `row_count` (Number)
- `row_count` (Number) The number of rows to generate into the table
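
For illustration only, a hedged sketch of what a `generate` source might look like in practice; the variable names, table name, and row count below are assumptions, not values taken from this commit:

```terraform
# Hypothetical job that generates rows into a single table.
resource "neosync_job" "seed_users" {
  name = "seed-users"
  source = {
    generate = {
      # Assumed to reference a destination connection that already holds the schema.
      fk_source_connection_id = var.stage_connection_id
      schemas = [
        {
          schema = "public"
          tables = [
            {
              table     = "users"
              row_count = 1000
            }
          ]
        }
      ]
    }
  }
  destinations = [
    {
      connection_id = var.stage_connection_id
      postgres = {
        init_table_schema = false
        truncate_table = {
          truncate_before_insert = true
        }
      }
    }
  ]
}
```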



@@ -143,31 +182,31 @@ Optional:

Required:

- `connection_id` (String)
- `halt_on_new_column_addition` (Boolean)
- `connection_id` (String) The unique identifier of the connection that is to be used as the source
- `halt_on_new_column_addition` (Boolean) Whether or not to halt the job if it detects a new column in the source that has not been defined in the mappings

Optional:

- `schemas` (Attributes List) (see [below for nested schema](#nestedatt--source--mysql--schemas))
- `schemas` (Attributes List) A list of schemas and table specific options (see [below for nested schema](#nestedatt--source--mysql--schemas))

<a id="nestedatt--source--mysql--schemas"></a>
### Nested Schema for `source.mysql.schemas`

Required:

- `schema` (String)
- `tables` (Attributes List) (see [below for nested schema](#nestedatt--source--mysql--schemas--tables))
- `schema` (String) The name of the schema
- `tables` (Attributes List) A list of tables and their specific options within the defined schema (see [below for nested schema](#nestedatt--source--mysql--schemas--tables))

<a id="nestedatt--source--mysql--schemas--tables"></a>
### Nested Schema for `source.mysql.schemas.tables`

Required:

- `table` (String)
- `table` (String) The name of the table

Optional:

- `where_clause` (String)
- `where_clause` (String) A where clause that will be used to subset the table during sync



@@ -177,31 +216,31 @@ Optional:

Required:

- `connection_id` (String)
- `halt_on_new_column_addition` (Boolean)
- `connection_id` (String) The unique identifier of the connection that is to be used as the source
- `halt_on_new_column_addition` (Boolean) Whether or not to halt the job if it detects a new column in the source that has not been defined in the mappings

Optional:

- `schemas` (Attributes List) (see [below for nested schema](#nestedatt--source--postgres--schemas))
- `schemas` (Attributes List) A list of schemas and table specific options (see [below for nested schema](#nestedatt--source--postgres--schemas))

<a id="nestedatt--source--postgres--schemas"></a>
### Nested Schema for `source.postgres.schemas`

Required:

- `schema` (String)
- `tables` (Attributes List) (see [below for nested schema](#nestedatt--source--postgres--schemas--tables))
- `schema` (String) The name of the schema
- `tables` (Attributes List) A list of tables and their specific options within the defined schema (see [below for nested schema](#nestedatt--source--postgres--schemas--tables))

<a id="nestedatt--source--postgres--schemas--tables"></a>
### Nested Schema for `source.postgres.schemas.tables`

Required:

- `table` (String)
- `table` (String) The name of the table

Optional:

- `where_clause` (String)
- `where_clause` (String) A where clause that will be used to subset the table during sync
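
As a non-authoritative sketch, subsetting a table during sync might combine `halt_on_new_column_addition` with a `where_clause`; the connection variables and the filter itself are assumptions:

```terraform
# Hypothetical job that syncs only recent orders from prod to stage.
resource "neosync_job" "subset_orders" {
  name = "prod-to-stage-orders-subset"
  source = {
    postgres = {
      connection_id               = var.prod_connection_id
      halt_on_new_column_addition = true
      schemas = [
        {
          schema = "public"
          tables = [
            {
              table = "orders"
              # Assumed filter; any valid SQL predicate should work here.
              where_clause = "created_at > now() - interval '30 days'"
            }
          ]
        }
      ]
    }
  }
  destinations = [
    {
      connection_id = var.stage_connection_id
      postgres = {
        init_table_schema = false
      }
    }
  ]
}
```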



@@ -212,18 +251,18 @@ Optional:

Required:

- `column` (String)
- `schema` (String)
- `table` (String)
- `transformer` (Attributes) (see [below for nested schema](#nestedatt--mappings--transformer))
- `column` (String) The column in the specified table
- `schema` (String) The database schema
- `table` (String) The database table
- `transformer` (Attributes) The transformer that will be performed on the column (see [below for nested schema](#nestedatt--mappings--transformer))

<a id="nestedatt--mappings--transformer"></a>
### Nested Schema for `mappings.transformer`

Required:

- `config` (Attributes) This config object consists of the configuration that matches the specified transformer source. (see [below for nested schema](#nestedatt--mappings--transformer--config))
- `source` (String)
- `source` (String) The source of the transformer that will be used
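
As a sketch under stated assumptions, a mapping's transformer could in principle be wired to the system transformer data source shown earlier; whether the data source's `config` can be passed through directly like this is an assumption, not something this commit confirms:

```terraform
# Hypothetical: look up a system transformer and apply it to a column mapping.
data "neosync_system_transformer" "generate_cc" {
  source = "generate_card_number"
}

resource "neosync_job" "mask_cards" {
  name = "mask-cards"
  source = {
    postgres = {
      connection_id               = var.prod_connection_id
      halt_on_new_column_addition = false
    }
  }
  destinations = [
    {
      connection_id = var.stage_connection_id
      postgres = {
        init_table_schema = false
      }
    }
  ]
  mappings = [
    {
      schema = "public"
      table  = "payments"
      column = "card_number"
      transformer = {
        # Assumed pass-through of the data source's source and config attributes.
        source = data.neosync_system_transformer.generate_cc.source
        config = data.neosync_system_transformer.generate_cc.config
      }
    }
  ]
}
```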

<a id="nestedatt--mappings--transformer--config"></a>
### Nested Schema for `mappings.transformer.config`
@@ -539,16 +578,16 @@ Required:

Optional:

- `retry_policy` (Attributes) (see [below for nested schema](#nestedatt--sync_options--retry_policy))
- `schedule_to_close_timeout` (Number)
- `start_to_close_timeout` (Number)
- `retry_policy` (Attributes) The table sync retry policy (see [below for nested schema](#nestedatt--sync_options--retry_policy))
- `schedule_to_close_timeout` (Number) The maximum amount of time allotted for a table sync with retries
- `start_to_close_timeout` (Number) The amount of time allotted for a table sync

<a id="nestedatt--sync_options--retry_policy"></a>
### Nested Schema for `sync_options.retry_policy`

Optional:

- `maximum_attempts` (Number)
- `maximum_attempts` (Number) The maximum number of times to retry if there is a failure or timeout



Expand All @@ -557,4 +596,4 @@ Optional:

Optional:

- `run_timeout` (Number)
- `run_timeout` (Number) The maximum amount of time a job run is allotted
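
A minimal sketch of how `sync_options` and `workflow_options` might be set on a job; the timeout values are illustrative and their unit is an assumption, not something this page confirms:

```terraform
resource "neosync_job" "timed_sync" {
  name = "timed-sync"
  source = {
    postgres = {
      connection_id               = var.prod_connection_id
      halt_on_new_column_addition = false
    }
  }
  destinations = [
    {
      connection_id = var.stage_connection_id
      postgres = {
        init_table_schema = false
      }
    }
  ]

  # Retry a failed table sync up to three times before giving up.
  # Timeout values below are illustrative; confirm their unit in the provider docs.
  sync_options = {
    start_to_close_timeout    = 600
    schedule_to_close_timeout = 1800
    retry_policy = {
      maximum_attempts = 3
    }
  }

  # Cap the overall job run.
  workflow_options = {
    run_timeout = 3600
  }
}
```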