diff --git a/docs/resources/job.md b/docs/resources/job.md
index efc6bd8ca7..32759fa931 100644
--- a/docs/resources/job.md
+++ b/docs/resources/job.md
@@ -372,7 +372,6 @@ This block describes the queue settings of the job:
 * `periodic` - (Optional) configuration block to define a trigger for Periodic Triggers consisting of the following attributes:
   * `interval` - (Required) Specifies the interval at which the job should run. This value is required.
   * `unit` - (Required) Options are {"DAYS", "HOURS", "WEEKS"}.
-
 * `file_arrival` - (Optional) configuration block to define a trigger for [File Arrival events](https://learn.microsoft.com/en-us/azure/databricks/workflows/jobs/file-arrival-triggers) consisting of following attributes:
   * `url` - (Required) URL to be monitored for file arrivals. The path must point to the root or a subpath of the external location. Please note that the URL must have a trailing slash character (`/`).
   * `min_time_between_triggers_seconds` - (Optional) If set, the trigger starts a run only after the specified amount of time passed since the last time the trigger fired. The minimum allowed value is 60 seconds.
diff --git a/docs/resources/pipeline.md b/docs/resources/pipeline.md
index 76a60d75db..28ea211616 100644
--- a/docs/resources/pipeline.md
+++ b/docs/resources/pipeline.md
@@ -80,7 +80,8 @@ The following arguments are supported:
 * `photon` - A flag indicating whether to use Photon engine. The default value is `false`.
 * `serverless` - An optional flag indicating if serverless compute should be used for this DLT pipeline. Requires `catalog` to be set, as it could be used only with Unity Catalog.
 * `catalog` - The name of catalog in Unity Catalog. *Change of this parameter forces recreation of the pipeline.* (Conflicts with `storage`).
-* `target` - The name of a database (in either the Hive metastore or in a UC catalog) for persisting pipeline output data. Configuring the target setting allows you to view and query the pipeline output data from the Databricks UI.
+* `target` - (Optional, String, Conflicts with `schema`) The name of a database (in either the Hive metastore or in a UC catalog) for persisting pipeline output data. Configuring the target setting allows you to view and query the pipeline output data from the Databricks UI.
+* `schema` - (Optional, String, Conflicts with `target`) The default schema (database) where tables are read from or published to. The presence of this attribute implies that the pipeline is in direct publishing mode.
 * `edition` - optional name of the [product edition](https://docs.databricks.com/data-engineering/delta-live-tables/delta-live-tables-concepts.html#editions). Supported values are: `CORE`, `PRO`, `ADVANCED` (default). Not required when `serverless` is set to `true`.
 * `channel` - optional name of the release channel for Spark version used by DLT pipeline. Supported values are: `CURRENT` (default) and `PREVIEW`.
 * `budget_policy_id` - optional string specifying ID of the budget policy for this DLT pipeline.
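To illustrate the new `schema` attribute documented above, here is a minimal sketch of a `databricks_pipeline` in direct publishing mode. The resource name, catalog/schema values, and notebook path are illustrative assumptions, not taken from this diff:

```hcl
resource "databricks_pipeline" "this" {
  name    = "example-pipeline" # illustrative name
  catalog = "main"             # Unity Catalog catalog
  schema  = "sales"            # direct publishing mode; conflicts with `target`

  library {
    notebook {
      path = "/Pipelines/example" # hypothetical notebook path
    }
  }
}
```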
diff --git a/pipelines/resource_pipeline.go b/pipelines/resource_pipeline.go
index d187e43336..ac18eef8ff 100644
--- a/pipelines/resource_pipeline.go
+++ b/pipelines/resource_pipeline.go
@@ -246,6 +246,8 @@ func (Pipeline) CustomizeSchema(s *common.CustomizableSchema) *common.Customizab
 	s.SchemaPath("storage").SetConflictsWith([]string{"catalog"})
 	s.SchemaPath("catalog").SetConflictsWith([]string{"storage"})
 	s.SchemaPath("ingestion_definition", "connection_name").SetConflictsWith([]string{"ingestion_definition.0.ingestion_gateway_id"})
+	s.SchemaPath("target").SetConflictsWith([]string{"schema"})
+	s.SchemaPath("schema").SetConflictsWith([]string{"target"})
 
 	// MinItems fields
 	s.SchemaPath("library").SetMinItems(1)
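The `SetConflictsWith` customizations above map to the Terraform SDK's standard `ConflictsWith` validation, so a configuration setting both attributes should be rejected at validate/plan time. A hedged sketch of such an invalid configuration (names are illustrative; the exact error wording may vary):

```hcl
resource "databricks_pipeline" "invalid" {
  name    = "conflicting-pipeline" # illustrative
  catalog = "main"
  target  = "sales" # legacy publishing mode
  schema  = "sales" # direct publishing mode; cannot be combined with `target`
  # `terraform validate` / `terraform plan` is expected to fail with a
  # "conflicts with" error because of the schema customization above.
}
```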